An increasing number of employees are turning to artificial intelligence (AI) tools to boost productivity and streamline their work. However, the use of unsanctioned platforms, a practice known as shadow AI, poses significant risks to organizations' data security and intellectual property. Companies often struggle to keep pace with technological change, which drives employees to seek out third-party AI tools that have not been officially approved.
Kareem Sadek, a partner with KPMG in Canada who specializes in technology risk, says shadow AI tends to take hold when employees are chasing ease and speed in their workflows. While the practice may seem harmless in the pursuit of efficiency, it often creates substantial headaches for businesses, jeopardizing intellectual property and sensitive data about business strategies, clients, and user bases.
Robert Falzon, head of engineering at Check Point Software Technologies Ltd., emphasizes the risks that arise when employees interact with unauthorized AI tools. Users may unknowingly expose confidential information, such as financial statements and proprietary research, to external parties through chatbots that retain conversations to improve their models. Sensitive sales data entered into a chatbot, for instance, could later surface for strangers who make similar queries, resulting in unintended data leakage.
The risks of shadow AI are underscored by a report from IBM and the Ponemon Institute, which found that 20% of surveyed companies had experienced data breaches linked to unauthorized tools, seven percentage points higher than the rate of breaches tied to sanctioned AI applications. The report also found that the average cost of a data breach for Canadian organizations rose 10.4% from the previous year, reaching $6.98 million.
Sadek advocates establishing governance around AI usage within organizations to mitigate these risks. He proposes forming an AI committee with personnel from departments such as legal and marketing to evaluate AI tools and promote their adoption under secure guidelines. Effective governance, he says, should be rooted in an ethical AI framework that addresses security, data integrity, and bias.
Falzon suggests implementing a zero-trust approach, in which no device or application is trusted unless the organization has explicitly authorized it. This reduces risk by limiting what employees can feed into chatbots, thereby protecting sensitive information. At Check Point, for example, employees are barred from entering research and development data into AI platforms and are briefed on the associated risks.
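In practice, a zero-trust posture like the one Falzon describes is often enforced by an egress filter that inspects prompts before they leave the corporate boundary. The Python sketch below is a minimal illustration of that idea, assuming hypothetical sensitive-data patterns and a hypothetical `gate_prompt` helper; a production deployment would use a dedicated data-loss-prevention engine and an allowlist of approved applications rather than simple regexes.

```python
import re

# Hypothetical patterns an organization might flag; real deployments would use
# a proper data-loss-prevention (DLP) engine rather than regexes.
BLOCKED_PATTERNS = {
    "financial figure": re.compile(r"\$\s?\d[\d,]*(?:\.\d+)?\s?(?:million|billion)?", re.I),
    "client email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "internal project tag": re.compile(r"\bPROJ-\d{3,}\b"),
}

def check_prompt(prompt: str) -> list[str]:
    """Return the names of any sensitive-data categories found in a prompt."""
    return [name for name, pattern in BLOCKED_PATTERNS.items() if pattern.search(prompt)]

def gate_prompt(prompt: str) -> str:
    """Zero-trust gate: refuse to forward a prompt unless it is provably clean."""
    violations = check_prompt(prompt)
    if violations:
        raise PermissionError(f"Prompt blocked; contains: {', '.join(violations)}")
    return prompt  # only a clean prompt would be forwarded to an approved AI tool

if __name__ == "__main__":
    try:
        gate_prompt("Summarize Q3: revenue was $4.2 million, contact jane@client.com")
    except PermissionError as exc:
        print(exc)  # Prompt blocked; contains: financial figure, client email
```

The key design choice is that the gate fails closed: anything that cannot be verified as safe is rejected, mirroring the zero-trust principle of denying by default.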
Raising awareness of the dangers of unauthorized AI use is central to easing the friction between employers and employees. Sadek suggests hands-on training sessions to educate staff about the consequences of using unsanctioned AI tools and to foster a sense of accountability.
Some organizations are deploying proprietary chatbots to counter unauthorized AI use, allowing them to keep sensitive data in-house while ensuring internal tools comply with established security protocols. But as cybersecurity researcher Ali Dehghantanha cautions, even internal chatbots are not immune to security vulnerabilities: during a recent audit, it took him just 47 minutes to breach a Fortune 500 company's internal chatbot and gain access to sensitive client data.
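The article does not say how that breach happened, but one common weakness in internal chatbots is a retrieval layer that queries data with the bot's own broad service-account privileges rather than the requesting user's. The sketch below illustrates the alternative, a per-user authorization check at retrieval time; the `User` model, `fetch_client_record` function, and permission names are hypothetical stand-ins, not any particular product's API.

```python
from dataclasses import dataclass, field

@dataclass
class User:
    # Hypothetical user model; permissions would normally come from an IdP/RBAC system.
    name: str
    permissions: set[str] = field(default_factory=set)

# Hypothetical in-memory stand-in for a client-data store.
CLIENT_RECORDS = {"acme": {"owner": "sales", "notes": "renewal at risk"}}

def fetch_client_record(user: User, client_id: str) -> dict:
    """Retrieve data on the requesting user's authority, not the chatbot's.

    A chatbot that queries the store with its own service account effectively
    grants every employee that account's access; checking the caller's
    permission at retrieval time closes that gap.
    """
    if f"read:client:{client_id}" not in user.permissions:
        raise PermissionError(f"{user.name} is not authorized to read {client_id}")
    return CLIENT_RECORDS[client_id]

if __name__ == "__main__":
    analyst = User("analyst", permissions={"read:client:acme"})
    intern = User("intern")
    print(fetch_client_record(analyst, "acme"))   # permitted
    try:
        fetch_client_record(intern, "acme")
    except PermissionError as exc:
        print(exc)                                # blocked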
Dehghantanha warns that industries such as banking and law increasingly rely on internal chatbots for essential functions, yet many of these systems lack robust security measures. Companies, he says, must budget for the total cost of ownership of AI technologies, including the cost of securing them, so that the tools deliver their benefits without exposing the organization to new threats.
As AI tools become further embedded in the workplace, Falzon notes that companies can no longer simply prevent employees from using them. Instead, organizations must equip staff with appropriate tools while guarding against risks such as data leakage. Striking a balance between productivity gains and data protection remains a critical challenge as companies navigate the complexities of AI technology.