Why IT Leaders Are Deeply Concerned About Shadow AI in the Workplace

A new report reveals growing concern among IT leaders about the rise of Shadow AI—the unauthorized use of AI tools within enterprise environments. According to a survey of 200 IT directors and executives from large U.S. organizations (1,000+ employees), 46% are extremely worried about Shadow AI, and 90% cite significant security and privacy risks.

The report, released by unstructured data management firm Komprise, highlights the dangers of employees using generative AI tools like ChatGPT, Claude, and others without IT approval. These unsanctioned tools can lead to data leaks, inaccurate results, legal challenges, and even reputational damage.

Krishna Subramanian, co-founder of Komprise, warns that Shadow AI poses significant risks, including the leaking of sensitive information and the inappropriate use of copyrighted content. According to the survey, 13% of organizations have already suffered financial or reputational harm from such incidents.

Beyond Shadow IT: Why Shadow AI Is More Dangerous

Unlike traditional Shadow IT, where departments may use unauthorized software or cloud tools, Shadow AI has broader implications. It can process, learn from, and disseminate sensitive company data, often outside the organization’s secure perimeter.

James McQuiggan of KnowBe4 warns that Shadow AI can create security vulnerabilities, allowing employees to accidentally input confidential data, bypassing corporate safeguards.

Melissa Ruzzi, Director of AI at AppOmni, notes that many AI tools train their models on user input and lack strict data security standards, making any data submitted to them a potential exposure risk.

Enterprise Vulnerability on the Rise

Shadow AI doesn’t stop at text-based tools. Embedded AI features in common workplace applications can also operate under the radar. These AI features, while improving productivity, may process personal or business-critical data in ways IT teams can’t fully monitor or control.

Unauthorized AI usage could lead to billions in losses, particularly in federal agencies, healthcare, and financial sectors, according to Krishna Vishnubhotla from mobile security firm Zimperium.

Why Shadow AI Is So Widespread

One reason Shadow AI is spreading rapidly is its accessibility. “All it takes is a browser,” Subramanian explains. “Users can unknowingly submit customer data, code, or proprietary information to these AI tools.”
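The exposure Subramanian describes is the kind of thing a basic data loss prevention (DLP) check tries to catch before text leaves the organization. As a minimal sketch, and assuming purely illustrative patterns (a real DLP policy covers far more categories, and none of these regexes come from the report):

```python
import re

# Hypothetical patterns for sensitive content; illustrative only.
SENSITIVE_PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "api_key": re.compile(r"\b(?:sk|key)[-_][A-Za-z0-9]{16,}\b"),
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def flag_sensitive(text: str) -> list[str]:
    """Return the names of sensitive patterns found in the text."""
    return [name for name, pat in SENSITIVE_PATTERNS.items() if pat.search(text)]

# A prompt an employee might paste into a public chatbot:
prompt = "Summarize this ticket from jane.doe@example.com, key sk-abcdef1234567890XY"
print(flag_sensitive(prompt))  # ['email', 'api_key']
```

A check like this could run in a browser extension or proxy and warn the user, rather than block outright, in line with the "educate, not ban" approach the experts below recommend.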

The rapid expansion of AI-enabled features in enterprise tools is also straining security teams, since embedded AI can move data along paths that sidestep existing data loss prevention controls.

Satyam Sinha, CEO of Acuvity, emphasizes the low barrier to entry: “Employees don’t need training to use Gen AI tools—they just start using them. That’s why Shadow AI is more pervasive than Shadow IT.”

A Path Forward: Mitigation, Not Bans

Experts agree that banning AI tools isn’t a viable solution. Instead, companies must educate, regulate, and empower employees with approved tools and policies.

Kris Bondi, CEO of Mimoto, stresses the need for clear communication. “Employees need to understand which AI tools are safe and why. Blanket bans only drive stealthier usage.”

Ruzzi adds that companies should adopt SaaS security tools capable of detecting AI usage across applications, not just chatbots. “Early detection and containment are key to minimizing fallout.”
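The detection Ruzzi describes is typically done by commercial SaaS security platforms, but the core idea can be sketched simply: match outbound traffic against known generative-AI endpoints. The domain list and log format below are assumptions for illustration, not taken from any vendor's product:

```python
# Hypothetical domains of popular generative-AI services; a real tool
# would use a maintained, much larger catalog plus richer signals.
AI_DOMAINS = {"api.openai.com", "chat.openai.com", "claude.ai", "gemini.google.com"}

def detect_ai_usage(log_lines):
    """Return (user, domain) pairs for connections to known AI endpoints.

    Assumed egress-log format: "<user> <domain> <port> ..." per line.
    """
    hits = []
    for line in log_lines:
        user, domain = line.split()[:2]
        if domain in AI_DOMAINS:
            hits.append((user, domain))
    return hits

logs = [
    "alice api.openai.com 443",
    "bob intranet.corp.local 80",
    "carol claude.ai 443",
]
print(detect_ai_usage(logs))  # [('alice', 'api.openai.com'), ('carol', 'claude.ai')]
```

Surfacing who is using which tool, rather than silently blocking, gives security teams the visibility needed for the "early detection and containment" Ruzzi recommends.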

McQuiggan suggests organizations identify AI usage and develop comprehensive governance strategies, integrating AI governance into their overall security framework for greater control over future developments.