Shadow AI: When teams secretly introduce AI into their organization

“Shadow AI,” the use of personal AI tools for work without the organization’s approval, is a reality for many organizations. And a major threat. Let’s take a closer look.

Shadow IT is a phenomenon well known to IT departments. But since the emergence of ChatGPT and other generative AI tools, it has taken on a new dimension, with employees using these chatbots to write emails, create presentations, draft documents, or refine their strategy… often without telling their management.

Management, however, is not fooled: according to a Dataiku & Harris Poll study, 94% of leaders suspect their employees of using generative AI tools without authorization! The figure may be somewhat inflated, but it reflects the potential scale of the issue. Meanwhile, more than one in three executives say they use generative artificial intelligence (AI) tools at work at least once a week, according to a survey by France’s Association for Executive Employment.

Shadow AI generally takes hold where rules are absent, where use is outright prohibited, or where unfamiliarity with the technology breeds negligence. It carries real risks: security breaches, regulatory non-compliance, and lapses in data governance.

Samsung, among others, learned this the hard way. In early 2023, the South Korean electronics giant discovered that engineers had accidentally shared sensitive company data (source code, meeting notes)… by pasting it into ChatGPT to check for errors! It is worth remembering that such chatbots can use the data they receive to improve their future responses.

So what should organizations do?

In June 2025, a study conducted in France by the National Institute for Research in Computer Science and Automation (Inria) and datacraft, a group of data scientists, examined this behavior, which presents both strategic opportunities and a challenge for securing practices.

From it, we learn that organizations can adopt one of four approaches:

  • Outright deterrence
  • Ignorance and passivity
  • Permissiveness
  • Structured support and supervised innovation

Rather than hindering informal uses, pioneering organizations turn them into a lever for transformation. “While all pioneering organizations have gone through these different states, it is the speed with which they moved from one state to the next that sets them apart,” the report notes.

This shift must unfold in three phases:

  • Management – making uses visible, assessing risks, and establishing a minimal framework
  • Sharing – “socializing” practices to extract collective value (workshops, presentations, debates…)
  • Tooling – deploying secure environments, usage guides, dynamic charters, and appropriate training

The objective: to recognize and connect know-how that is by nature diffuse, and to move from improvised tinkering to a deliberate, secure strategy.

“The response to informal uses can be neither prohibition by default nor unlimited openness. It requires establishing a framework of trust, clear enough to provide security and flexible enough to evolve,” the report states.

It therefore recommends:

  • Defining which data and uses are permitted or prohibited (a minimal sketch of such a rule follows this list)
  • Clarifying the responsibilities of both employees and managers
  • Providing light but responsive oversight mechanisms
  • Adopting iterative governance, connected to real practices
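To make the first recommendation more concrete, here is a minimal sketch in Python of what a filter for prohibited data might look like before a prompt leaves the organization. Everything in it is hypothetical: the patterns, the `screen_prompt` function, and the logging setup are illustrative assumptions, not part of the report, and any real implementation would follow each organization’s own charter.

```python
import re
import logging

# Hypothetical, organization-specific rules: each label maps to a pattern
# flagging data that the internal charter declares off-limits for external AI tools.
PROHIBITED_PATTERNS = {
    "email_address": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "internal_project_code": re.compile(r"\bPRJ-\d{4}\b"),  # made-up codename format
    "source_code_marker": re.compile(r"\bdef |\bclass |#include\b"),
}

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("ai-usage-gateway")


def screen_prompt(prompt: str, user: str) -> bool:
    """Return True if the prompt may be sent to an external AI tool.

    Blocked prompts are logged rather than silently dropped, so that
    oversight stays light but responsive.
    """
    violations = [label for label, rx in PROHIBITED_PATTERNS.items() if rx.search(prompt)]
    if violations:
        log.warning("Blocked prompt from %s: matched %s", user, ", ".join(violations))
        return False
    log.info("Prompt from %s allowed", user)
    return True


if __name__ == "__main__":
    # Example: a prompt containing an email address is refused.
    print(screen_prompt("Summarize the complaint from jane.doe@example.com", user="analyst_42"))
```

The design choice matters more than the specific patterns: the check is transparent to the employee and every decision leaves a trace, which keeps oversight light but responsive and feeds the iterative governance the report calls for.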

Such a guide of good practices is all the more necessary at a time when many companies are rushing to integrate AI into their daily operations. As Patrick Opet, chief information security officer at the American bank JPMorgan Chase, put it bluntly:

“We see organizations deploying systems they fundamentally don’t understand. Solution providers must prioritize security over the race for new features.”

AI without conscience is but the ruin of the soul.