A lone figure working in silhouette at a computer while colleagues stand unaware in the background — representing Shadow AI and hidden technology use in the workplace.

Shadow AI: The Hidden Signal Your Culture Is Ready to Transform

March 25, 2026 · 9 min read

There's a conversation your employees are having right now that you're not part of.

It goes something like this:

"Have you tried using ChatGPT for this?"

"Just paste it into Claude, it'll write it in two minutes."

"I've been using this tool for our client proposals for three months. Nobody knows."

This is called Shadow AI.

And it's already inside your organization. Not as a rumor. Not as a future risk. Right now.

A recent IBM study found that nearly 60% of employees are using AI tools at work that their company hasn't approved. In companies without a clear AI policy, that number is almost certainly higher.

Your team isn't malicious. They're resourceful:

They found tools that help them do their jobs faster. They started using them quietly — because asking permission felt risky, and waiting for corporate guidance felt impossible given the pace of change.


Three Things That Are Probably Happening Right Now

1. Company data is leaving the building.

Every time an employee pastes a client proposal, an internal process document, or a customer email into a public AI tool, that data exists somewhere outside your control. What your AI policy says about this doesn't matter if employees don't know the policy exists, or if no policy does.

2. Quality is inconsistent in ways you can't see.

Some employees are getting dramatically better outputs with AI. Others are producing work that looks fine but is based on hallucinated data, inaccurate sourcing, or outputs they didn't verify. The gap between those two groups is growing invisibly, and it's affecting your clients' experience without you knowing.

3. Your best people are building habits you didn't design.

The employees who figure out AI tools fastest tend to be your high performers, the resourceful ones, the problem-solvers.

  • They're not waiting for training.

  • They're learning by doing.

  • And they're building workflows and dependencies on tools you haven't evaluated, vetted, or even heard of.

Wondering if this is already happening in your organization? That's the conversation worth having.

Book an AI Clarity Call

Why Shadow AI Flourishes in "Unsafe" Cultures

Here's what most leaders miss when they first encounter Shadow AI:

The question should not be "Why are they using unauthorized tools?"

But: "Why didn't they feel safe enough to ask?"

This is where Shadow AI becomes a leadership mirror. The behavior itself (capable people quietly solving real problems without raising their hand) tells you something important about the culture you've built. Or the culture you haven't built yet.

Psychological safety (the belief that you can speak up, try new things, and ask uncomfortable questions without fear of punishment) is not a soft leadership concept. It's a competitive advantage.

Teams with high psychological safety innovate faster, adapt better, and surface problems before they become crises.

Shadow AI flourishes in its absence.

When people don't feel safe to say "I've found something that could help us work better, can we talk about it?", they don't stop using the tool. They just stop telling you.

I've worked with leadership teams for over 20 years. I've watched this pattern play out across industries, sectors, and organizational sizes:

  • When trust is low and hierarchy is rigid, people work around systems rather than through them.

  • When trust is high and speaking up is genuinely rewarded, they bring you the problem before it becomes your crisis.

The Shadow AI crisis isn't just about AI. It's about whether your organization has the culture that can navigate rapid technological change, or one that will be repeatedly surprised by it.


The Leadership Mistake Most CEOs Make

When they discover Shadow AI, most leaders respond in one of two ways:

  • "Shut it down." Issue a policy banning unauthorized AI use. Audit who's been doing what. Treat it as a compliance failure.

  • "Let it run." Assume that since nothing's broken yet, the risk isn't real.

Both are wrong:

"Shut it down" kills the momentum of your most resourceful people and drives the behavior further underground.

  • You don't solve Shadow AI by banning AI.

  • You solve it by filling the vacuum that created it.

And here's the deeper problem with the punitive response: it actively destroys the psychological safety you need for successful AI transformation. The moment employees believe they'll be penalized for trying something new, even something that wasn't explicitly approved:

  • They stop experimenting.

  • They stop surfacing insights.

  • They stop being the resourceful, adaptive people you need them to be.

"Let it run" is how small data exposure events become large legal liabilities, how inconsistent quality becomes a client trust issue, and how the gap between your intentional AI adoption and your actual AI usage becomes unbridgeable.

The right response is a third option.

It starts with a leadership decision, not an IT directive. And it requires something most technology conversations never include:

an honest reckoning with the culture you've built.


Trust as the Foundation of AI Transformation

Every successful AI transformation I've been part of or studied has one thing in common:

The leaders built trust before they built systems.

Not trust in technology. Trust between people.

Trust that leadership is making decisions in the interest of the team, not just the bottom line. Trust that speaking up about AI use (what's working, what's concerning, what people are quietly doing) will be met with curiosity rather than control. Trust that the organization sees transformation as something people go through together, not something that gets done to them.

This is why my work always starts with organizational readiness before it starts with tools. Because AI doesn't just test your systems. It tests your culture. And AI will find every gap in trust, every unspoken fear, every place where people don't feel safe to be honest.

Shadow AI is one of the most honest signals your culture can send you. It tells you:

  • Your people are motivated and adaptive.

  • The organization hasn't created a safe enough channel for that energy.

  • There's a leadership gap between where change is happening and where it's being led.

That gap is yours to close. And the way to close it isn't a policy document. It's a conversation.


What Actually Works: The Visibility Conversation

Before policies. Before audits. Before vendor assessments.

Ask your leadership team, all of them, not just IT, these three questions in a single meeting:

"What AI tools are people on your team currently using?"

Make it safe to answer honestly. You're not looking for names, you're looking for patterns. This requires you to set the tone explicitly:

"I'm asking because I want to understand what's working for people, not because anyone is in trouble."

That sentence matters. Say it before the first answer comes.

"What are they using them for?"

The answer to this is your treasure map. The use cases employees have already self-selected, the tasks they trusted AI enough to try on their own, are your highest-potential starting points for intentional adoption.

Your people have already done the discovery work.

You just need to create the conditions where they can share it.

"What concerns do you have about what you heard?"

This is where the real risks surface. Not theoretical risks. The specific things people are worried about in your specific organization, with your specific clients and data.

This question also signals something important to your leadership team: that their concerns are valid and wanted, not inconvenient.

That conversation takes 90 minutes.

It will tell you a lot about your AI readiness and your cultural readiness.


The Shadow AI Crisis Is Actually an Opportunity

Here's what I've learned after years of working with leadership teams navigating technology disruption:

The presence of Shadow AI is not a failure of your organization. It's evidence that your people are

  • adaptive,

  • resourceful,

  • motivated to do their jobs better.

That's exactly the energy you need for successful AI transformation.

If you respond to this with control, you'll trigger a crisis.

But if you respond with structure (clear frameworks, intentional conversation, and a path that channels the energy already in motion), you can turn it into a competitive advantage.

And crucially: the leaders who win are the ones who build a culture where people feel safe bringing AI (the tools they've found, the experiments they've run, the questions they have) directly to leadership. That culture doesn't eliminate Shadow AI. It transforms it into something far more valuable:

  • Collective intelligence.

  • Distributed experimentation.

An organization that learns faster than its competitors because its people feel safe enough to share what they're learning.


Before Q2 Begins: Three Actions

1. Have the visibility conversation.

  • Schedule 90 minutes with your leadership team this week.

  • Ask the three questions above.

  • Don't solve anything in that meeting, just listen and map what you learn.

  • Explicitly name that this is a safe conversation. That framing is not optional.

2. Create a holding framework, not a policy.

Something simple:

"While we develop our formal AI guidelines, here's what we ask:

  • No client data in public tools.

  • Verify any AI output before using it externally.

  • If you find something useful — tell your manager so we can evaluate it together."

Three lines of guidance are infinitely better than silence. And the third bullet, "tell your manager," is the trust signal that matters most.

It says: bringing this forward is welcomed, not penalized.

3. Decide who owns the conversation.

Not IT.

Not legal.

A business leader who understands both the operational opportunity and the risk. Someone your team will actually talk to honestly about what they're doing.

Psychological safety isn't built by policy; it's built by specific people who show up in a specific way. Choose the right person.


The Leadership Decision Only You Can Make

The Shadow AI crisis is invisible right now.

Your competitors' teams are having the same quiet conversations your team is.

The difference, the competitive difference, will be which CEO builds the culture where that energy surfaces openly rather than operating underground. Which leader creates the conditions where people feel safe to say "I found something that could help us" before it becomes a liability.

  • You can't automate trust.

  • You can't deploy psychological safety.

  • You can't buy the culture that makes AI transformation actually work.

But you can build it. One conversation at a time. Starting this week.

That's a leadership decision.

And it's yours to make.

If this raised questions about what's already happening in your organization, that's exactly the right starting point. I work with a small number of SMB CEOs at a time to turn that uncertainty into strategic clarity — and results within 90 days. If you're ready to find out where you actually stand, let's talk.

Book an AI Clarity Call


Birgit Gosejacob

Birgit Gosejacob is an AI Transformation Architect, systemic coach, and published author with over 25 years of experience guiding leaders through complex change. She works with CEOs and founders of mid-sized businesses who need to move through AI transformation without leaving their people behind.

Most AI consultants speak tech. Most leadership coaches speak culture. Birgit speaks both, and translates seamlessly between them.

She has lived through every technology shift since the 1970s. She knows what overwhelm feels like. And she knows how to move through it.
