
Your Most Resistant Employee Might Be Your Most Important One.
44% of your youngest workers are sabotaging AI. Here's what that actually means.
The numbers are striking enough to stop you. According to a recent survey by AI platform Writer and research firm Workplace Intelligence, 29% of all employees admit to actively sabotaging their company's AI rollout. Among Gen Z workers specifically, that number jumps to 44%.
The sabotage takes many forms.
Some workers are feeding proprietary data into unauthorized tools. Others are flat-out refusing to use the platforms their employer mandates. Some are deliberately producing low-quality work to make AI look ineffective. Some are even manipulating performance reviews to skew results against the technology.
The predictable response from most organizations is to treat this as a compliance issue. Tighten the policies. Communicate the consequences. Get people in line.
That would be a mistake. Not because the behavior is acceptable - some of it clearly isn't - but because treating it as a discipline problem means you stop listening exactly when the signal is at its loudest.
Resistance is data
In 25 years of working with leaders and organizations through significant change, I have never seen meaningful resistance that didn't carry a message.
The message is rarely the one people say out loud - 'I don't like this tool' or 'this slows me down' - but it's almost always real, and it almost always contains information about something that needs attention before any tool rollout can succeed.
What the Writer survey reveals when you look past the headline is this:
30% of the workers who admitted to sabotage cited fear of losing their job as the primary motivator. That's not stubbornness. That's not Luddism. That's a human being watching the same headlines you're reading, drawing rational conclusions about what they mean, and deciding that their best defense is to make the threat look smaller than it is.
That fear is not irrational.
Anthropic's CEO publicly stated that AI could eliminate half of all entry-level white-collar jobs within five years. Stanford's Digital Economy Lab has documented a substantial decline in employment for early-career workers in AI-exposed fields. Entry-level job postings fell from 44% of all postings in 2023 to 38.6% by March of this year. The data these workers are reacting to is not imaginary.
So before you escalate the policy conversation, ask the diagnostic question: what does this resistance tell you about how your people understand their own position in the future of your organization? And a harder follow-on: what have you communicated - or failed to communicate - that has left that understanding in place?
What personality science actually shows
This is where the conversation gets more useful than the headlines allow - and where most leadership responses to AI resistance fall short:
The standard reply is to design better training, communicate more clearly, or build in accountability. All of those things have their place. None of them address the underlying dynamic that determines whether any of them stick.
The Big Five personality model - the most rigorously validated framework in personality psychology, built on decades of cross-cultural research - identifies five core dimensions that shape how people engage with novelty, uncertainty, and change. Understanding these dimensions is not about putting your people in boxes. It's about understanding the specific kind of support each person needs if adoption is going to be genuine rather than performed.
Extraversion
Extroverted people process change socially. They talk about it, work through it in conversation, and form their views in dialogue rather than in private reflection. In an AI adoption context this creates a dynamic that leaders consistently underestimate: an extroverted resistant employee is not one resistant person.
They are a multiplier. Their anxiety - or their enthusiasm - moves through the team far faster than an introvert's ever would.
By the time an AI rollout formally begins, an extroverted skeptic may have already seeded a narrative in the informal team culture that is very difficult to reverse. Conversely, an extroverted early adopter can pull three colleagues with them before any training session has taken place. The leadership implication is timing.
Identify your extroverted employees early in the process and have the honest conversation with them before the rollout is announced to the wider team. Not because you need their permission, but because you need their narrative working with you rather than ahead of you.
An extroverted person who understands what's coming, why it's happening, and what it means for them specifically becomes one of your most powerful change carriers. The same person, left to fill in the gaps themselves, becomes your most effective resistance organizer.
Agreeableness
Highly agreeable people are motivated by harmony, by meeting expectations, and by maintaining positive relationships with the people around them. In an AI rollout, this creates the most difficult adoption pattern to diagnose: they will appear to comply while quietly bypassing the tool for anything that actually matters.
They use the platform when observed. They complete the training. They say the right things in the team meeting. And then they revert to their existing workflow for real work, because the tool feels uncertain and uncertain output feels like a risk to the quality of their relationships - with clients, with colleagues, with you.
This is not dishonesty. It is the predictable behavior of someone whose primary professional motivation is to maintain the goodwill of the people around them, and who has concluded - often correctly - that producing AI-assisted work that isn't quite right will damage that goodwill more than producing slower, human work that they can stand behind.
Understanding this pattern helps leaders distinguish between genuine adoption and managed appearance. The question to ask a high-agreeableness person is not "are you using the tool?" but "where does the tool feel most unreliable to you?" That question creates space for the honest conversation that compliance tracking never will.
Conscientiousness
Highly conscientious people are methodical, quality-oriented, and thorough. They apply rigorous standards to everything they do - including new tools. Before they trust an AI output, they want to understand its limitations, test its reliability, and know where it fails. This looks like resistance in a rollout that measures adoption by speed. It is not resistance. It is the same quality orientation that makes these people exceptional performers in the first place.
Once a high-conscientiousness person has done their due diligence and committed to a tool, they often become your most rigorous AI users. They catch errors that others miss. They build reliable workflows that can be shared across the team. They are the people who will make your AI investment sustainable rather than just impressive in the first quarter.
The leadership response for high-conscientiousness people is patience and structure: give them time to test, provide clear documentation of what the tools can and cannot do, and avoid public pressure to adopt before they're ready. That investment pays back in quality and reliability.
Emotional Stability
The inverse of this dimension - high neuroticism, or emotional reactivity - is the trait most associated with visible resistance and sabotage behavior. High-reactivity people are not weak or dramatic. They are people whose nervous systems are genuinely more activated by threat signals. When the ambient threat signal in the environment is 'your job may not exist in five years,' elevated reactivity is a predictable and understandable response to real information, not a character flaw to be managed.
The critical insight here is that telling a high-reactivity person not to worry - or worse, dismissing their concern as disproportionate - does not reduce their anxiety. It adds to it, because now they feel both threatened and unheard. What reduces reactivity is genuine safety: honest communication about what is changing and what isn't, a visible commitment from leadership to navigate the transition with them rather than around them, and concrete evidence that their contribution is understood and valued.
This is why AI rollouts that lead with efficiency arguments and cost savings generate the most resistance. The implicit message is: 'AI does this faster and cheaper.' The person hearing that message fills in the unstated conclusion themselves - and they're often right to.
Beyond the Big Five: where LINC takes you
The Big Five gives you the terrain. The LINC Personality Inventory from the University of Lüneburg - one of the most sophisticated tools I work with in my leadership practice - goes considerably further. LINC maps how personality traits interact with values, core motivations, and self-concept: the story a person tells about who they are and what they're for professionally.
That level of granularity reveals a distinction that generic AI readiness surveys consistently miss: the gap between what a person can do technically and what they can do psychologically given their current self-concept. A person can have all the technical capability in the world and still struggle with AI adoption if integrating the tools requires them to revise an identity built on being the expert - the person who knows, who doesn't need assistance, whose value lies in mastery rather than process.
Conversely, a person with modest technical credentials but a flexible self-concept - someone who identifies with being a good problem-solver rather than a subject matter expert - can adapt to AI rapidly, because using AI doesn't threaten the story they tell about themselves. It fits it.
This is the level of insight that makes it possible to build genuinely individualized roadmaps for each person on your team - not a one-size rollout with different communication styles layered over the top, but a transition plan that is actually designed around who each person is and what they need to make the crossing successfully.
Where Redundancy to Relevance (R2R) comes in
Understanding your people is only half the picture. The other half is understanding the roles they're in - and whether those roles are changing in ways that directly affect the people holding them.
The R2R framework I use with leadership teams is built for exactly this.
It starts with a Role Risk Radar - a structured assessment of five signals that indicate whether a role is at risk in an AI-shifted environment: Task Overlap, Technology Bypass, Strategic Drift, Skill Gap Widening, and Quiet Disengagement.
That fifth signal - Quiet Disengagement - is the direct behavioral bridge between the role assessment and the personality conversation. When a person scores high on Quiet Disengagement in a Role Risk Radar, it tells you two things simultaneously: the role is at risk from AI, and the person in it knows it, whether they've said so or not. That combination - high role risk plus visible disengagement - is exactly where the sabotage behavior the Writer survey identified is most likely to be found.
The R2R framework then moves through Role Redesign - mapping which tasks should be automated, which should be augmented with AI, and which must remain human (or may even become more essential in the future) - and through a Bridge Plan that sequences the development of the new skills the redesigned role requires. When you layer a LINC profile of the person alongside the role redesign, you can map the development path specifically to how that person learns, what motivates them, and where their resistance is most likely to show up so you can address it before it does.
This is the combination that produces genuine transformation rather than compliance theater: a role that has been honestly assessed and redesigned, a person who has been genuinely understood, and a roadmap that connects the two in a way that is specific enough to actually follow.
What the super-users reveal - and why it matters for everyone
The same survey that documented the sabotage also profiled the other end of the adoption curve. Workers who have genuinely embraced AI tools are saving nearly nine hours per week on average - 4.5 times more than their resistant counterparts. They are three times more likely to have received a promotion and a pay raise in the past year.
The gap between the two groups is widening fast, and leadership is noticing.
69% of executives surveyed said they are considering laying off employees who refuse to adopt AI.
77% said workers who don't develop AI proficiency will simply not be considered for advancement.
Here's the brutal irony that the data makes visible: the resistance meant to protect against job loss is accelerating exactly the outcome people are trying to avoid. Workers who are sabotaging AI rollouts to buy themselves time are using that time to fall further behind the colleagues who are using AI to pull ahead.
This is not a reason to shame resistant employees. It is a reason to understand them and intervene before the gap becomes irreversible. The leader who treats the 44% as a signal rather than a problem is the one who gives those people a genuine path forward.
The conversation that actually changes things
Genuine adoption doesn't start with a policy update or a training mandate. It starts with a conversation honest enough to address the fear directly rather than talking around it.
What that conversation sounds like, practically:
"I know there's a lot of noise about AI and jobs right now. Here's what I actually think is changing in our organization, what I think it means for your role specifically, and how I'm planning to develop the people on this team through it."
And then you say those things - not a rehearsed reassurance, but a real account of where you are and what you believe.
That conversation requires you to have done your own thinking first. It requires you to have actually looked at your roles through the R2R lens and to know which ones are genuinely at risk and which ones are more stable than the headlines suggest. And it requires you to understand your people well enough to know who needs time, who needs structure, who needs the honest conversation about what's coming, and who will move fastest if you simply clear the obstacles and get out of the way.
You cannot shame or coerce people into genuine adoption. You can mandate compliance, and you will get surface compliance - people going through the motions while their relationship to the technology stays adversarial.
That produces exactly the low-quality AI-assisted work that makes tools look ineffective, which reinforces the case that AI isn't working, which is precisely what the most resistant employees were hoping for.
The 44% are not your problem.
They are carrying a signal about fear, identity, and trust inside your organization. Leaders who hear that signal clearly - and respond to it with the combination of personal understanding and structural honesty that the situation requires - come out of this period with stronger teams, not just better tools.
FREQUENTLY ASKED QUESTION
Why are employees resisting AI adoption at work?
Employee AI resistance is primarily fear-driven, not capability-driven. Research shows 30% of workers who sabotage AI rollouts cite job loss anxiety as the primary motivator - a fear grounded in real data about declining entry-level employment. Personality science, particularly Big Five research, shows resistance patterns correlate strongly with conscientiousness and emotional reactivity rather than stubbornness or unwillingness. For leaders, resistance is diagnostic data about psychological safety, identity threat, and the quality of change communication inside the organization. The R2R framework and LINC personality profiling offer a structured way to translate that data into individualized roadmaps for people and roles.
If you want to understand what your team's AI readiness actually looks like beneath the surface - at the role level and the person level - an AI Clarity Call is where that conversation starts. Bring your real situation and we'll work through it together.
If you want to bring this conversation to your leadership audience: bookbirgit.com/speak
