
Before AI: Why Trust Is Your First Step Towards Technological Transformation
AI Doesn't Fix Organizational Dysfunction. It Amplifies It.
I want to start there because it's the thing most leaders don't hear until it's too late.
An organization with poor communication, siloed teams, and a leadership layer that doesn't have the trust of its people will not become a better organization when AI arrives.
It will become a faster, more efficient version of its dysfunction.
The problems that were manageable will scale. The trust deficit that everyone had quietly learned to work around will become impossible to ignore.
This is the conversation I wish more AI consultants were having - because the organizations that are struggling most with AI adoption are rarely struggling with the technology.
They're struggling with the conditions the technology landed in.
What AI Actually Requires
Before any AI tool earns its investment, three things need to be in place:
trust between leadership and teams,
genuine collaboration across functions,
psychological safety that makes experimentation possible.
These aren't cultural nice-to-haves. They are the operating conditions without which AI either stalls at the pilot stage or delivers results that no one can build on.
Trust between a leader and their team is foundational in a way that becomes visible the moment a significant change is introduced.
When people don't trust that leadership has their interests at heart, every new technology - especially one as loaded as AI - gets filtered through the question: is this going to be used against me? That question doesn't get asked out loud. It shapes behavior quietly:
People comply without engaging.
Adoption numbers look fine.
Real capability never develops.
The trust conversation has to happen before AI enters the room.
It starts with honest, specific communication about what is changing and why - not a town hall slide deck about the exciting future, but real answers to the questions people are actually sitting with.
"Will my role change?"
"Could I be replaced?"
"What happens to the people whose jobs look most like what AI can do?"
Leaders who answer those questions directly, even when the answers are uncomfortable, build the foundation that makes everything else possible.
The Silo Problem
AI's effectiveness multiplies when information flows freely across an organization. Most organizations are not built that way.
Departments that guard their own data,
teams that compete rather than collaborate,
incentive structures that reward individual performance over shared outcomes.
These patterns don't disappear when AI is introduced.
They become the constraint that limits what AI can actually do. A tool that could transform how an organization makes decisions is only as good as the quality and completeness of the information it can access.
Breaking down silos is leadership work, not technology work.
It requires changing how people are measured and rewarded, not just how they are tooled. It requires leaders who model knowledge-sharing rather than information-hoarding - who are visibly willing to bring other functions into their territory before they're asked to.
That kind of culture doesn't emerge from a new software rollout.
It has to be built deliberately, and it has to come from the top.
Psychological Safety Is Not a Soft Requirement
Fear is one of the most reliable predictors of failed AI adoption, and it operates at every level of an organization.
Fear of being replaced.
Fear of making a mistake with a tool that is new and not yet understood.
Fear of looking incompetent in front of colleagues who seem more comfortable with the technology.
Fear of being held accountable for a decision made partly on the basis of an AI recommendation.
None of these fears go away on their own.
They go away when leaders create an environment where it is genuinely safe to experiment, ask basic questions, and be wrong without consequence.
In practice, that means:
treating early failures as data rather than performance issues.
leaders visibly engaging with AI tools themselves, including the moments when the tool gets it wrong.
building enough psychological safety that people raise concerns before they turn into resistance.
One of the clearest examples of what this looks like in practice is Morgan Stanley's implementation of its AI-based wealth management system, Next Best Action.
Before any rollout, the company engaged extensively with its financial advisors - multiple rounds of consultation, real conversations about concerns, genuine incorporation of feedback into how the system was designed and positioned.
Advisors felt the technology was being built with them rather than deployed at them.
The result was adoption rates above 90% - not because the technology was irresistible, but because the conditions for trust had been built first.
That process took deliberate leadership time and attention.
It required leaders who understood that the human response to the technology was as important as the technology itself. Most organizations skip that work. Morgan Stanley didn't, and the results reflect it.
Source: Columbia Business School, December 2024 (Todd Jick and Stephan Meier)
What This Means in Practice
The question I ask leaders before we talk about AI tools, AI strategy, or AI capability is simpler than most of them expect:
"Do your people trust you?"
Not trust you to make the right strategic calls. Trust you to tell them the truth when things are uncertain, to consider their interests when decisions affect their roles, and to create enough safety that they can tell you when something isn't working.
If the answer is a genuine yes, AI transformation is hard but navigable.
If the answer is uncertain, or if the leader hasn't really thought about it, that's where the work starts - because no implementation methodology, no matter how well-designed, will compensate for a trust deficit at the foundation.
The organizations that will use AI well over the next decade are not necessarily the ones moving fastest right now. They are the ones building the right conditions first - the ones whose teams will actually engage with the technology, experiment with it honestly, and help the organization learn what works.
AI is a powerful amplifier. What it amplifies is up to you.
If you're asking yourself where your organization actually stands on this - that's the conversation the AI Clarity Call is designed for. Thirty minutes to get specific about the conditions you're working with and what would need to change.
