
What If the Real AI Revolution Is About You?
Two People Who Know More Than Most Have Been Saying the Same Thing
Eric Schmidt, former CEO of Google, spoke at TED2025 with a specific kind of urgency. Not panic - urgency. His argument: the AI revolution is underhyped. Not overhyped. Under.
This from the man who scaled Google into what it is. When Schmidt says we're not paying enough attention, that's worth stopping for.
Sam Altman, CEO of OpenAI, was at the same event.
His tone was different - more measured, more philosophical - but the underlying message landed in the same place:
We are building systems that can reason, plan, and act.
We are doing it faster than anyone predicted.
And the gap between what most organizations are doing with AI and what they could be doing is not closing on its own.
Between the two of them, they laid out one of the clearest leadership wake-up calls I've come across. Not because the technology is terrifying (it isn't), but because the window for deliberate, thoughtful engagement is shorter than most leaders are acting like it is.
What Schmidt Is Actually Warning About
Schmidt's concern isn't that AI will arrive someday. His concern is that it's already here and most leaders are still treating it as an experiment rather than a strategic reality.
He made a specific point that stayed with me:
He talked about acquiring a rocket company without knowing anything about rockets - because the founder had used AI to make highly specialized aerospace knowledge accessible, learnable, and actionable for someone without the technical background.
That's not a future scenario. That's happening now, across industries, in ways that are quietly changing the competitive landscape.
The leaders who are building with that reality in mind are making different decisions than those who are waiting for clearer signals. And Schmidt's warning is essentially that by the time the signals are unmistakable, the gap will be very hard to close.
He's not a doomsday prophet. His message is urgent, but it's addressed to people who are willing to move - not to people who are meant to be frightened.
The move he's describing is from "AI as an innovation project" to "AI as a strategic decision about how this organization competes."
What Altman Sees That Most Leaders Don't
Altman's perspective sits slightly further out on the horizon, but it connects directly to the decisions leaders are making today.
He describes AI not as a tool but as a new kind of cognitive infrastructure: systems that can reason, plan, and act on your behalf. Not automation in the traditional sense.
Delegation to something that learns faster than we do, retains everything, and improves with each interaction.
Most executives, he observes, still think of AI as a sophisticated chatbot or a productivity add-on. That's like looking at the early internet and seeing only email. The leaders who understood in 1995 that the internet was infrastructure and not a feature made different investments, built different organizations, and ended up in very different positions than those who waited for the picture to clarify.
Altman is also the most candid I've seen a senior AI figure be about the stakes at the top end.
He talks about superintelligence not as a sci-fi concept but as a technical trajectory.
He talks about the need for governance, coordination, and genuine international cooperation.
He talks about the upside: AI accelerating medical breakthroughs, solving energy problems, and distributing access to knowledge that has historically been available only to the privileged. And the downside, which he doesn't minimize.
His conclusion isn't pessimistic. It's conditional: "There is a path where this goes well. But it doesn't happen by accident."
That conditional is directed at leaders. At the people who shape organizations, build cultures, make decisions about how technology gets used and what values guide it. It's directed at you.
The Three Beliefs That Are Holding Leaders Back
Schmidt and Altman both identify the bottleneck without naming it directly. It isn't the technology. It isn't employee resistance.
It's leadership hesitation rooted in beliefs that made sense before this moment and are becoming expensive now.
"I need to be the expert."
Most senior leaders built their credibility on deep expertise in something. The instinct to stay in expert territory, to defer engagement with AI until they understand it thoroughly, feels like prudence but functions like avoidance.
The value of leadership is shifting from knowing answers to asking better questions, from directing tasks to designing systems that learn. That's not a diminishment.
It's a different kind of mastery. But it requires letting go of the identity that expertise built.
"Technology is IT's department."
In a world where AI is a productivity tool, that's a reasonable delegation. In a world where AI is cognitive infrastructure - shaping how decisions get made, how work flows, and how the organization learns - delegating it entirely is like delegating your strategy to someone else's judgment.
Altman's point is direct: develop a personal relationship with AI. Not to become an engineer. To understand what you're actually working with.
"I don't have time for this."
Schmidt's response to this is essentially that the math is straightforward: AI can reclaim your time, but only after you invest some time first.
The leaders who are using AI to offload low-value work are recovering hours per week for strategic thinking, relationship-building, and the decisions that actually require a human. The ones who don't have time to learn AI are spending that time on work that shouldn't require them.
What This Actually Looks Like
Neither Schmidt nor Altman is asking leaders to become technical experts. What they're describing is a different kind of engagement - curious, deliberate, and ongoing.
In practice, for the SMB CEOs I work with, this means a few concrete things.
It means having a personal relationship with at least one AI tool - not delegating all AI interaction to someone else on the team.
It means creating space for the organization to experiment, document what works, and share what they're learning.
It means putting AI on the leadership agenda as a strategic question, not just an operational one.
It means being honest about the ethical dimension - which decisions and work you will automate, which you won't, and why.
The leaders who navigate this era well won't necessarily be the most technically sophisticated. Instead, they'll be the ones who engaged early enough to develop genuine judgment - about where AI creates value, where it creates risk, and how to build an organization that uses it with both effectiveness and integrity.
Schmidt put it simply: "It's not AI or humans. It's AI with humans."
That's the leadership decision in front of you right now. Not whether to engage with AI - that question is already answered. But how deliberately you choose to lead that engagement, and whether you do it on your own terms or wait until events force the choice.
If this landed somewhere - a question it surfaced, something you've been sitting with about where your organization is - the AI Clarity Call is a focused conversation about what deliberate AI engagement looks like in your specific situation.
Sources:
- Eric Schmidt, TED2025: [The AI Revolution Is Underhyped](https://youtu.be/id4YRO7G0wE?si=NsdiBlld-l3d8Vzz)
- Sam Altman, TED2025: [OpenAI's Sam Altman Talks ChatGPT, AI Agents and Superintelligence](https://www.youtube.com/live/5MWT_doo68k?si=jBQBMWgIF6dtAiP-)
