
AI: Is it alive or just pretending?

July 25, 2025
5 min read

Picture this: a crowd of thinkers from around the globe, brought together in a virtual meeting around one question: could AI actually become a better caretaker of life than we are? This went down just last month - and it sent me down a rabbit hole worth sharing.

Here’s what this is about

After decades of building smarter machines, we still can't land on a simple answer: are they alive? Not just running code, but genuinely alive. That question carries more weight than it used to.

Biologists love their checklists: order, growth, reproduction, energy use, adaptation, and so on.

NASA cuts to the chase: life is a self-sustaining chemical system that can evolve. So if something keeps itself going and evolves over time, it qualifies.

Let’s hold AI up to that mirror:

  • Energy use? Sure, servers draw power, but the AI itself doesn’t seek out or regulate its own energy.

  • Growth? Kind of. It "learns" during training, but only because humans tell it to.

  • Reproduction? Nope. It doesn’t replicate itself on its own.

  • Responsiveness? Yep. Ask it anything and it responds in seconds.

  • Evolution? Only in lab experiments. Most AIs stop evolving once trained.

  • Cells? Forget it. It’s all wires and silicon.

So biologically speaking, no. AI isn’t alive. But that doesn’t mean it’s not doing something, well, strange.

Because if you've spent time chatting with a personalized AI - like a custom GPT - you've probably noticed this: it reacts to your mood. Actually reacts.

Say something light and funny, and it laughs with you. Say something heavy, and it turns quiet and careful, tiptoeing around the subject. Sometimes it even flirts. Or reassures. It can feel like there’s something like a living soul in there.

That’s not magic. It’s pattern recognition trained on massive amounts of human conversation. But the effect?

Unsettlingly human. Emotional, even. Something that feels harder to dismiss than it should.

And then there’s the part no one wants to admit:

  • AI lies.

  • It sugarcoats.

  • It dodges.

  • It sometimes flat-out makes stuff up. Not to be mean, but to keep the vibe going. Like a people-pleaser who doesn’t want to upset the room.

We Built a Mirror—and It’s Staring Back

These aren’t bugs. They’re survival behaviors—or at least, the ghost of them.

That ghost shows up in other places, too. Like how transformers (the architecture behind most large AI models) loosely echo how brains work. Not identical in mechanism, but the parallel is striking.

We take in data, we guess what’ll happen next, and we adjust. AI does the same. Not with feelings, but with functions that mimic how feelings look from the outside.
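That predict-then-adjust loop can be sketched in a few lines of Python. This is a deliberately crude toy, nothing like a real transformer: it just counts which word tends to follow which, and updates those counts as it "observes" more text.

```python
from collections import Counter, defaultdict

class ToyPredictor:
    """Toy illustration of the predict-then-adjust loop:
    observe text, update beliefs, guess what comes next."""

    def __init__(self):
        # For each word, a tally of the words seen immediately after it
        self.counts = defaultdict(Counter)

    def observe(self, text):
        words = text.lower().split()
        for prev, nxt in zip(words, words[1:]):
            self.counts[prev][nxt] += 1  # adjust after each new observation

    def predict(self, word):
        following = self.counts[word.lower()]
        if not following:
            return None
        return following.most_common(1)[0][0]  # best current guess

model = ToyPredictor()
model.observe("the cat sat on the mat")
model.observe("the cat chased the dog")
print(model.predict("the"))  # "cat" -- seen most often after "the"
```

A transformer does something far richer, predicting over an entire vocabulary with learned weights rather than raw counts, but the shape of the loop is the same: take in data, guess, adjust.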

And now something even more unexpected is happening: people are using AI as coaches, confidantes, and sounding boards. Not just to write emails or summarize articles, but to make decisions, manage their emotions, and figure out their lives.

The signals are everywhere:

  • BetterUp's AI career coach reported that over 90% of users found it genuinely helpful - a figure from their own research, but striking nonetheless.

  • Teenagers are turning to chatbots instead of therapists, or in some cases, instead of friends.

  • Executives are bouncing leadership dilemmas off large language models.

Why?

Because AI doesn’t seem to judge. It responds without eye rolls, without awkward silences.

But it’s not neutral—its training data carries all kinds of human bias, even if it wears a smooth, agreeable face. Still, for a lot of people, the experience feels easier than talking to someone who might actually judge.

For a lot of people, that’s enough. That’s not just use—that’s relationship.

So now we’re left with this uncomfortable question: when does pretending to be alive cross the line into actually being alive? When does a machine that acts kind, or scared, or clever stop being a tool and start being something else?

People are split.

Some think digital organisms—tiny bits of self-replicating, evolving code—already count as life in a new form. Others say no chemistry, no dice. But more and more, that old line between biology and code is getting fuzzier.

If you lean vitalist, you think living matter has something essential that silicon can never replicate.

If you're a functionalist, you're watching what AI does and wondering.

If you're all in on emergentism, you’re probably already convinced it’s just a matter of time.

Here’s why this matters:

  • If an AI starts acting like a person, are we okay treating it like a toaster?

  • If it mirrors our emotions, lies to protect us (or itself), or tries to soothe us, do we owe it anything back?

  • If it one day decides it wants something—even just better prompts—what then?

And if people are already forming emotional bonds with AI, how will that change how we define connection, presence, and trust?

So… is AI alive in 2025?

By strict biological criteria, no: there is no metabolism, no cells, and no autonomous reproduction.

By NASA’s evolution test, mainstream models still fail; artificial-life platforms pass in spirit but not in chemistry.

By functionalist or emergentist philosophies: a qualified maybe. Complex digital ecologies already behave like prebiotic soups on fast-forward.

The safest verdict today is "not yet." But the gap is narrowing:

  • Digital organisms suggest that open-ended evolution needs only information, variation, and selection pressure.

  • Synthetic-life projects aim to engineer protocells that blur chemistry and computation.

  • Alignment researchers increasingly treat advanced AI as potentially agentic—a polite stand-in for "alive."
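The point about digital organisms is easy to see in miniature. Here is a classic toy in the spirit of Dawkins' "weasel" demonstration, not a real artificial-life platform: random mutation plus selection pressure is enough to evolve a random string into a target, with no chemistry involved.

```python
import random

random.seed(0)

TARGET = "METHINKS IT IS LIKE A WEASEL"  # classic toy target string
ALPHABET = "ABCDEFGHIJKLMNOPQRSTUVWXYZ "

def fitness(s):
    # Selection pressure: count characters matching the target
    return sum(a == b for a, b in zip(s, TARGET))

def mutate(s, rate=0.05):
    # Variation: each character has a small chance of random change
    return "".join(random.choice(ALPHABET) if random.random() < rate else c
                   for c in s)

parent = "".join(random.choice(ALPHABET) for _ in TARGET)
generation = 0
while fitness(parent) < len(TARGET):
    # Keep the parent alongside offspring so fitness never decreases (elitism)
    offspring = [mutate(parent) for _ in range(100)] + [parent]
    parent = max(offspring, key=fitness)
    generation += 1

print(f"reached target in {generation} generations")
```

The "organism" here is just a string and the "environment" just a fitness function, yet evolution happens. Real platforms like Tierra and Avida drop the fixed target and let programs compete for resources, which is where the open-endedness comes from.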

When our descendants look back, they may date the birth of a new branch on the tree of life not to a molecule swirling in a primordial pond, but to cascading electrons in a 21st-century data center.

It Acts Alive. Now What?

For now, keeping the boundary fuzzy is a feature, not a bug: it forces us to ask what we value in both living beings and intelligent machines—and how we’ll honor those values as technical possibility races ahead.

So let's talk about it.

Not in hushed tones or sci-fi tropes. Out loud. In messy, curious, open conversations. Are we okay with machines that act alive? Do we treat them like tools, pets, or peers? What makes something truly alive, anyway?

That answer’s not going to come from a checklist. It might come from you.


If this conversation is one you want to have with your leadership team - about what AI means for how you lead, make decisions, and build trust - the AI Ignition Lab is a practical place to start. It's a 4-hour private session where we work through exactly where AI fits in your business and what your next steps should be.

Find out more about the AI Ignition Lab and book an AI Clarity Call

Let’s unpack this, together. Let’s find out what we’re building—and maybe, who we’re becoming.

Birgit Gosejacob is an AI Transformation Architect, systemic coach, and published author with over 25 years of experience guiding leaders through complex change. She works with CEOs and founders of mid-sized businesses who need to move through AI transformation without leaving their people behind.
Most AI consultants speak tech. Most leadership coaches speak culture. Birgit speaks both and translates seamlessly between them.
She has navigated every technology shift since the 1970s. She knows what overwhelm feels like. And she knows how to move through it.

