
Stay Human, Collaborate, or Evolve?

March 6, 2026 · 6 min read

Three paths forward in the AI era. Do we resist, merge, or transform? Why the answer isn't as simple as picking one.

There is a spectrum of possible human responses to AI that is rarely articulated clearly. At one end: stay human, resist integration, preserve the unaugmented self as something worth protecting. At the other: evolve, merge, embrace the possibility that what comes after human is better than what we are. In the middle: collaborate — a word that sounds moderate but contains its own tensions.

I do not think any of these positions is simply right. But I do think the conversation between them is one of the most important we are not having.

The case for staying human

The argument for staying human — for maintaining the unaugmented self as a coherent identity worth protecting — rests on the value of authenticity. There are things that only happen in the space between effort and limitation. A painting made by someone who has to struggle with the medium. A relationship built through the specific friction of two imperfect people. A life defined by choices made under genuine constraint.

When we remove the limitations, we also remove the meaning that the limitations make possible. A chess victory in a game played unaided by two humans is different from a victory in a game assisted by AI, not because the moves are different but because the significance is different. Staying human is an argument for preserving the conditions under which human things remain meaningful.

The case for collaboration

Collaboration has the pragmatic argument: we are already augmented beings. Glasses, calculators, search engines, smartphones — we have integrated tools into our cognition for millennia. AI is another tool in a long sequence. The relevant question is not whether to integrate but how to integrate thoughtfully — enhancing capability without surrendering judgment, expanding reach without losing depth.

The collaborationist position at its best is not passive. It requires ongoing intentionality about which augmentations are genuinely beneficial and which are erosive, which integrations amplify humanity and which dilute it.

The case for evolution

The evolutionary argument says: the self you are protecting was never a fixed thing. You are already the result of millions of years of change. The human of 2126 will be different from the human of 2026 in ways we cannot imagine. Resisting the integration of AI into human identity is clinging to a snapshot, not to something eternal.

This position requires the most honesty about its own risks. Evolution by definition changes what you are. The question is whether the thing that emerges is genuinely better — richer, freer, more capable of the things that matter — or just different.

The question beneath the question

What all three positions circle around is a question they rarely state explicitly: what do we think human life is for? What are we trying to preserve, or enhance, or transcend? Without clarity on that question, the debate about how to respond to AI is an argument about means with no agreed end.

I do not have a clean answer. I think the honest response is to hold the question with more care than the urgency of technological change usually allows. To slow down before choosing a position. To notice what we value and why, before deciding how to protect or transform it.

evolution · AI coexistence · reflection
