Sam Altman’s ‘Gentle’ Singularity is a Dangerous Fairytale
OpenAI’s CEO promises a smooth transition to superintelligence. Here’s why, for most people, it’ll feel more like a hurricane.

A new kind of modern fairytale is being written in Silicon Valley. It’s a story of inevitable, frictionless progress, where AI is positioned to solve not just our logistical problems, but our very moral ones too.
Just recently, I wrote about Demis Hassabis’s version of this story. Now, it’s Sam Altman’s turn.
The OpenAI CEO has published his own vision, one that he calls “The Gentle Singularity”. It’s a compelling narrative, articulated with unshakable confidence, but its insistence on a “smooth” and “manageable” transition is a dangerous oversimplification.
It’s a fairytale that needs a dose of reality. And we need to talk about what it leaves out.
“Gentle” for whom?
Altman suggests that while “whole classes of jobs” will vanish, “the world will be getting so much richer so quickly that we’ll be able to seriously entertain new policy ideas we never could before.” He reassures us that we’ll adapt, just as we did after the Industrial Revolution.
This perspective, viewed from the CEO’s office of a multi-billion-dollar AI lab, feels alarmingly detached. The Industrial Revolution was anything but “gentle.”
It was a period of brutal disruption, creating immense suffering, displacement and social upheaval that lasted for generations. To point to it as a comforting example is to ignore the immense human cost of progress.
Altman’s “gentle” singularity seems to promise a frictionless societal transformation, where lost livelihoods and shattered identities are just a temporary bug that a “new social contract” will eventually patch.
For the millions of people whose expertise and careers are being devalued in real time, this transition will feel anything but gentle.
It’ll be a hurricane.
The billion-dollar “just so” story of alignment
The most jarring part of Altman’s post is how it frames the solution to the risks of superintelligence.
He presents a neat, two-step plan:
First, “solve the alignment problem”;
Then, “focus on making superintelligence cheap, widely available, and not too concentrated”.
Presenting the alignment problem as a simple prerequisite, a box to be ticked before we get to the good stuff, is frankly astonishing. This isn’t a task on a to-do list; it’s arguably the most complex technical, philosophical and ethical challenge in human history.
It’s the problem of embedding nuanced, contested human values into a system potentially millions of times more intelligent than us. We can’t even agree on what “good” looks like amongst ourselves, let alone code it into a digital god.
To say we just have to “solve alignment” is like an aspiring space colonist saying “alright folks, we just gotta solve the problem of light-speed travel.”
It’s a handwave of astronomical proportions. As I’ve written before in articles like “Who Decides If You’re Human?”, these questions of governance and value-setting are the entire game. They aren’t some preliminary step.
The politics of abundance
The fairytale continues with the promise that abundant intelligence and energy will solve all our problems. “With abundant intelligence and energy (and good governance),” Altman writes, “we can theoretically have anything else.”
That parenthetical “(and good governance)” is doing an unbelievable amount of work. It’s carrying the entire weight of human history, politics, greed, and the eternal power struggle. Abundance doesn’t automatically lead to utopia; it leads to new power dynamics.
Who, may we ask, controls this “brain for the world”? Who sets the “broad bounds society has to decide on”?
The post suggests a decentralised future, yet it’s written by the leader of one of the most centralised and powerful corporate entities on the planet. And by the way, this isn’t me criticising OpenAI’s intentions, but recognising a fundamental paradox.
A technology with the power to reshape society is being built by a small group of people, funded by immense private capital, long before any “collective will and wisdom of people” can be meaningfully harnessed.
Dear Sam, we need a more honest chat
What I find most troubling about the “gentle singularity” narrative is that it encourages us to look away from the present. It asks us to be patient, to trust the smooth curve of the exponential and focus on the wondrous destination.
But we live in a turbulent and often painful transition. We live with the algorithmic biases, the job displacement, the erosion of trust, and the concentration of power that are happening today.
Optimism about the future is essential, but it must be earned through an honest engagement with the problems of the present. We don’t need soothing fairytales about a gentle future.
We need a more candid, more robust and more inclusive conversation about how we approach the far-from-gentle reality we’re already in.
Read Sam Altman’s full blog post here: https://blog.samaltman.com/the-gentle-singularity
I’d love to know what you think. Is the idea of a “gentle” singularity a helpful vision, or does it dangerously downplay the real-world challenges we face?
If you find value in these explorations of AI, consider a free subscription to get new posts directly in your inbox. All my main articles are free for everyone to read.
Becoming a paid subscriber is the best way to support this work. It keeps the publication independent and ad-free, and gives you access to community features like comments and discussion threads. Your support means the world. 🖤
By day, I work as a freelance SEO and content manager. If your business needs specialist guidance, you can find out more on my website.
I also partner with publications and brands on freelance writing projects. If you're looking for a writer who can demystify complex topics in AI and technology, feel free to reach out here on Substack or connect with me on LinkedIn.