WTF is the Gentle Singularity?
Does Sam Altman's recent blog underestimate the challenge of AI alignment?
Last week Sam Altman published a techno-utopian blog post titled The Gentle Singularity.
It all reads a little like All Watched Over by Machines of Loving Grace - a poem referenced by Dario Amodei without irony, and by filmmaker Adam Curtis with plenty of it. Rather than 'Gentle Singularity', you might be tempted to suggest 'Out of Touch With Reality' as the better description of those running the bigger AI companies.
‘In the most important ways, the 2030s may not be wildly different. People will still love their families, express their creativity, play games, and swim in lakes.’
You what? It reads like he’s been taking doses of Yuval Noah Harari.
Let’s face it, many people alive today would feel a certain revulsion at merging with AI, which is what a 'Singularity' really implies in this context. Others feel a less abstract revulsion that AI is currently scything through creative copyright, flooding the internet with synthetic sludge, and threatening the displacement of large chunks of the workforce. There is nothing especially gentle about this.

All Watched Over by Machines of Loving Grace, Richard Brautigan (1967)
This sci-fi terminology borrows heavily from the physics of Black Holes - Stephen Hawking’s A Brief History of Time being the definitive popular guide. In physics, the Singularity is the point at the centre of a Black Hole where density and the curvature of spacetime are thought to become infinite. Not, in short, a particularly hospitable destination.
Altman opens his post with: ‘We are past the event horizon’. It’s an interesting metaphor.
The event horizon is the outer boundary of a Black Hole - once crossed, not even light can escape. Any matter that passes this invisible threshold is dragged towards the Singularity. Slowly at first. Then with accelerating, irreversible force. Until, well, obliteration.
So yes, if we have indeed crossed the AI 'event horizon', it’s not exactly a comforting thought.
Before we go any further, a little more unfurling of this techno-babble is needed.

Sam Altman
What is the Singularity, anyway?
The term was popularised by Ray Kurzweil in his 2005 book The Singularity is Near, and again in The Singularity is Nearer (2024), where he forecasts how and when humans might merge with AI, and what that could mean.
To Kurzweil, the Singularity is a positive inevitability: an intelligence explosion in which recursively self-improving AIs radically surpass human intelligence, catalysing utopian outcomes in health, creativity, economics, and beyond.
This idea has its roots in a 1960s concept from British mathematician I.J. Good: the ‘intelligence explosion’. That’s when AI gains the ability to improve itself, setting off a runaway feedback loop of capability gains.
Let’s imagine such a scenario. A project reaches the threshold of recursive self-improvement. From that point on, the pace of development accelerates beyond anything in human history.
We’ve seen huge inflection points before - wheels, stirrups, gunpowder, steam, electricity, computing. But even the Industrial Revolution took 80–100 years to play out in Britain alone. And its social effects - urbanisation, labour upheaval, political unrest - were anything but gentle.
Now imagine compressing all that change into a decade or less. That’s the intelligence explosion. The endpoint is the Singularity.
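To make that compression concrete, here’s a deliberately crude sketch - a toy Python model with made-up numbers, not anything Good or Kurzweil actually wrote down - in which each generation of the system is assumed to be 50% more capable than the last, and to build its successor 50% faster.

```python
# Toy model of an 'intelligence explosion': capability feeds back into
# the speed at which the next improvement arrives. All numbers are
# illustrative assumptions, not forecasts.

capability = 1.0              # arbitrary starting capability
months_per_generation = 12.0  # time to build the first successor
elapsed_months = 0.0

for generation in range(1, 11):
    elapsed_months += months_per_generation
    capability *= 1.5                 # each generation is 50% more capable...
    months_per_generation /= 1.5      # ...and builds its successor 50% faster
    print(f"gen {generation:2d}: {capability:5.1f}x capability "
          f"after {elapsed_months:5.1f} months")
```

The detail worth noticing is the shrinking gap between generations: in this toy run the first improvement takes a year, the tenth a matter of days, and the whole cascade fits inside roughly three years. That compounding is what Good’s 'explosion' metaphor is pointing at.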
And the problem is: we don’t know how to control it.
The alignment problem: Can we guide the beast?
Altman ends his post with a vague but familiar refrain: we must 'solve the alignment problem'. This means ensuring AI learns what we want it to learn, behaves in ways we find acceptable, and doesn’t produce outcomes we didn’t ask for and can’t reverse.
The idea sounds simple. It’s anything but.
Whose values are we aligning to? Humans don’t even agree with each other on ethics, on fairness, on basic facts. How do you encode that into a machine?
Intelligence doesn’t imply empathy. An AI could be superintelligent and still ruthlessly indifferent to human welfare. This is the essence of Nick Bostrom’s famous Paperclip Maximiser thought experiment.
Value drift is real. Human values evolve. Machines that learn to serve today’s ideals might be catastrophically misaligned with tomorrow’s.
The 'control' illusion. Once systems surpass us, how do we enforce any kind of oversight or limits?
Solving alignment might be the most important challenge of our era, but there’s very little evidence we’re close to doing it. And while some research is promising, other approaches seem more like hope than strategy.
If you subscribe to Eliezer Yudkowsky’s deeply pessimistic takes on AGI and the alignment problem, you’ll probably conclude that by the time we figure out what values to align AI with, we’ll be taking instructions from it.
So why is Altman framing it as ‘gentle’? It’s possible the post is partly strategic: a way to calm investors, regulators, and the broader public. Maybe it’s an effort to position OpenAI as a benevolent steward rather than a dangerously fast-moving lab.
Or maybe it’s just the latest entry in Silicon Valley’s long-running genre of utopian manifestos.