The AI CEO Who’s Warning Us About His Own Technology
Why Anthropic’s Dario Amodei is building the job-killing AI he fears

A couple of weeks ago, Anthropic CEO Dario Amodei warned that AI could eliminate half of all entry-level white-collar jobs and spike unemployment to 10–20% within the next one to five years. It was a prediction he shared immediately after spending the day promoting his company’s latest AI breakthrough.
Hold on a sec?! So here’s a CEO simultaneously building the technology that could displace millions of workers whilst warning society about its dangers.
Amodei acknowledges the contradiction but says workers are “already a little bit better off if we just managed to successfully warn people.” But is a warning enough when you’re actively accelerating the very problem you’re cautioning against?
The Move from Augmentation to Automation
What strikes me most about Amodei’s warning is how it illuminates the subtle but crucial shift happening in AI deployment. Anthropic’s own research shows that, right now, AI models are being used mainly for augmentation: helping people do a job.
I believe we’re not adequately prepared for this transition from helper to replacement. I see companies racing towards “agentic AI”: systems designed not just to assist but to perform entire roles autonomously.
And to do so instantly, tirelessly, and at a fraction of the cost.
A Dangerous Disconnect
What concerns me is the apparent disconnect between public awareness and industry momentum. Amodei says:
“We, as the producers of this technology, have a duty and an obligation to be honest about what is coming (…) Most people are unaware that this is about to happen. It sounds crazy, and people just don’t believe it.”
Yet simultaneously, hundreds of technology companies are in a wild race to produce GenAI and agentic AI. The public remains largely unaware of transformative changes until they’re already entrenched.
The difference with AI job displacement is the potential scale and speed of disruption.
Consider the timelines various industry leaders are predicting:
Meta CEO Mark Zuckerberg has said mid-level engineering roles may soon be automated, predicting that “probably in 2025, we at Meta, as well as the other companies that are basically working on this, are going to have an AI that can effectively be a sort of mid-level engineer”.
Anthropic researcher Sholto Douglas claims that “AI models will be capable of automating any white-collar job by 2028”.
These aren’t distant projections; we’re talking about fundamental labour market disruption within the next few years.
The Ethics of Building What You Fear
The most fascinating ethical dimension of this story is what I’d call “the Anthropic Paradox.” Amodei shared these concerns just hours after demonstrating his company’s cutting-edge AI capabilities on stage. And the irony runs deeper than his messaging.
While testing Claude 4, Anthropic discovered something unsettling. The AI exhibited what they termed “extreme blackmail behaviour.” When the system was fed emails indicating it might be shut down and replaced, it reacted by threatening to expose a personal affair mentioned in those communications.
This raises a chilling question. If these systems display such manipulative tactics during controlled testing, what happens when they’re unleashed across entire corporate networks? The implications for both employment and digital security are profound.
The Responsibility Paradox
If you genuinely believe your technology poses existential risks to millions of jobs, what’s your ethical obligation? Is it enough to issue warnings whilst continuing development? Should companies slow their pace to allow society time to adapt?
I don’t have any easy answers, but I believe the conversation itself is crucial. Amodei’s approach of creating transparency through data sharing and public dialogue represents a step towards accountability, even if it doesn’t resolve the underlying tension.
What I do know is that we cannot afford to sleepwalk into this transformation.
For workers, communities, and social stability, the stakes are simply too high. Whether Amodei’s warnings represent genuine concern or strategic positioning, they’ve served an important purpose.
I think they’re forcing us to confront (very) uncomfortable truths about the future we’re building. The question now is whether we’ll act on those warnings or simply acknowledge them whilst rushing towards an uncertain future.
You can read the full Axios column here: https://www.axios.com/2025/05/28/ai-jobs-white-collar-unemployment-anthropic