Dario Amodei, CEO of the AI company Anthropic, dropped a 20,000-word essay on Monday called The Adolescence of Technology in which he warned that AI was about to “test who we are as a species” and that “humanity is about to be handed almost unimaginable power, and it is deeply unclear whether our social, political, and technological systems possess the maturity to wield it.”
The essay, which was published on Amodei’s personal blog, has generated a tremendous amount of buzz on social media. But it is worth pointing out what is and isn’t new here.
Amodei has been concerned about the catastrophic risks of AI for years. He has warned about the risks of AI helping people develop bioweapons or chemical weapons. He has warned about powerful AI escaping human control. He has warned about potential widespread job losses as AI becomes more capable and is adopted by more industries. And he has warned about the dangers of concentrated power and wealth as AI adoption grows.
In his latest essay, Amodei reiterates all of these concerns—although sometimes in starker language and sometimes with shorter timelines for when he believes these risks will materialize. Headlines about his essay have, somewhat understandably, focused on Amodei’s blunt delineation of AI risks.
Among AI companies, Anthropic is known for having perhaps the greatest focus on AI safety—a focus that, the company has found, has actually helped it gain commercial traction among big companies, as Fortune detailed in its January cover story on Amodei’s company. This is because many of the steps Anthropic has taken to make sure its models don’t pose catastrophic risks to humanity have also made these models more reliable and controllable—features that most businesses value.
So in many ways, Amodei’s essay is as much a novella-length marketing message as it is an impassioned prophecy and call to action.
Which is not to say that Amodei is being insincere. It is merely to point out that his essay works on multiple levels, and that what he thinks is needed to secure humanity’s future as AI advances also aligns well with Anthropic’s existing brand positioning in the market. It is telling, for example, how many times Amodei mentions the “constitution” Anthropic has developed for its AI model Claude as an important factor mitigating various risks—from bioterrorism to the risk that the model will escape human control. This constitution, which Anthropic just updated, is one thing that differentiates Anthropic’s AI models from those offered by its competitors, such as OpenAI, Google, Meta, and Elon Musk’s xAI.
More newsworthy than some of the risks Amodei points to in the essay are the specific remedies he calls for. He says, for instance, that wealthy individuals have an obligation to help society cope with the potential economic effects AI may have, including helping those who may lose their livelihoods to AI. He says that all of Anthropic’s cofounders have committed to donating 80% of their wealth to philanthropy. He also says that Anthropic’s employees have individually pledged billions of dollars of Anthropic shares to charities, and that Anthropic is matching those donations.
He criticizes others in Silicon Valley for not doing likewise, saying “it is sad to me that many wealthy individuals (especially in the tech industry) have recently adopted a cynical and nihilistic attitude that philanthropy is inevitably fraudulent or useless.”
Amodei says that AI companies, such as his own, should work with enterprise customers to steer them towards AI deployments that derive value from new business lines and revenue growth, not merely through labor savings. “Enterprises often have a choice between ‘cost savings’ (doing the same thing with fewer people) and ‘innovation’ (doing more with the same number of people),” Amodei writes. “The market will inevitably produce both eventually, and any competitive AI company will have to serve some of both, but there may be some room to steer companies towards innovation when possible, and it may buy us some time. Anthropic is actively thinking about this.”
He also says that businesses have an obligation to be creative about how to reassign employees whose existing jobs are being disrupted by AI, rather than simply firing them. He broaches the idea that “in the long term, in a world with enormous total wealth, in which many companies increase greatly in value due to increased productivity and capital concentration, it may be feasible to pay human employees even long after they are no longer providing economic value in the traditional sense.” And he says that Anthropic is considering several “possible pathways” for its own employees that it will share publicly in the future.
Finally, Amodei calls for government intervention to redistribute wealth. He says the most obvious way to do this would be with a progressive tax system, one that could be general or targeted specifically at the outsize profits he thinks AI companies will soon be making. (Of course, currently, Anthropic and most other companies focused only on building AI models are hugely unprofitable. But Anthropic told its investors last year that it was on track to break even by the end of 2028.)
To those wealthy interests who would oppose such a tax, Amodei says he has a “pragmatic argument to the world’s billionaires that it’s in their interest to support a good version of [the tax]: if they don’t support a good version, they’ll inevitably get a bad version designed by a mob.”
Headlines about Amodei’s essay have inevitably focused on his prediction that 50% of entry-level white-collar jobs will be eliminated within one to five years. Amodei made the same prediction on stage at the World Economic Forum in Davos last week, but his remarks were eclipsed in much of the coverage by U.S. President Donald Trump’s speech at the conference.
Amodei also writes in his essay that AI that is as capable as humans will arrive within the next two years, which may be his most explicit prediction yet of when this major milestone in the history of both computers and humans will occur. (Amodei says that it will take a while for this human-level AI capability to diffuse throughout society, which is why he thinks it will only displace 50% of entry-level knowledge workers within five years.)
The science fiction writer William Gibson famously quipped that “the future is already here, it is just not evenly distributed.” And it is worth noting that in the past Amodei has not always been entirely prescient about AI’s impacts, even though he’s been largely accurate in predicting the arrival of certain AI capabilities.
For instance, early last year, Amodei said that within six to nine months, AI would be writing up to 90% of software code. Well, it turned out that this was largely true in the case of Anthropic itself—the company recently said its Claude CoWork tool was almost completely written by Claude itself—but not accurate for code overall. In most other businesses, the amount of code written by AI has been about 20% to 40%—but that figure is increasing and is up from basically 0% just three years ago. So Amodei may not be an infallible seer, but he is worth paying attention to.