‘I’m deeply uncomfortable’: Anthropic CEO warns that a cadre of AI leaders, including himself, should not be in charge of the technology’s future
Anthropic CEO Dario Amodei doesn’t think he should be the one calling the shots on the guardrails surrounding AI.

In an interview with Anderson Cooper on CBS News’ 60 Minutes that aired in November 2025, the CEO said AI should be more heavily regulated, with fewer decisions about the future of the technology left to just the heads of big tech companies.

“I think I’m deeply uncomfortable with these decisions being made by a few companies, by a few people,” Amodei said. “And this is one reason why I’ve always advocated for responsible and thoughtful regulation of the technology.”

“Who elected you and Sam Altman?” Cooper asked.

“No one. Honestly, no one,” Amodei replied.

Anthropic has adopted the philosophy of being transparent about the limitations—and dangers—of AI as it continues to develop, he added. Ahead of the interview’s release, the company said it had thwarted “the first documented case of a large-scale AI cyberattack executed without substantial human intervention.”

Anthropic said last week it had donated $20 million to Public First Action, a super PAC focused on AI safety and regulation—and one that directly opposed super PACs backed by rival OpenAI’s investors.

“AI safety continues to be the highest-level focus,” Amodei told Fortune in a January cover story. “Businesses value trust and reliability,” he said.

No federal regulations prohibit AI or govern the safety of the technology. All 50 states have introduced AI-related legislation this year, and 38 have adopted or enacted transparency and safety measures, but tech industry experts have urged AI companies to approach cybersecurity with a sense of urgency.

Early last year, cybersecurity expert and Mandiant CEO Kevin Mandia warned that the first AI-agent cyberattack would occur within 12 to 18 months—meaning Anthropic’s disclosure about the thwarted attack came months ahead of Mandia’s predicted schedule.

Amodei has outlined short-, medium-, and long-term risks of unrestricted AI: In the near term, the technology presents bias and misinformation, as it does now. In the medium term, it could generate harmful information using enhanced knowledge of science and engineering. In the long term, it could pose an existential threat by eroding human agency—becoming too autonomous and locking humans out of systems.

The concerns mirror those of “godfather of AI” Geoffrey Hinton, who has warned AI will have the ability to outsmart and control humans, perhaps in the next decade.

The need for greater AI scrutiny and safeguards lay at the core of Anthropic’s 2021 founding. Amodei was previously vice president of research at Sam Altman’s OpenAI, which he left over disagreements about AI safety. (So far, Amodei’s efforts to compete with Altman have appeared effective: Anthropic said this month it is now valued at $380 billion, while OpenAI is valued at an estimated $500 billion.)
