TIME

INNOVATORS

ANTHROPIC

FOR DARIO AMODEI, AI SAFETY IS ABOUT MORE THAN BUSINESS

BY BILLY PERRIGO/SAN FRANCISCO

In the summer of 2022, Dario Amodei had a difficult decision to make. Anthropic, the AI company where he is co-founder and CEO, had just finished training a new chatbot that was far more powerful than anything he had seen before. The team working on it at Anthropic’s San Francisco headquarters was in awe of its creation, which it christened Claude.

Releasing Claude, Amodei knew, could earn fame and fortune for Anthropic, a roughly 50-person startup that had only launched the previous year. But he was worried about the potential consequences of unleashing the AI upon the world—so worried, in fact, that he ultimately decided not to, opting instead to continue internal safety testing. Some three months later, a rival lab, OpenAI, launched a similar product called ChatGPT. It set off a frenzy of hype and investment that has reshaped the entire tech industry.

Many Silicon Valley entrepreneurs would see that kind of missed opportunity as the regret of a lifetime. But for Amodei, it was about more than business: he wanted to avoid triggering a race to build bigger, and perhaps more dangerous, AI systems. “I suspect it was the right thing to do,” says Amodei, 41, twirling a lock of curly dark hair between his fingers during a two-hour interview in May. “But it’s not totally clear-cut.”

His uncertainty is understandable, given that a race began anyway and that his decision likely cost Anthropic billions of dollars. But ChatGPT woke regulators up to something Amodei had been worrying about for years: that advanced AI could, if handled poorly, be catastrophically risky. Last July, Amodei testified in front of Senators in Washington, D.C.—arguing that systems powerful enough to “create large-scale destruction” and change the balance of power between nations could exist as soon as 2025.

Others, including OpenAI CEO Sam Altman, had issued similar warnings. But many in the AI-safety community felt Amodei had greater credibility, viewing Anthropic’s decision to withhold Claude as a signal of its commitment to prioritizing safety over money and acclaim. The lab was an underdog: the smallest of all the companies building “frontier” AI systems, the youngest, the least well-financed, and the most expressly committed to safety. This reputation has mostly endured, even as Anthropic has raised more than $7 billion from investors including Amazon and Google, expanded to 500 employees, and launched three generations of its Claude chatbot. (Salesforce, where TIME co-chair and owner Marc Benioff is CEO, has also invested.)

Claude 3, which Anthropic released in March, was by some measures the most capable publicly available AI system at the time, outperforming OpenAI’s GPT-4 and Google’s Gemini. That put Anthropic in the curious position of having a reputation as the most cautious AI company, while also owning—and selling access to—one of today’s most advanced AI systems.
