Inside the $380 billion AI lab, hard decisions are discussed openly – even when the world is watching

Twice a month, 2,500 people pause their work to dial into a meeting called Dario Vision Quest.

On their screens appears Dario Amodei, the physicist-turned-founder behind Anthropic, now one of the most valuable private tech companies in the world. There are no slides or polished decks, just a dense multi-page memo he has written himself.

For about an hour, he talks through it plainly and at length, walking his team through the company’s biggest product bets, geopolitical shocks, and any internal missteps that inevitably come with building AI at speed.

It would be reasonable to assume that a CEO in his position would be consumed by product reviews and back-to-back investor calls. Anthropic has raised tens of billions of dollars in capital and seen its flagship chatbot, Claude, adopted by Fortune 10 companies.

By early 2026, its annualized revenue had reportedly climbed to roughly $14 billion – just a few years after the company began selling software.

But Amodei approaches the job differently. In a recent appearance on the Dwarkesh Podcast, he said that nearly 40% of his time now goes into “making sure the culture of Anthropic is good.” For him, that means spending time with employees, addressing difficult questions, and tending to a culture that underpins everything else.

Why Dario Amodei spends 40% of his time on culture

The instinct to slow down and treat culture as something you build on purpose tracks with the kind of person Amodei has been for a long time.

Amodei grew up in San Francisco, the son of an Italian-American leather craftsman and a Jewish American project manager. He was a physics prodigy, competing on the US Physics Olympiad team, before studying at Stanford and earning a PhD in biophysics at Princeton.

After stints at Baidu and Google, he joined OpenAI in 2016 and rose to vice president of research, helping steer the early GPT era. But by 2021, he and a small group of senior colleagues walked out over “directional differences” about the organization’s future.

They founded Anthropic that same year, with Amodei as CEO and his sister Daniela as president. Their plan was to build a frontier AI lab that treated safety and governance as core design constraints. The ambition was to move deliberately, pairing technical progress with caution about how and where models were deployed.

As large enterprises began experimenting more seriously with generative AI, that positioning carried weight. Claude developed a reputation as a reliable, enterprise-ready system that executives could integrate into workflows without feeling like they were signing up for volatility.

It wasn’t long before adoption accelerated. By early 2026, annualized revenue for what’s now a $380 billion company had reached roughly $14 billion, growing more than tenfold in each of the previous three years.

Trying to keep a human team aligned at the edge of what technology can do

For a lab built on caution, success has inevitably introduced a different kind of exposure. The conversations that once happened in research papers and policy forums have now unfolded in headlines.

In early March, a reported dispute with the Pentagon over the conditions and safeguards governing how its models could be deployed spilled into public view – and, in the surge of attention that followed, Claude climbed to No. 2 in the App Store.

Perhaps moments like these help explain why transparency within Anthropic’s culture has become something of a habit. When deployment decisions become headline material, ambiguity inside the company becomes a liability.

In interviews, Amodei has described an aversion to “corpo speak” and a preference for direct, sometimes uncomfortable clarity. Employees say he regularly addresses strategy shifts and trade-offs in depth, often in long written Slack posts that invite equally long replies.

Those exchanges create a running archive of the company’s internal debates, building shared context across thousands of team members.

Amodei has also written publicly about a future in which data centers could house what he calls “a country of geniuses” – clusters of AI systems with capabilities exceeding those of many human experts. In that world, he argues, the challenge isn’t simply building intelligence, but governing and coordinating it responsibly.

For Amodei, the race isn’t only to build the smartest model, but to ensure the humans around it can stay aligned about what that kind of intelligence should and shouldn’t do. And perhaps that’s why the CEO of a $380 billion AI lab now spends so much of his time making sure his team stays in the loop.