Dario Amodei is, in his telling, the accidental CEO of an accidental business—one that just happens to be among the fastest-growing on the planet. “When we first started Anthropic, we didn’t have any idea about how we would make money, or when, or under what conditions,” he says.
Anthropic is the San Francisco–based AI company that Amodei cofounded and leads. And it hasn’t taken long for it to start pulling in lots of money, under lots of conditions. The startup has emerged as one of the leading rivals to OpenAI and Google in the race to build ever-more-capable artificial intelligence. And while Anthropic and its Claude family of AI models don’t have quite the same brand recognition as crosstown rival OpenAI and its ChatGPT products, over the past year Claude has quietly emerged as the model that businesses seem to like best.
Anthropic, currently valued at $183 billion, has by some metrics pulled ahead of its larger rivals, OpenAI and Google, in enterprise usage. The company is on track to hit an annualized run rate of close to $10 billion by year-end—more than 10 times what it generated in 2024. It also told investors in August that it could bring in as much as $26 billion in 2026, and a staggering $70 billion in 2028.
Even more remarkably, Anthropic is generating such growth without spending nearly as much as some rivals—at a time when massive capital expenditures across the industry are stoking anxiety about an AI bubble. (OpenAI alone has signed AI infrastructure deals worth more than $1 trillion.) That’s in part because Anthropic says it has found ways to train and run its AI models more efficiently. To be sure, Anthropic is nowhere near profitable today: It was pacing to end 2025 having consumed $2.8 billion more cash than it took in, according to recent news accounts citing forecasts provided to investors. But the company is also on track to break even in 2028, according to those projections—two years ahead of OpenAI.
On the AI infrastructure spending race, Amodei can be sardonic. “These announcements are kind of frothy,” he says. “Business should care about bringing in cash, not setting cash on fire, right?” Of his rivals, he quips: “Can you buy so many data centers that you over-leverage yourself? All I’ll say is, some people are trying.”
Anthropic’s commercial traction is in some ways profoundly ironic. The company was founded in 2020 by Amodei, his sister Daniela, and five other former OpenAI employees who broke away from that company, in part, because they were concerned it was putting too much emphasis on commercial products over “AI safety,” the effort to ensure AI doesn’t pose significant risks to humanity. At Anthropic, safety was going to be the sine qua non.
“AI safety continues to be the highest-level focus,” Amodei says of Anthropic as we sit in his office—in a building adjacent to Salesforce Tower that once housed the offices of Slack, and whose 10 stories are now completely occupied by Anthropic. But the company soon found what Amodei calls “synergies” between its work on safety and building models that would appeal to enterprises. “Businesses value trust and reliability,” he says.
In another twist, the emphasis on trust and caution that has helped Anthropic gain traction with big business has entangled the company in conflicts with influential figures in politics and business. Key Trump administration officials range from skeptical to downright hostile to Anthropic’s positions on AI safety and its advocacy for regulation. The company has clashed with Nvidia CEO Jensen Huang—over Anthropic’s support for limiting exports of AI chips to China—and with Salesforce CEO Marc Benioff over Amodei’s warnings about AI-induced job losses.
“Business should care about bringing in cash, not setting cash on fire, right? Can you buy so many data centers that you over-leverage yourself?”
Dario Amodei, CEO and Cofounder, Anthropic
The opprobrium of these influential figures is just one obstacle Anthropic is navigating. It has also faced lawsuits over its use of copyrighted books and music to train Claude. In September, it agreed to pay $1.5 billion to settle a class action lawsuit brought by authors over its use of pirated libraries of books. That's cash the company would rather spend on growth, but had it lost in court, Anthropic might have been bankrupted.
It’s a lot for a young company to manage, especially one undergoing hypergrowth. Anthropic had fewer than 200 employees in late 2023. Today it has approximately 2,300. It’s hiring an army of salespeople, customer support engineers, and marketing professionals, even as it staffs up on researchers to push the frontier of AI development. It’s also expanding internationally at a rapid clip. Just since September, it has opened Paris and Tokyo offices, and announced ones in Munich, Seoul, and Bengaluru, adding to its existing global footprint of Dublin, Zurich, and London.
Having established itself as “the AI company for business,” Anthropic’s challenge is to keep that title in an industry where performance leaderboards can shift overnight and market gains can quickly disappear. As its rocket ship burns through the stratosphere, the question is, Can Anthropic achieve escape velocity? Or will powerful forces—the gravitational pull of the immense costs associated with cutting-edge AI models, the buffeting winds from political turbulence and intense competition, and the internal pressures inherent in managing an organization growing at supersonic rates—send it spinning back down to earth?
Safety leads to sales
Dario Amodei has a head of curly brown hair, and as he speaks, he absent-mindedly twirls a lock of it around his finger, as if reeling back in the thoughts unspooling from his lips as he muses about AI security and trust issues like "prompt injection" and hallucinations. Many of the mysteries Anthropic is most interested in unlocking through its research—how to make sure models adhere to human intentions and instructions (what's known in the AI field as "alignment") and how to peer inside the brains of large language models to figure out why they generate certain outputs (or "interpretability")—are things businesses care about too.
The 42-year-old Amodei has a penchant for dressing in what might best be described as academic chic. (The day we meet, he's wearing a shawl-necked navy sweater over a white T-shirt, with blue trousers and dark Brooks running shoes rounding off the look.) It's perhaps a sartorial vestige of his former life: Prior to his current role, he had always been a scientist, first in physics, then computational neuroscience, and finally AI research. "When I started this company, I'd never run a company before, and I certainly didn't know anything about business," Amodei says. "But the best way to learn, especially something practical like that, is just doing it and iterating fast."
While Dario focuses on vision, strategy, research, and policy, sister Daniela, who is nearly four years younger, serves as Anthropic’s president, handles day-to-day operations, and oversees the commercial side of the business. “We’re like yin and yang” in roles and responsibilities, Daniela says of herself and Dario, but “extremely aligned” on values and direction. She allows that one of the benefits of working with your sibling is that there’s always someone around who can call you on your bulls–t. “You have sibling privileges,” she says. “Sometimes I’m like, ‘Hey, I know this is what you meant, but people didn’t hear it that way.’ Or he’ll just be, like, ‘You’re coming across, um, you’re cranky.’”

Under the Amodeis, Anthropic’s emphasis on business has helped it to differentiate itself from OpenAI, which has 800 million weekly users and has increasingly catered to them by rolling out consumer products—from viral video-creation tool Sora to an Instant Checkout feature for e-commerce. Claude has tens of millions of individual users, according to news accounts (Anthropic hasn’t disclosed those numbers), but Anthropic says most of these people are using Claude for work and productivity, not for entertainment or companionship.
Amodei says focusing on enterprises better aligns Anthropic’s incentives around safety with those of its customers. Consumer-focused businesses, he says, tend to wind up trying to monetize users’ attention through advertising, which gives them an incentive to make products addictive. That, he says, leads to apps that serve up “AI slop” or chatbots designed to serve as “AI girlfriends.” (Amodei doesn’t mention OpenAI by name, but that company has made controversial moves toward doing both those things.) “For each of these things, it’s not that I object in principle,” he says. “There could be some good way to do them. But I’m not sure the incentives point toward the good way.”
More important, for enterprise customers, safety is a persuasive selling point. Many feel that, as a result of Anthropic’s innovations, it is harder for users to push Claude to jump its guardrails and produce problematic outputs, whether that’s giving someone instructions for making a bioweapon, revealing company secrets, or spewing hate speech.
Whatever their motives, business customers are eagerly signing up. The company says it has more than 300,000 enterprise customers, and that the number of those on pace to spend more than $100,000 annually with the company has risen sevenfold in the past year. Menlo Ventures, an Anthropic investor, has released survey data showing Anthropic with about a third of the enterprise market, compared with 25% for OpenAI and about 20% for Google Gemini. OpenAI disputes the reliability of these numbers, noting that it has more than 1 million business customers. But data that the companies shared with investors this summer showed that Anthropic had pulled ahead of its much larger rival in revenue derived from their respective APIs—the interfaces through which enterprises access their models when building AI-enabled products and services. Anthropic reported $3.1 billion from its API compared with $2.9 billion for OpenAI.
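For readers who have never touched one, an API is less exotic than it sounds: instead of typing into a chat window, a customer's software calls the model with a few lines of code. Here is a minimal sketch using Anthropic's published Python SDK; the model name and prompt are illustrative placeholders, not a real deployment.

```python
# A minimal sketch of an enterprise application calling Claude through
# Anthropic's API, using the company's published Python SDK ("anthropic").
# Assumes an ANTHROPIC_API_KEY environment variable is set; the model
# name below is an illustrative placeholder, and a real deployment would
# pin a specific model version.
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

message = client.messages.create(
    model="claude-sonnet-4-5",  # placeholder model identifier
    max_tokens=1024,
    messages=[
        {"role": "user", "content": "Summarize the key risks in this contract: ..."},
    ],
)
print(message.content[0].text)  # the reply arrives as a list of content blocks
```

Enterprise products built "on" Claude are, at bottom, orchestration layered around calls like this one, which is why API revenue is a clean proxy for enterprise adoption.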
Nick Johnston, who leads the strategic technology partnerships team at Salesforce, says Salesforce’s own customers, especially in finance and health care, pushed his company to forge a closer relationship with Anthropic because they felt the model was more secure than competitors. (Public safety benchmarks run by independent organizations bear this out.)
Some of Claude’s better performance is down to a technique Anthropic pioneered called “constitutional AI.” This involves giving Claude a written constitution—a set of principles—that is used to train the model. Anthropic drew the principles for Claude’s current constitution from sources as varied as the UN Universal Declaration of Human Rights, Apple’s terms of service, and rules that Anthropic competitor Google DeepMind developed and published for Swallow, a chatbot it created in 2022.
Dave Orr, Anthropic’s head of safeguards, says there’s much more that goes into making Claude secure. The company screens out certain information—such as scientific papers on potentially dangerous viruses—from Claude’s initial training data. It also applies what it calls “constitutional classifiers,” other AI models that screen users’ prompts for jailbreaking attempts and monitor Claude’s outputs to ensure they comply with the constitution. Anthropic employs “red-teamers” to probe for vulnerabilities that Orr’s teams then try to fix. It also has a “threat intelligence” group that investigates users whose prompts raise red flags. That team has uncovered Chinese hackers using Claude to penetrate critical infrastructure networks in Vietnam, and North Korean fraudsters using Claude to land IT jobs at U.S. companies.
Anthropic executives stress that Claude’s reliability as a business tool is essentially inextricable from its emphasis on safety. Kate Jensen, who heads Anthropic’s Americas operation and was until recently head of sales and partnerships, says that a lot of customers prefer Claude because they trust it to just work. Does the model rarely hallucinate? Can it follow instructions reliably? “Does the model do what you asked it to do? Yes or no?” she asks, rhetorically. “That shouldn’t really be a massive enterprise differentiator, but right now in AI, it is. And for us, it’s always been table stakes.”
Winning at coding
Indeed, Claude has been winning enterprise customers largely because it performs better than rivals at tasks businesses care about. This has been particularly true for coding, where Claude has, until recently, dominated almost all the public performance benchmarks. Claude drafts about 90% of Anthropic’s own code, although human software developers check it and edit it. “Claude Code”—a tool specifically for software developers that debuted in February—supercharged Claude’s adoption.
David Kossnick, head of AI products at design software company Figma, says his company built many of its early generative AI features using OpenAI’s models. But when Figma decided to create Figma Make, a product that lets users design and build functional prototypes and working apps from typed instructions, it chose Claude to power it. “Anthropic’s code generation was consistently impressive,” he says. (Figma still uses OpenAI and Google models for other features.)
Figma is one of many companies whose relationship with Claude was boosted by Anthropic's close partnership with Amazon and its cloud-computing arm, AWS. Amazon has committed to invest $8 billion in Anthropic, and it has integrated Anthropic's models deeply into AWS, making it easy for customers to use Claude with their data. Given that AWS is the world's largest cloud provider, that has helped drive business to Anthropic.
Anthropic has relationships with Google Cloud and Microsoft Azure too. And recently IBM, whose AI strategy had been built around open-source models, made an exception and struck a strategic partnership with Anthropic to integrate Claude into select products, even though Claude isn’t open-source.
Rob Thomas, IBM’s chief commercial officer, says IBM was excited about Claude’s ability to work with its proprietary libraries of coding data, particularly in older languages such as Java and COBOL. The Latin of programming languages, COBOL powers Big Blue’s mainframes, which are still used in banking, insurance, health care, and the U.S. government. But skilled COBOL coders have largely retired. IBM has used Claude, in conjunction with other AI models, to create Project Bob, an agentic tool it plans to release in 2026 that can carry out various software tasks, including modernizing COBOL-written programs.
If coding is the gateway drug for many Anthropic customers, a growing number are discovering Claude’s uncanny abilities at other tasks. Novo Nordisk, the pharmaceutical giant best known these days for its blockbuster diabetes and weight-loss drug Ozempic, evaluated a host of AI models in an effort to reduce the time it takes to prepare the reams of paperwork involved in clinical trials. Waheed Jowiya, the company’s digitalization strategy director, says Novo Nordisk built a system around Claude that has taken the time required to compile clinical trial reports down from 12 to 15 weeks to just 10 to 15 minutes.
Microsoft, a major investor in OpenAI, had been using OpenAI’s models exclusively to power its Copilot in office productivity software—but it found Claude was better at handling Excel spreadsheets and PowerPoint presentations, and switched accordingly. Both Deloitte and Cognizant have adopted Claude companywide and are helping Anthropic co-sell Claude to their own clients—another revenue-scaling opportunity, since big companies rely on such firms’ consulting work to get value from generative AI.
Anthropic has begun rolling out tailored versions of Claude for specific professions. But it’s cautious about launching too many “verticals”: Mike Krieger, the Instagram cofounder who is now Anthropic’s chief product officer, says it will create tailored products only if they will help either solve some confounding aspect of general-purpose intelligence or create what he calls “a flywheel effect” that accelerates progress toward superhuman AI.
Krieger says Claude Code checked the second box (offering the prospect of AI models writing code for future models). Claude for Financial Services, which launched in July, checked the first one, since building accurate financial models requires lots of reasoning steps. The company has a “frontier prototyping team” that builds internal products designed to push the envelope of what Claude can do, with an eye toward commercializing them if they succeed.
For all its abilities, plenty remains beyond Claude’s grasp. When Anthropic teamed up with Andon Labs, an AI safety testing outfit, to see if Claude Sonnet 3.7 could run the vending machines in Anthropic’s San Francisco headquarters, it fared disastrously. The model failed to raise prices on in-demand items, told employees to remit payments through an account that didn’t exist, offered all Anthropic staff a 25% discount (not realizing the impact that would have on profits in an office in which pretty much everyone worked for Anthropic), and decided to stock tungsten cubes, an expensive but useless novelty item. (Tungsten cubes briefly became an Anthropic office meme.)
While Anthropic works to up Claude’s vending machine game, its rivals aren’t standing still. OpenAI is reportedly working on a product to directly challenge Claude for Financial Services. Its newest coding product, GPT-5 Codex, has narrowly bested Anthropic on some software development benchmarks. Google’s new Gemini 2.5 Pro model also has decent coding skills and performs competitively with Claude on many reasoning tasks. Each of those models is considerably cheaper than Claude, and a number of Chinese AI companies have produced powerful coding models that they’ve released for free.
Right now, most enterprises are willing to pay more for AI models to gain even a slight advantage in accuracy on essential tasks. But that could change as the gap between the performance of different AI models narrows.
“Pricing in the industry is like an acid trip. Everyone in the industry is still doing some form of price discovery, because it’s just evolving so quickly.”
Daniela Amodei, President and Cofounder, Anthropic
That means price could become Anthropic’s Achilles’ heel. IBM’s Thomas says, “I don’t think Bob would hit the mark for users if Anthropic wasn’t there, but if we’d only built on Claude we’d probably miss the mark on price.” In June, Anysphere, the startup behind the AI-powered software development platform Cursor, angered many users when it jacked up prices. Anysphere blamed the increase partly on Anthropic, because Cursor relies heavily on Claude under the hood. Around the same time, Anthropic reduced the number of requests its own paid subscribers could make for a given subscription tier—in essence, a stealth price hike.
Daniela Amodei acknowledges that Anthropic's price changes were not communicated well. But she adds that "pricing in the AI industry is like an acid trip," and that "everyone in the industry, including us, is still doing some form of price discovery, because it's just evolving so quickly." She also says that Anthropic has created smaller, less-expensive models, such as its Claude Haiku series, which perform certain tasks just as well as its massive Claude Opus 4.1, at a fraction of the price. "Depending on the use case you might not need the Ferrari," she says. Left unsaid: If you do need the Ferrari, don't expect Chevy prices.
Tense relationships
If Anthropic’s safety emphasis has won it customers, it’s also alienated policymakers in Trump’s Washington. The week Amodei and I meet, the company is scrambling to respond to a series of highly critical social media posts from White House AI and crypto czar David Sacks, who is also a prominent venture investor and podcaster.
Sacks, who has repeatedly attacked the company for being “Trump haters” and a cog in the AI “doomer industrial complex,” was exercised about remarks that Anthropic’s cofounder and head of policy Jack Clark gave at an AI conference, where Clark likened AI models to mysterious, unpredictable, and at times scary creatures. Sacks accused Clark and Anthropic of engaging in a cynical attempt at “regulatory capture,” playing up threats in order to drum up public support for rules with which Anthropic was best-positioned to comply.
Other top White House figures with interest in tech, including Vice President JD Vance, have voiced skepticism of AI safety efforts, worrying that they will hobble U.S. efforts to compete with China. White House policymakers were also displeased that Anthropic endorsed California's new AI law, which requires labs building powerful AI models to disclose the actions they are taking to avert potentially catastrophic risks. The administration has advocated for a 10-year moratorium on state-level AI regulation. Dario Amodei was notably absent from a White House dinner in September attended by leaders of top U.S. AI and tech companies, and he was not among the tech CEOs accompanying the president on his state visit to the U.K. later that month.
It’s true that Amodei is not a fan of Trump. He once likened the president to a “feudal warlord” in a now deleted preelection Facebook post urging friends to vote for Kamala Harris. He also decided Anthropic would cut ties with two law firms that struck settlements with Trump.
But Amodei insists the company has “lots of friends in the Trump administration” and is more aligned with the White House than Sacks and others give it credit for. He points, for example, to a shared belief that the U.S. must rapidly expand energy generation capacity to power new data centers. Amodei notes that he traveled to Pennsylvania to attend an energy and innovation summit where he met Trump. He also attended a dinner during Trump’s recent state visit to Japan, where he again met the president. In a blog post widely interpreted as a response to Sacks’ criticisms, Amodei went out of his way to say Anthropic concurred with Vance’s recent remarks that AI will have both benefits and harms, and that U.S. policy should try to maximize the benefits and minimize the harms.
These tensions haven’t prevented Anthropic from winning multiple key government contracts. Most recently, in July, the Pentagon handed the company a $200 million, two-year contract to prototype “frontier AI capabilities” that would advance U.S. national security. But Amodei says he won’t kowtow to the president. “The flip side of that is when we disagree, we’re gonna say so,” he says. “If we agreed with everything that some government official wanted us to, I’m sure that could benefit us in business in some way. But that’s not what the company is about.”
As for the California AI law, Clark, the policy director, says Anthropic would prefer federal regulation, but that “the technology isn’t sitting around waiting for a federal bill to get written.” He says the California bill “was developed carefully and in a very consultative manner with industry and other actors.” Clark also tells me that Anthropic has been testing Claude to weed out any political bias in the responses it gives to questions that involve ideological framing or policy positions more closely aligned with either major party.
One area where Anthropic sees mostly eye to eye with the Trump administration is on restricting China's access to AI technology. But Amodei's advocacy for export controls has put the company on a collision course with Nvidia's Huang. Huang has said he "disagrees with pretty much everything [Amodei] says"; he has also said that Anthropic's position is that AI is so dangerous, only Anthropic should build it. (Amodei has called Huang's comments "an outrageous lie.")
Amodei tells me he has great respect for Huang and admires him for coming to America as an immigrant and pulling himself up by his bootstraps to create the world’s most valuable company. “We always want to work with them; we always want to partner with them,” he says of Nvidia. Comparing the race to create superpowerful AI to the Manhattan Project, Amodei says, “Just like we worry when an authoritarian government gets nuclear weapons, I think we should worry when they get powerful AI, and we should worry about them being ahead in powerful AI.”
The infrastructure race
AI has increasingly become an infrastructure race, with companies like OpenAI, Meta, Elon Musk’s xAI, Microsoft, Google, and Amazon announcing billions of dollars in spending on vast AI data centers that consume as much electricity as sizable American cities. Overall, the hyperscalers are expected to spend as much as $400 billion on AI infrastructure in 2025, with that figure ramping up to close to $800 billion by 2029, according to data from IDC.
In many ways, Amodei himself helped create this race. In 2020, when he was still a senior researcher at OpenAI, he helped formulate what are known as the "AI scaling laws"—an empirical observation that increasing an AI model's size, feeding it more data, and training it on more computing power produces a predictable gain in performance. Belief in these scaling laws has driven AI companies to build ever larger models and bigger data center clusters. Today, there's debate among AI researchers about the extent to which this premise still holds. But Amodei says he doesn't think scaling is ending. "We see things continuing to get better," he says. "Every three to four months, we release a new model, and it's a significant step up every time."
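In their simplest published form, the scaling laws say a model's error falls as a power law in its size. A rough rendering of the relationship from the 2020 paper Amodei coauthored, where the constants are empirical fits and the exact values depend on the setup:

$$ L(N) \approx \left(\frac{N_c}{N}\right)^{\alpha_N} $$

Here $L$ is the model's loss (lower is better), $N$ is its parameter count, and $N_c$ and $\alpha_N$ are fitted constants; the paper reported an $\alpha_N$ of roughly 0.076 for language models, with analogous power laws for data and compute. The practical upshot, and the engine of the data center boom: you can forecast roughly how much better a model will get before you spend the money to build it.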
Still, Amodei says observers shouldn't expect Anthropic to announce infrastructure deals of quite the same magnitude as OpenAI or Meta. A $50 billion deal Anthropic announced in mid-November with cloud company Fluidstack, to build customized data centers in Texas and New York, is its largest to date, and it is unlikely to be its last. By comparison, though, OpenAI has announced multiple deals worth hundreds of billions of dollars.
Daniela Amodei says that Anthropic has discovered ways to optimize model training and inference that wring more out of fewer AI chips. "Anthropic is a minor player, comparatively, in terms of our actual compute," she says. "How have we arguably been able to train the most powerful models? We are just much more efficient at how we use those resources." Leaked internal financial forecasts from Anthropic and OpenAI bear this out. Anthropic projects that between now and 2028 it will generate 2.1 times as much revenue per dollar of computing cost as OpenAI forecasts, according to a story in The Information that cited figures the companies shared with investors.
And while the $78 billion Anthropic told investors it forecast spending on compute through 2028 under an optimistic scenario is a massive figure, it’s only a third of the $235 billion OpenAI was budgeting over that time frame, according to information it had given its own investors.
Like its competitors, Anthropic is turning to different partners for computing power. AWS has built Project Rainier, a network of data centers, including a gigantic $11 billion facility in rural Indiana that houses some 500,000 Trainium 2s—Amazon’s own AI chips—for Anthropic to use to train and run its models. By year-end, Anthropic will have more than 1 million Trainium 2s at its disposal.
Google, meanwhile, has invested $3 billion into Anthropic, and in October Anthropic said it would begin using 1 million of Google’s specialized AI chips, called TPUs, in addition to Amazon’s. Amodei acknowledges that the relationship with Google “is a little different” than Anthropic’s with AWS, since Google’s frontier AI model, Gemini, competes directly against Claude. “‘Coopetition’ is very, very common in this industry,” Amodei adds, “so we’ve been able to make it work.”
Even as Anthropic keeps its spending relatively restrained, the company has had to continually raise money. Press reports have circulated that it may be in the process of raising its third venture capital round in 18 months, even though it just completed a $13 billion fundraise in August. If it does raise again, the company would likely seek a valuation between $300 billion and $400 billion. This summer, Wired published a Slack message from Amodei in which he explained to employees why he was reluctantly seeking financing from Persian Gulf states. "'No bad person should ever profit from our success' is a pretty difficult principle to run a business on," Amodei wrote.
Holding on to the culture
Amodei’s pained message points to one of the most pressing challenges facing Anthropic—how to hold on to its “AI for the good of humanity” culture as its growth skyrockets.
“I have probably been the leader who’s been the most skeptical and scared of the rate at which we’re growing,” Daniela Amodei tells me. But she says she’s been “continually, pleasantly surprised” that the company hasn’t come apart at the seams, culturally or operationally.
She says the fact that all seven cofounders still work at Anthropic helps, because it seeds cultural hearth-tenders across different parts of the company. She also says that the company’s AI safety mission tends to draw a certain type of person. “We’re like umami,” she says. “We have a very distinct flavor. People who love our umami flavor are very attracted to Anthropic, and Anthropic is very attracted to those people.” Anthropic’s mission has also made it easier to retain talent at a time when Meta has reportedly been offering experienced AI researchers pay packages worth hundreds of millions of dollars.
Dario reinforces the company’s values at regular companywide addresses called DVQs—short for “Dario Vision Quests.” He uses the sessions to explain strategy and policy decisions but also Anthropic’s mission. “When the company was small, we all had a common understanding of the potential of AI technology,” he says. “And now a lot of people are coming in, so we have to impart that understanding.”
Both Dario and Daniela say they’ve had to stretch into their roles as senior executives as Anthropic has grown. Dario says he’s had to remind himself to stop feeling bad when he doesn’t recognize employees in the elevators or, as happened recently, when he discovers that Anthropic employs an entire five-person team that he didn’t realize existed. “It’s an inevitable part of growth,” he concedes. When the company was smaller, Dario was directly involved in training Anthropic’s models alongside head of research Jared Kaplan. “Now he’s injecting high-level ideas, right?” Daniela says. “‘We should be thinking more about x.’ That’s such a different way of leading.”
Daniela says she, too, has had to learn to be more hands-off. Before, when someone came to her with a problem, she would leap in, saying, “‘I am going to help you figure it out.’ Now I’m like, ‘What is the one thing I want them to take back to their teams?’”
The two siblings have also had to be intentional about separating work from family life. Daniela says Dario comes to her house most Sundays to hang out with her family. They'll play video games and spend time with her kids, but work talk is verboten. "This is a separate time that's just for us, because we were siblings before we were cofounders," she says.
Dario Amodei tells me he remains convinced that AGI—humanlike artificial general intelligence—and then AI superintelligence loom on the horizon. And he denies being a “doomer.” Yes, he’s worried about potential dangers, from models that will make it easier for someone to engineer a bioweapon to large-scale job displacement. But he thinks AGI will help cure many diseases—and wants those cures to arrive as soon as possible. And he firmly believes AI could supercharge the economy. “The GDP growth is going to inflect upwards quite a lot, if we get this right,” he says.
Another thing he’s optimistic about: Anthropic’s continued revenue acceleration. He’s a scientist. He gets the law of big numbers. Companies don’t keep growing at 10x for long. “I’m an AI optimist, but I’m not that crazy,” he says. Still, he thinks Anthropic could surpass OpenAI as the world’s largest AI company by revenue. “I would argue it’s maybe even the most likely world in which our revenue passes theirs a year from now,” he says. Then he pauses before adding, “I think I’d rather have the largest revenue than the largest data center, because one is black [on an income statement], and the other is red. Again, things I’ve had to learn about business: It’s better to make money than just to lose money.”
This article appears in the December 2025/January 2026 issue of Fortune with the headline “Anthropic is still hung up on ‘AI safety.’ Turns out big business loves that.”
Anthropic by the numbers
The AI startup is growing rapidly, and its path to profit looks shorter than that of some rivals
$10 billion
Projected revenue “run rate” for year-end 2025
$183 billion
Private-market valuation as of August 2025
2028
Year the company projects being profitable
