Welcome to Eye on AI. In this edition…Anthropic is winning over business customers, but how are its own engineers using its Claude AI models…OpenAI CEO Sam Altman declares a “code red”…Apple reboots its AI efforts—again…Former OpenAI chief scientist Ilya Sutskever says “it’s back to the age of research” as LLMs won’t deliver AGI…Is AI adoption slowing?
OpenAI certainly has the most recognizable brand in AI. As company cofounder and CEO Sam Altman said in a recent memo to staff, “ChatGPT is AI to most people.” But while OpenAI is increasingly focused on the consumer market—and, according to news reports, declaring a “code red” in response to new rival AI models from Google (see the “AI in the News” section below)—it may already be lagging in the competition for enterprise AI. In this battle for corporate tech budgets, one company has quietly emerged as the vendor big business customers seem to prefer: Anthropic.
Anthropic has, according to some research, moved past OpenAI in enterprise market share. A Menlo Ventures survey from the summer showed Anthropic with a 32% share by model usage, compared to OpenAI’s 25% and Google’s 20%. (OpenAI disputes these numbers, noting that Menlo Ventures is an Anthropic investor and that the survey had a small sample size. It says it has 1 million paying business customers compared to Anthropic’s 330,000.) But estimates in an HSBC research report on OpenAI published last week also give Anthropic a 40% market share by total AI spending, compared to OpenAI’s 29% and Google’s 22%.
How did Anthropic take pole position in the race for enterprise AI adoption? That’s the question I set out to answer in the latest cover story of Fortune magazine. For the piece, I had exclusive access to Anthropic cofounder and CEO Dario Amodei and his sister Daniela Amodei, who serves as the company’s president and oversees much of its day-to-day operations, as well as to numerous other Anthropic execs. I also spoke to Anthropic’s customers to find out why they’ve come to prefer its Claude models. Claude’s prowess at coding, an area Anthropic devoted attention to early on, is clearly one reason. (More on that below.) But it turns out that part of the answer has to do with Anthropic’s focus on AI safety, which has given corporate tech buyers some assurance that its models are less risky than competitors’. It’s a logic that undercuts the argument of some Anthropic critics, including powerful figures such as White House AI and crypto czar David Sacks, who see the company’s advocacy of AI safety testing requirements as a mistaken policy that will slow AI adoption.
Now the question facing Anthropic is whether it can hold on to its lead, raise enough funds to cover its still-massive burn rate, and manage its hypergrowth without coming apart at the seams. Do you think Anthropic can go the distance? Give the story a read here and let me know what you think.
How is AI changing coding?
Now, back to Claude and coding. Back in March, Dario Amodei made headlines when he said that by the end of the year 90% of software code within enterprises would be written by AI. Many scoffed at that forecast, and Amodei has since walked the statement back slightly, saying he never meant to imply there wouldn’t still be a human in the loop before that code is actually deployed. He has also said the prediction was not far off as far as Anthropic itself is concerned, albeit with a much looser range, saying in October that these days “70, 80, 90% of code” at his company is touched by AI.
Well, Anthropic has a team of researchers that looks at the “societal impacts” of AI technology. To get a sense of how exactly AI is changing the nature of software development, it examined how 132 of Anthropic’s own engineers and researchers are using Claude. The study combined qualitative interviews with the employees and an analysis of their Claude usage data. You can read Anthropic’s blog on the study here, but we’ve got an exclusive first look at what the team found:
Anthropic’s coders self-reported that they used Claude for about 60% of their work tasks. But more than half of the engineers said they could “fully delegate” only somewhere between none and 20% of their work to Claude, because they still felt the need to check and verify its outputs. The most common uses of Claude were debugging existing code, helping human engineers understand what parts of the codebase were doing, and, to a somewhat lesser extent, implementing new software features. It was far less common to use Claude for high-level software design and planning, data science tasks, and front-end development.
In response to my questions about whether Anthropic’s research contradicted Amodei’s prior statements, an Anthropic spokesperson noted the study’s small sample size. “This is not a reflection of concertedly surveying engineers across the entire company,” the spokesperson said. Anthropic also noted that the research did not include “writing code” as a distinct task, so it could not provide an apples-to-apples comparison with Amodei’s statements. And because the engineers each defined automation and “fully delegating” coding tasks to Claude differently, the company said, the findings are an even murkier gauge of those remarks.
Nevertheless, I think it’s telling that Anthropic’s engineers and researchers were not exactly ready to hand a lot of important tasks to Claude. In interviews, they said they tended to hand Claude tasks that they were fairly confident were not complex, that were repetitive or boring, where Claude’s work could be easily verified, and, notably, “where code quality isn’t critical.” That seems a somewhat damning assessment of Claude’s current abilities.
On the other hand, the engineers said that about 27% of the work they now do would simply not have been done at all without Claude. This included building interactive dashboards they would not have bothered with before, and creating tools for small code fixes they previously would have left alone. The usage data also found that 8.6% of Claude Code tasks were what Anthropic categorized as “papercut fixes.”
Not just deskilling, but devaluing too? Opinions were divided.
The most interesting findings of the report concerned how using Claude made the engineers feel about their work. Many were happy that Claude was enabling them to handle a wider range of software development tasks than before. And some said using Claude freed them up for higher-level thinking—considering product design concepts and user experience more deeply, for instance, instead of focusing on the rudiments of how to execute the design.
But some worried about losing their own coding skills. “Now I rely on AI to tell me how to use new tools and so I lack the expertise. In conversations with other teammates I can instantly recall things vs now I have to ask AI,” one engineer said. One senior engineer worried particularly about what this would do to more junior coders. “I would think it would take a lot of deliberate effort to continue growing my own abilities rather than blindly accepting the model output,” the senior developer said. Some engineers reported practicing tasks without Claude specifically to combat deskilling.
And the engineers were split about whether using Claude robbed them of the meaning and satisfaction they took from work. “It’s the end of an era for me—I’ve been programming for 25 years, and feeling competent in that skill set is a core part of my professional satisfaction,” one said. Another reported that “spending your day prompting Claude is not very fun or fulfilling.” But others were more ambivalent. One noted that they missed the “zen flow state” of hand coding but would “gladly give that up” for the increased productivity Claude gave them. At least one said they felt more satisfaction in their job. “I thought that I really enjoyed writing code, and instead I actually just enjoy what I get out of writing code,” this person said.
Anthropic deserves credit for being transparent about how its own products are impacting its workforce—and for reporting the results even if they contradict things its CEO has said. The questions the Anthropic survey raises around deskilling and AI’s impact on the sense of meaning people derive from their work are ones more and more workers, across many industries, will soon be facing.
Ok, I hope to see many of you in person at Fortune Brainstorm AI San Francisco next week! If you are still interested in joining us you can click here to apply to attend.
And with that, here’s more AI news.
Jeremy Kahn
jeremy.kahn@fortune.com
@jeremyakahn
FORTUNE ON AI
Five years on, Google DeepMind’s AlphaFold shows why science may be AI’s killer app—by Jeremy Kahn
Exclusive: Gravis Robotics raises $23M to tackle construction’s labor shortage with AI-powered machines—by Beatrice Nolan
The creator of an AI therapy app shut it down after deciding it’s too dangerous. Here’s why he thinks AI chatbots aren’t safe for mental health—by Sage Lazzaro
Nvidia’s CFO admits the $100 billion OpenAI megadeal ‘still’ isn’t signed—two months after it helped fuel an AI rally—by Eva Roytburg
AI startup valuations are doubling and tripling within months as back-to-back funding rounds fuel a stunning growth spurt—by Allie Garfinkle
Insiders say the future of AI will be smaller and cheaper than you think—by Jim Edwards
AI IN THE NEWS
OpenAI declares “code red” over enthusiasm for Google Gemini 3 and rival models. OpenAI CEO Sam Altman has declared a “code red” inside OpenAI as competition from Google’s newly strengthened Gemini 3 model—and from Anthropic and Meta—intensifies. Altman told staff in an internal memo that the company will redirect resources toward improving ChatGPT and delay initiatives like a planned roll-out of advertising within the popular chatbot. It’s a striking reversal for OpenAI, coming almost three years to the day after the debut of ChatGPT, which put Google on the back foot and reportedly prompted its CEO Sundar Pichai to issue his own “code red” inside the tech giant. You can read more from Fortune’s Sharon Goldman here.
ServiceNow buys identity and access management company Veza to help with AI agent push. The big SaaS vendor is acquiring Veza, a startup that bills itself as “an AI-native identity-security platform.” ServiceNow plans to use Veza’s capabilities to bolster its agentic AI offerings and grow its cybersecurity and risk management business, one of its fastest-growing segments, with more than $1 billion in annual contract value. The financial terms of the deal were not announced, but Veza was last valued at $808 million when it raised a $108 million Series D round in April, and news reports suggested ServiceNow was paying north of $1 billion for the company. Read more from ServiceNow here.
OpenAI suffers data breach. The company said some customers of its API service—but not ordinary ChatGPT users—may have had profile data exposed after a cybersecurity breach at its former analytics vendor, Mixpanel. The leaked information includes names, email addresses, rough location data, device details, and user or organization IDs, though OpenAI says there is no evidence that any of its own systems were compromised. OpenAI has ended its relationship with Mixpanel, has notified affected users, and is warning them to watch for phishing attempts, according to a story in tech publication The Register.
Apple AI head steps down as company’s AI efforts continue to falter. John Giannandrea, who had been heading Apple’s AI efforts, is stepping down after seven years. The move comes as the company faces criticism for lagging rivals in rolling out advanced generative AI features, including long-delayed upgrades to Siri. He will be replaced by veteran AI executive Amar Subramanya, who previously held senior roles at Microsoft and Google and is expected to help sharpen Apple’s AI strategy under software chief Craig Federighi. Read more from The Guardian here.
OpenAI invests in Thrive Holdings in the latest ‘circular’ deal in AI. OpenAI has taken a stake in Thrive Holdings—an AI-focused private-equity platform created by Thrive Capital, which is itself a major investor in, you got it, OpenAI. It is just the latest example of the tangled web of interlocking financial relationships OpenAI has woven between its investors, suppliers, and customers. Rather than investing cash, OpenAI received a “meaningful” equity stake in exchange for providing Thrive-owned companies with access to its models, products, and technical talent, while also gaining access to these companies’ data, which will be used to fine-tune OpenAI’s models. You can read more from the Financial Times here.
EYE ON AI RESEARCH
Back to the drawing board. There was a time, not all that long ago, when it would have been hard to find a more fervent advocate of the “scale is all you need” hypothesis of AGI than Ilya Sutskever. (To recap, this was the idea that simply building bigger and bigger Transformer-based large language models, feeding them ever more data, and training them on ever larger computing clusters would eventually deliver human-level artificial general intelligence and, beyond that, superintelligence greater than all humanity’s collective wisdom.) So it was striking to see Sutskever sit down with podcaster Dwarkesh Patel in an episode of the “Dwarkesh” podcast that dropped last week and hear him say he is now convinced that LLMs will never deliver human-level intelligence.
Sutskever now says he is convinced LLMs will never be able to generalize well to domains that were not explicitly in their training data, which means they will struggle to ever develop truly new knowledge. He also noted that LLM training is highly inefficient—requiring thousands or millions of examples of something and repeated feedback from human evaluators—whereas people can usually learn something from just a handful of examples and can also fairly easily analogize from one domain to another.
As a result, Sutskever, who now runs his own AI startup, Safe Superintelligence, tells Patel that it’s “back to the age of research again”—looking for new ways of designing neural networks that will achieve the field’s Holy Grail of AGI. Sutskever said he has some intuitions about how to get there, but that for commercial reasons he wasn’t going to share them on “Dwarkesh.” Despite his silence on those trade secrets, the podcast is worth listening to. You can hear the whole thing here. (Warning, it’s long. You might want to give it to your favorite AI to summarize.)
AI CALENDAR
Dec. 2-7: NeurIPS, San Diego
Dec. 8-9: Fortune Brainstorm AI San Francisco. Apply to attend here.
Jan. 6: Fortune Brainstorm Tech CES Dinner. Apply to attend here.
Jan. 19-23: World Economic Forum, Davos, Switzerland.
Feb. 10-11: AI Action Summit, New Delhi, India.
BRAIN FOOD
Is AI adoption slowing? That’s what a story in The Economist argues, citing a number of recently released figures. New U.S. Census Bureau data show that employment-weighted workplace AI use in America has slipped to about 11%, with adoption falling especially sharply at large firms—an unexpectedly weak uptake three years into the generative-AI boom. Other datasets point to the same cooling: Stanford researchers find usage dropping from 46% to 37% between June and September, while Ramp reports that AI adoption in early 2025 surged to 40% before flattening, suggesting momentum has stalled.
This slowdown matters because big tech firms plan to spend $5 trillion on AI infrastructure in the coming years and will need roughly $650 billion in annual revenues—mostly from businesses—to justify it. Explanations for the slow pace of AI adoption range from macroeconomic uncertainty to organizational dynamics, including managers’ doubts about current models’ ability to deliver meaningful productivity gains. The article argues that unless adoption accelerates, the economic payoff from AI will come more slowly and unevenly than investors expect, making today’s massive capital expenditures difficult to justify.