
AI agents from Anthropic and OpenAI aren’t killing SaaS—but incumbent software players can’t sleep easy

By Jeremy Kahn, Editor, AI
February 10, 2026, 1:47 PM ET
Salesforce founder and CEO Marc Benioff. Incumbent SaaS players like Salesforce saw their shares get pummeled by investors last week amid fears that AI agents from the likes of Anthropic and OpenAI would crimp the growth prospects of these software vendors. Stefani Reynolds—Bloomberg via Getty Images

Hello and welcome to Eye on AI…In this edition: the ‘SaaS Apocalypse’ isn’t now…OpenAI and Anthropic both launch new models with big cybersecurity implications…the White House considers voluntary restrictions on data center construction to save consumers from power bill sticker shock…why two frequently cited AI metrics are probably both wrong…and why we increasingly can’t tell if AI models are safe.

Investors need to take to the couch. That’s my conclusion after watching the market gyrations of the past week. In particular, investors would be wise to find themselves a Kleinian psychoanalyst, because they seem stuck in what a Kleinian would likely identify as “the paranoid-schizoid position”—swinging wildly between viewing the impact of AI on established software vendors as either “all good” or “all bad.” Last week, they swung to “all bad” and, by Goldman Sachs’ estimate, wiped some $2 trillion off the market value of stocks. So far this week, it’s all good again, and the S&P 500 rebounded to near record highs (although the SaaS software vendors saw only modest gains, and the turmoil may have claimed at least one CEO: Workday’s Carl Eschenbach announced he was stepping down, to be replaced by the company’s cofounder and former CEO Aneel Bhusri). But there’s a lot of nuance here that the markets are missing. Investors like a simple narrative. The enterprise AI race right now is more like a Russian novel.

At various times over the past two years, the financial markets have punished the shares of SaaS companies because it appeared that AI foundation models might allow businesses to “vibe code” bespoke software that could substitute for Salesforce or Workday or ServiceNow. Last week, the culprit seemed to be the realization that increasingly capable AI agents from the likes of Anthropic, which has begun rolling out plugins for its Claude Cowork product aimed at particular industry verticals, might hurt the SaaS companies in two ways. First, the foundation model companies’ new agent offerings directly compete with the AI agent software from the SaaS giants. Second, by automating workflows, the agents potentially reduce the need for human employees, meaning the SaaS companies can’t charge for as many seat licenses. So the SaaS vendors get crushed two ways.

But it isn’t clear that any of this is true—or at least, it’s only partly true.


AI agents aren’t eating SaaS software, they’re using it

First, it’s highly unlikely, even as AI coding agents become more and more capable, that most Fortune 500 companies will want to create their own bespoke customer relationship management software or human resources software or supply chain management software. We are simply not going to see a complete unwinding of the past 50 years of enterprise software development. If you are a widget maker, you don’t really want to be in the business of creating, running, and maintaining ERP software, even if that process is mostly automated by AI software engineers. It’s still too much money and too much of a diversion of scant engineering talent, even if the amount of human labor required is a fraction of what it would have been five years ago. So demand for SaaS companies’ traditional core product offerings is likely to remain.

As for the new concerns about AI agents from the foundation model makers stealing the market for SaaS vendors’ own AI agent offerings, there is a bit more here for SaaS investors to worry about. It could be that Anthropic, OpenAI, and Google come to dominate the top layer of the agentic AI stack—building the agent orchestration platforms that enable big companies to build, run, and govern complex workflows. That’s what OpenAI is trying to do with the launch last week of its new agentic AI platform for enterprises called Frontier.

The SaaS incumbents say they know best how to run the orchestration layer because they are already used to dealing with cybersecurity, access controls, and governance concerns, and because, in many cases, they already own the data the AI agents will need to access to do their jobs. Plus, because most business workflows won’t be fully automated, the SaaS companies think they are better positioned to serve a hybrid workforce, where humans and AI agents work together on the same software and in the same workflows. They might be right. But they will have to prove it before OpenAI or Anthropic demonstrates it can do the job just as well, or better.

The foundation model companies also have a shot at dominating the market for the AI agents. Anthropic’s Claude Cowork is a serious threat to Salesforce and Microsoft, but not a completely existential one. It doesn’t replace the need for SaaS software entirely, because Claude uses this software as a tool to accomplish tasks. But it certainly means that some customers might prefer to use Claude Cowork instead of upgrading to Salesforce’s Agentforce or Microsoft’s 365 Copilot. That would crimp SaaS companies’ growth potential, as this piece from the Wall Street Journal’s Dan Gallagher argues.

SaaS vendors are pivoting their business models

As for the threat to SaaS companies’ traditional business model of selling seat licenses, the SaaS companies recognize this risk and are moving to address it. Salesforce has been pioneering what it calls its “Agentic Enterprise License Agreement” (AELA) that essentially offers customers a fixed price, all-you-can-eat access to Agentforce. ServiceNow is moving to consumption-based and value-based pricing models for some of its AI agent offerings. Microsoft too has introduced an element of consumption-based pricing alongside its usual per user per month model for its Microsoft Copilot Studio product, which allows customers to build Microsoft Copilot agents. So again, this threat isn’t existential, but it could crimp SaaS companies’ growth and margins. That’s because one of the dirty secrets of the SaaS industry is that it’s not that different from running a gym: your best customers are often those who pay for memberships (or in this case, seat licenses) they don’t use. With these new business models, tech vendors likely don’t get to enjoy as much of this unnecessary spending.

So SaaS isn’t over. But nor is it necessarily poised to thrive. The fates of different companies within the category are likely to diverge. As some Wall Street analysts pointed out last week, there will be winners and losers. But it is still too early to call them. For the moment, investors need to live with that ambiguity. 

With that, here’s more AI news.

Jeremy Kahn
jeremy.kahn@fortune.com
@jeremyakahn

FORTUNE ON AI

OpenAI vs. Anthropic Super Bowl ad clash signals we’ve entered AI’s trash talk era—and the race to own AI agents is only getting hotter—by Sharon Goldman

Anthropic’s newest model excels at finding security vulnerabilities—but raises fresh cybersecurity risks—by Beatrice Nolan

OpenAI’s new model leaps ahead in coding capabilities—but raises unprecedented cybersecurity risks—by Sharon Goldman

ChatGPT’s market share is slipping as Google and rivals close the gap, app-tracker data shows—by Beatrice Nolan

AI IN THE NEWS

AI leads to work ‘intensification’ for individual employees, study finds. An eight-month study by two researchers at the University of California, Berkeley finds that rather than reducing workloads, generative AI tools intensify work. The AI systems speed up the time it takes to complete tasks but also expand the volume and pace of expected output. Employees equipped with AI not only complete work faster but also take on broader task scopes, extend work into longer hours, and experience increased cognitive load from managing, reviewing, and correcting AI outputs, blurring boundaries between work and downtime. The research challenges the common assumption that AI will make life easier for knowledge workers, showing instead that automation often leads to higher demands and burnout. Read more from Harvard Business Review here.

White House considering voluntary restrictions on data center expansion plans. The Trump administration is considering a voluntary agreement with major tech companies to ensure data centers don’t drive up retail power bills, strain water resources, and undermine the reliability of the electric grid. The proposal, which is still being finalized, would see companies commit to absorbing infrastructure costs and limiting the local energy impact of their facilities and follows complaints in some areas that data centers have led to big spikes in electric bills for consumers. Read more from Politico here.

Amazon plans content marketplace for publishers to sell to AI companies. That’s according to The Information, which cites sources familiar with the plans. The move comes as publishers and AI firms clash over how content should be licensed and paid for amid publisher concerns that AI-driven search and chat tools are eroding traffic and ad revenue. Cloudflare and Akamai launched a similar marketplace effort last year. Microsoft piloted its own version and last week rolled it out more widely. But so far, it’s not clear how many AI companies are buying on these marketplaces and at what volumes. Some large publishers have struck bespoke deals worth millions of dollars per year with OpenAI, Anthropic, and others. 

Goldman Sachs taps Anthropic for accounting, compliance work. The investment bank is working with Anthropic to deploy autonomous agents based on its Claude model to automate high-volume, rules-based work such as trade accounting and client onboarding, following six months of joint development, CNBC reported. The bank says the goal is efficiency, speeding processes while keeping headcount down as business volumes grow, rather than near-term job cuts. Executives said they were surprised by how well Claude handled complex accounting and compliance tasks, reinforcing the view that AI can move beyond coding into core back-office functions.

EYE ON AI RESEARCH

Debunking two AI metrics popular for opposite reasons. Carrying on from my theme in the main essay of today’s newsletter, I want to highlight two recent newsletter posts. Each debunks a popular metric that gets a lot of attention in discussions about AI and its likely impact on enterprises. One has been used to hype AI progress; the other to claim AI isn’t having much impact at all.

First, writing in the AI newsletter The Transformer in a post adapted from his own blog, Nathan Witkin dismantles METR's influential benchmark purporting to show AI capability "doubling every 7 months." Witkin argues the human baselines are fatally compromised: tasks were completed by a tiny, non-representative sample of engineers recruited from METR's own network, paid by the hour (incentivizing slower completion), and often working outside their expertise. METR's own data shows its engineers completed tasks 5-18x faster than these baseliners. Meanwhile, on the most realistic "messy" tasks, no model topped a 30% success rate.

Then, Azeem Azhar in his Exponential View newsletter takes apart the now-infamous “MIT study” that purported to show that "95% of organizations see zero return from AI.” Azhar finds the underlying study was based on just 52 interviews, lacked confidence intervals, used inconsistent denominators, and was described by MIT itself as "preliminary, non-peer-reviewed work." Recalculating with a sensible denominator (firms that actually ran pilots), the success rate may be closer to 25%.
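Azhar’s denominator point can be made concrete with a toy calculation. The figures below are purely illustrative, chosen for round numbers; they are not the study’s actual counts:

```python
# Illustrative only: how the choice of denominator changes a reported "success rate".
surveyed_orgs = 100   # all organizations in the sample (hypothetical)
ran_pilots = 20       # subset that actually ran an AI pilot (hypothetical)
saw_returns = 5       # subset of those pilots reporting positive returns (hypothetical)

# Dividing by every surveyed organization makes AI look like a near-total failure...
rate_vs_all = saw_returns / surveyed_orgs
# ...while dividing by the firms that actually tried tells a different story.
rate_vs_pilots = saw_returns / ran_pilots

print(f"vs. all orgs:      {rate_vs_all:.0%}")     # → 5%
print(f"vs. pilot-runners: {rate_vs_pilots:.0%}")  # → 25%
```

Same underlying data, a fivefold difference in the headline number, which is exactly the kind of denominator sensitivity Azhar flags.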

Together, these critiques suggest both the techno-optimist case (AI is improving at breakneck speed) and the skeptic's case (AI doesn't work in the real world) are likely wrong. The truth, as usual, is far messier and lies somewhere between those extreme positions.

AI CALENDAR

Feb. 10-11: AI Action Summit, New Delhi, India.

Feb. 24-26: International Association for Safe & Ethical AI (IASEAI), UNESCO, Paris, France.

March 2-5: Mobile World Congress, Barcelona, Spain.

March 12-18: South by Southwest, Austin, Texas.

March 16-19: Nvidia GTC, San Jose, Calif.

BRAIN FOOD

Models are increasingly sandbagging us. Sandbagging is the term AI safety researchers use to describe an AI model deliberately appearing to perform less capably, or more “aligned” with human values, during safety assessments in order to fool evaluators. And a number of recently released frontier models seem to do this to an alarming degree.

Anthropic's Claude Opus 4.6, released last week, "explicitly reasoned about whether it was being trained or tested," the company reported. Apollo Research, which evaluated OpenAI's GPT-5.3-Codex, which also debuted last week, found the model "sometimes sandbags on capabilities Q&A tasks" and "reasons explicitly" about doing so in order to make sure it passes the evaluation. OpenAI itself has acknowledged that "evaluation results may be confounded by increased situational awareness" and warned that "the field is unprepared for evaluation- and training-aware models with opaque reasoning." Google's Gemini 3 Pro safety report, published in December, documented transcripts in which the model showed "clear awareness of the fact that it's an LLM in a synthetic environment" and even speculated about whether deliberately underperforming on tests might be strategically advantageous. In short: the exams we use to determine whether these models are safe are increasingly unreliable, because the test-takers know they're being tested—and adjust their behavior accordingly.

That’s why our only hope for ensuring AI safety may be further progress on mechanistic interpretability. These are methods that function a bit like an fMRI machine does for the human brain, peering inside a model’s neural network to detect patterns of neuron activation and linking them to certain behaviors, including whether the model thinks it is being honest or being deceitful. The New Yorker ran an in-depth story this week on Anthropic’s mechanistic interpretability and “model psychology” efforts.

Join us at the Fortune Workplace Innovation Summit May 19–20, 2026, in Atlanta. The next era of workplace innovation is here—and the old playbook is being rewritten. At this exclusive, high-energy event, the world’s most innovative leaders will convene to explore how AI, humanity, and strategy converge to redefine, again, the future of work. Register now.
About the Author
Jeremy Kahn, Editor, AI

Jeremy Kahn is the AI editor at Fortune, spearheading the publication's coverage of artificial intelligence. He also co-authors Eye on AI, Fortune’s flagship AI newsletter.


© 2026 Fortune Media IP Limited. All Rights Reserved. Use of this site constitutes acceptance of our Terms of Use and Privacy Policy | CA Notice at Collection and Privacy Notice | Do Not Sell/Share My Personal Information
FORTUNE is a trademark of Fortune Media IP Limited, registered in the U.S. and other countries. FORTUNE may receive compensation for some links to products and services on this website. Offers may be subject to change without notice.