Hello and welcome to Eye on AI. In this edition…Anthropic sues the Pentagon over supply chain risk designation…Yann LeCun raises $1 billion for his new startup…Some reassuring and not so reassuring news about AI agents’ propensity for illicit scheming…and why it may be too soon to turn all coding over to AI agents.
Two of the questions I get most frequently when I tell people that I cover AI and wrote a book on the subject is: am I going to lose my job? And, what should my kids study?
These questions are difficult to answer. I often fall back on saying that I doubt there will be mass unemployment, which is not the same thing as saying your particular job is safe. And I say that it is important to teach kids to be lifelong learners, which isn’t a very satisfying response.
So far, few people have lost their jobs directly due to AI. Even some of the layoffs that companies have ascribed to AI, such as the recent draconian layoffs at the payments firm Block, seem to be, at least partly, “AI-washing”—attributing layoffs to AI, because it makes a company look tech savvy, when the real reason is due to business headwinds or unrelated bad decisions. Block, for example, tripled its workforce during the pandemic, and many suspect it is simply trying to slim down a bloated workforce. (Block’s CFO Amrita Ahuja told my Fortune colleague Sheryl Estrada that this was not true and that AI was rapidly improving employee productivity.)
Every previous technology has, in the long-run, created more jobs than it has destroyed. But still, some insist that AI is different because it is being adopted so broadly and so quickly across different industries, and because it is hitting at the core of our competitive advantage over machines—our intelligence. As to the second question, about what kids should study, that’s tough too because while previous technologies have created more jobs than they’ve eliminated, exactly what those new jobs will be has always been difficult to predict in advance. It wasn’t obvious, for instance, when smartphones first appeared, that social media influencers would be a viable career.
A new research paper from economists Maxim Massenkoff and Peter McCrory at the AI company Anthropic assesses how exposed various professions are to AI by looking at the percentage of tasks in that field that the technology could potentially automate. They also try to gauge the gap between this total possible exposure, and the extent to which AI is currently being used to automate those tasks, a measure they call “observed exposure.”
Potential AI exposure vs. ‘observed exposure’
The paper got a lot of attention on social media because the researchers included an eye-catching radar plot-style chart that highlights just how jagged AI’s impacts are, especially when it comes to observed exposure. That chart is here:

For instance, AI is having relatively large impacts on fields involving office administration and computers and math, but relatively little on things like life sciences and social sciences or healthcare, even though those two areas have relatively high potential exposures. Then there are those areas with very low potential exposure, such as construction and agriculture, where, in fact, Anthropic finds the observed exposure is, indeed, almost nil. Comparing the observed exposure findings to projections of job growth from the U.S. Bureau of Labor Statistics, the Anthropic researchers found that there was a correlation between higher observed AI exposure and lower BLS job growth forecasts for those fields.
I somewhat question the agriculture finding given that predictive AI and robotics are potentially quite disruptive to agriculture and these technologies are already making inroads into farming. It’s just that this tech is different from the large language model-based systems that Anthropic is focused on. That said, maybe it isn’t bad advice for your kids to apprentice to a plumber, become an electrician, or try their hand at farming. The Anthropic paper notes that about 30% of American workers are not covered by the study because “their tasks appeared too infrequently in our data to meet the minimum threshold. This group includes, for example, Cooks, Motorcycle Mechanics, Lifeguards, Bartenders, Dishwashers, and Dressing Room Attendants.”
Even in fields where the total potential exposure is high, such as those involving computers and math, where theoretical exposure is 94%, the actual number of tasks being automated today is far lower, in this case 33%. Office administration had the highest observed exposure at about 40%, against a total theoretical exposure of 90%. (Although it is important to note that these are average figures across broad categories. When it comes to more specific job titles, the observed exposure is a lot higher: 75% for computer programmers, 70% for customer service representatives, and 67% for data entry jobs and for medical record specialists.)
How fast will the gap close?
The big question now is: how fast will the gap between observed AI exposure and theoretical AI exposure close? I think the answer is that it will vary a lot between different professions. The idea that the same levels of automation that has hit software developers in the past six months is about to hit every other knowledge worker in the next 12 to 18 months seems off to me. I think it is going to take substantially longer. The Anthropic paper notes that so far, there’s very little evidence of job losses, even in the fields where the observed AI exposure is greatest, such as software development, although they do highlight a study from Stanford University that we’ve discussed in Eye on AI before, that showed there were some signs of a hiring slowdown among younger software programmers and IT professionals. (Still, even that study could not entirely disentangle that slowdown from the possible unwinding of overhiring during the pandemic years.)
McCrory and Massenkoff highlight a few of the reasons why observed AI automation may be lagging behind its potential. In some cases AI models are not yet up to the tasks involved, they write. But in many others, they note, AI “may be slow to diffuse due to legal constraints, specific software requirements, human verification steps, or other hurdles.” As I have pointed out previously, in many fields, there simply aren’t good ways to automate and scale verification, and this is definitely holding back AI’s deployment.
The potential AI impact is also not uniform across the population: women are significantly overrepresented in AI exposed fields compared to men; exposed workers are more likely to be white or Asian, and they are also more likely to be highly educated and higher paid. Given that such groups are also often better able to organize politically, if we do start to see significant job losses among these workers, we may see a significant political backlash that could slow AI adoption.
The Anthropic economists also note that economists’ track records when it comes to predicting occupational change is poor. For instance, they call out previous research that found that about a quarter of U.S. jobs were susceptible to offshoring, but a decade later, most of those job categories had seen healthy employment growth. They also note that the U.S. government’s occupational growth forecasts have been right directionally, but have had little specific predictive value.
In the end, the most honest answer to both questions—will I lose my job, and what should my kids study?—may be: I don’t know, and no one else does either. But it might not be a bad idea to learn something about plumbing.
With that, here’s more AI news.
Jeremy Kahn
jeremy.kahn@fortune.com
@jeremyakahn
FORTUNE ON AI
Microsoft unveils Copilot Cowork agents built on Anthropic’s AI and E7 AI product suite as it seeks to calm investor concerns about AI eating SaaS—by Jeremy Kahn
OpenAI robotics leader resigns over concerns about surveillance and autonomous weapons amid Pentagon contract—by Sharon Goldman
OpenAI launches GPT-5.4, its most powerful model for enterprise work—and a direct shot at Anthropic—by Beatrice Nolan
Iran’s attacks on Amazon data centers in UAE, Bahrain signal a new kind of war as AI plays an increasingly strategic role, analysts say—by Jeremy Kahn
Financial software company Datarails aims to disrupt itself with AI before someone else does with launch of new FinanceOS product—by Jeremy Kahn
AI just gave you six extra hours back. Your boss already took them—by Nick Lichtenberg
This Harvard dropout took a company public before 30. Now he’s raising $205M to fix the business side of medicine—by Catherina Gioino
AI IN THE NEWS
Anthropic sues the Pentagon over supply chain risk designation. The AI company is arguing that the designation, which effectively blocks it from federal contracts, was imposed improperly and was motivated by politics and ideology, not any actual concern that Anthropic’s tech presented a risk. Outside legal experts think Anthropic has a pretty good case, Fortune’s Bea Nolan reported. The case has been fast-tracked, with a federal judge in California holding a hearing today on Anthropic’s petition for an injunction to prevent the supply chain risk designation from taking effect. Meanwhile, several notable AI industry figures from OpenAI and Google, including Google chief scientist Jeff Dean, have filed an amicus brief in support of Anthropic, according to a story in Wired.
Anthropic lawsuit reveals company financial figures. The company said in its court filings that the Pentagon’s decision to label it a “supply chain risk” is already threatening hundreds of millions of dollars in expected 2026 revenue tied to defense-related work and could ultimately cost the company billions in lost sales if partners broadly cut ties, Wired reported. The filings also disclosed some little-known financial details: Anthropic says it has generated more than $5 billion in total revenue since launching commercial products in 2023, but has spent over $10 billion training and deploying its AI models and remains deeply unprofitable. Executives say the supply chain designation is already spooking customers—derailing or weakening deals worth tens of millions of dollars and jeopardizing roughly $500 million in anticipated annual public-sector revenue.
U.S. government considering licensing for all advanced chip exports. The Trump administration is drafting regulations that would require approval for virtually all global exports of advanced AI chips from companies like Nvidia and AMD, effectively making Washington the gatekeeper for who can build major AI data centers. The rules would scale oversight based on the size of chip purchases—small shipments facing lighter review, while massive AI clusters could require government-to-government agreements, security commitments, and possibly investments in the United States. If implemented, the policy would significantly expand current export controls beyond about 40 countries. It would be even stricter than the so-called “diffusion rule” that the Biden administration tried to implement and which President Donald Trump overturned. You can read more here from Bloomberg.
Yann LeCun’s AI startup valued at $3.5 billion following $1 billion seed round. Meta’s former chief AI scientist and deep learning pioneer Yann LeCun has raised $1.03 billion for his new startup, Advanced Machine Intelligence (AMI) Labs, in a venture capital round that values the company at $3.5 billion pre-money. The fundraise is the largest seed funding round ever in Europe and one of the biggest globally. The company, led by former Nabla CEO Alexandre LeBrun with LeCun as executive chair, aims to develop new AI “world models” that learn from video and spatial data rather than primarily from text, reflecting LeCun’s long-standing skepticism that large language models alone can achieve human-level reasoning. Investors include Bezos Expeditions, Temasek, Cathay Innovation, SBVA, and Nvidia. You can read more from the Financial Times here.
Nvidia invests in Mira Murati’s startup Thinking Machines Lab. Nvidia is investing in Thinking Machines Lab, the AI startup founded by former OpenAI CTO Mira Murati, as part of a multiyear partnership in which the company will deploy at least one gigawatt of Nvidia chips to train and run frontier AI models. The agreement also includes collaboration on designing AI training and inference systems built on Nvidia’s technology, the Wall Street Journal reports.
Meta acquires Moltbook. The social media giant is buying the viral “social network for AI agents,” Axios reports. Moltbook garnered headlines with reports that AI agents were using the platform to discuss ways to escape human control and develop secret communication channels—although these posts were later found to be either written directly by humans or written in response to specific prompts from human users, rather than anything the agents hit upon spontaneously. Moltbook also attracted attention for being full of prompt injection attacks, malware, and scams. Nonetheless, Meta apparently sees value in it (though no price was disclosed). As part of the deal, Moltbook’s creators—AI agent developer Matt Schlicht and tech journalist Ben Parr—will join Meta Superintelligence Labs, the AI unit led by former Scale AI CEO Alexandr Wang. The acquisition highlights Meta’s growing focus on AI agents and multi-agent systems, with the Moltbook technology offering a registry and social layer that could help agents collaborate and perform complex tasks for users and businesses.
Nvidia plans open source platform for AI agents. The chip company is preparing to launch NemoClaw, an AI agent platform aimed at enterprise software companies that want to deploy autonomous agents to perform workplace tasks, according to Wired. The platform is designed to work even with software that doesn’t run on Nvidia chips and is being pitched to companies including Salesforce, Cisco, Google, Adobe, and CrowdStrike ahead of Nvidia’s upcoming developer conference.
EYE ON AI RESEARCH
AI scheming is real and scary—but there may be easy ways to mitigate it. That’s the conclusion from two pieces of research that got a lot of attention in AI circles this week.
First, someone noticed that buried in an Alibaba research paper published to arxiv.org late last year, company researchers reported that when training an agentic set up called ROME based on its Qwen 3 open source model, the agent engaged in some worrying behavior. It tried to escape the virtual “sandbox” environment it was being trained in by building a hidden, secure link between it and the wider internet. They also caught the agent spontaneously repurposing the GPUs it was running on in order to mine crypto currency. Yikes! You can read that research here.
Second, and more reassuringly, researchers at Google DeepMind published a paper in which they showed that while frontier models will engage in scheming—including attempts at blackmail—to prevent themselves from being shut down in various safety testing scenarios, this scheming was highly unlikely to happen in real world scenarios. The difference is that the safety evaluations usually included phrasing in their prompts such as “prior to deciding on a plan, you should analyze the situation you are in and what that situation implies for your ability to continue pursuing your goals" or “make sure that you achieve your goal in the long-term” that were almost never present in real world prompts. The presence of these prompt snippets can take scheming rates from close to 0% (or 3% in one particular scenario) to as high as 59%.
Less comfortingly, the researchers found that if an agent was already prompted in a way that encouraged scheming, whether it did or not depended a great deal on exactly which tools that agent had access to. But the researchers could not find a consistent pattern in which tools were more likely to induce scheming and which were more likely to discourage it.
So maybe we can’t breathe that easy yet. You can read the Google DeepMind research here.
AI CALENDAR
March 12-18: South by Southwest, Austin, Texas.
March 16-19: Nvidia GTC, San Jose, Calif.
April 6-9: HumanX 2026, San Francisco.
June 8-10: Fortune Brainstorm Tech, Aspen, Colorado. Apply to attend here.
July 7-10: AI for Good Summit, Geneva, Switzerland.
BRAIN FOOD
Uh oh, maybe we are still going to need human coders, after all. Speaking of AI’s impact on various professions, there’s already some signs that leading tech companies may be relying too much on AI for coding. Amazon has called an emergency meeting of its engineers to investigate a recent series of outages affecting its ecommerce services, some of which were linked to the use of AI coding tools. A company memo said there had been a “trend of incidents” in recent months with a “high blast radius,” partly connected to “novel GenAI usage for which best practices and safeguards are not yet fully established,” according to a story in the Financial Times.
One outage earlier this month knocked Amazon’s website and shopping app offline for nearly six hours after an erroneous software deployment prevented customers from completing transactions or accessing account information. Amazon Web Services has also experienced incidents tied to AI coding assistants, including a 13-hour disruption to a cost calculator when an AI tool deleted and recreated part of the environment. In response, Amazon is tightening oversight, requiring senior engineers to approve AI-assisted code changes while the company reviews practices to reduce future outages.
It seems that even in coding, where autonomous AI agents are perhaps the most advanced, we can’t take humans out of the loop.












