The AI jobs apocalypse is not yet upon us, according to new data

By Beatrice Nolan, Tech Reporter

Beatrice Nolan is a tech reporter on Fortune’s AI team, covering artificial intelligence and emerging technologies and their impact on work, industry, and culture. She's based in Fortune's London office and holds a bachelor’s degree in English from the University of York. You can reach her securely via Signal at beatricenolan.08

New research shows AI hasn’t upended U.S. jobs yet.

Hello and welcome to Eye on AI. In this edition: No AI Jobpocalypse, plus early signs of life for entry-level jobs…OpenAI launches Sora 2…Meta plans to use AI chatbot conversations to personalize ads…and more companies are disclosing AI-related risks.

Hi, Beatrice Nolan here, filling in for AI reporter Sharon Goldman, who is out today. For all the corporate hype and Silicon Valley hand-wringing, new research suggests that the U.S. jobs market hasn’t yet experienced the AI apocalypse some have warned about.

In a new report, researchers from Yale’s Budget Lab and the Brookings Institution said they had found no evidence of any “discernible disruption” to jobs since the launch of OpenAI’s ChatGPT in November 2022. The study found that most of the ongoing shifts in the U.S. occupational mix, a measure of the types of jobs people hold, were already underway in 2021, and recent changes don’t appear any more dramatic.

“While the occupational mix is changing more quickly than it has in the past, it is not a large difference and predates the widespread introduction of AI in the workforce,” the researchers wrote in the report. “Currently, measures of exposure, automation, and augmentation show no sign of being related to changes in employment or unemployment.”

Industries with higher AI exposure, such as Information, Financial Activities, and Professional and Business Services, have seen some downward shifts, but these trends largely began before ChatGPT’s launch.

The conclusion isn’t altogether shocking, although it flies in the face of some of the AI doomsayers’ more dramatic claims. Historically, major workplace disruptions have unfolded over decades, not months or years. Computers, for example, didn’t become common in offices until nearly 10 years after their debut, and it was even longer before they reshaped workflows. If AI ends up transforming the labor market as dramatically as computers did—or more so—it’s reasonable to expect that broad effects will take longer than three years to appear.

Some executives have also told me they are taking a “wait and see” approach to hiring while they assess whether the tech can really deliver on its productivity promises. This approach can slow hiring and make the labor market feel sluggish, but it doesn’t necessarily mean workers are being automated out of their jobs.

While anxiety over the effects of AI on today’s labor market may be widespread, the new data suggests that this anxiety is still largely speculative. 

Entry-level hiring woes

The real hiring pain has been felt by college grads and entry-level workers.

There’s no denying that AI is increasingly capable of the tasks typically assigned to this class of workers, and companies have increasingly been saying the quiet part out loud when it comes to junior roles. But claims that AI is keeping recent graduates out of work aren’t entirely supported by the new data. When researchers compared jobless rates for recent graduates to those with more experience, new grads seemed to be having a slightly tougher time landing roles, but the gap wasn’t big enough to suggest technology is the main factor.

The researchers found a small increase in occupational dissimilarity compared to older graduates, which could reflect early AI effects but also could just as easily be attributed to labor market trends, including employers’ and job-seekers’ reactions to noise about AI replacing workers. The report suggests that entry-level struggles are more likely to be part of broader labor market dynamics rather than a direct result of AI adoption.

Recently, there have also been anecdotal but promising signs of life in the entry-level job market. For example, Shopify and Cloudflare are both increasing their intern intake this year, with Cloudflare calling AI tools a way “to multiply how new hires can contribute to a team” rather than a replacement for the new hires themselves. Younger workers are typically more receptive, more eager to experiment, and more creative when it comes to using emerging technology, which could give companies that hire them an edge. As U.K.-based programmer Simon Willison put it: “An intern armed with AI tools can produce value a whole lot faster than interns in previous years.”

The researchers cautioned that the analysis isn’t predictive, and they plan to keep updating their findings. They also warned that the sample size is small.

Just because AI hasn’t significantly impacted the labor market yet doesn’t mean it won’t in the future. Some recent assessments, such as OpenAI’s new GDPval benchmark, show leading AI models performing professional tasks at or above human-expert level in roughly half of evaluated cases, depending on the sector. As AI tools improve and companies get better at integrating them, the tech could have a more direct impact on the workforce.

But should we be thinking of AI as just the next computer, or as a new industrial revolution? At least for now, the jury’s still out.

With that, here’s the rest of the AI news.

Beatrice Nolan
bea.nolan@fortune.com
@beafreyanolan

FORTUNE ON AI

We’re not in an ‘AI winter’—but here’s how to survive a cold snap —by Sharon Goldman

California governor signs landmark AI safety law, forcing major tech companies to disclose protocols and protect whistleblowers —by Beatrice Nolan

How OpenAI and Stripe’s latest move could blow up online shopping as we know it —by Sharon Goldman

Meta is exploiting the ‘illusion of privacy’ to sell you ads based on chatbot conversations, top AI ethics expert says—and you can’t opt out —by Eva Roytburg

AI IN THE NEWS

Meta plans to use AI chatbot conversations to personalize ads. Meta will begin using chats with its AI assistant to shape ads and content recommendations across Facebook and Instagram. The company announced the update to its recommendation system on Wednesday, adding it will take effect on Dec. 16, with user notifications beginning Oct. 7. The company told the Wall Street Journal that it will not use conversations about religion, politics, sexual orientation, health, or race and ethnicity to personalize ads or content. The move will tie Meta’s massive investments in generative AI into its core ad business. Users can’t opt out, but those who don’t use Meta AI won’t be affected, according to the Journal.

Mira Murati’s Thinking Machines Lab launches its first product. Thinking Machines, an AI lab led by former OpenAI CTO Mira Murati, has launched a tool that automates the creation of custom frontier AI models. Murati told Wired the tool, called Tinker, "will help empower researchers and developers to experiment with models and will make frontier capabilities much more accessible to all people." The team believes that giving users the tools to fine-tune frontier models will demystify the process of model tuning, make advanced AI accessible beyond big labs, and help to unlock specialized capabilities in areas like math, law, or medicine. The startup raised $2 billion in seed funding in July 2025, before releasing any products, and is made up of a team of top researchers including John Schulman, who cofounded OpenAI and led the creation of ChatGPT. Read more from Wired.

OpenAI launches a new version of Sora. OpenAI has launched Sora 2, its next-generation AI video and audio model, along with a companion app that lets users create, share, and remix AI-generated videos. The new model improves photorealistic motion, generates speech, and introduces “cameos,” allowing users to insert themselves into videos via a short verification recording. However, according to the Wall Street Journal, the new video generator requires copyright holders to opt out. This means that movie studios and other IP owners must actively request that OpenAI exclude their copyrighted material from videos generated by the new version of Sora. A later report from 404 Media found that users are able to generate strange and often offensive content featuring copyrighted characters like Pikachu, SpongeBob SquarePants, and figures from The Simpsons. Read more from 404 Media here.

A new startup is scooping up top AI researchers. Periodic Labs, a new San Francisco startup founded by ChatGPT co-creator Liam Fedus and former DeepMind scientist Ekin Dogus Cubuk, has recruited a string of top AI researchers from OpenAI, Google DeepMind, and Meta, according to the New York Times. More than 20 researchers, including Rishabh Agarwal, who was poached by Meta from DeepMind just a few months ago, have left their work at major AI companies to join the startup focused on building AI that accelerates real-world scientific discovery in physics, chemistry, and materials science. It's backed by $300 million in funding and plans to use robots to run large-scale lab experiments. Read more from the New York Times.

AI CALENDAR

Oct. 6-10: World AI Week, Amsterdam.

Oct. 21-22: TedAI San Francisco.

Nov. 10-13: Web Summit, Lisbon. 

Nov. 26-27: World AI Congress, London.

Dec. 2-7: NeurIPS, San Diego.

Dec. 8-9: Fortune Brainstorm AI San Francisco. Apply to attend here.

EYE ON AI NUMBERS

72% 

That's the percentage of S&P 500 companies that have disclosed an AI-related risk this year, according to The Conference Board, a nonprofit think tank and business membership organization, and ESGAUGE, a data analytics firm. Public company disclosure of AI as a material risk has surged in the past two years, with the share of S&P 500 companies citing an AI-related risk jumping from 12% in 2023 to 72% this year.

Reputational risk is the most frequently cited concern around AI, disclosed by 38% of companies in 2025. Cybersecurity was the second most cited, at 20% of firms in both 2024 and 2025. While all sectors are disclosing risks, financial, health care, and industrials have seen the sharpest rise. This may be because financial and health care companies face regulatory risks tied to sensitive data and fairness, while industrials are largely scaling automation and robotics.

"The rise in AI-related risk disclosures reflects the rapid mainstreaming of AI across corporate functions in recent years, as companies embed it more deeply into areas such as supply chains, customer engagement, and product development," Andrew Jones, principal researcher at The Conference Board, told Fortune. "With adoption expanding, firms have increased their internal focus on governance, compliance, and operational considerations, with boards, risk committees, and legal teams evaluating potential challenges from data privacy and bias to regulatory uncertainty and liability." 

The dramatic surge in disclosures does signal that more companies are seeing AI integration as a material risk that needs to be actively managed and communicated to investors. The findings were based on Form 10-K filings from S&P 500 companies available through Aug. 15, 2025.

Fortune Global Forum returns Oct. 26–27, 2025 in Riyadh. CEOs and global leaders will gather for a dynamic, invitation-only event shaping the future of business. Apply for an invitation.