Hello and welcome to Eye on AI. In this edition: Could a recession actually be good for AI adoption?…Meta’s Llama 4 is released amid controversy…ByteDance shows off a more efficient way to run big LLMs…Everyone is talking about AI 2027.
U.S. President Donald Trump’s tariffs have precipitated a global stock market rout and threaten a global recession. The tariffs hit at a time when investor enthusiasm for AI was already showing signs of flagging—Nvidia’s stock was down 20% this year, Google-parent Alphabet was down 17%, Salesforce, which has gone all in on AI agents, was down 14%, and Microsoft was down 8%, before Trump announced his tariffs. All have fallen considerably further since. Now some are wondering if the tariffs spell the end of the road for the AI boom.
There’s plenty of reporting to suggest as much. My Fortune colleagues Jessica Mathews and Allie Garfinkle report that venture capitalists are telling their startups to brace for trouble as potential customers pull back on IT spending, while my colleague Sharon Goldman reports that the tariffs, while exempting semiconductors for the moment, are likely to hamper the build-out of AI data centers and the energy infrastructure to power them. The Information has also reported that companies may slash their spending on AI and AI-enhanced enterprise software as they look to cut costs.
It is certainly true that the investment case for AI has just gotten considerably more difficult. But is it true that a recession would likely set back AI adoption? Or is it possible that the need to cut costs will actually push companies to adopt AI even faster? To find out, I took a look at what economic research has to say about the effect previous recessions had on automation and digital technologies.
After an initial halt, the Great Depression catapulted tech forward
That research is intriguing, but not conclusive. Most of the economic literature indicates that, at least for U.S. companies, a recession might speed AI adoption. In a 2016 paper, economist Shingo Watanabe examined technological progress during the Great Depression and found that while investment in technology initially stalled, there was a big rebound after 1933. In fact, adoption of new technology (primarily factory, mining, and agricultural automation) moved so rapidly that the 1930s outpaced every other decade before World War II, including the Roaring Twenties that preceded it.
But what about more recently? Here, too, there is strong evidence to support the idea that a recession will speed AI adoption. Economists, including Nir Jaimovich and Henry Siu, have found that the “jobless recoveries” the U.S. experienced after the recessions that bracketed the 1980s (in 1981-82 and 1990-91), as well as after the 2001 recession, can be blamed in large part on businesses adopting new computer technologies and manufacturing automation during those downturns. Even when economic growth recovered, businesses no longer needed to hire as many workers as before. (The offshoring of jobs during those recessions was also a factor.)
The Great Recession and COVID-19 boosted automation
Examining the Great Recession of 2008-2009, economists Brad Hershbein and Lisa Kahn (no relation) found that the skills employers sought shifted during the downturn, with much greater demand for workers with software and coding skills after the recession. They concluded this was an indication that companies had adopted digital technologies and process automation during the recession. The University of Pennsylvania’s Alexandr Kopytov and Nikolai Roussanov and Cornell University’s Mathieu Taschereau-Dumouchel created an economic model showing how the recession led corporations to speed up the pace of technological adoption, particularly automation.
Perhaps the most profound accelerant of tech adoption was the sudden, sharp recession caused by the COVID-19 pandemic. The OECD found that firms around the world, including many companies that had previously been laggards when it came to tech, rushed to deploy digital technologies—including, of course, software to support remote work, but also cloud computing, e-commerce software, and business process automation.
Evidence from outside the U.S. is less clear, however. A 2021 study of how German firms dealt with the 2008 financial crisis found that many cut back on IT spending, innovation, and R&D, and other studies have suggested something similar elsewhere in Europe. More rigid labor laws may be one reason: it is harder to lay off workers in Europe, which may make it more difficult for businesses to pursue automation strategies. In East Asia, on the other hand, where there are fewer labor protections, studies have shown that recessions since the 1990s have resulted in falling economic output, falling labor participation, and falling productivity, indications that companies are not replacing labor with new technology.
Still, there’s ample reason to believe that a tariff-induced recession could be a boon to AI adoption, particularly in the U.S. So while the frothy valuations of AI companies may be coming back down to Earth, the AI revolution may not be ending. It may only just be getting going.
With that, here’s the rest of this week’s AI news.
Jeremy Kahn
jeremy.kahn@fortune.com
@jeremyakahn
Before we get to the news, if you’re interested in learning more about how AI will impact your business, the economy, and our societies (and given that you’re reading this newsletter, you probably are), please consider joining me at the Fortune Brainstorm AI London 2025 conference. The conference is being held May 6–7 at the Rosewood Hotel in London. Confirmed speakers include Mastercard chief product officer Jorn Lambert, eBay chief AI officer Nitzan Mekel, Sequoia partner Shaun Maguire, noted tech analyst Benedict Evans, and many more. I’ll be there, of course. I hope to see you there too. You can apply to attend here.
And if I miss you in London, why not consider joining me in Singapore on July 22–23 for Fortune Brainstorm AI Singapore. You can learn more about that event here.
AI IN THE NEWS
OpenAI and Google take a maximalist stand on U.K. copyright proposal. The British government has proposed allowing AI companies to train on public content unless copyright holders opt out. But that isn’t enough to satisfy the two AI companies, which have told the government that an opt-out regime is technically unfeasible, Politico Europe reports. Their stance increases pressure on the U.K. government, which is still considering more than 11,000 consultation responses while trying to balance the desire to be a center of AI innovation with strong pushback from the country’s prominent creative industries.
Meta releases Llama 4, but debut marred by controversy. The company released its Llama 4 family of multimodal models. It claimed the largest of these, Llama 4 Behemoth, with 2 trillion total parameters, or tunable variables, outperforms OpenAI’s, Anthropic’s, and Google’s latest models “on several STEM benchmarks.” But Behemoth is not yet available to the public. Instead, Meta released two smaller models, Llama 4 Scout and Llama 4 Maverick. The release faced criticism from developers who noted that the benchmark results Meta published for Maverick came from a customized version not available to the public, while other reports surfaced suggesting Meta might have manipulated benchmark testing to make the models’ performance look stronger—something Meta has denied. You can read more on Meta’s blog here, on TechCrunch here, and about the controversy from VentureBeat here.
Runway debuts new video generation model. The AI startup says the new model, Gen-4, enables filmmakers to generate consistent characters and scenes across multiple shots using a single reference image and descriptive prompts, addressing previous challenges in maintaining continuity in AI-generated videos. Read more from The Verge here.
Microsoft updates Copilot with new capabilities. In a release timed to the company’s 50th anniversary celebrations, Microsoft unveiled a significant upgrade to its Copilot AI chatbot, designed to turn it into more of a personal assistant, The Verge reports. Copilot now remembers user preferences, can be personalized (including with a digital avatar), and can take actions across the internet, including booking reservations and making purchases. It can also analyze content on users’ screens, or content fed to it through a mobile phone camera, to offer real-time assistance across applications.
Trump administration moves to accelerate government use of AI. The White House Office of Management and Budget issued two revised memos instructing federal agencies to remove bureaucratic barriers to AI use throughout government. The memos say agencies should appoint Chief AI Officers to act as “change agents” who accelerate AI adoption, and they call on the government to foster “competitive AI marketplace development” through its acquisition of AI software. You can read the memos and a fact sheet about them on the White House website here.
OpenAI in talks to buy CEO Sam Altman’s and Jony Ive’s hardware startup. That’s according to a story in The Information, which cited two people it said had direct knowledge of the discussions. The OpenAI CEO cofounded a company called io Products that has been working with former Apple designer Ive’s design studio on a “non-phone device” that would serve as a new interface between people and voice-enabled AI assistants. OpenAI has discussed buying the company in a deal valued at as much as $500 million, although the two sides have also discussed partnerships that would not involve an acquisition, The Information said.
EYE ON AI RESEARCH
Ping-ponging your way to GPU optimization. Researchers at the Chinese tech giant ByteDance have published a paper on how they run large “mixture-of-experts” (MoE) language models efficiently on the Nvidia GPU hardware available in China: older-generation A100s, plus the H20 and L40S GPUs that Nvidia specifically designed to offer good performance while still complying with U.S. export restrictions. ByteDance’s method, which it calls “MegaScale-Infer,” separates two different processes LLMs use, attention and feed-forward processing, into separate modules run on separate GPUs, each optimized for that process. It then uses what ByteDance calls a “ping-pong pipeline” strategy to shuttle data between the two modules in small batches so that neither set of GPUs sits idle. The researchers report the system achieves up to 1.9 times higher throughput than existing state-of-the-art systems. Many large LLMs, including Meta’s latest Llama 4 Behemoth, are MoE models, in which only a fraction of the model’s parameters are active at any one time, so the method may be of interest to plenty of companies looking to reduce the cost of using such models. You can read the research paper here on the non-peer-reviewed research repository arxiv.org.
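To make the ping-pong idea concrete, here is a minimal, hypothetical Python sketch of a two-stage pipeline in which micro-batches alternate between an attention stage and a feed-forward stage so that neither stage sits idle. The function names (attention_stage, ffn_stage, pingpong_pipeline) and the thread-based stand-ins for GPU pools are illustrative assumptions for exposition, not ByteDance’s MegaScale-Infer code.

```python
# Minimal, hypothetical sketch of a two-stage "ping-pong" pipeline.
# Threads stand in for the two separate GPU pools described above;
# this is illustrative only, not ByteDance's actual implementation.
from concurrent.futures import ThreadPoolExecutor
import time


def attention_stage(micro_batch: str) -> str:
    # Stand-in for the attention computation on the attention GPU pool.
    time.sleep(0.01)  # pretend this is GPU work
    return f"attn({micro_batch})"


def ffn_stage(activations: str) -> str:
    # Stand-in for the mixture-of-experts feed-forward computation
    # on the expert (FFN) GPU pool.
    time.sleep(0.01)  # pretend this is GPU work
    return f"ffn({activations})"


def pingpong_pipeline(micro_batches: list[str]) -> list[str]:
    """Overlap the two stages: while micro-batch i is in the FFN stage,
    micro-batch i+1 runs the attention stage, so neither pool idles."""
    results = []
    with ThreadPoolExecutor(max_workers=1) as attn_pool, \
         ThreadPoolExecutor(max_workers=1) as ffn_pool:
        in_flight_ffn = None
        for mb in micro_batches:
            attn_future = attn_pool.submit(attention_stage, mb)
            if in_flight_ffn is not None:
                # Collect the previous micro-batch's FFN output while
                # the current micro-batch's attention runs concurrently.
                results.append(in_flight_ffn.result())
            in_flight_ffn = ffn_pool.submit(ffn_stage, attn_future.result())
        if in_flight_ffn is not None:
            results.append(in_flight_ffn.result())
    return results


if __name__ == "__main__":
    print(pingpong_pipeline([f"mb{i}" for i in range(4)]))
    # -> ['ffn(attn(mb0))', 'ffn(attn(mb1))', 'ffn(attn(mb2))', 'ffn(attn(mb3))']
```

In a real deployment, each stage would run on GPUs suited to its workload, and micro-batch sizes would presumably be tuned so the two stages take roughly equal time, which is what keeps a pipeline like this full.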
FORTUNE ON AI
Google Cloud moves deeper into open-source AI with Ai2 partnership —by Jeremy Kahn
Shopify CEO tells employees to prove AI can’t do jobs before asking for new hires —by Beatrice Nolan
Is the CEO of the heavily funded humanoid robot startup Figure AI exaggerating his startup’s work with BMW? —by Jason Del Rey
Google DeepMind 145-page paper predicts AGI will match human skills by 2030 — and warns of existential threats that could ‘permanently destroy humanity’ —by Beatrice Nolan
The AI cost collapse is changing what’s possible—with massive implications for tech startups —by Thiyagarajan Maruthavanan (Commentary)
AI CALENDAR
April 9-11: Google Cloud Next, Las Vegas
April 24-28: International Conference on Learning Representations (ICLR), Singapore
May 6-7: Fortune Brainstorm AI London. Apply to attend here.
May 20-21: Google IO, Mountain View, Calif.
July 13-19: International Conference on Machine Learning (ICML), Vancouver
July 22-23: Fortune Brainstorm AI Singapore. Apply to attend here.
BRAIN FOOD
In the year 2027…A scenario about how AI might rapidly advance beyond human capabilities in the next two years, with the potential for humans to lose control, has received a ton of attention in AI circles. The scenario—which you can read in full here, or read about in the New York Times here—has attracted attention in part because of the pedigree of its writers and the fairly rigorous methodology they applied in formulating it. AI 2027’s authors include Daniel Kokotajlo, a former policy researcher at OpenAI who resigned over what he has said was the company’s “reckless” pursuit of profits ahead of AI safety. Kokotajlo is also known for having penned a prescient scenario in 2021, well before ChatGPT debuted, that accurately predicted many subsequent AI developments. In fact, much of what Kokotajlo foresaw has occurred slightly ahead of when he thought it might. (That earlier scenario was called “What 2026 Looks Like.”) He was joined in penning AI 2027 by Thomas Larsen, Eli Lifland, and Romeo Dean, all of whom have backgrounds in AI and computer science and reputations as “superforecasters” (people who are especially good at predicting future events), and by Scott Alexander, a psychiatrist and blogger whose Astral Codex Ten site is popular with AI safety researchers. I am not sure I buy some of the ideas in AI 2027, but the scenario is worth reading and thinking about.