Hello and welcome to Eye on AI. In this edition…AGI timelines are getting shorter but so is the amount of attention AI labs seem to be paying to AI safety…Venture capital enthusiasm for OpenAI alums’ startups shows no sign of waning…A way to trace LLM outputs back to their source…and the military looks to LLMs for decision support, alarming humanitarian groups.
“Timelines” is a short-hand term AI researchers use to describe how soon they think we’ll achieve artificial general intelligence, or AGI. While its definition is contentious, AGI is basically an AI model that performs as well as or better than humans at most tasks. Many people’s timelines are getting alarmingly short. Former OpenAI policy researcher Daniel Kokotajlo and a group of forecasters with excellent track records have gotten a lot of attention for authoring a detailed scenario, called AI 2027, that suggests AGI will be achieved in, you guessed it, 2027. They argue this will lead to a sudden “intelligence explosion” as AI systems begin building and refining themselves, rapidly leading to superintelligent AI.
Dario Amodei, the cofounder and CEO of AI company Anthropic, thinks we’ll hit AGI by 2027 too. Meanwhile, OpenAI cofounder and CEO Sam Altman is cagey, trying hard not to be pinned down on a precise year, but he’s said his company “knows how to build AGI”—it is just a matter of executing—and that “systems that start to point to AGI are coming into view.” Demis Hassabis, the Google DeepMind cofounder and CEO, has a slightly longer timeline—five to 10 years—but researchers at his company just published a report saying it’s “plausible” AGI will be developed by 2030.
The implications of short timelines for policy are profound. For one thing, if AGI really is coming in two to five years, it gives all of us—companies, society, and governments—precious little time to prepare. While I have previously predicted AI won’t lead to mass unemployment, my view is predicated on the idea that AGI will not be achieved in the next five years. If AGI does arrive sooner, it could indeed lead to large job losses as many organizations would be tempted to automate roles, and two years is not enough time to allow people to transition to new ones.
If timelines are short, safety should matter more
Another implication of short timelines is that AI safety and security ought to become more important. (The Google DeepMind researchers, in their latest AI safety paper, said AGI could lead to severe consequences, including the “permanent end of humanity.”)
Jack Clark, a cofounder at Anthropic who heads its policy team, wrote in his personal newsletter, Import AI, a few weeks ago that short timelines called for “more extreme” policy actions. These, he wrote, would include increased security at leading AI labs, mandatory pre-deployment safety testing by third parties (moving away from the current voluntary system), and spending more time talking about—and maybe even demonstrating—dangerous misuses of advanced AI models in order to convince policymakers to take stronger regulatory action.
Companies are paying less attention to safety
But, contrary to Clark’s position, even as timelines have shortened, many AI companies seem to be paying less, not more, attention to AI safety. For instance, last week, my Fortune colleague Bea Nolan and I reported that Google released its latest Gemini 2.5 Pro model without a key safety report, in apparent violation of commitments the company had made to the U.S. government in 2023 and at various international AI safety summits. And Google is not alone—OpenAI also released its Deep Research model without the safety report, called a “system card,” publishing one only months later. The Financial Times also reported this week that OpenAI has been slashing the time it allows both internal and third-party safety evaluators to test its models before release, in some cases giving testers just a few days for evaluations that had previously been allotted weeks or months. Meanwhile, AI safety experts criticized Meta for publishing a system card for its new Llama 4 model family that provided only barebones information on the models’ potential risks.
The reason safety is getting short shrift is clear: Competition between AI companies is intense and those companies perceive safety testing as an impediment to speeding new models to market. The closer AGI appears to be, the more bitterly fought the race to get there first will be.
The U.S. government sees safety as an impediment to beating China
In economic terms, this is a market failure—the commercial incentives of private actors encourage them to do things that are bad for the collective whole. Normally, when there are market failures, it would be reasonable to expect the government to step in. But in this case, geopolitics gets in the way. The U.S. sees AGI as a strategic technology that it wants to obtain before any rival, particularly China. So it is unlikely to do anything that might slow the progress of the U.S. AI labs—even a little bit. (It doesn’t help that AI lab CEOs such as Altman—who once went before Congress and endorsed the idea of government regulation, including possible licensing requirements for leading AI labs, but now says he thinks AI companies can self-regulate on AI safety—are lobbying the government to eschew any legal requirements.)
Of course, having unsafe, uncontrollable AI would be in neither Washington nor Beijing’s interest. So there might be scope for an international treaty. But given the lack of trust between the Trump administration and Xi Jinping’s government, that seems unlikely. It is possible President Trump may yet come around on AI regulation—if there’s a populist outcry over AI-induced job losses or a series of damaging, but not catastrophic, AI-involved disasters. Otherwise, I guess we just have to hope the AI companies’ timelines are wrong.
With that, here’s the rest of this week’s AI news.
Jeremy Kahn
jeremy.kahn@fortune.com
@jeremyakahn
Before we get to the news, if you’re interested in learning more about how AI will impact your business, the economy, and our societies (and given that you’re reading this newsletter, you probably are), please consider joining me at the Fortune Brainstorm AI London 2025 conference. The conference is being held May 6–7 at the Rosewood Hotel in London. Confirmed speakers include Mastercard chief product officer Jorn Lambert, eBay chief AI officer Nitzan Mekel, Sequoia partner Shaun Maguire, noted tech analyst Benedict Evans, and many more. I’ll be there, of course. I hope to see you there too. You can apply to attend here.
And if I miss you in London, why not consider joining me in Singapore on July 22–23 for Fortune Brainstorm AI Singapore. You can learn more about that event here.
AI IN THE NEWS
Investor enthusiasm for AI companies isn’t over. Or at least, not if your company happens to have been founded by prominent OpenAI alums. Both Ilya Sutskever, an OpenAI cofounder and former chief scientist, and Mira Murati, OpenAI’s former chief technology officer, announced mega-venture capital rounds for their respective startups. Sutskever has raised $2 billion for his startup Safe Superintelligence at a valuation of $32 billion, the Financial Times reported. Meanwhile, Murati is, according to a report in Business Insider, in the process of raising a $2 billion “seed” round (perhaps the largest seed round in history) for her startup Thinking Machines—for which she has hired a slew of fellow OpenAI alumni. The round would value the startup at $10 billion, at least, Business Insider said, citing sources familiar with the discussions.
OpenAI releases new models, hints at AI software engineer. OpenAI debuted a new flagship model, GPT-4.1, which outperforms its predecessor GPT-4o while also being 26% cheaper. You can read more in The Verge here. At the same time, The Information reported that the company was preparing for the public release of its o3 model, which it had previewed previously, as well as an even more capable “reasoning” model called o4. It said that these models would be able to invent new ideas—including possibly suggesting new kinds of materials or new treatments for diseases, by making connections between data found in disparate sources. Finally, in a talk at a Goldman Sachs conference, Sarah Friar, OpenAI’s chief financial officer, said the company was working on an agentic AI software engineer (called A-SWE). You can view Friar’s discussion of A-SWE here.
Nvidia announces a major deal to build its Blackwell chips in the U.S. The leading AI chip company announced it had gotten its Taiwan-based supply chain partners, including TSMC, which produces most of its chips, and Foxconn, which helps assemble them into larger components, to agree to build its latest Blackwell AI chips and associated “AI supercomputers” in the U.S. for the first time. The chips will be made at a plant in Arizona and assembled into larger computing servers in Texas. In total, the company said it would produce up to $500 billion worth of AI infrastructure in the U.S. over the next four years. The moves come as Nvidia faces pressure from the Trump administration, which announced new tariffs on semiconductors would likely come into effect in the next two months. You can read more from the Wall Street Journal here.
White House advisor lays out administration’s high-level tech strategy. Michael Kratsios, the director of the White House Office of Science and Technology Policy, laid out the administration’s tech strategy in a major speech in Texas on Monday. Kratsios called for the U.S. to spend public research funds in a more focused way, for deregulation to help American tech and energy companies, and for increased efforts to prevent China from gaining a technological advantage over the U.S. You can read more from my Fortune colleague Jessica Mathews here.
Palantir sues AI startup Guardian for trade secret theft. Palantir alleges in a suit filed in federal court last month that Guardian cofounders Mayank Jain and Pranav Pillai stole trade secrets from Palantir’s healthcare division, where they worked prior to launching their AI startup. Guardian uses AI to help fight insurance claim denials, a service Palantir also offers. Jain told Forbes by email that “This is quite new and we’re working on resolving it with Palantir at the moment.”
EYE ON AI RESEARCH
Tracing how LLMs arrive at outputs. The Allen Institute for Artificial Intelligence (Ai2) has developed a system that can trace, in real time, how a large language model’s output in response to a prompt is influenced by individual pieces of data from its training data set. The system could be used to fact-check LLM outputs, cutting down on hallucinations, as well as to discover outputs that plagiarize from copyrighted data. It could also be used to compensate copyright holders for the contribution that their data makes to any particular LLM output. But in order to work, the user needs to have access to the complete training data set and use an open-source AI model, like Ai2’s OLMo. You can read the research paper here on arxiv.org.
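To make the core idea concrete, here is a minimal, hypothetical sketch of one way to match spans of a model’s output against an indexed training corpus and report which documents they came from. This is a toy illustration of the concept only, not Ai2’s actual implementation, which works across training sets of trillions of tokens; the function names and the miniature “corpus” below are invented for the example.

```python
# Toy sketch: trace spans of an LLM's output back to training documents
# via exact n-gram matching. Illustrative only; not Ai2's real system.
from collections import defaultdict

def build_ngram_index(corpus, n=5):
    """Map every n-word span in the training corpus to the documents containing it."""
    index = defaultdict(set)
    for doc_id, text in corpus.items():
        words = text.lower().split()
        for i in range(len(words) - n + 1):
            index[tuple(words[i:i + n])].add(doc_id)
    return index

def trace_output(output, index, n=5):
    """Return output spans that appear verbatim in the training data, with their source documents."""
    words = output.lower().split()
    matches = []
    for i in range(len(words) - n + 1):
        span = tuple(words[i:i + n])
        if span in index:
            matches.append((" ".join(span), sorted(index[span])))
    return matches

if __name__ == "__main__":
    # Hypothetical two-document "training set."
    corpus = {
        "doc_a": "The Eiffel Tower was completed in 1889 for the World's Fair in Paris.",
        "doc_b": "Mount Everest is the highest mountain above sea level on Earth.",
    }
    index = build_ngram_index(corpus)
    model_output = "As I recall, the Eiffel Tower was completed in 1889 for the exposition."
    for span, docs in trace_output(model_output, index):
        print(f"'{span}' appears in: {docs}")
```

A production system would need a far more scalable index and fuzzier matching, but even this toy version shows why access to the full training set (and hence an open model like OLMo) is a prerequisite: you can only trace an output back to data you can actually search.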
FORTUNE ON AI
With defense tech booming and Palantir stock up 323% over the past year, CEO Alex Karp has finally been vindicated —by Michal Lev-Ram
She was one of the youngest general partners in venture capital. Now she’s at the forefront of AI investing —by Allie Garfinkle
AI company Hugging Face buys humanoid robot company Pollen Robotics —by Jeremy Kahn
Sam Altman says ‘10% of the world now uses our systems a lot’ as Studio Ghibli-style AI images help boost OpenAI signups —by Beatrice Nolan
12 former OpenAI employees asked to be heard in Elon Musk’s lawsuit against the company; one calls Sam Altman a ‘person of low integrity’ —by Sharon Goldman
Meta’s AI research lab is ‘dying a slow death,’ some insiders say. Meta prefers to call it ‘a new beginning’ —by Sharon Goldman
AI CALENDAR
April 24-28: International Conference on Learning Representations (ICLR), Singapore
May 6-7: Fortune Brainstorm AI London. Apply to attend here.
May 20-21: Google I/O, Mountain View, Calif.
July 13-19: International Conference on Machine Learning (ICML), Vancouver
July 22-23: Fortune Brainstorm AI Singapore. Apply to attend here.
BRAIN FOOD
LLMs are going to war. The U.S. military has begun experimenting with the use of LLM-based AI systems to help troops make battlefield decisions. The systems analyze surveillance data and other intelligence and can recommend actions troops should take in response. But whether such systems are actually up to the task remains a big question—and the stakes could not be higher. Human rights groups are alarmed that the systems are too prone to error and will lead to deadly mistakes. MIT Tech Review has a good exploration of the latest military AI tech and its potential consequences, which you can read here.