Hello and welcome to Eye on AI. In this edition…Politicians don’t want to talk about AI Safety anymore; Nvidia lashes out at Biden’s last minute export control regime; Microsoft offers lessons from red teaming AI models; We wanted AGI. Did we end up with BSI instead?
Back when I covered finance, traders would talk about whether the markets were “risk off”—fearful of losses and retreating to the safest assets—or “risk on,” pouring money into riskier equities in the hopes of earning big returns. Well, when it comes to AI, 2025 is certainly shaping up as a “risk on” period.
This vibe shift is perhaps most notable in the U.K., where I live. The previous Conservative Party-led British government tried to make a name for itself on AI policy by hosting an international conference aimed at addressing AI’s potential doomsday risks. The AI Safety Summit at Bletchley Park in November 2023 brought together the heads of leading AI companies along with diplomats from 28 nations to discuss a shared approach to identifying AI risks and building mechanisms to prevent them. Former Prime Minister Rishi Sunak also created the world’s first AI Safety Institute to test leading AI models for potential dangers. But yesterday, the Labour Party government of Keir Starmer, which was elected in July, endorsed an “AI Opportunities Action Plan” that is all about how the country can move at speed to embrace AI and hopefully use it to boost Britain’s moribund economy. In fact, in its official announcement, Starmer’s government said the plan “mainlines AI into the veins” of the U.K. economy.
No safe word
The Opportunities Action Plan—which was crafted at the new government’s behest by Matt Clifford, a venture capitalist who also, interestingly, helped the previous Tory government with its AI Safety initiatives, including the Bletchley Summit—talks about AI Safety in just two of its 50 recommendations. Both involve continuing to support the work of the U.K.’s AI Safety Institute in testing powerful AI models for possible dangers.
Instead, the plan pushes for the construction of new data centers in "AI Growth Zones" and an overhaul of energy policies to ensure those facilities have the electric power they need. Starmer has committed the U.K. to building, by 2030, at least one government-run data center with 100,000 top-of-the-line graphics processing units (GPUs), the specialized chips used for AI applications.
Concerns about how to build these massive new data centers without scuppering Britain's net zero CO2 goals, depleting its groundwater, or leaving too little power for people's homes are addressed only by a recommendation that the government set out some unspecified plan for avoiding those outcomes. It is interesting that the first AI Growth Zone is being positioned next to the U.K.'s experimental fusion reactor—although if the government is counting on fusion to get it out of its sustainability dilemma, that is indeed a risky choice.
Enlisting the government to create data for AI companies
The AI Opportunities Action Plan also recommends that the U.K. gather and consolidate a vast amount of data into a National Data Library that would be available for both researchers and companies to use to train future AI models. The plan calls for the government to look for ways to integrate AI into the provision of public services and into the educational system. The government has also said it will explore changes to the law to make it easier to create large datasets that AI model developers can use without worrying about violating copyright. In fact, the U.K. Intellectual Property Office has already launched a consultation around a proposed rule that would require copyright holders to opt out of having their works used for AI training.
It’s obvious why Starmer’s government is embracing all this. The U.K. is in desperate need of economic growth. While the U.S. economy has been humming, with GDP growing 3.1% in the third quarter of last year, the U.K. has been shuffling meekly forward at less than 1%. Labor productivity growth has been especially poor in Britain, which has eked out gains of just 1.7% compared to pre-pandemic levels, while U.S. workers are now 6.7% more productive than they were before the pandemic. Meanwhile, the bond markets have been punishing the Labour government for tax-and-spend policies that traders don’t think will manage to rein in public debt. As a result, they’ve pushed borrowing costs higher, threatening to derail Labour’s entire domestic policy agenda. The only way out of this bind is for Starmer to deliver economic growth—and he is grasping at AI to do it.
Macron wants in on the ‘action’ too
But Starmer is hardly alone among global politicians suddenly talking more about AI’s potential economic upsides than its existential risks. French President Emmanuel Macron’s government will host the next big diplomatic confab on international AI governance in Paris in a few weeks. The gathering is a successor to the Bletchley Summit. Only rather than calling it an “AI Safety Summit,” the French government is calling the Paris meeting an “AI Action Summit.” (“Action” seems to be the AI buzzword of the day.) And it’s notable that the conference will include five tracks—with an emphasis on “public interest AI” and “innovation and culture.” In fact, “global AI governance” is listed last among these tracks and the description of the track mentions risk just once, and safety, not at all.
Meanwhile, the European Union has voiced its concerns about the Biden Administration’s eleventh hour rulemaking on the export of advanced AI technologies, including model weights and, most significantly, cutting-edge GPUs and other AI chips. Those rules allow free export of these technologies to a select group of close U.S. allies, but place most nations in a second tier of countries to which such exports will be limited. This could make it hard for some EU countries to build the huge data centers needed to train and run the most advanced AI models. (My colleague David Meyer has more on this here.) Again, the EU doesn’t want to miss out on the economic opportunities.
Blueprints and Red Flags
In the U.S., OpenAI also just unveiled a document, titled “AI in America,” that the company is calling its “economic blueprint.” Like the U.K. Action Plan, OpenAI’s blueprint recommends special AI economic zones, where data centers and power plants can be concentrated, and it calls for the government to digitize more data so that companies can use it to create AI models.
Overall, the blueprint is an effort to lobby against state-by-state regulation of AI and for the federal government to step in with a clear set of guidelines—but, of course, nothing too onerous. “AI in America” leans heavily on an analogy to Britain’s 19th-century “Red Flag Act,” which mandated that early automobiles travel at no more than four miles per hour and always be preceded by a person on foot, carrying a red flag, to warn horse traffic (which had priority) and pedestrians of the approaching vehicle. OpenAI argues the act is a classic example of focusing too much on potential harms and over-regulating a new technology, and it says the act hobbled the development of a British auto industry—whereas in the U.S., which had no such rules, automaking thrived.
As with all historical analogies, it’s highly imperfect. But OpenAI may get its wish. Around the world, politicians seem suddenly eager to ditch the red warning flags and open the roads to AI’s speeding onslaught.
With that, here’s more AI news.
Jeremy Kahn
jeremy.kahn@fortune.com
@jeremyakahn
AI IN THE NEWS
Nvidia pushes back against the Biden Administration’s sweeping new AI export controls. The company dropped its traditional reticence to wade directly into policy debates and lashed out at the new rules, which the White House issued on Monday. The company also took the opportunity to praise incoming President Donald Trump’s policies on semiconductors during his previous term in office and express optimism about his policy making as he prepares to take office for a second time. My Fortune colleague Sharon Goldman has more here on Nvidia’s lobbying against the new export regime and what it may signal.
Chipmaker AMD invests in AI drug-discovery company Absci. Advanced Micro Devices announced a $20 million strategic investment into Absci, an AI-enabled drug discovery company. The partnership will allow Absci to shift some of its AI workloads from Nvidia GPUs to AMD's. The move, according to a story in the Wall Street Journal, seems to be part of a broader AMD strategy to try to expand its market share by targeting specific industry verticals. Its arch-rival Nvidia has also invested in several similar healthcare startups.
Financial giant Macquarie commits $5 billion to Applied Digital’s AI data centers. The Australia-based financial group says it will invest $5 billion in Applied Digital's AI-focused data center expansion, starting with $900 million for a North Dakota facility, with an option to invest an additional $4.1 billion over a 30-month period, the Wall Street Journal reports. The deal grants Macquarie a 15% stake in Applied Digital's high-performance computing business while providing funding to repay debt and recover equity from prior projects. Applied Digital, originally focused on cryptocurrency mining, has pivoted to AI infrastructure, leveraging the booming demand for power-intensive data centers to support AI applications.
AI-powered financial advice apps often steer users to higher-fee products. That’s the conclusion of a Wired magazine story that looked at popular “robo-advisor” apps Cleo AI and Bright, both of which count many younger people, without substantial wealth, as customers. While the apps hold out the promise of providing high-quality, bespoke financial advice to the masses at a price they could not otherwise afford, Wired’s reporter found the apps pushed users towards third-party financial products, including cash advances, many of which carried high fees, high interest rates, or both. Barney Hussey-Yeo, the founder and CEO of Cleo, defended his app as providing “the right advice and the right products to help you make better financial decisions.”
EYE ON AI RESEARCH
Lessons from red teaming 100 genAI models—it isn’t pretty. A group of Microsoft researchers have released a paper summarizing their lessons from having red-teamed more than 100 generative AI products at the company in recent years. While much of the paper, which was published this week on arXiv.org, is devoted to helpful advice on how to conduct effective red teaming, some of their conclusions should give all of us pause. For instance, the Microsoft researchers note that you don’t need access to a model’s underlying weights to break it effectively; that “responsible AI harms are pervasive but difficult to measure” (perceptions of harm were somewhat subjective, the researchers wrote, but more importantly, the models were unpredictable, with red teamers often left scratching their heads as to why a particular prompt elicited a harmful response); and that large language models “amplify existing security risks and introduce new ones.”
FORTUNE ON AI
Reimagining the artist’s signature so creative people can thrive—even as AI content explodes (Commentary) —by Scott Belsky
Honeywell CEO: AI will transform industry at scale beginning in 2025 (Commentary) —by Vimal Kapur
Elon Musk says AI has already gobbled up all human-produced data to train itself and now relies on hallucination-prone synthetic data —by Sasha Rogelberg
AI CALENDAR
Jan. 16-18: DLD Conference, Munich
Jan. 20-25: World Economic Forum, Davos, Switzerland
Feb. 10-11: AI Action Summit, Paris, France
March 3-6: MWC, Barcelona
March 7-15: SXSW, Austin
March 10-13: Human [X] conference, Las Vegas
March 17-20: Nvidia GTC, San Jose
April 9-11: Google Cloud Next, Las Vegas
BRAIN FOOD
If what we have today isn’t AGI, what is it? In last week’s newsletter, I picked on AI skeptic Gary Marcus for being too pessimistic about the business benefits of today’s not-yet-human-level, still somewhat unreliable AI. Marcus later messaged me to suggest I had misinterpreted or mischaracterized some of what he said. I stand by what I wrote, but I do want to highlight another blog item from Marcus this week that I think hits the mark. Marcus uses the blog to argue against those who say that today’s most powerful AI models, such as OpenAI’s o3, meet the definition of artificial general intelligence (or AGI), the long-sought Holy Grail of AI research. This is generally defined as AI that performs as well or better than humans at most tasks.
Marcus consults some of the key people who helped coin and popularize the term AGI in the early 2000s, and all of them agree with him that today’s AI models do not meet their original AGI definitions. This is for a number of reasons, the most important of which is their inability to apply conceptual knowledge to completely new situations that are different from anything they’ve ever encountered in training.
But then Marcus asks, if today’s AI doesn’t qualify as AGI, what should we call it? It isn’t like earlier kinds of “narrow AI” that could only do one task at superhuman levels (play chess or Go, or detect subtle defects in auto parts), but could not do anything else. Today’s AI can be applied to a lot of different problems, but it’s not fully reliable. Marcus suggests we should call it “broad, shallow intelligence” (or BSI) to differentiate it from AGI and also from ASI, or artificial superintelligence (which would be AI even more powerful than AGI). I think it’s not a bad term. It does get at the essence of what’s right and wrong with today’s AI systems. What do you think? Will BSI become a thing?