Hello and welcome to Eye on AI. In this edition…the news media grapples with AI; Trump orders U.S. AI safety efforts to refocus on combating ‘ideological bias’; distributed training gains traction; increasingly powerful AI could tip the scales toward totalitarianism.
AI is potentially disruptive to many organizations’ business models. In few sectors, however, does the threat seem as existential as it does in the news business. That happens to be the business I’m in, so I hope you will forgive a somewhat self-indulgent newsletter. But news ought to matter to all of us, since a functioning free press performs an essential role in democracy—informing the public and helping to hold power to account. And there are lessons in how news executives are—and, critically, are not—addressing the challenges and opportunities AI presents that business leaders in other sectors can learn from, too.
Last week, I spent a day at an Aspen Institute conference entitled “AI & News: Charting the Course,” hosted at Reuters’ headquarters in London. The conference was attended by top executives from a number of U.K. and European news organizations. It was held under the Chatham House Rule, so I can’t tell you exactly who said what, but I can relay what was said.
Tools for journalists and editors
News executives spoke about using AI primarily in internal-facing products to make their teams more efficient. AI is helping write search engine-optimized headlines and translate content—potentially letting organizations reach new audiences in places they haven’t traditionally served, though most emphasized keeping humans in the loop to monitor accuracy.
One editor described using AI to automatically produce short articles from press releases, freeing journalists for more original reporting, while maintaining human editors for quality control. Journalists are also using AI to summarize documents and analyze large datasets—like government document dumps and satellite imagery—enabling investigative journalism that would be difficult without these tools. These are good use cases, but they result in modest impact—mostly around making existing workflows more efficient.
Bottom-up or top-down?
There was active debate among the newsroom leaders and techies present about whether news organizations should take a bottom-up approach, putting generative AI tools in the hands of every journalist and editor and letting them run their own data analysis or “vibe code” AI-powered widgets to help them in their jobs, or whether efforts should be top-down, with management prioritizing projects.
The bottom-up approach has merits—it democratizes access to AI, empowers frontline employees who know the pain points and can often spot good use cases before high-level execs do, and frees limited AI developer talent for projects that are bigger, more complex, and potentially more strategically important.
The downside of the bottom-up approach is that it can be chaotic, making it hard for the organization to ensure compliance with ethical and legal policies. It can create technical debt, with tools being built on the fly that can’t be easily maintained or updated. One editor worried about creating a two-tiered newsroom, with some editors embracing the new tech, and others falling behind. Bottom-up also doesn’t ensure that solutions generate the best return on investment—a key consideration as AI models can quickly get expensive. Many called for a balanced approach, though there was no consensus on how to achieve it. From conversations I’ve had with execs in other sectors, this dilemma is familiar across industries.
Caution about jeopardizing trust
News outfits are also being cautious about building audience-facing AI tools. Many have begun using AI to produce bullet-point summaries of articles that can help busy and increasingly impatient readers. Some have built AI chatbots that can answer questions about a particular, narrow subset of their coverage—like stories about the Olympics or climate change—but they have tended to label these as “experiments” to flag to readers that the answers may not always be accurate. Few have gone further with AI-generated content. They worry that gen AI-produced hallucinations will undercut trust in the accuracy of their journalism. Their brands and their businesses ultimately depend on that trust.
Those who hesitate will be lost?
This caution, while understandable, is itself a colossal risk. If news organizations themselves aren’t using AI to summarize the news and make it more interactive, technology companies are. People are increasingly turning to AI search engines and chatbots, including Perplexity, OpenAI’s ChatGPT, Google’s Gemini, and many others, as well as the “AI Overviews” Google now provides in response to many searches. Several news executives at the conference said “disintermediation”—the loss of a direct connection with their audience—was their biggest fear.
They have cause to be worried. Many news organizations (including Fortune) are at least partly dependent on Google search to bring in audiences. A recent study by Tollbit—which sells software that helps protect websites from web crawlers—found that clickthrough rates from Google AI Overviews were 91% lower than from a traditional Google Search. (Google has not yet used AI Overviews for news queries, although many think it is only a matter of time.) Other studies of clickthrough rates from chatbot conversations are equally abysmal. Cloudflare, which also offers to help protect news publishers from web scraping, found that OpenAI scraped a news site 250 times for every one referral page view it sent that site.
So far, news organizations have responded to this potentially existential threat through a mix of legal pushback—the New York Times has sued OpenAI for copyright violations, while Dow Jones and the New York Post have sued Perplexity—and partnerships. Those partnerships have involved multiyear, seven-figure licensing deals for news content. (Fortune has partnerships with both Perplexity and ProRata.) Many of the execs at the conference said the licensing deals were a way to make revenue from content the tech companies had most likely already “stolen” anyway. They also saw the partnerships as a way to build relationships with the tech companies and tap their expertise to help build AI products or train their staffs. None saw the relationships as particularly stable. All were aware of the risk of becoming overly reliant on AI licensing revenue, having been burned before when the media industry let Facebook become a major driver of traffic and ad revenue, only to see that money vanish practically overnight when Meta CEO Mark Zuckerberg decided, after the 2016 U.S. presidential election, to de-emphasize news in people’s feeds.
An AI-powered Ferrari yoked to a horse cart
Executives acknowledged needing to build direct audience relationships that can’t be disintermediated by AI companies, but few had clear strategies for doing so. One expert at the conference said bluntly that “the news industry is not taking AI seriously,” focusing on “incremental adaptation rather than structural transformation.” He likened current approaches to a three-step process that had “an AI-powered Ferrari” at both ends, but “a horse and cart in the middle.”
He and another media industry advisor urged news organizations to get away from structuring their approach to news around “articles.” Instead, they encouraged the news execs to think about ways in which generative AI technology could turn source material (public data, interview transcripts, documents obtained from sources, raw video footage, audio recordings, and archival news stories) into a variety of outputs—podcasts, short-form video, bullet-point summaries, or, yes, a traditional news article—on the fly, to suit audience tastes. They also urged news organizations to stop thinking of the production of news as a linear process and to begin thinking of it more as a circular loop, perhaps one in which there was no human in the middle.
One person at the conference said that news organizations needed to become less insular and look more closely at insights and lessons from other industries and how they were adapting to AI. Others said that it might require startups—perhaps incubated by the news organizations themselves—to pioneer new business models for the AI age.
The stakes couldn’t be higher. While AI poses existential challenges to traditional journalism, it also offers unprecedented opportunities to expand reach and potentially reconnect with audiences who have “turned off news”—if leaders are bold enough to reimagine what news can be in the AI era.
With that, here’s more AI news.
Jeremy Kahn
jeremy.kahn@fortune.com
@jeremyakahn
Correction: Last week’s Tuesday edition of Eye on AI misidentified the country where Trustpilot is headquartered. It is Denmark. Also, a news item in that edition misidentified the name of the Chinese startup behind the viral AI model Manus. The name of the startup is Butterfly Effect.
AI IN THE NEWS
AI companies push for U.S. rules to exempt them from state-level regs and for a copyright exemption. The biggest U.S. AI companies, including OpenAI, Google, and Anthropic, have submitted their recommendations for the Trump administration's forthcoming AI Action Plan. The companies disagreed on some elements of what they would like to see, my Fortune colleague David Meyer reports. OpenAI and Google both called for a national framework that would preempt a multitude of state AI laws. OpenAI also called for a specific exemption to U.S. copyright law to allow AI models to be trained on copyrighted works, while Google asked for what it called “a balanced approach.” Many of the companies asked for changes to the so-called “AI Diffusion Rule” that the Biden Administration created in its final days. OpenAI and Microsoft both called for the number of countries allowed unrestricted access to cutting-edge AI chips to be expanded, while Google would like to see the current AI diffusion rule scrapped entirely. Many of the companies also called for quicker permitting for new power plants and electricity transmission lines.
Baidu launches two competitive AI models. Baidu has launched two new AI models, including ERNIE X1, which it claims matches DeepSeek R1’s performance at half the cost and features advanced reasoning capabilities, Reuters reports. The company also introduced ERNIE 4.5, boasting improved multimodal understanding, language generation, logic, and memory, along with the ability to interpret internet memes and satire. It said the model beat OpenAI’s new GPT-4.5 model on some benchmarks involving Chinese language. Baidu continues to face stiff competition in China’s AI market and it has struggled to achieve widespread adoption for its Ernie chatbot against rivals like DeepSeek.
Trump administration orders government AI experts to reduce ‘ideological bias.’ The U.S. National Institute of Standards and Technology (NIST) has directed scientists collaborating with the U.S. Artificial Intelligence Safety Institute (AISI) to remove terms like "AI safety," "responsible AI," and "AI fairness" from their objectives, Wired reports. NIST has instead told them to focus on reducing "ideological bias" in AI models and enhancing American economic competitiveness. This change departs from previous guidelines that emphasized identifying and mitigating discriminatory behaviors and combating misinformation.
People are using Google’s new Gemini 2.0 Flash AI to remove digital watermarks. That’s according to TechCrunch. The publication said users have found Google’s Gemini 2.0 Flash image generation model, released last week, particularly adept at removing digital watermarks. These watermarks are used by copyright holders, such as Getty Images and other stock photography services, to reduce the likelihood that their photographs or other digital images will be used without consent or licensing. Currently, Gemini 2.0 Flash's image generation feature is labeled as "experimental" and is accessible only through Google's developer tools, such as AI Studio, but it can be used for free. Google has acknowledged the issue, stating that using its generative AI tools for copyright infringement violates its terms of service and that it is monitoring the situation closely.
China introduces new AI-generated content labeling requirements. Beijing has introduced new rules requiring AI-generated content to be explicitly labeled, either through visible markers or metadata, Bloomberg reported. The requirement, which is designed to help combat disinformation, takes effect on Sept. 1. The move follows similar steps by the European Union in its new AI Act and by the U.S. under former President Joe Biden, who issued an executive order on content provenance. The Chinese regulation holds AI service providers accountable for labeling, although it still allows unlabeled AI content in specific, regulated circumstances.
A new jailbreaking technique convinces AI models to create credentials-stealing tool. Most major LLMs can be prompted in such a way that they will write and deploy a piece of credential-stealing malware (in the form of a malicious Chrome browser extension), according to new research from cybersecurity company Cato Networks. In its latest threat report, Cato says the technique works for almost all AI models (Anthropic’s Claude is the one notable exception). The jailbreak, which Cato researcher Vitaly Simonovich walked Fortune through, is a spin on the good old role-playing game (a longstanding framework for fooling LLMs). Taking advantage of the long context windows of LLMs like GPT-4o and DeepSeek-R1, the nefarious user provides chapters of a story to gradually draw the AI into a world where a hacker character is hiding secrets in Chrome, eventually convincing the model to write a piece of computer code that will discover the hidden secrets. Cato informed leading AI model vendors about the jailbreak before publishing its report. Simonovich said Microsoft, OpenAI, and Google had acknowledged receipt of the jailbreak, but he had no luck getting hold of DeepSeek.
EYE ON AI RESEARCH
Google researchers say that as models get bigger, the advantages of distributed training grow. There’s increasing interest in how powerful AI models could be trained in a distributed way—using multiple data centers spread out geographically, or even spare GPU capacity on personal laptops. The technique could make it much easier and less costly to train models. It could also overcome a problem that is starting to constrain training in a traditional single data center: in some cases, models are getting so large that there is no efficient way to shuffle data around different server racks during training, due to the limitations of the networking equipment, which makes training much slower and much less energy efficient than it would be otherwise. Distributed training could potentially sidestep these limitations. If it works, it also has big implications for AI policy, as distributed training is much harder for governments to police.
Google DeepMind researchers had previously pioneered a distributed training method called DiLoCo (Distributed Low-Communication). Now a different team of Google researchers has found that the method scales well compared to traditional single data center training. The scientists conducted experiments across model sizes from 35 million to 10 billion parameters, surprisingly finding that DiLoCo’s benefits increase with model size: larger models can be synchronized less frequently between data centers while still maintaining performance. They even developed “scaling laws” that they say reliably predict performance as models get larger using the method. They found DiLoCo had several other technical advantages too, including reducing training time by orders of magnitude on bandwidth-constrained networks. You can read the research paper here on the non-peer-reviewed research repository arxiv.org.
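To make the idea concrete, here is a minimal, single-machine sketch of DiLoCo-style low-communication training in PyTorch: a handful of simulated workers (stand-ins for separate data centers) each take many local AdamW steps on their own data, and they synchronize only once per round, when their averaged parameter changes are applied by an outer optimizer. The toy model, synthetic data, and hyperparameters below are placeholders chosen for readability; this is a sketch of the general technique, not Google's implementation.

```python
import copy
import torch
import torch.nn as nn

torch.manual_seed(0)

def make_model():
    # Tiny toy regression model; a real run would use a large transformer.
    return nn.Sequential(nn.Linear(10, 32), nn.ReLU(), nn.Linear(32, 1))

global_model = make_model()
# Outer optimizer applied only at synchronization points (illustrative hyperparameters).
outer_opt = torch.optim.SGD(global_model.parameters(), lr=0.7, momentum=0.9, nesterov=True)

NUM_WORKERS = 4     # stand-ins for geographically separate data centers
INNER_STEPS = 50    # local steps per worker between synchronizations
OUTER_ROUNDS = 10   # number of (infrequent) synchronization rounds

loss_fn = nn.MSELoss()

for round_idx in range(OUTER_ROUNDS):
    # Averaged (global - local) parameter change across workers acts as an "outer gradient."
    deltas = [torch.zeros_like(p) for p in global_model.parameters()]
    for _ in range(NUM_WORKERS):
        # Each worker starts the round from the current global weights.
        worker = copy.deepcopy(global_model)
        inner_opt = torch.optim.AdamW(worker.parameters(), lr=1e-3)
        for _ in range(INNER_STEPS):
            x = torch.randn(64, 10)         # placeholder for the worker's data shard
            y = x.sum(dim=1, keepdim=True)  # synthetic regression target
            loss = loss_fn(worker(x), y)
            inner_opt.zero_grad()
            loss.backward()
            inner_opt.step()
        for d, g, w in zip(deltas, global_model.parameters(), worker.parameters()):
            d += (g.detach() - w.detach()) / NUM_WORKERS

    # Outer step: apply the averaged delta as a pseudo-gradient with the outer optimizer.
    # In a real deployment, this is the only point requiring cross-data-center communication.
    for p, d in zip(global_model.parameters(), deltas):
        p.grad = d
    outer_opt.step()
    outer_opt.zero_grad()
    print(f"round {round_idx}: last inner loss {loss.item():.4f}")
```

The bandwidth savings come from the structure of the loop: workers exchange parameters only once every INNER_STEPS optimizer steps instead of after every step, and the Google result described above suggests that larger models tolerate even less frequent synchronization.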
FORTUNE ON AI
Inside the AI talent wars where research scientists get multimillion-dollar stock grants and are wooed by Mark Zuckerberg —by Sharon Goldman
AI search engines are confidently wrong more than half the time when they cite sources, study finds —by Beatrice Nolan
Google DeepMind CEO says that humans have just over 5 years before AI will outsmart them —by Emma Burleigh
She never set out to work in AI. Now she’s ensuring the transformative technology is accessible across more than 100 languages —by Sharon Goldman
AI CALENDAR
April 9-11: Google Cloud Next, Las Vegas
April 24-28: International Conference on Learning Representations (ICLR), Singapore
May 6-7: Fortune Brainstorm AI London. Apply to attend here.
May 20-21: Google IO, Mountain View, Calif.
July 13-19: International Conference on Machine Learning (ICML), Vancouver
July 22-23: Fortune Brainstorm AI Singapore. Apply to attend here.
BRAIN FOOD
Powerful AI will empower totalitarian government more than it will enable democracy. That’s the rather grim conclusion of a new paper from researchers at Texas A&M University and the Foundation for American Innovation.
They find that without specific interventions to avoid it, there’s a high likelihood that future powerful artificial general intelligence (or AGI)—a system that can perform all cognitive tasks about as well as a human—will result in one of two outcomes. The first is what they term a “despotic Leviathan,” in which governments have an iron grip on most aspects of life through enhanced AI-enabled surveillance and control. The other is what they call “an absent Leviathan,” in which non-government actors, such as tech companies, have unprecedented control and the government, unable to use the technology as effectively, is delegitimized. Neither outcome sounds good.
To avoid these outcomes, the researchers call for explicit technical safeguards, as well as deliberate rules around how government can use AI, so that meaningful human oversight and control are maintained. They also call for privacy-enhancing technologies and new methods of helping citizens participate in government (such as citizens’ assemblies) that might help individuals maintain their liberty. You can read the research paper here.