Google released a safety report for Gemini 2.5 Pro weeks after the model’s release — but an AI governance expert called it ‘meager’ and ‘worrisome’

By Jeremy Kahn and Beatrice Nolan
April 17, 2025, 4:28 PM ET
Image of a mobile phone screen displaying the Google Gemini AI logo.
Google released a key document detailing some information about the capabilities and risks of its Gemini 2.5 Pro AI model, weeks after the model was released. But one AI governance expert said the report was "meager" and "worrisome." Photo Illustration by Thomas Fuller/SOPA Images/LightRocket via Getty Images

Google has released a key document detailing some information about how its latest AI model, Gemini 2.5 Pro, was built and tested, three weeks after it first made that model publicly available as a “preview” version.


AI governance experts had criticized the company for releasing the model without publishing documentation detailing safety evaluations it had carried out and any risks the model might present, in apparent violation of promises it had made to the U.S. government and at multiple international AI safety gatherings.

A Google spokesperson said in an emailed statement that any suggestion that the company had reneged on its commitments was “inaccurate.”

The company also said that a more detailed “technical report” will come later when it makes a final version of the Gemini 2.5 Pro “model family” fully available to the public. 

But the newly published six-page model card has also been faulted by at least one AI governance expert for providing “meager” information about the safety evaluations of the model.

Kevin Bankston, a senior advisor on AI Governance at the Center for Democracy and Technology, a Washington, D.C.-based think tank, said in a lengthy thread on social media platform X that the late release of the model card and its lack of detail was worrisome.

“This meager documentation for Google’s top AI model tells a troubling story of a race to the bottom on AI safety and transparency as companies rush their models to market,” he said.

He said the late release of the model card and its lack of key safety evaluation results—for instance, details of “red-teaming” tests to trick the AI model into serving up dangerous outputs like bioweapon instructions—suggested that Google “hadn’t finished its safety testing before releasing its most powerful model” and that “it still hasn’t completed that testing even now.”

Bankston said another possibility is that Google had finished its safety testing but has a new policy that it will not release its evaluation results until the model is released to all Google users. Currently, Google is calling Gemini 2.5 Pro a “preview,” which can be accessed through its Google AI Studio and Google Labs products, with some limitations on what users can do with it. Google has also said it is making the model widely available to U.S. college students.

The Google spokesperson said the company would release a more complete AI safety report “once per model family.” Bankston said on X that this might mean Google would no longer release separate evaluation results for fine-tuned versions of the models it releases, such as those that have been tailored for coding or cybersecurity. This could be dangerous, he noted, because fine-tuned versions of AI models can exhibit behaviors that are markedly different from the “base model” from which they’ve been adapted.

Google is not the only AI company seemingly retreating on AI safety. Meta’s model card for its newly released Llama 4 AI model is of similar length and detail to the one Google just published for Gemini 2.5 Pro, and it was also criticized by AI safety experts. OpenAI said it was not releasing a technical safety report for its newly released GPT-4.1 model because the model was “not a frontier model,” since the company’s “chain of thought” reasoning models, such as o3 and o4-mini, beat it on many benchmarks. At the same time, OpenAI touted that GPT-4.1 was more capable than its GPT-4o model, whose safety evaluation had shown that model could pose certain risks, although it had said these were below the threshold at which the model would be considered unsafe to release. Whether GPT-4.1 might now exceed those thresholds is unknown, since OpenAI said it does not plan to publish a technical report.

OpenAI did publish a technical safety report for its new o3 and o4-mini models, which were released on Wednesday. But at the same time, earlier this week it updated its “Preparedness Framework,” which describes how the company will evaluate its AI models for critical dangers—everything from helping someone build a biological weapon to the possibility that a model will begin to self-improve and escape human control—and seek to mitigate those risks. The update eliminated “Persuasion”—a model’s ability to manipulate a person into taking a harmful action or convince them to believe in misinformation—as a risk category that the company would assess during its pre-release evaluations. It also changed how the company would make decisions around releasing higher-risk models, including saying the company would consider shipping an AI model that posed a “critical risk” if a competitor had already debuted a similar model.

Those changes divided opinion among AI governance experts, with some praising OpenAI for being transparent about its process and also providing better clarity around its release policies, while others were alarmed at the changes. 

About the Authors

Jeremy Kahn, Editor, AI
Jeremy Kahn is the AI editor at Fortune, spearheading the publication's coverage of artificial intelligence. He also co-authors Eye on AI, Fortune’s flagship AI newsletter.

Beatrice Nolan, Tech Reporter
Beatrice Nolan is a tech reporter on Fortune’s AI team, covering artificial intelligence and emerging technologies and their impact on work, industry, and culture. She's based in Fortune's London office and holds a bachelor’s degree in English from the University of York. You can reach her securely via Signal at beatricenolan.08
