
Exclusive: Former OpenAI policy chief creates nonprofit institute, calls for independent safety audits of frontier AI models

By Jeremy Kahn, Editor, AI
January 15, 2026, 12:01 PM ET
Former OpenAI policy chief Miles Brundage, who has just founded a new nonprofit institute called AVERI that is advocating for independent AI safety auditing of the top AI labs. Photo courtesy of AVERI

Miles Brundage, a well-known former policy researcher at OpenAI, is launching an institute dedicated to a simple idea: AI companies shouldn’t be allowed to grade their own homework.

Today Brundage formally announced the AI Verification and Evaluation Research Institute (AVERI), a new nonprofit that advocates subjecting frontier AI models to external audits. AVERI is also working to establish standards for how such audits should be conducted.

The launch coincides with the publication of a research paper, coauthored by Brundage and more than 30 AI safety researchers and governance experts, that lays out a detailed framework for how independent audits of the companies building the world’s most powerful AI systems could work.

Brundage spent seven years at OpenAI, as a policy researcher and an advisor on how the company should prepare for the advent of human-like artificial general intelligence. He left the company in October 2024. 

“One of the things I learned while working at OpenAI is that companies are figuring out the norms of this kind of thing on their own,” Brundage told Fortune. “There’s no one forcing them to work with third-party experts to make sure that things are safe and secure. They kind of write their own rules.”

That creates risks. The leading AI labs do conduct safety and security testing and publish technical reports on many of these evaluations, some of which they run with the help of external “red team” organizations. But right now consumers, businesses, and governments simply have to trust what the labs say about these tests. No one forces them to conduct the evaluations, or to report the results, according to any particular set of standards.

Brundage said that in other industries, auditing is used to provide the public—including consumers, business partners, and to some degree regulators—assurance that products are safe and have been tested in a rigorous way. 

“If you go out and buy a vacuum cleaner, you know, there will be components in it, like batteries, that have been tested by independent laboratories according to rigorous safety standards to make sure it isn’t going to catch on fire,” he said.

New institute will push for policies and standards

Brundage said AVERI is interested in policies that would encourage the AI labs to move to a system of rigorous external auditing, and in researching what the standards for those audits should be, but has no interest in conducting audits itself.

“We’re a think tank. We’re trying to understand and shape this transition,” he said. “We’re not trying to get all the Fortune 500 companies as customers.”

He said existing public accounting, auditing, assurance, and testing firms could move into the business of auditing AI safety, or that startups would be established to take on this role.

AVERI said it has raised $7.5 million toward a goal of $13 million to cover 14 staff and two years of operations. Its funders so far include Halcyon Futures, Fathom, Coefficient Giving, former Y Combinator president Geoff Ralston, Craig Falls, Good Forever Foundation, Sympatico Ventures, and the AI Underwriting Company. 

The organization says it has also received donations from current and former non-executive employees of frontier AI companies. “These are people who know where the bodies are buried” and “would love to see more accountability,” Brundage said.

Insurance companies or investors could force AI safety audits

Brundage said several mechanisms could encourage AI firms to begin hiring independent auditors. One is that big businesses buying AI models may demand audits for assurance that the models will function as promised and don’t pose hidden risks.

Insurance companies may also push for the establishment of AI auditing. For instance, insurers offering business continuity insurance to large companies that use AI models for key business processes could require auditing as a condition of underwriting. The insurance industry may also require audits in order to write policies for the leading AI companies, such as OpenAI, Anthropic, and Google.

“Insurance is certainly moving quickly,” Brundage said. “We have a lot of conversations with insurers.” He noted that one specialized AI insurance company, the AI Underwriting Company, has provided a donation to AVERI because “they see the value of auditing in kind of checking compliance with the standards that they’re writing.”

Investors may also demand AI safety audits to be sure they aren’t taking on unknown risks, Brundage said. Given the multimillion- and multibillion-dollar checks that investment firms are now writing to fund AI companies, it would make sense for these investors to demand independent audits of the safety and security of the products these fast-growing startups are building. If any of the leading labs go public—as OpenAI and Anthropic have reportedly been preparing to do in the coming year or two—a failure to employ auditors to assess the risks of their AI models could open these companies up to shareholder lawsuits or SEC enforcement actions if something later went wrong and contributed to a significant fall in their share prices.

Brundage also said that regulation or international agreements could force AI labs to employ independent auditors. The U.S. currently has no federal regulation of AI and it is unclear whether any will be created. President Donald Trump has signed an executive order meant to crack down on U.S. states that pass their own AI regulations. The administration has said this is because it believes a single, federal standard would be easier for businesses to navigate than multiple state laws. But, while moving to punish states for enacting AI regulation, the administration has not yet proposed a national standard of its own.

In other geographies, however, the groundwork for auditing may already be taking shape. The EU AI Act, which recently came into force, does not explicitly call for audits of AI companies’ evaluation procedures. But its “Code of Practice for General Purpose AI,” which is a kind of blueprint for how frontier AI labs can comply with the Act, does say that labs building models that could pose “systemic risks” need to provide external evaluators with complimentary access to test the models. The text of the Act itself also says that when organizations deploy AI in “high-risk” use cases, such as underwriting loans, determining eligibility for social benefits, or determining medical care, the AI system must undergo an external “conformity assessment” before being placed on the market. Some have interpreted these sections of the Act and the Code as implying a need for what are essentially independent auditors.

Establishing ‘assurance levels,’ finding enough qualified auditors

The research paper published alongside AVERI’s launch outlines a comprehensive vision for what frontier AI auditing should look like. It proposes a framework of “AI Assurance Levels” ranging from Level 1—which involves some third-party testing with limited access, similar to the external evaluations the AI labs currently commission—all the way to Level 4, which would provide “treaty grade” assurance sufficient to underpin international agreements on AI safety.

Building a cadre of qualified AI auditors presents its own difficulties. AI auditing requires a mix of technical expertise and governance knowledge that few possess—and those who do are often lured by lucrative offers from the very companies that would be audited.

Brundage acknowledged the challenge but said it’s surmountable. He talked of mixing people with different backgrounds to build “dream teams” that in combination have the right skill sets. “You might have some people from an existing audit firm, plus some people from a penetration testing firm from cybersecurity, plus some people from one of the AI safety nonprofits, plus maybe an academic,” he said.

In other industries, from nuclear power to food safety, it has often been catastrophes, or at least close calls, that provided the impetus for standards and independent evaluations. Brundage said his hope is that with AI, auditing infrastructure and norms could be established before a crisis occurs.

“The goal, from my perspective, is to get to a level of scrutiny that is proportional to the actual impacts and risks of the technology, as smoothly as possible, as quickly as possible, without overstepping,” he said.

About the Author

Jeremy Kahn is the AI editor at Fortune, spearheading the publication's coverage of artificial intelligence. He also co-authors Eye on AI, Fortune’s flagship AI newsletter.
