
American AI companies have very different ideas about regulation and how best to thwart China

By David Meyer
March 17, 2025, 12:07 PM ET
[Image: The logos of the Google Gemini, ChatGPT, Microsoft Copilot, Claude by Anthropic, Perplexity, and Bing apps displayed on a smartphone screen. Jaque Silva—NurPhoto via Getty Images]

The biggest U.S. AI companies all have strong views about what the country’s incoming “AI Action Plan” should look like, but they don’t all want the same things.

With the deadline for submissions having passed on Saturday, now is a good time to compare what OpenAI, Anthropic, Microsoft and Google had to say. (Meta has presumably also made a submission, but, unlike its peers, it has not publicized its proposals.)

So here is that comparison. (For brevity’s sake, we have not included the submissions of lobbying groups and investors, nor those of various institutes and think tanks—but we are including a list of links to these proposals at the bottom of this piece.)

AI laws

As we wrote last week, OpenAI’s submission called on the Trump administration to rescue it and its peers from a likely flood of disparate state-level AI laws; more than 700 such bills are currently in play. But it doesn’t want federal legislation. Rather, OpenAI (which was loudly calling for AI legislation a year or two ago) now wants a narrow, voluntary framework that would pre-empt state regulation. Under this deal, AI companies would get juicy government contracts and a heads-up on potential security threats, and the government would get to test the models’ new capabilities and evaluate them against foreign models. (Notably, most of the top AI firms, including OpenAI, had already voluntarily committed to doing this when the Biden administration was in power.)

Google also wants the pre-emption of state laws with a “unified national framework for frontier AI models focused on protecting national security while fostering an environment where American AI innovation can thrive.” However, it isn’t against the idea of federal AI regulation, as long as it focuses on specific applications of the technology and doesn’t hold AI developers responsible for the tools’ misuse. Interestingly, Google used this opportunity to push for a new federal privacy policy that would also pre-empt state-level efforts, on the basis that this affects the AI industry too.

Google wants the administration to engage with other governments on AI legislation, pushing back against any laws that would require companies to divulge trade secrets and establishing an international norm under which only a company’s home government gets to deeply evaluate its models.

Export controls and China

The big AI companies all urged the Trump administration to revise the “AI diffusion” rule that the Biden administration introduced in January in an attempt to stop China from routing unlawful imports of powerful U.S. equipment through third countries. But they want different things.

OpenAI would like more countries added to the rule’s top tier, which allows uncapped imports of U.S. AI chips, as long as those countries “commit to democratic AI principles by deploying AI systems in ways that promote more freedoms for their citizens.” (It calls this “commercial diplomacy.”) Most countries are currently in the AI diffusion rule’s second tier; fewer than 20 are in the top tier.

Microsoft has also said it wants the number of countries that qualify for the diffusion rule’s top tier expanded. Meanwhile, it wants more resources devoted to helping the Commerce Department enforce the part of the rule that says cutting-edge AI chips can only be exported to, and deployed in, data centers that the U.S. government certifies as trusted and secure. It says this would prevent Chinese companies from accessing the most powerful AI chips through a burgeoning gray market of small data-center providers in Asia and the Middle East who don’t ask too many questions about who exactly is renting time on their servers. (Microsoft has not yet published its full submission on the U.S. AI Action Plan; instead, it has published a blog post from its president, Brad Smith, laying out what it thinks the Trump administration should do about the diffusion rule.)

Anthropic wants countries in the second tier to face even tighter controls on the number of Nvidia H100s they can import. The Claude maker also wants U.S. export controls expanded so China cannot get its hands on Nvidia’s less powerful H20 chips, which Nvidia designed specifically for the Chinese market to get around existing U.S. export controls.

Google doesn’t like the AI diffusion rule at all, arguing that it imposes “disproportionate burdens on U.S. cloud service providers,” even if its national-security goals are valid.

OpenAI has also suggested a global ban on Huawei chips and Chinese “models that violate user privacy and create security risks such as the risk of IP theft,” which is being widely interpreted as a dig at DeepSeek.

Copyright

OpenAI scorned Europe’s AI Act, which gives rightsholders the ability to opt out of having their works automatically used to train AI models. It urged the Trump administration to “prevent less innovative countries from imposing their legal regimes on American AI firms and slowing our rate of progress.”

Google, meanwhile, called for balanced copyright laws, as well as privacy laws that automatically exempt publicly available information. It also suggested a review of “AI patents granted in error,” particularly given that Chinese companies have recently been scooping up increasing numbers of U.S. AI patents.

Infrastructure

OpenAI, Anthropic and Google all called for the streamlining of permitting around transmission lines, to encourage a faster energy buildout to support new AI data centers. Anthropic also called for an extra 50 gigawatts of power capacity in the U.S., dedicated solely to AI use, by 2027.

Security and government adoption

OpenAI called on the government to speed up cybersecurity approvals of the top AI tools, so agencies can more easily test their use. It proposed public-private partnerships to develop national-security models that would otherwise have no commercial market, such as models for classified nuclear tasks.

Anthropic also suggested speeding up procurement procedures to get AI embedded into government functions. Notably, it also called for strong security-evaluation roles for the National Institute of Standards and Technology and the U.S. AI Safety Institute, both of which have been hit hard by the Trump administration’s mass firings.

Google argued that national-security agencies should be allowed to use commercial storage and compute for their AI needs. It also called on the government to free up its datasets for commercial AI training, and to mandate open data standards and APIs across different government cloud deployments to enable “AI-driven insights.”

AI’s effects

Anthropic urged the administration to keep a close eye on labor markets and prepare for big changes. Google also said shifts were coming and argued they would demand broader AI skills. It also asked for more funding for AI research and a policy to make sure U.S. researchers have access to enough compute power, data, and models.

Other submissions included those from: the Future of Life Institute, Internet Works, the News Media Alliance, the Association of American Publishers, the Authors Alliance, the Business Software Alliance, the Securities Industry and Financial Markets Association, the American National Standards Institute, the Center for AI and Digital Policy, a16z, the Center for Data Innovation, the ARC Prize Foundation, the R Street Institute, the Abundance Institute, and the Foundation for American Innovation.
