Why the U.S. Could Fall Behind in the Global AI Race

By Joshua New and Bethany Cianciolo
August 1, 2018, 1:39 PM ET
A scientist prepares ROBOY, a humanoid robot developed by the University of Zurich's Artificial Intelligence Lab (AI Lab). ROBOY is the prototype of a next-generation humanoid robot that uses tendons and a skeleton-like structure to move its body parts; thanks to 3D-printing technology, it was developed in only nine months. EThamPhoto—Getty Images

The country that wins the global race for dominance in artificial intelligence stands to capture enormous economic benefits, including potentially doubling its economic growth rates by 2035. Unfortunately, the United States is getting bad advice about how to compete.

Over the past year, Canada, China, France, India, Japan, and the United Kingdom have all launched major government-backed initiatives to compete in AI. While the Trump administration has begun to focus on how to advance the technology, it has not developed a cohesive national strategy to match those of other countries. As a result, the conversation about how U.S. policymakers should support AI has been dominated by proposals from advocates primarily concerned with staving off the technology's potential harms through restrictive regulation, rather than with supporting its growth.

AI does pose unique challenges—from potentially exacerbating racial bias in the criminal justice system to raising ethical concerns with self-driving cars—and the leading ideas for addressing them are to mandate algorithmic transparency or algorithmic explainability, or to form an overarching AI regulator. But not only would these measures likely be ineffective at addressing those challenges, they would also significantly slow the development and adoption of AI in the United States.

Proponents of algorithmic transparency contend that requiring companies to disclose the source code of their algorithms would allow regulators, journalists, and concerned citizens to scrutinize the code and identify any signs of wrongdoing. While the complexity of AI systems leaves little reason to believe that this would actually be effective, it would make it significantly easier for bad actors in countries that routinely flout intellectual property protections, most notably China, to steal U.S. source code. This would simultaneously give a leg up to the United States’ main competition in the global AI race and reduce incentives for U.S. firms to invest in developing AI.

Others have proposed algorithmic explainability, where the government would require companies to make their algorithms interpretable to end users, such as by describing how their algorithms work or by only using algorithms that can articulate rationales for their decisions. For example, the European Union has made explainability a primary check on the potential dangers of AI, guaranteeing in its General Data Protection Regulation (GDPR) a person’s right to obtain “meaningful information” about certain decisions made by an algorithm.

Requiring explainability can be appropriate, and it is already the standard in many domains, such as criminal justice or consumer finance. But extending this requirement to AI decision-making in circumstances where the same standard doesn’t apply for human decisions would be a mistake. It would incentivize businesses to rely on humans to make decisions so they can avoid this regulatory burden, which would come at the expense of productivity and innovation.

Additionally, there can be inescapable trade-offs between explainability and accuracy. An algorithm's accuracy typically increases with its complexity, but the more complex an algorithm is, the more difficult it is to explain. This trade-off has always existed—a simple linear regression with two variables is easier to explain than one with 200 variables—but it becomes more acute with more advanced data science methods. Thus, explainability requirements would only make sense in situations where it is appropriate to sacrifice accuracy—and such cases are rare. For example, it would be a terrible idea to prioritize explainability over accuracy in autonomous vehicles, where even slight reductions in navigational accuracy or in a vehicle's ability to differentiate between a pedestrian on the road and a picture of a person on a billboard could be enormously dangerous.
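To see that trade-off concretely, consider a minimal sketch (assuming Python with scikit-learn; the synthetic dataset and the specific models are hypothetical illustrations, not anything prescribed by the policy debate): a two-variable logistic regression whose coefficients can be read off and explained directly, next to a boosted-tree model trained on all 200 variables, which will typically score higher but whose decision logic is far harder to articulate.

```python
# Hypothetical illustration of the explainability/accuracy trade-off.
# Assumes scikit-learn; the dataset is synthetic and the models are
# stand-ins chosen for clarity, not drawn from the article.
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# A synthetic prediction task with 200 candidate variables.
X, y = make_classification(n_samples=5000, n_features=200,
                           n_informative=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# "Explainable" model: logistic regression on just two variables.
# Its two coefficients can be read and explained directly.
simple = LogisticRegression().fit(X_train[:, :2], y_train)

# Higher-capacity model: boosted trees over all 200 variables.
# Usually more accurate, but far harder to explain to an end user.
complex_model = GradientBoostingClassifier(random_state=0).fit(X_train, y_train)

print("2-variable logistic regression accuracy:",
      round(simple.score(X_test[:, :2], y_test), 3))
print("200-variable gradient boosting accuracy:",
      round(complex_model.score(X_test, y_test), 3))
print("Simple model coefficients (easy to explain):", simple.coef_.round(2))
```

On a typical run, the 200-variable model outperforms the two-variable one by a wide margin—exactly the accuracy an explainability mandate would force a business to leave on the table.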

A third popular but bad idea, championed most notably by Elon Musk, is to create the equivalent of the Food and Drug Administration or the National Transportation Safety Board to serve as an overarching AI regulatory body. The problem is that a standalone AI regulator falsely implies that all algorithms pose the same level of risk and warrant the same degree of oversight. In reality, an AI system's decisions, like a human's, are already subject to a wide variety of industry-specific laws and regulations and pose widely varying risks depending on their application. Subjecting low-risk decisions to regulatory oversight simply because they use an algorithm would be a considerable barrier to deploying AI, limiting the ability of U.S. firms to adopt the technology.


Fortunately, there is a viable way for policymakers to address the potential risks of AI without sabotaging it: Adopt the principle of algorithmic accountability, a light-touch regulatory approach that incentivizes businesses deploying algorithms to use a variety of controls to verify that their AI systems act as intended, and to identify and rectify harmful outcomes. Unlike algorithmic transparency, it would not threaten intellectual property. Unlike algorithmic explainability, it would allow companies to deploy advanced, innovative AI systems, yet still require that they be able to explain certain decisions when context demands it, regardless of whether AI was used in those decisions. And unlike a master AI regulator, algorithmic accountability would ensure regulators could understand AI within their sector-specific domains while limiting the barriers to AI deployment.

If the United States is to be a serious contender in the global AI race, the last thing policymakers should do is shackle AI with ineffective, economically damaging regulation. Policymakers who want to focus now on unfair or unsafe AI should instead pursue the principle of algorithmic accountability as a means of addressing their concerns without kneecapping the United States as it enters the global AI race.

Joshua New is a senior policy analyst at the Center for Data Innovation, a think tank studying the intersection of data, technology, and public policy.
