Commentary · AI

Why the U.S. Could Fall Behind in the Global AI Race

By Joshua New and Bethany Cianciolo
August 1, 2018, 1:39 PM ET
A scientist prepares ROBOY, a humanoid robot developed by the University of Zurich's Artificial Intelligence Lab (AI-Lab). ROBOY is the prototype of a next-generation humanoid robot that uses tendons and a skeleton-like structure to move its body parts. Thanks to 3D-printing technology, it was developed in only nine months. EThamPhoto—Getty Images

The country that wins the global race for dominance in artificial intelligence stands to capture enormous economic benefits, including potentially doubling its economic growth rates by 2035. Unfortunately, the United States is getting bad advice about how to compete.

Over the past year, Canada, China, France, India, Japan, and the United Kingdom have all launched major government-backed initiatives to compete in AI. While the Trump administration has begun to focus on how to advance the technology, it has not developed a cohesive national strategy to match those of other countries. As a result, the conversation about how U.S. policymakers should support AI has come to be dominated by proposals from advocates primarily concerned with staving off the technology's potential harms through restrictive regulation, rather than with supporting its growth.

AI does pose unique challenges, from potentially exacerbating racial bias in the criminal justice system to raising ethical concerns about self-driving cars. The leading ideas for addressing these challenges are to mandate the principle of algorithmic transparency or algorithmic explainability, or to form an overarching AI regulator. However, not only would these measures likely be ineffective at addressing the challenges, but they would also significantly slow the development and adoption of AI in the United States.

Proponents of algorithmic transparency contend that requiring companies to disclose the source code of their algorithms would allow regulators, journalists, and concerned citizens to scrutinize the code and identify any signs of wrongdoing. While the complexity of AI systems leaves little reason to believe that this would actually be effective, it would make it significantly easier for bad actors in countries that routinely flout intellectual property protections, most notably China, to steal U.S. source code. This would simultaneously give a leg up to the United States’ main competition in the global AI race and reduce incentives for U.S. firms to invest in developing AI.

Others have proposed algorithmic explainability, where the government would require companies to make their algorithms interpretable to end users, such as by describing how their algorithms work or by only using algorithms that can articulate rationales for their decisions. For example, the European Union has made explainability a primary check on the potential dangers of AI, guaranteeing in its General Data Protection Regulation (GDPR) a person’s right to obtain “meaningful information” about certain decisions made by an algorithm.

Requiring explainability can be appropriate, and it is already the standard in many domains, such as criminal justice or consumer finance. But extending this requirement to AI decision-making in circumstances where the same standard doesn’t apply for human decisions would be a mistake. It would incentivize businesses to rely on humans to make decisions so they can avoid this regulatory burden, which would come at the expense of productivity and innovation.

Additionally, there can be inescapable trade-offs between explainability and accuracy. An algorithm's accuracy typically increases with its complexity, but the more complex an algorithm is, the more difficult it is to explain. This trade-off has always existed (a simple linear regression with two variables is easier to explain than one with 200 variables), but it becomes more acute with more advanced data science methods. Thus, explainability requirements would only make sense in situations where it is appropriate to sacrifice accuracy, and such cases are rare. For example, it would be a terrible idea to prioritize explainability over accuracy in autonomous vehicles, as even slight reductions in navigational accuracy, or in a vehicle's ability to differentiate between a pedestrian on the road and a picture of a person on a billboard, could be enormously dangerous.
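To make the trade-off concrete, here is a minimal, illustrative sketch in Python. It is not from the article; it assumes scikit-learn is installed and uses a synthetic benchmark dataset, with model choices that are purely hypothetical. The point it demonstrates: a plain linear regression can be "explained" simply by reading off its coefficients, while a more complex gradient-boosted model typically scores higher on nonlinear data but offers no comparably compact account of any individual prediction.

    # Illustrative sketch only: a toy contrast between an interpretable
    # linear model and a more accurate but harder-to-explain one.
    # Assumes scikit-learn is installed; the dataset is synthetic.
    from sklearn.datasets import make_friedman1
    from sklearn.ensemble import GradientBoostingRegressor
    from sklearn.linear_model import LinearRegression
    from sklearn.model_selection import train_test_split

    # Friedman #1: a standard synthetic benchmark with nonlinear structure.
    X, y = make_friedman1(n_samples=2000, noise=0.5, random_state=0)
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

    # Simple model: every prediction is a weighted sum of the inputs,
    # so each coefficient can be read off and explained directly.
    linear = LinearRegression().fit(X_train, y_train)

    # Complex model: usually more accurate on nonlinear data, but there is
    # no short, faithful story for why it made any single prediction.
    boosted = GradientBoostingRegressor(random_state=0).fit(X_train, y_train)

    print("linear  R^2:", round(linear.score(X_test, y_test), 3))
    print("boosted R^2:", round(boosted.score(X_test, y_test), 3))
    print("linear 'explanation' is just coefficients:", linear.coef_.round(2))

On data like this, the boosted model generally wins on accuracy, and that gap is precisely what an explainability mandate would force a company to give up.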

A third popular but bad idea, championed most notably by Elon Musk, is to create the equivalent of the Food and Drug Administration or the National Transportation Safety Board to serve as an overarching AI regulatory body. The problem is that establishing an AI regulator falsely implies that all algorithms pose the same level of risk and the same need for regulatory oversight. In reality, an AI system's decisions, like a human's, are subject to a wide variety of industry-specific laws and regulations and pose widely varying levels of risk depending on their application. Subjecting low-risk decisions to regulatory oversight simply because they involve an algorithm would be a considerable barrier to deploying AI, limiting the ability of U.S. firms to adopt the technology.

Fortunately, there is a viable way for policymakers to address the potential risks of AI without sabotaging it: Adopt the principle of algorithmic accountability, a light-touch regulatory approach that incentivizes businesses deploying algorithms to use a variety of controls to verify that their AI systems act as intended, and to identify and rectify harmful outcomes. Unlike algorithmic transparency, it would not threaten intellectual property. Unlike algorithmic explainability, it would allow companies to deploy advanced, innovative AI systems, yet still require that they be able to explain certain decisions when context demands it, regardless of whether AI was used in those decisions. And unlike a master AI regulator, algorithmic accountability would ensure regulators could understand AI within their sector-specific domains while limiting the barriers to AI deployment.

If the United States is to be a serious contender in the global AI race, the last thing policymakers should do is shackle AI with ineffective, economically damaging regulation. Policymakers who want to focus now on unfair or unsafe AI should instead pursue the principle of algorithmic accountability as a means of addressing their concerns without kneecapping the United States as it enters the global AI race.

Joshua New is a senior policy analyst at the Center for Data Innovation, a think tank studying the intersection of data, technology, and public policy.
