
Investors are pouring billions into artificial intelligence. It’s time for a commensurate investment in A.I. governance

By Beena Ammananth
January 16, 2023, 12:26 PM ET
Pope Francis holds an audience with signatories of The Rome Call For A.I. Ethics on Jan. 10. The document was signed by the Pontifical Academy for Life, Microsoft, IBM, the FAO, and the Italian Ministry of Innovation. (Vatican Media - Vatican Pool - Getty Images)

In this heyday of A.I. innovation, organizations are pouring tens of billions of dollars into A.I. development. However, for all the money invested in capabilities, there has not been commensurate investment in A.I. governance.

Some companies may take the position that when world governments release A.I. regulations, that will be the appropriate time to wrestle A.I. programs into a governance structure that can address complex topics like privacy, transparency, accountability, and fairness. Until then, the thinking goes, the business can focus solely on A.I. performance.

Regulatory wheels are already in motion. However, regulations move at the speed of bureaucracy, and A.I. innovation is only accelerating. A.I. is already deployed at scale, and we are rapidly approaching a point after which A.I. capabilities will outpace effective rulemaking, putting responsibility for self-regulation squarely in the hands of business leaders.

The solution to this puzzle is for organizations to find the balance between following existing rules and self-regulation. Some companies are rising to the responsible A.I. challenge: Microsoft has an Office of Responsible A.I. Use, Walmart a Digital Citizenship team, and Salesforce an Office of Ethical and Humane Use of Technology. However, more organizations need to quickly embrace a new era of A.I. self-regulation.

The business value in self-regulation

Government bodies cannot look into every enterprise, understand at a technical level what A.I. programs are emerging, forecast the potential issues that may result, and then rapidly create rules to prevent problems before they occur. That is an unreachable regulatory scenario, and not one that businesses would want in any case. Instead, every enterprise has an incisive view of its own A.I. endeavors, putting it in the best position to address A.I. issues as they are identified.

While government regulations are enforced with fines and litigation, the consequences of failing to self-regulate are potentially much more impactful.

Imagine an A.I. tool deployed in a retail setting that uses CCTV feeds, customer data, real-time behavior analysis, and other data to predict what a shopper may be most likely to buy if an employee uses a particular sales technique. The A.I. also shapes customer personas that are stored and updated for targeted advertising campaigns. The A.I. tool itself was purchased from a third-party vendor and is one of dozens of A.I. tools deployed throughout the retailer’s operations.

Emerging regulations may dictate how the customer data is stored and transferred, whether consent is needed before the data is collected, and whether the tool is provably fair in its predictions. Those considerations are valid, but they are not comprehensive from the business perspective. For example, were the A.I. vendor and its tools vetted for security gaps that could imperil the enterprise’s connected technologies? Do staff have the necessary training and documented responsibilities needed to use the tool correctly? Are customers aware that A.I. is being used to build a detailed persona that is stored in another location? Should they be aware?

The answers to these kinds of questions can significantly impact the enterprise in terms of security, efficiency, ROI on technology investments, and brand reputation, among other things. This hypothetical case reveals how failing to self-regulate A.I. programs exposes the organization to myriad potential problems–many of which likely fall outside of a government’s regulatory purview anyway. The best path forward with A.I. is shaped by governance.

Governance for trust in A.I.

No two companies or A.I. use cases are the same, and in the era of self-regulation, the enterprise is called to assess whether the tools it uses can be deployed safely, ethically, and in line with company values and existing or tangential rules. In short, businesses need to know if the A.I. can be trusted.

Trust as a lens for governance impacts more than just the commonly cited A.I. concerns, such as the potential for discrimination and threats to personal data security. As I discuss in my book, Trustworthy AI, trust also applies to things like reliability over time, transparency to all stakeholders, and accountability baked into the entire A.I. lifecycle.

Not all of these factors are relevant to every organization. An A.I. that automates trade reconciliation likely does not pose a threat of discrimination, but the security of the model and the underlying data is critical. Conversely, data security is somewhat less concerning for predictive A.I. used to anticipate food and housing insecurity, but unfairness and discrimination are priority considerations for a tool that relies on historical data that is potentially rife with latent bias.

Effective self-regulation in A.I. requires a whole-of-lifecycle approach, where attention to trust, ethics, and outcomes is embedded at every stage of the project. Processes must be amended to set clear waypoints for decision-making. Employees must be educated and trained to contribute to A.I. governance, with a solid understanding of the tools, their impact, and the employee’s individual accountability in the lifecycle. And the technology ecosystem of edge devices, cloud platforms, sensors, and other tools must all be aligned to promote the qualities of trust most important in a given deployment.

Self-regulation fills the gap between innovation and government-made rules. Not only does it set the enterprise on a path to meeting whatever regulations emerge in the future, but it also delivers significant enterprise value by maximizing investment and minimizing negative outcomes.

For all we have spent on building A.I. capabilities, we should also look toward investing in how we manage and use these tools to their full potential in a trustworthy way–and we should not wait for governments to tell us how.

Beena Ammananth is the executive director of the Global Deloitte A.I. Institute.

The opinions expressed in Fortune.com commentary pieces are solely the views of their authors and do not necessarily reflect the opinions and beliefs of Fortune.
