
Yoshua Bengio: California’s AI safety bill will protect consumers and innovation

By Yoshua Bengio
August 15, 2024, 7:51 AM ET

Recognized worldwide as one of the leading experts in artificial intelligence, Yoshua Bengio is best known for his pioneering work in deep learning, which earned him the 2018 A.M. Turing Award, dubbed “the Nobel Prize of Computing,” along with Geoffrey Hinton and Yann LeCun. He currently serves as professor at the University of Montreal and scientific director at Mila-Quebec AI Institute. This op-ed was facilitated by the office of California State Senator Scott Wiener, who introduced SB 1047.

Yoshua Bengio, most known for his pioneering work in deep learning, attends the 2024 TIME100 Gala on Apr. 25 in New York City. Kristina Bumphrey - Variety - Getty Images

As a fellow AI researcher, I have enormous respect for Dr. Fei-Fei Li’s scientific contributions to our field. However, I disagree with her recently published stance on California’s SB 1047. I believe this bill represents a crucial, light-touch, and measured first step in ensuring the safe development of frontier AI systems to protect the public.

Many experts in the field, including myself, agree that SB 1047 outlines a bare minimum for effective regulation of frontier AI models against foreseeable risks and that its compliance requirements are intentionally light and not prescriptive. Instead, it relies on model developers to make self-assessments of risk and implement basic safety testing requirements. It also focuses only on the largest AI models—those costing over $100 million to train—which ensures it will not hamper innovation among startups or smaller companies. Its requirements align closely with voluntary commitments many leading AI companies have already made (notably with the White House and at the Seoul AI Summit).

We cannot let corporations grade their own homework and simply put out nice-sounding assurances. We don’t accept this in other technologies such as pharmaceuticals, aerospace, and food safety. Why should AI be treated differently? It is important to go from voluntary to legal commitments to level the competitive playing field among companies. I expect this bill to bolster public confidence in AI development at a time when many are questioning whether companies are acting responsibly.

Critics of SB 1047 have asserted that this bill will punish developers in a manner that stifles innovation. This claim does not hold up to scrutiny. It is common sense for any sector building potentially dangerous products to be subject to regulation ensuring safety. This is what we do in everyday sectors and products, from automobiles to electrical appliances to home building codes. Although hearing perspectives from industry is important, the solution cannot be to completely abandon a bill that is as targeted and measured as SB 1047. Instead, I am hopeful that, with additional key amendments, some of the main concerns from industry can be addressed while staying true to the spirit of the bill: protecting innovation and citizens.

Another particular topic of concern for critics has been the potential impact of SB 1047 on the open-source development of cutting-edge AI. I have been a lifelong supporter of open source, but I don’t view it as an end in itself that is always good no matter the circumstances. Consider, for instance, the recent case of an open-source model that is being used at a massive scale to generate child pornography. This illegal activity is outside the developer’s terms of use, but now the model is released and we can never go back. With much more capable models being developed, we cannot wait for their open release before acting. For open-source models much more advanced than those that exist today, compliance with SB 1047 will not be a trivial box-checking exercise, like putting “illegal activity” outside the terms of service.

I also welcome the fact that the bill requires developers to retain the ability to quickly shut down their AI models, but only if they are under their control. This exception was explicitly designed to make compliance possible for open-source developers. Overall, finding policy solutions for highly capable open-source AI is a complex issue, but the threshold of risks vs. benefits should be decided through a democratic process, not based on the whims of whichever AI company is most reckless or overconfident.

Dr. Li calls for a “moonshot mentality” in AI development. I agree deeply with this point. I also believe this AI moonshot requires rigorous safety protocols. We simply cannot hope for companies to prioritize safety when the incentives to prioritize profits are so immense. Like Dr. Li, I would also prefer to see robust AI safety regulations at the federal level. But Congress is gridlocked and federal agencies constrained, which makes state action indispensable. In the past, California has led the way on green energy and consumer privacy, and it has a tremendous opportunity to lead again on AI. The choices we make about this field now will have profound consequences for current and future generations.

SB 1047 is a positive and reasonable step towards advancing both safety and long-term innovation in the AI ecosystem, especially incentivizing research and development in AI safety. This technically sound legislation, developed with leading AI and legal experts, is direly needed, and I hope California Governor Gavin Newsom and the legislature will support it.

The opinions expressed in Fortune.com commentary pieces are solely the views of their authors and do not necessarily reflect the opinions and beliefs of Fortune.
