Eye on AI

When it comes to regulating A.I., rules are good. Enforcement is better.

By Jeremy Kahn, Editor, AI
July 12, 2022, 3:04 PM ET
The London Metals Exchange, where traders work holding two sets of telephone receivers, one to each ear, is considering moving to completely electronic trading, where algorithms already control much of the action. There are strict rules governing this software. But those rules are not well enforced, a lesson for other sectors where algorithms are also making rapid advances. Simon Dawson—Bloomberg via Getty Images

Governments around the world are increasingly debating how to regulate artificial intelligence. Among the most ambitious of the proposed regulations is the Artificial Intelligence Act currently making its way through the European Union’s legislative sausage-making. In the U.S., the Federal Trade Commission has issued a number of warnings about the controls a company should have in place if it uses algorithms to make decisions, and the agency has said it plans to begin rulemaking on the technology. But it is one thing to make new laws. It is another to be able to enforce them.

Bryce Elder, a journalist with The Financial Times, makes this point in a well-argued opinion piece in the newspaper’s “Alphaville” section this week. Elder points out that the industry that is in many ways the furthest along in deploying autonomous systems is finance, where firms have embraced algorithmic trading for more than two decades and are now increasingly replacing static, hard-coded algorithms with those created through machine learning. Algorithms account for as much as 75% of U.S. equities trading volumes, and 90% on foreign exchanges, according to a 2018 SelectUSA study.

There are stringent rules on the books in most jurisdictions about these algorithms: European Union law requires that they be thoroughly tested before being set loose, with firms asked to certify that their trading bots won’t cause market disorder and that they will continue to operate correctly even “in stressed market conditions.” It also specifies that humans at the trading firms using the algorithms bear ultimate responsibility should the software run amok. Trading venues are also held responsible for ensuring market participants have tested their algorithms to this standard.

But as Elder points out, enforcement is patchy at best. The system relies heavily on self-certification by the trading firms deploying the algorithms. Worse, there are no standard testing mechanisms specified. Compliance is low, with industry consultant TraderServe estimating that fewer than half of all firms have stress-tested their algorithmic trading strategies to the appropriate level. In the U.S., there have been some record-breaking fines for market abuse using algorithms, including the $920 million settlement that JPMorgan Chase agreed to pay in 2020 for manipulating the metals markets. But in Europe, there have been no equivalent enforcement actions.

Given this record, says Elder, “good luck with self-driving cars.” He could say the same thing about A.I. more broadly. Self-driving cars, despite the hype surrounding them, are still years—and maybe even decades—away from broad deployment on our roads. But autonomous software is making rapid inroads into other areas, such as health care and medical imaging, where the stakes are literally life and death. And yet, as in finance, there are very few rules governing exactly how rigorously these systems must be tested. The European A.I. Act says that such high-risk uses of A.I. should be held to stricter standards, with the firms deploying them needing to conduct risk assessments. Sounds good on paper. But making sure firms comply is another matter altogether.

Jeremy Kahn
@jeremyakahn
jeremy.kahn@fortune.com

Correction: In last week’s newsletter, I misspelled the last name of Adept co-founder and CEO David Luan and the first name of Adept co-founder and CTO Niki Parmar. I apologize for the errors.

And before we get to this week’s A.I. news, Fortune has a new vertical launching this week: Fortune Well. It is dedicated to health and wellness, which are increasingly top-of-mind issues for both C-suite executives and rank-and-file employees. You can check it out here.

A.I. IN THE NEWS

Sharing deepfake porn should be illegal, a top U.K. advisory body says. The Law Commission, an independent body that examines whether existing laws in Britain need to be overhauled, has recommended that the country adopt new laws to specifically make the sharing of deepfake porn illegal. There is currently no single criminal offense that covers deepfake porn, said the commission, which has been studying the issue since 2019. Deepfakes are highly realistic images and videos created using A.I., and in many cases the technique has been used to graft the head of a woman who has not appeared in a pornographic film onto the bodies of pornographic actresses. More here in the Financial Times.

FIFA will use A.I. to help with offside calls during the 2022 World Cup. The international governing body for soccer has said it will use a combination of sensors, including one in the ball itself, and stadium-mounted cameras, along with machine learning software that can track 29 different points on players' bodies, to help determine if any of those players are offside during the 2022 World Cup in Qatar in November. Alerts from this system will be sent to officials in a nearby control room, who will validate the decision and tell referees on the field what call to make, according to a story in tech publication The Verge.

DeepMind sets up partnership with Francis Crick Institute to apply machine learning to genomics and protein structure. DeepMind, the A.I. research company owned by Alphabet, has set up a partnership with one of the U.K.'s top biomedical research labs, the Francis Crick Institute in London. The deal will see DeepMind establish a lab within the Crick to build machine learning models "to understand and design biological molecules," according to a press release from the two organizations. The lab will also work on genomics. The idea is that biologists at the Crick will be able to experimentally test various designs or hypotheses developed by the A.I. systems that DeepMind's team builds.

Chinese researchers say they can read people's thoughts with A.I., but the world cringes at the totalitarian vibe. A.I. researchers at an institute in Hefei, in China’s Anhui province, say they have developed software that can gauge how loyal people are to the ruling Communist Party by analyzing their facial expressions as they read Communist Party materials online. But the claims sparked immediate outcry, both internationally and among many Chinese citizens. Many international A.I. researchers say they doubt the technology works as well as the Chinese scientists say. But, even if the claims are true, there was widespread concern that the technology would reinforce the increasingly totalitarian control the Chinese government exercises. The Voice of America has more on the story.

EYE ON A.I. TALENT

Ian Goodfellow, a top A.I. researcher credited with having invented generative adversarial networks (or GANs), the deep learning method behind deepfakes and many other advances in the generation of synthetic images and data, has joined DeepMind as a research scientist, according to a tweet Goodfellow posted. He had most recently been at Apple, but had balked at that company's return-to-work policies post-Covid.

EYE ON A.I. RESEARCH

Meta unveils new language translation system that boasts big improvements for "low-resource" languages. Machine translation has made massive leaps in recent years thanks to breakthrough A.I. algorithms and improved training methods. But for languages that have relatively low levels of written material available in electronic form on which to train an A.I. system, little progress has been made. Now Meta's A.I. researchers have created a system called "No Language Left Behind" (or NLLB for short) that can translate between 200 different languages, including tough low-resource languages such as Kamba, Lao, and a number of African languages. In an overall translation benchmark judging all of the languages the A.I. system supports, NLLB improved on existing state-of-the-art results by 44%. For some Indian and African languages the improvement was as great as 70%.

Meta has begun using NLLB on its own Facebook and Instagram services, and it has also made many of the NLLB translation models freely available as open-source software. The open-source models could help many other businesses better serve the populations that speak these low-resource languages, and could also allow speakers of those languages to better access global markets and services online. You can read Meta's blog post about the breakthrough translation system here.

FORTUNE ON A.I.

To solve the water crisis, companies are increasingly turning to A.I.—by Tony Listra

Amazon gives its smart shopping carts an upgrade and expands its checkout-free tech to a college football stadium—by Marco Quiroz-Gutierrez

Elon Musk claims Neuralink’s brain implants will ‘save’ memories like photos and help paraplegics walk again. Here’s a reality check—by Jeremy Kahn, Jonathan Vanian, and Mahnoor Khan

Europeans could be cut off from Facebook and Instagram as soon as September—and TikTok may be next on the block—by David Meyer

BRAINFOOD

Is scale the secret to more powerful, advanced A.I.? There is certainly an entire camp of A.I. researchers who think so. Among the most prominent proponents of this view is Ilya Sutskever, the chief scientist at OpenAI. But there are also other believers in the bigger-will-be-better model of building more capable A.I. to be found scattered throughout most of the world's top A.I. research labs. A whole different group thinks that scale alone isn't the secret to getting us closer to artificial general intelligence (AGI)—the kind of A.I. you know from science fiction that can perform most cognitive tasks as well as or better than a human. This school thinks that today's A.I. is woefully inefficient compared to the human brain, both in its power consumption and in its ability to learn from very few examples. These researchers believe more fundamental algorithmic breakthroughs are needed to get us to the lofty goal of AGI.

Now several researchers affiliated with New York University have created something they are calling the "Inverse Scaling Prize." It is a contest to find tasks where the performance of A.I. systems actually decreases as the size of the A.I. model grows. One known example of this: the increasingly popular ultra-large language models do better on overall benchmarks the bigger they get, but when it comes to producing toxic language, bigger language models are also far more likely to output racist, sexist, homophobic, or stereotyped expressions. The prize hopes to give researchers an incentive to find more such examples.

The winner of the competition will receive $100,000, with up to five second prizes of $20,000 each and up to 10 third prizes of $5,000 each. You can read more about the contest here.

About the Author

Jeremy Kahn is the AI editor at Fortune, spearheading the publication's coverage of artificial intelligence. He also co-authors Eye on AI, Fortune’s flagship AI newsletter.

