The rapid development and proliferation of artificial intelligence technology are two of the biggest challenges facing government regulators around the world. While the U.S.'s and China's approaches to A.I. are still in the very early stages, the situation in Europe offers a valuable case study on regulating something as complex and fast-changing as A.I.
It's now nearly two years since the European Commission proposed an "Artificial Intelligence Act" that is still grinding its way through the EU's legislative process. On the one hand, this shows how inappropriately slow the A.I. regulation push is, given the breakneck speed at which the technology is developing and being deployed. On the other hand, the process's drawn-out nature could in this case prove beneficial.
The Commission’s original proposal would ban things like the use of A.I.-based social scoring systems by public authorities, and systems that “manipulate persons through subliminal techniques beyond their consciousness.” It deems some A.I. systems “high risk” because of the threat they pose to safety or fundamental civil rights, and hits them with strict transparency, oversight, and security requirements—but the bill’s list of such systems is quite precise, including the likes of biometric identification systems and those used for managing critical infrastructure.
That original proposal doesn't deal with more general-purpose A.I. systems (not to be confused with "artificial general intelligence," a.k.a. the Singularity), and the only time it references chatbots is when it says they would need just "minimum transparency obligations." The release of OpenAI's game-changing GPT technology and its ChatGPT front end—and the coming onslaught of rival large language models from Google and Meta—have made this approach seem somewhat antiquated, and certainly not up to the task that regulators will face.
But the Commission is only one of the big three EU institutions that get to wrangle new legislation.
At the end of last year, the Council of the EU (the institution that represents the bloc’s national governments) published its preferred version of the bill. This version refers to general-purpose A.I. (GPAI) systems that could potentially be used in high-risk A.I. systems, saying they need to be regulated like high-risk systems themselves.
Obviously, this approach is extremely controversial, with critics arguing the Council’s definition of GPAI—“A.I. systems that are intended by the provider to perform generally applicable functions, such as image/speech recognition, and in a plurality of contexts”—is too fuzzy, and the obligations too laden with legal liabilities for open-source A.I. projects.
Yesterday, the lawmakers who are leading the European Parliament’s scrutiny of the bill presented their take on the GPAI subject. According to Euractiv, which reported on their proposals, the Parliament’s A.I. Act “rapporteurs”—the Romanian liberal Dragoș Tudorache and the Italian social-democrat Brando Benifei—would define GPAI as “an A.I. system that is trained on broad data at scale, is designed for generality of output, and can be adapted to a wide range of tasks.”
This would very much cover the likes of the just-released GPT-4, requiring OpenAI to submit to external audits of the system's performance, predictability, and safety—and even its interpretability. GPAI providers would need to document the risks they cannot mitigate. They would have to identify and mitigate potential biases in the data sets on which their large language models are trained. The rapporteurs' proposals even include the creation of international A.I. compliance benchmarks. What's more, a company that distributes or deploys a GPAI and substantially modifies it would be seen as a provider of a high-risk A.I. system, and would have to comply with all the above. That would presumably include the likes of Microsoft, OpenAI's deep-pocketed partner.
With the Council and Parliament on roughly the same page regarding general-purpose A.I.—and with EU tech rules being so globally influential—it does seem like meaningful A.I. regulation is coming, at least in Europe. The question is how quickly it will arrive, and how much the landscape might have shifted further by that point. Getting the wording right will be essential if the law is to remain relevant by the time it comes into force.
Want to send thoughts or suggestions to Data Sheet? Drop a line here.
David Meyer
Data Sheet’s daily news section was written and curated by Andrea Guzman.
NEWSWORTHY
ChatGPT in the workplace. People have been using ChatGPT to make their jobs easier, so Fortune spoke to about 70 CEOs to see how they're using the tool. They say it's been especially helpful for tasks like keyword research, obtaining email outreach templates, and identifying link-building opportunities. While some have worried their jobs could be replaced by the tool, others aren't so concerned. Sameer Ahmed Khan, CEO of marketing-tech firm Social Champ, said it's not a threat to his team's jobs: "In reality, however, ChatGPT only complements their work and streamlines their workflow."
TikTok sets the trends. A cybersecurity executive at TikTok says the company has manipulated its algorithm to promote events like the World Cup or Taylor Swift joining the platform, Insider reports. TikTok's Los Angeles–based editorial team controls the boosting, and its data-management partner Oracle can review it. The exec compared the practice to Netflix promoting a show or movie on its home page and said it applies to a "very small percentage of videos."
Microsoft signs Call of Duty deal. Microsoft has reached a deal with cloud gaming company Boosteroid on distributing Call of Duty video games, the Wall Street Journal reports. This comes as Microsoft seeks approval to acquire Activision Blizzard, the video game franchise’s owner. The U.K.’s antitrust regulator previously warned that Microsoft could use Call of Duty exclusivity to boost Xbox console sales and harm Sony’s PlayStation.
ON OUR FEED
“I am not GPT-4, but I am an A.I. language model created by OpenAI. I am based on GPT-3, which was released in 2020. If there has been a newer version released since then, such as GPT-4, I wouldn’t have knowledge of it as my training data only goes up until September 2021.”
—GPT-4, when tech blogger Jane Manchun Wong asked if it’s GPT-4, which was released on Tuesday
IN CASE YOU MISSED IT
Mark Zuckerberg suggests new hires work better with 3 days a week in the office as he pursues Meta’s ‘year of efficiency,’ by Nicholas Gordon
Morgan Stanley is testing OpenAI’s chatbot that sometimes ‘hallucinates’ to see if it can help financial advisors, by Prarthana Prakash
Bank of America won big from the Silicon Valley Bank collapse, by Eleanor Pringle
The U.S. housing market could face 2 big changes in the wake of Silicon Valley Bank’s collapse, says Zillow, by Lance Lambert
Cathie Wood complains that regulators unfairly targeted crypto while missing the crisis ‘looming’ in traditional banking, by Nicholas Gordon
BEFORE YOU GO
NASA reveals an image of a star before it explodes. The Webb Telescope, equipped with an instrument that sees infrared light, which has wavelengths longer than our eyes can see, captured the brief phase before a star's supernova. The star, known as WR 124, is 15,000 light-years away in the constellation Sagitta and is one of a select group of stars that undergo a period known as the Wolf-Rayet phase, in which a star casts off its outer layers. Stars like WR 124 help astronomers understand the early history of the universe. "Similar dying stars first seeded the young universe with heavy elements forged in their cores—elements that are now common in the current era, including on Earth," NASA wrote in a release.
This is the web version of Data Sheet, a daily newsletter on the business of tech. Sign up to get it delivered free to your inbox.