Europe’s A.I. Act, which, if passed, will probably be the world’s first comprehensive regulation of the technology, is moving forward at high speed.
The bill was first introduced by the European Commission just over two years ago, but the rapid rise of generative A.I. recently forced lawmakers to scramble to modernize it. Today, the European Parliament approved its preferred version of the law, which would have major impacts on the likes of ChatGPT. The move opens the way for final “trilogue” negotiations between Parliament, the Commission, and national governments—and, to drive home the sense of urgency, those talks begin tonight.
For the big generative A.I. players, the most important part of Parliament’s preferred version is a new article that would force the providers of foundation models (such as OpenAI’s GPT) to assess their systems for potential impacts on fundamental rights, health and safety, the environment, democracy, and more—and to then mitigate any problems—before releasing them onto the market.
Content generated by these foundation models would have to be labeled as such, and A.I. providers would have to publish summaries of the copyrighted data they used to train the models—a potentially tall order if the training material was indiscriminately scraped from the internet.
Social media recommendation systems would be classified as high-risk, alongside A.I. used in critical infrastructure, recruitment, or robot-assisted surgery. That would mean serious oversight measures and transparency obligations toward users.
Meanwhile, digital rights and consumer advocates are pretty ecstatic about the Parliament’s new bans on any real-time facial-recognition systems in public spaces; most retrospective remote biometric identification systems; the scraping of facial images from social media to create databases for facial recognition; predictive policing; social scoring by companies; automated emotion recognition in law enforcement, the workplace, and schools; and biometric categorization systems using characteristics like race or gender.
The same activists are, however, very unhappy about the Act’s lack of protections for migrants facing A.I.-powered risk assessments at Europe’s borders, and about the leeway A.I. vendors would have in classifying their own systems on the risk scale. Trade unionists are also grumbling that the Act only restricts A.I. in the workplace if it can be shown to pose a “significant risk”; they would prefer to be able to apply the precautionary principle.
“The bans proposed by the Parliament today on the use of facial recognition in publicly accessible spaces, or on social scoring by businesses, are essential to protect fundamental rights,” said Ursula Pachl, deputy director general of the European Consumer Organisation (BEUC). “The creation of rights for consumers, such as a right to be informed that a high-risk A.I. system will take a decision about you, are also very important.”
But Pachl added: “We however regret that the Parliament gives businesses the option to decide if their A.I. system is considered high-risk or not, and to thus escape from the main rules of the law.”
It is extraordinary for the first trilogue to take place right after Parliament’s plenary vote on a proposed law, but here we are. The political pressure to get this over the finish line is immense, and the final version may even be ready this year (with companies probably then given a couple of years to adapt before the law comes into force).
However, that compromise may not look quite like what I’ve just described. The EU’s member states will probably have their own ideas about restricting the use of A.I. in law enforcement, for example—and Big Tech’s lobbyists will be bending the ears of those national governments regarding the impact on all those red-hot large language models. So stay tuned. Given the influence of EU legislation on other countries’ tech laws, the A.I. Act’s final form will have global significance.
More news below.
Want to send thoughts or suggestions to Data Sheet? Drop a line here.
David Meyer
Data Sheet’s daily news section was written and curated by Andrea Guzman.
NEWSWORTHY
Google’s latest antitrust charges. The European Commission is launching a formal antitrust investigation after a two-year probe into Google’s advertising technology business. This follows previous antitrust fines the Commission has imposed on Google for Android-related abuses and for unfairly promoting its comparison-shopping services. This time, the Commission alleges that the internet giant has unlawfully favored AdX, Google’s ad exchange, for almost a decade. Google disagreed with the Commission’s view, saying the investigation focuses on a narrow aspect of its advertising business.
Some subreddits vow to go dark indefinitely. Thousands of subreddits that went dark to express distaste for Reddit’s API changes end their protest today, but several say they will keep going for another week and possibly even indefinitely. Redditors’ continuing objection to the platform’s increased API pricing for third-party apps hasn’t swayed CEO Steve Huffman. In an internal memo to staff obtained by The Verge, Huffman said the company “absolutely must ship what we said we would.” Meanwhile, third-party Reddit apps like Apollo plan to shut down near the end of the month, before the new pricing goes into effect.
Microsoft’s news literacy program. As part of its alliance with a nonprofit group of news outlets known as the Trust Project, Microsoft will lead readers to advice on gauging the trustworthiness of news stories via such factors as a journalist’s expertise and reporting methods. Readers will be directed to the tips through ads that will appear on their devices if they use Microsoft products and systems. But the move to help people find credible sources comes as the company faces scrutiny for allowing misinformation to spread. Last week, European regulators pushed tech companies like Microsoft, Google, and Meta to label A.I.-generated content so that people aren’t confused about where information comes from.
ON OUR FEED
“It seemed to be very tactical—the millions, and at times billions, that moved within a couple of months to these offshore accounts located in different parts of the world, that still have an account at these U.S.-based banks.”
—Suzanne Lynch, an expert in money laundering and professor of economic crime at Utica College, on what investigators will be looking into following last week’s complaints from the Securities and Exchange Commission and the Commodity Futures Trading Commission, which focus on the billions of dollars flowing between different Binance companies and accounts.
IN CASE YOU MISSED IT
A.I. is changing business and society faster than anyone expected. These 13 A.I. innovators are deciding how the tech will change your life, by Andrea Guzman
A.I. company raises record $113 million just a month after being founded—despite having no product and only just hiring staff, by Chloe Taylor
A.I. makes workers feel so isolated and conflicted that it’s driving them to drink and suffer from insomnia, study finds, by Rachel Shin
50 Best Places to Live for Families, by Fortune Editors
BEFORE YOU GO
YouTube creators are boosting audience growth with A.I. translation tools. Creators are singing the praises of one particular type of A.I.: multilingual translation.
Creators like MrBeast no longer have to maintain separate language-specific YouTube channels; a single channel can now serve each video in the language of the viewer’s region. Fortune talked to comedic content creator Adam Waheed, who is aiming to triple his annual YouTube views from 11.5 billion to about 35 billion with A.I. translations. He works with a company called Deeptune, whose tools translate his videos into French, Italian, and other languages while capturing his voice, inflections, and emotions, all in 15 minutes. “My content is already very global. With me translating, it’s going to explode,” he said.
This is the web version of Data Sheet, a daily newsletter on the business of tech. Sign up to get it delivered free to your inbox.