The European Union’s AI Act is good to go after the European Parliament overwhelmingly approved it by 523 votes to 46. The law—the world’s first to be specifically aimed at AI—now just needs rubber-stamping by the EU’s member states, after which it will take effect in stages between the end of this year and mid-2026.
So, without going into the details of who proposed what and how the law evolved as it wound its way through a heavily lobbied legislative process, here’s a rundown of what the finished product actually says.
Some AI practices will be straight-up banned, including those that:
—Deploy subliminal techniques to manipulate or deceive people
—Exploit people’s vulnerabilities due to their age, disability, or socio-economic situation
—Use biometric data to deduce sensitive characteristics like race or sexual orientation
—Classify people for social-scoring purposes
—Try to predict whether a person will commit a crime unless they’re already known to be involved in criminal activity
—Expand facial-recognition databases through untargeted scraping of facial images from the internet or CCTV footage
—Try to infer people’s emotions in workplaces and educational institutions
There’s also a ban on using real-time, remote biometric identification in public spaces for law enforcement purposes. But there are significant carve-outs for searches for abduction or sex-trafficking victims and other missing people, for locating people suspected of serious crimes, and for situations where law enforcement faces a specific and imminent threat of a terrorist attack.
A bunch of AI systems may be considered high-risk, in which case their providers must ensure they’re deployed with human oversight, have appropriate levels of accuracy and security, come with extensive documentation, and basically work as advertised. These could include AI systems that ensure the safety of a product; carry out non-banned biometric identification or emotion recognition; are used in recruitment; assess eligibility for public services or visas; are used in law enforcement; or are part of critical infrastructure.
Rather than talking about foundation models (the standard term for the likes of OpenAI’s GPT-4 and Google’s Gemini), this law refers to general-purpose AI (GPAI) models. The developers of such models will need to give up-to-date technical documentation to the European Commission’s new AI Office and national regulators. They will also need to give certain information to AI providers whose services are built on those GPAI models, so those providers can properly understand the models’ capabilities and limitations.
GPAI model developers will generally have to publish a “sufficiently detailed summary” of the content used in training, and they will have to demonstrably abide by EU copyright law. However, there’s an exception here for GPAI models (as long as they aren’t extremely powerful) that have been fully open-sourced, like Meta’s Llama models.
If a GPAI model is extremely powerful (GPT-4 and Gemini may already clear the threshold; others will in the future), it may be deemed to present a systemic risk, in which case its developer must mitigate those risks. That means conducting heavy testing and instituting strict security protections to make sure it doesn’t do bad things. Its developer will also need to give the authorities a heads-up about “serious incidents and possible corrective measures to address them.”
Fines for non-compliance will go up to €35 million ($38 million) or 7% of a violator’s global revenues, whichever is higher.
EU countries will each need to establish at least one “regulatory sandbox” so companies can develop, train, test, and validate their AI systems before putting them on the market. National authorities will be on hand to provide guidance and supervision and are supposed to coordinate with each other to ensure consistency. This is intended to provide a place for innovation without nasty regulatory surprises in the future. Developers of high-risk AI systems will be able to test them in the real world, but only with approval and tight restrictions on things like data protection and the protection of vulnerable groups.
Does this all strike the right balance between safety and the stimulation of innovation? As a world-first, this will probably be an influential law, but will it put Europe at a disadvantage? Let me know your thoughts. More news below.
David Meyer
Want to send thoughts or suggestions to Data Sheet? Drop a line here.
NEWSWORTHY
TikTok bill passed. The House of Representatives on Wednesday passed a bill that would force ByteDance to sell TikTok or see it banned in the U.S. This comes despite former President Donald Trump's surprise opposition to the bill. However, the measure's fate in the Senate seems uncertain.
Intel’s mixed fortunes. There’s good news and bad news for Intel. Good: According to Reuters, Intel has won more time to continue selling advanced processors to China’s heavily sanctioned Huawei, despite pressure on the White House from rival AMD, which has been denied a license to sell similar products in China. Bad: Bloomberg reports that the Pentagon no longer wants to spend up to $2.5 billion on a grant for Intel’s U.S. chip-making plans, meaning that Intel may end up having to spend more of its Chips Act incentives on chips for military and intelligence purposes.
Gemini’s election muzzle. Google’s Gemini AI won’t answer any election-related questions in countries where elections are imminent, the Guardian reports. Half the world is voting this year. In a blog post announcing the move for India, the Google India team also said it was boosting authoritative information on Google Search and YouTube about things like voter registration, which is good because Gemini apparently won’t even touch that subject this year.
Bluesky’s roll-your-own moderation. X rival Bluesky will this week allow its users (on desktop first; mobile later) to apply third-party content moderation filters to the posts they see. As TechCrunch reports, Bluesky is enabling this by open-sourcing a “collaborative moderation tool” called Ozone, which people can use to set up and run their own moderation services.
SIGNIFICANT FIGURES
350,000
—The number of high-end Nvidia H100 GPUs that Meta plans to have in its generative AI-focused infrastructure build-out by the end of this year, according to an update from the company yesterday. Nvidia sold an estimated 550,000 H100s last year, commanding $25,000 or more per GPU. Overall, Meta said its portfolio would “feature compute power equivalent to nearly 600,000 H100s.”
IN CASE YOU MISSED IT
TikTok bill that could lead to a U.S. ban is set to pass the House—but its path is uncertain in the Senate, by the Associated Press
Airbnb bans all indoor security cameras to ‘prioritize the privacy of our community’, by Chris Morris
The untold story of Kickstarter’s crypto Hail Mary—and the secret $100 million a16z-led investment to save its fading brand, by Leo Schwartz and Jessica Mathews
The CEO of the Las Vegas agency behind Boring Company’s first tunnel system says his team will be ‘more involved’ after safety incidents, by Jessica Mathews
The battle between BYD and Tesla moves to a new front: Southeast Asia, by Lionel Lim
A TikTok star who is taking a pay cut for a 9-to-5 job explains why influencing is unsustainable, by Jane Thier
BEFORE YOU GO
Rocket explosion. The Canon-backed startup Space One had to blow up the first rocket it tried to send into space, Bloomberg reports. The as-yet-unexplained failure of the Kairos rocket, which was carrying a Japanese government satellite, sent shares of Canon Electronics down by nearly 13%.
This is the web version of Data Sheet, a daily newsletter on the business of tech. Sign up to get it delivered free to your inbox.