The European Union today proposed rules governing the use of artificial intelligence that could wind up becoming a de facto standard for how the technology is governed across much of the world.
That’s because the new regulations would cover any A.I. system whose decisions affect an EU citizen, whether that person is a customer or an employee. That’s a market and workforce of 450 million people that most large global businesses can’t afford to ignore.
In many instances, businesses will want to use the same algorithms and systems for all customers and employees rather than training separate software and creating separate processes just for the EU. A.I. algorithms are often more accurate the more data they are trained on, and maintaining separate systems for different geographies may often be impractical.
Meanwhile, companies that violate the law would be subject to fines of up to €20 million ($24 million) or 4% of global annual sales, whichever is larger, with steeper penalties of up to 6% of global annual sales for the most serious violations, such as deploying a banned A.I. practice.
“The question that every firm in Silicon Valley will be asking today is, Should we remove Europe from our maps or not?” said Andre Franca, the director of applied data science at CausaLens, a British A.I. startup.
The impact of the new rules may be similar to that of the EU’s data privacy regulation, the General Data Protection Regulation (GDPR), which became the de facto privacy standard for many of the world’s largest companies after it came into effect in May 2018.
“I think there are likely to be instances where this will be the global standard,” said Anu Bradford, a law professor at Columbia University and author of the book The Brussels Effect, about the way in which the EU has successfully used its regulatory power to promote its own policies and values beyond the 27-nation bloc.
“Ethical technology”
Margrethe Vestager, the EU’s powerful digital technology czar, made it clear that the European Commission, the EU’s executive branch, intends the new law to have an effect well beyond Europe’s borders. “By setting the standards, we can pave the way to ethical technology worldwide and ensure that the EU remains competitive along the way,” she said.
Bradford told Fortune that the Commission doesn’t want European companies to be at a disadvantage compared to those from the U.S. or China that are already leading A.I.’s development while Europe has lagged behind. But in many cases, those American and Chinese companies have established their frontline positions in the technology by hoovering up vast amounts of personal data and deploying A.I. in ways that trample on individual rights and civil liberties that Europe has sought to protect. “They actually want to have this global impact,” Bradford said of the Commission.
American technology companies signaled they are already gearing up to challenge the proposed law, which is called the Artificial Intelligence Act. The act is “a damaging blow to the Commission’s goal of turning the EU into a global A.I. leader,” said Benjamin Mueller, a senior policy analyst at the Center for Data Innovation, a Washington, D.C.-based think tank that is indirectly funded by several large U.S. technology companies. Mueller said the proposed law would create “a thicket of new rules that will hamstring companies hoping to build and use A.I. in Europe” and cause the continent “to fall even further behind the United States and China.”
But the Computer & Communications Industry Association, a trade body that represents a broad section of technology firms, gave tentative support to the proposed law. “We are encouraged by the EU’s risk-based approach to ensure that Europeans can trust and will benefit from A.I. solutions,” Christian Borggreen, the association’s vice president, said in a statement. But he cautioned that the regulations would need further clarity to “avoid unnecessary red tape” and that “regulation alone will not make the EU a leader in A.I.”
“Manipulative A.I.”
The proposed EU rules would ban the use of A.I. for “manipulative, addictive, social control and indiscriminate surveillance practices.” The proposal defines “manipulative A.I.” as a system that would “cause a person to behave, form an opinion or take a decision to their detriment that they would not have taken otherwise.”
The rules would also prohibit A.I.’s use in “indiscriminate surveillance” and in “social scoring systems” unless they are “for a specific legitimate purpose of evaluation and classification.” That rule seems aimed at preventing any European countries or businesses from implementing anything similar to China’s social credit system, in which the government tracks citizens and ranks them based on their behavior, excluding those with low scores from certain benefits and activities.
In other circumstances, the proposal takes a risk-based approach to regulating A.I., with use cases deemed “high-risk” subject to more stringent requirements. According to the proposal, high-risk use cases include:

- critical infrastructure, where A.I. could put people’s lives and health at risk
- education and vocational training, where it could determine access to schooling or professional training
- employment, worker management, and self-employment
- essential private and public services, including access to financial services such as loans
- law enforcement
- migration, asylum, and border control, including verifying the authenticity of travel documents
- the administration of justice
In these high-risk areas, those deploying A.I. systems would need to:

- undertake a risk assessment and take steps to mitigate any dangers
- use high-quality data sets to train the system
- log activity so that A.I. decisions can be recorded and traced
- keep detailed documentation on the system and its purpose, to prove compliance with the law to government regulators
- provide clear and adequate information to the user
- have “appropriate human oversight measures”
- ensure a “high level of robustness, security and accuracy”
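To make the logging requirement concrete, here is a minimal sketch, in Python, of what recording an automated decision for later traceability might look like. The model name, feature names, and log format are invented for illustration; the proposed act does not prescribe any particular implementation.

```python
import json
import logging
import uuid
from datetime import datetime, timezone

# Hypothetical example: model IDs, features, and the log schema below are
# illustrative assumptions, not anything specified by the draft regulation.
logging.basicConfig(filename="ai_decisions.log", level=logging.INFO)

def log_decision(model_id: str, model_version: str, inputs: dict,
                 output, reviewer: str) -> str:
    """Record one automated decision so it can later be traced and audited."""
    record = {
        "decision_id": str(uuid.uuid4()),                # unique, citable reference
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model_id": model_id,
        "model_version": model_version,                  # ties decision to documentation
        "inputs": inputs,                                # features the system actually saw
        "output": output,
        "human_reviewer": reviewer,                      # record of human oversight
    }
    logging.info(json.dumps(record))
    return record["decision_id"]

# Usage: logging a hypothetical loan-scoring decision.
log_decision(
    model_id="loan-scoring",
    model_version="2.3.1",
    inputs={"income": 52000, "loan_amount": 15000},
    output="approved",
    reviewer="analyst_042",
)
```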
Artificial Intelligence Board
The law also proposes a European Artificial Intelligence Board, made up of representatives of the appropriate regulators from each member state, as well as the European Data Protection Supervisor and the Commission, which would help ensure that the law is implemented consistently across the bloc. The board would also be responsible for issuing recommendations to the Commission about which A.I. use cases should be deemed “high-risk” and for building a group of technical experts that national authorities could consult.
Critics have already pointed out that many of the proposed act’s terms are vague, opening companies up to a potential legal morass unless they are better defined in further implementation regulations. For instance, the Center for Data Innovation’s Mueller said that the ban on “manipulative” A.I. used “questionable definitions that will spark costly legal and regulatory battles” and that the “complex regulatory and technical requirements” placed on high-risk A.I. systems “would curtail the use of many socially beneficial applications of A.I.”
But Sarah Khatry, an applied A.I. ethicist at DataRobot, a Boston-based company that helps businesses build predictive analytics systems, told Fortune that many of DataRobot’s customers already undertake the kind of risk assessment and documentation efforts the EU legislation proposes.
Franca, from CausaLens, said the EU is “asking the right questions and they are worried about the right things” when it comes to regulating the emerging technology. He said, however, that the detailed definitions will be important in determining the new legislation’s exact impact. How “appropriate human oversight” is defined, for instance, could be crucial.
The new law could be a boon to companies such as CausaLens that are working on A.I. systems that are more interpretable than many existing machine-learning systems, whose exact reasons for making a particular decision can be opaque even to their creators. Because it can be hard to know exactly when and how these “black box” systems will fail, they might not pass the risk assessments and mitigations the EU says will have to be in place for high-risk systems.
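As a rough illustration of the difference, here is a short Python sketch, on invented data, of an inherently interpretable model whose decisions can be traced to explicit weights on named inputs, the kind of transparency a “black box” deep-learning system typically cannot offer. The feature names and data are assumptions made for the example.

```python
# A minimal sketch contrasting an interpretable model with a "black box".
# The dataset and feature names are invented for illustration only.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 3))    # e.g., scaled income, debt, tenure
y = (X[:, 0] - 2 * X[:, 1] + rng.normal(scale=0.5, size=500) > 0).astype(int)

model = LogisticRegression().fit(X, y)

# Unlike a deep neural network, every decision here can be traced to
# explicit, inspectable weights on named inputs.
for name, coef in zip(["income", "debt", "tenure"], model.coef_[0]):
    print(f"{name}: weight {coef:+.2f}")
```

Each printed weight shows how strongly a given input pushes the model toward approval or rejection, the sort of traceability that the draft rules’ documentation and oversight requirements appear to reward.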
Franca said the proposed law could spur companies to move to new kinds of A.I. that are inherently more interpretable and safer, in much the same way as GDPR pushed some technology companies to design processes that inherently protect personal data. “There could actually be a lot of advancement in this area because of the legislation,” he said.
Khatry said the U.S. may well follow Europe’s lead and enact similar rules for governing the use of A.I. She pointed out that the Algorithmic Accountability Act, a bill introduced in the U.S. Congress in 2019, incorporated many of the same ideas about providing oversight of A.I. software as the proposed European law. That bill, however, never made it to a vote.
Bradford estimated the proposed European law would probably take about two years to wind its way through the bloc’s legislative process. But she said that France, which is assuming the presidency of the European Council next year, is keen to push ahead on the A.I. law and several other related pieces of digital regulation.
Correction, April 21, 2021: A previous version of this story misspelled the name of the company CausaLens.
Correction, April 22, 2021: A previous version of this story misspelled the first name of EU digital technology czar Margrethe Vestager.