The EU is crafting liability laws for A.I. products that cause injuries—here’s why the whole world better pay attention
Hello and welcome to September’s special monthly edition of Eye on A.I.
We are all eagerly awaiting Tesla’s “A.I. Day” on Friday, where Elon Musk is expected to unveil a prototype of the humanoid Optimus robot that the company has been working on for the past year. My colleague Christiaan Hetzner and I interviewed robotics experts and Tesla-watchers to get a sense of what Musk may debut tomorrow—and how close it may come to the much-hyped vision of a robot butler that the billionaire entrepreneur has spun. (Spoiler alert: probably not very close.) You can check out our story here. And of course, I am sure we’ll cover whatever it is Musk unveils in next week’s regular edition of this newsletter.
In the meantime, there was a bit of news this week that has received none of the media attention lavished on Tesla’s “A.I. Day” but may nonetheless ultimately prove more significant for how businesses use A.I. On Wednesday, the European Union, which has led the way globally in creating regulation around technology, issued a new set of proposed civil liability laws for those deploying A.I. systems. The rules try to update and harmonize existing liability rules across the 27-nation bloc to take into account the fact that with many A.I. systems, it might be difficult for someone who has been harmed by A.I. software to figure out how exactly the A.I. software works and where it went wrong.
Remember, the EU already has a new A.I. Act working its way through the legislative process. That act establishes different requirements for the use of A.I. in high-risk use cases—those that affect a person’s health, basic freedom, and financial well-being—and lower-risk use cases, such as marketing or personalization of retail offerings. Companies or organizations using A.I. for high-risk uses are required to do a lot more testing of the software, have to try to make sure the A.I.’s output can be “interpreted” by the person using the software, and must maintain an audit trail.
The new proposed rules in the EU’s A.I. Liability Directive introduce several important concepts. For one thing, the rules note that data loss is a potential harm for which someone can seek civil liability damages. For another, the proposed rules make clear that software counts as a kind of “product” under the EU’s liability laws. Consumer rights and data protection advocates cheered both of these provisions.
Then the proposal says that a court can now order a company using a high-risk A.I. system to turn over evidence of how the software works. (It does say that the disclosure must be “necessary and proportionate” to the nature of the claim, a balancing test that the EU says will help ensure that trade secrets and other confidential information is not needlessly disclosed. But, as always, it will be up to the courts to figure out how this applies in any specific case.) The EU warns that if a company or organization fails to comply with a court-ordered disclosure, the courts will be free to presume the entity using the A.I. software is liable.
Secondly, and most critically, the EU is proposing that when harm can be established and an A.I. system was involved in the decision-making, and when there is at least some likelihood that the A.I. software contributed to the harm, there will be a presumption of liability.
“The principle is simple,” Didier Reynders, the EU’s justice commissioner, told reporters. “The new rules apply when a product that functions thanks to AI technology causes damage and that this damage is the result of an error made by manufacturers, developers or users of this technology.”
But interestingly, the EU defines “an error” in this case to include not just mistakes in how the A.I. is crafted, trained, deployed, or functions, but also a company’s failure to comply with many of the process and governance requirements stipulated in the bloc’s new A.I. Act. The new liability rules say that if an organization has not complied with its “duty of care” under the new A.I. Act—such as failing to conduct appropriate risk assessments, testing, and monitoring—and a liability claim later arises, there will be a presumption that the A.I. was at fault.
This is a clever twist on the EU’s part, according to legal experts, because it creates an additional mechanism for forcing compliance with the broader A.I. Act. If a company or organization doesn’t follow the risk mitigation and compliance steps outlined in the A.I. law, they will be at increased risk of civil liability if something later goes wrong—even if it isn’t clear that the A.I. itself caused the problem.
Of course, a defendant has a right to present evidence rebutting these presumptions—and in the case of high-risk A.I., the defendant can argue the presumption should not apply because enough information about the A.I.’s inner workings is available to prove or disprove the claim. But this presumption of fault gives liability lawyers a powerful tool to use against those deploying A.I. software.
Still, not all consumer rights advocates were happy with the new liability rules. Ursula Pachl, deputy director general of the European Consumer Organisation (which goes by the acronym BEUC), applauded the EU for updating product liability rules to address new software-driven technologies, including A.I. But she said that the new rules would actually make it far harder for a consumer to bring a liability claim than for traditional manufactured goods, such as a lawnmower or blender. In those cases, a company is liable for any defect in a product it makes and sells—regardless of any precautions it took to eliminate defects. But with A.I., a company that can prove it complied with the EU A.I. Act and took reasonable steps to mitigate the risks of its A.I. system might be able to escape liability, particularly if the algorithm was highly complex and difficult to understand. At the very least, consumers would likely have to hire very skilled experts to try to determine whether the algorithm was at fault.
“Consumers are going to be less well protected when it comes to AI services, because they will have to prove the operator was at fault or negligent in order to claim compensation for damages,” she said in a statement. “Asking consumers to do this is a real let down. In a world of highly complex and obscure ‘black box’ AI systems, it will be practically impossible for the consumer to use the new rules.”
We’ll see if these new EU rules actually do what the EU thinks they will. But they are almost certainly an important step globally towards the regulation of A.I. Once again, Europe has jumped out ahead of other regions when it comes to regulating emerging technologies. In the U.S., there is a lot of debate about how existing product liability laws might encompass A.I.—and how they might need to be updated. But don’t be surprised if state legislatures and others start looking to these new EU rules as a model.
Please join me for what promises to be a fantastic virtual round table discussion on A.I. “Values and Value” on Thursday, October 6th, from 12:00 to 1:00 PM Eastern Time.
The A.I. and machine-learning systems that underpin so much of digital transformation are designed to serve millions of customers yet are defined by a relatively small and homogeneous group of architects. Irrefutable evidence exists that these systems are learning moral choices and prejudices from these same creators. As companies tackle the ethical problems that arise from the widespread collection, analysis, and use of massive troves of data, join us to discuss where the greatest dangers lie, and how leaders like you should think about them.
- Naba Banerjee, Head of Product, Airbnb
- Krishna Gade, Founder and CEO, Fiddler AI
- Ray Eitel-Porter, Managing Director and Global Lead for Responsible A.I., Accenture
- Raj Seshadri, President, Data and Services, Mastercard
You can register to attend by following the link from Fortune’s virtual event page.
And, if you want to know more about how to use A.I. effectively to supercharge your business, please join us in San Francisco on December 5th and 6th for Fortune’s second annual Brainstorm A.I. conference. Learn how A.I. can help you to Augment, Accelerate, and Automate. Confirmed speakers include such A.I. luminaries as Stanford University’s Fei-Fei Li, Landing AI’s Andrew Ng, Google’s James Manyika, and Darktrace’s Nicole Eagan. Apply to attend today!
A.I. IN THE NEWS
Meta debuts a text-to-video generator. The social media giant doesn’t want to get left behind in the race to create A.I. systems that can automatically generate content. In the past few months, there’s been a frenzy of interest in software that can take a text prompt—“a teddy bear playing the piano”—and generate images depicting that scene in a wide range of styles, from photorealism to simple cartoons. Now Meta founder and CEO Mark Zuckerberg has posted the news that his company has created a similar system, but it can actually generate short videos from the prompt, not just static images. It’s obvious why Meta is interested in this: besides generating some tech kudos for advancing the state-of-the-art, the company is investing heavily in short-form video through its Reels feature on Instagram in an attempt to match rival TikTok, which has stolen away many younger users from Meta’s Facebook and Instagram platforms.
OpenAI fully launches DALL-E 2 as a commercial product. The San Francisco-based A.I. research company debuted the text-to-image generation system that has spawned a thousand imitators back in April and began providing limited commercial access to a select number of “Beta testers” in July. This week, though, it removed the waitlist to join that group of Beta testers and now allows anyone to sign up to use the software. OpenAI CEO Sam Altman told The Washington Post that giving the public open access was “an essential step in developing the technology safely.” “You have to learn from contact with reality,” Altman said. “What users want to do with it, the ways that it breaks.” This is despite the fact that OpenAI knows a system like DALL-E could be abused to automate the creation of misinformation or possibly as part of fraud schemes. What if the guys making airplanes sent their products out into the world this way?
Making robot bosses seem too human can actually alienate people. That’s the conclusion of a study led by a researcher from the National University of Singapore Business School that looked at human workers’ reactions to being supervised by a robot. As Kai Chi Yam, who led the study, told The Wall Street Journal, the problem is that when a human manager provides negative feedback to an employee, the employee either acknowledges that the manager is correct, accepts that they have some shortcomings, and tries to correct them, or responds defensively, thinking the manager is out to get them. With a robot that does not appear human-like, people tend not to react that way, the researchers found, because they assume the robot has no agency. It’s just doing its job. But when that robot is given a more human-like appearance, the employees think the robot is definitely out to get them. The research is relevant because, according to data from International Data Corp. cited by The Journal, it is estimated that by 2024, 80% of the world’s 2,000 largest corporations will be using “‘digital managers’ to hire, fire, and train workers in jobs measured by continuous improvement.”