Disruptive technology has always challenged legislators. It took labor laws decades to reckon with the changes brought by the Industrial Revolution. Despite vehicles posing relatively obvious safety risks, seat belts only became mandatory 60 years after the Ford Model T first hit the road. These laws are reactionary by nature: the result of governments legislating against negative use cases once they become apparent.
However, the proposal made by the White House Office of Science and Technology Policy reflects a change of tack. The nation’s leading scientists believe that artificial intelligence poses such a risk that we need another Bill of Rights to protect what makes us human.
They are right.
A very human threat
Artificial intelligence is a uniquely human technology: no technology knows more about us. The flip side is that no technology is better at manipulating us.
Nowhere is this more evident than on social media platforms, which are powered by a sophisticated A.I. that knows more about our emotions than we do.
These companies understand that, generally speaking, stronger emotional responses result in greater engagement. By ingesting a decade’s worth of user data, their algorithms have become exceptional at predicting the emotional response an individual piece of content will elicit. To maintain engagement, increasingly emotive content is served.
These algorithms have computerized something fundamental about the human brain—the understanding of emotion—and used it to make a product more compelling. Not only does this show our susceptibility to A.I., but it also highlights the scale of the threat.
While other disruptive technologies took time to achieve mass adoption, an algorithm can impact hundreds of millions of people in seconds. And when an algorithm is trained on biased, incomplete, or otherwise flawed data sets, the resulting decisions could be catastrophic.
All of which poses a problem for legislators. With data knowing no borders, can individual countries protect their citizens with initiatives like the U.S. Bill of Rights?
The prospects of a universal approach
America isn’t the first to attempt A.I. regulation. The European Union recently unveiled a preliminary framework intended to regulate A.I. use, which could serve as a basis for other A.I. legislation around the world.
The EU’s initial draft has caused significant debate. Critics on both sides of the argument point to vague language that might ultimately be defined by the courts. The phrasing of concepts such as “manipulative A.I.” and “programs which can cause physical or emotional harm” has raised particular concern.
One nuance the text handles cleverly is the separation of impact from intent, reflecting an understanding that A.I. can have real-life impacts regardless of the intent behind it. The EU approach categorizes different “behaviors,” enforcing different requirements for A.I. looking to influence each behavioral subset. For example, the bar is set higher for algorithms influencing banking and financial behavior than for those managing supply chains.
By focusing on end impact, the proposed regulation aims to remove subjectivity from its enforcement, the kind of objectivity A.I. itself is inherently good at.
However, the availability of an existing policy framework raises the question of whether the U.S. should develop a separate regulation or instead work with other global powers on a unified approach. While a unified policy would of course reflect today’s global internet, the actual policing and enforcement of a global framework would be close to impossible.
It would also overlook the nuanced relationship different countries have with technology. Even between countries as similar as Australia and the U.S., the former’s new biometric-based pandemic quarantine trial shows a striking difference in attitudes toward data collection. A uniform approach would not account for these regional sentiments, preventing countries from codifying the relationship their citizens want to have with A.I.
The EU’s data rights legislation (GDPR) shows that commercial considerations may be more impactful than legal ones. While the law technically applies only to operations in Europe, many companies have shifted their global policies to comply worldwide for the sake of operational efficiency.
This is not to say a purely American A.I. Bill of Rights would be issue-free. For one, it would leave much of our future in the hands of the courts. It would also leave a void in global oversight. It will be increasingly important for global bodies to publicize the rights citizens should expect when it comes to their personal data, as well as what they should allow A.I. solutions to decide. Global citizens must be aware of the worst abuses.
A.I. is too powerful a technology for us to wait and see. The A.I. genie may already be out of the bottle, but it’s never too late to codify what our digital rights should be.
Steve Ritter is the CTO of digital ID and biometrics company Mitek Systems.