Law Enforcement Shouldn’t Rely Entirely on A.I. to Decide Whether to Detain Suspects, Report Says

Law enforcement shouldn’t base their decisions to detain suspects or extend prison terms entirely on artificial intelligence because of its flaws and biases.

A report published Friday by the Partnership on AI (PAI), formed by tech giants like Amazon, Google, and Facebook along with advocacy groups like the American Civil Liberties Union, is intended to sound a cautionary note about using the buzzy technology in the criminal justice system. The overarching message is that A.I. can be a useful tool, but that it also has significant limits.

The report was created following the passage of a California law that requires state courts to use machine learning or other statistical tools to sift through the backgrounds of people accused of crimes. Under the law, which goes into effect in October, those tools will help determine whether suspects should be incarcerated before trial.

Currently, much of the existing software used by courts for pretrial and sentencing decisions is based on older data-analysis techniques rather than cutting-edge machine learning, said Peter Eckersley, research director of PAI. But both older and newer data-crunching technologies have underlying flaws that could produce biased results, he explained.

For instance, sentencing tools that rely on historical data could discriminate against minority communities. Some human rights groups have previously voiced concerns that police officers disproportionately target minorities, which could skew arrest data and lead these systems to conclude that minorities are more likely to commit crimes.
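To make that feedback loop concrete, here is a minimal, hypothetical simulation (the neighborhoods, rates, and numbers are all invented for illustration; the report itself contains no code): two populations with identical underlying offense rates, one of which is patrolled twice as heavily. A tool trained on the resulting arrest records would rate the over-patrolled group as twice as risky.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000  # people per neighborhood (hypothetical)

# Both neighborhoods have the SAME underlying offense rate...
true_offense_rate = 0.10

# ...but neighborhood B is patrolled twice as heavily, so an offense
# there is twice as likely to end up as a recorded arrest.
detection_prob = {"A": 0.25, "B": 0.50}

arrest_rates = {}
for hood, p_detect in detection_prob.items():
    offended = rng.random(n) < true_offense_rate
    arrested = offended & (rng.random(n) < p_detect)
    arrest_rates[hood] = arrested.mean()

# A risk tool trained on these arrest records would score B as roughly
# twice as risky as A, even though the underlying behavior is identical.
print(arrest_rates)  # e.g. {'A': ~0.025, 'B': ~0.05}
```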

Although the report highlights many examples of algorithmic bias, Eckersley said the PAI members are not opposed to the use of data-crunching technology in the criminal justice system. Instead of using predictive software to levy punishments, courts could use it to help people, he explained.

A suspect who is at risk of skipping bail may, in fact, merely lack access to a car. The software could flag that person so that the court could offer transportation to get them to the courtroom on time.
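As a sketch of what that supportive use might look like (the field names, risk factors, and suggested interventions here are all hypothetical, not drawn from any real pretrial tool):

```python
from dataclasses import dataclass

@dataclass
class Defendant:
    # Hypothetical risk-factor fields; real pretrial tools use their
    # own (and far more extensive) feature sets.
    has_transportation: bool
    has_stable_housing: bool

def supportive_interventions(d: Defendant) -> list[str]:
    """Suggest help that addresses failure-to-appear risk factors,
    rather than recommending detention."""
    suggestions = []
    if not d.has_transportation:
        suggestions.append("offer a ride or transit voucher for court dates")
    if not d.has_stable_housing:
        suggestions.append("send court-date reminders through a caseworker")
    return suggestions

print(supportive_interventions(
    Defendant(has_transportation=False, has_stable_housing=True)))
```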

“If they are ever going to be appropriately used, the use shouldn’t be to decide to detain or continue to detain a person,” Eckersley said.

The report also lists methods that makers of data-crunching tools can use to mitigate potential biases in their software, and it offers advice to policymakers, lawyers, and legal practitioners on best practices for using such software in the criminal justice system.

Eckersley said that many of the A.I. researchers who participated in the survey were surprised by how widespread the use of these types of predictive systems is in the judicial system.

Although tech companies like Amazon, Apple, and IBM are members of the PAI, the report said “it should not under any circumstances be read as representing the views of any specific member of the Partnership.”

“Instead, it is an attempt to report the widely held views of the artificial intelligence research community as a whole,” the study said.