San Francisco Bans Facial-Recognition Tools for Its Police and Other City Departments
Concerned that some new surveillance technologies may be too intrusive, San Francisco became the first U.S. city to ban the use of facial-recognition tools by its police and other municipal departments.
The Board of Supervisors approved the Stop Secret Surveillance ordinance Tuesday, culminating a re-examination of city policy that began with the false arrest of Denise Green in 2014. Green’s Lexus was misidentified as a stolen vehicle by an automated license-plate reader. She was pulled over by police, forced out of the car, and made to kneel at gunpoint by six officers. The city spent $500,000 to settle lawsuits linked to her detention.
Since then, San Francisco officials have concluded that the license-plate reader’s flaws were just one facet of a wider potential for abuse posed by Big Brother-style surveillance capabilities. With new technologies making it ever easier to identify people, places, and objects, the city decided to set a higher bar for snooping tools.
“The central motivator here is public safety, while making sure police can do their jobs,” said Matt Cagle, an attorney for the American Civil Liberties Union who helped draft the law. “We’ve learned a lot about facial recognition and seen how it’s been used in places like China to track and control populations. The public increasingly understands the threat this technology can pose and that isn’t what they want.”
While San Francisco and nearby Silicon Valley are hubs of American technology and innovation, local opposition to the ordinance was minimal. Six other Northern California municipalities have recently adopted stricter regulations on the use of surveillance technology. But San Francisco is the first to ban the use of facial recognition by city departments outright, under a law that will take effect 180 days after a second reading of the measure next week.
The law does not prohibit companies or individuals from using facial recognition cameras or other surveillance tools, or from sharing their contents with law enforcement during an investigation. Last month, a New York student sued Apple, claiming the company’s facial-recognition software falsely linked him to a series of thefts at Apple stores.
Facial recognition “poses a threat to people of color and would supercharge biased government surveillance of our communities,” a coalition of 25 privacy, civil-rights, and justice groups wrote in an April letter to city officials supporting the measure.
The U.S. Department of Justice has said the technology is not always accurate and that its implementation poses significant civil-rights challenges.
“The potential for misuse of face recognition information may expose agencies participating in such systems to civil liability and negative public perceptions,” according to a December 2017 report on face recognition by the Bureau of Justice Assistance. “The lack of rules and protocols also raises concerns that law enforcement agencies will use face recognition systems to systematically, and without human intervention, identify members of the public and monitor individuals’ actions and movements.”
Not everyone was a fan of the law.
“There are plenty of legitimate concerns about government surveillance, but the right approach is to implement safeguards on the use of technology rather than prohibitions,” Daniel Castro, a vice president of the Information Technology and Innovation Foundation, a non-profit think tank, said in a statement. “Good oversight and proper guidance can ensure that police and other government agencies use facial recognition appropriately.”
One of every two Americans is already captured in a face-recognition database accessible to law enforcement, according to a 2016 study at Georgetown Law. Most of those images are stored in the Federal Bureau of Investigation’s Next Generation Identification-Interstate Photo System, which holds about 411 million individual photos. In a May 2016 report, the U.S. Government Accountability Office admonished the FBI for failing to disclose the extent of its use of the technology or to ensure its privacy and accuracy.
With passage of today’s ordinance, San Francisco is getting ahead of the problem while companies continue to develop facial-analysis systems. The measure’s sponsor acknowledged that today’s vote is unlikely to be the city’s final word on facial recognition. Supervisor Aaron Peskin said that as the policy evolves, the ordinance will be amended.
“That’s probably my biggest concern for the future,” said Brian Hofer, chairman of Oakland’s Privacy Advisory Commission. “That it’ll get too good and be the perfect form of surveillance.”
San Francisco’s new law will also require police to confirm license-plate-reader results with the California Department of Justice before detaining individuals. Any city department seeking to acquire surveillance technology must first receive formal approval, and departments already in possession of surveillance equipment must propose rules for its use, along with an annual audit of all surveillance tools.