“Five years from now, 10 years from now, this will be viewed as a watershed moment,” says Erica Darragh.
Darragh is referring to last week’s decision by the University of California at Los Angeles to cancel plans to add facial recognition technology to a campus-wide surveillance system. Activists like Darragh, a board member of Students for Sensible Drug Policy (SSDP), had campaigned against facial recognition tech over fears that it violates privacy and fosters discrimination by disproportionately misidentifying people of color.
UCLA’s reversal is particularly significant because the school was poised to become the first major higher education institution to install the technology, which uses artificial intelligence to identify people, for large-scale public surveillance. In 2018, the university started work on Policy 133, a plan to centralize its campus policing data, such as feeds from surveillance cameras.
That decision quickly set off alarm bells. Activists, including groups representing students of color, pushed the school to exclude facial recognition data from the project.
After a year of pressure, UCLA changed course. In a letter to Fight for the Future, a group that opposes facial recognition technology, UCLA vice chancellor Michael Beck explained the decision by saying the school had “determined that the potential benefits are limited and are vastly outweighed by the concerns of the campus community.”
UCLA did not respond to a request to comment for this article.
In suspending its push into facial recognition, UCLA joined 45 other colleges, including Columbia, Harvard, and the Massachusetts Institute of Technology, that have told Fight for the Future they have no plans to use the technology.
On March 2, the campaign against facial recognition at colleges is expected to reach a new level, when students opposed to the technology plan a nationwide “day of action” to deliver petitions to university administrators and hold public meetings.
That may put further pressure on schools such as Princeton, Tufts, and Duke, which have indicated they may use the technology in the future. Like UCLA before its reversal, they cite potential security benefits.
At Oakland Community College near Detroit, tension over the topic recently grew so heated that the school took the drastic step of canceling planned student events about facial recognition technology and blocking student government resolutions to ban its use. After the American Civil Liberties Union intervened on behalf of students, the school revised its position, allowing events on the topic but keeping the block on student government resolutions in place.
The backlash mirrors what’s happening in a number of cities nationwide. San Francisco, Oakland, and Cambridge, Mass., have all banned government use of facial recognition technology.
Campus opposition could, to a degree, threaten businesses trying to sell facial recognition technology to schools, though it’s unclear how many colleges currently use it. In 2014, the University of San Francisco piloted facial recognition technology to control dormitory access but did not implement the project more broadly. Stanford University and the University of Southern California have reportedly deployed limited facial recognition tools for payments, but not for broader surveillance. PopID, the vendor that supplies facial recognition systems to Stanford and USC, did not reply to a request for further information.
Rustom Kanga, CEO of iOmniscient, which oversaw the University of San Francisco pilot, says facial recognition can be used in ways that protect privacy. He also says it is more effective than human security personnel at identifying known dangerous individuals. In general, Kanga says, criticisms of the technology are “based on … emotions without understanding how best to use technology.”
In fact, concerns over facial recognition often focus on flaws in the technology, especially evidence that it disproportionately misidentifies people of color. UCLA’s decision came as Fight for the Future was about to publish results of a test in which Amazon’s facial recognition software, Rekognition, mistakenly matched photos of UCLA faculty and athletes of color with police mug shots of criminal suspects.
In addition to concerns about racial profiling, such results cast doubt on the main promise of facial recognition technology—that it can accurately identify individuals who are threats to public safety.
UCLA’s scuttled plans for facial recognition highlight broader privacy risks. Data-driven policing and intelligence-gathering, which use historical data and machine learning to identify areas at high risk of crime and even individuals who may commit crimes, have been major priorities for U.S. intelligence and law enforcement agencies hoping to fight terrorism and mass shootings. But critics—including former U.S. Attorney General Eric Holder—have expressed concern that the technology may perpetuate bias by increasing the likelihood of police harassing people of color.
“Our opposition [to facial recognition] is not just about flaws in the technology,” says Hamid Khan, a coordinator for the Stop LAPD Spying Coalition, which has campaigned against the Los Angeles Police Department’s alleged surveillance and racial profiling and advised UCLA’s student activists. “What we oppose is the deeper purpose it serves in gathering information about people, and how that gets uploaded to [policing] databases.”
Matthew Richard, a UCLA student who is vice chair of the Campus Safety Alliance, says facial recognition technology is “universally feared on campus.” The Alliance, a coalition of groups including the Afrikan Student Union and the Muslim Student Association, has been heavily involved in opposing the technology.
Among the broader public, though, opinion about facial recognition is more positive. In a 2019 survey, Pew found that a slim majority of Americans, 56%, trust police to use facial recognition technology responsibly; trust in private companies is far lower, at just 36%.
The federal government has no specific regulations governing facial recognition technology or the handling and sharing of biometric data. But there appears to be bipartisan support in Congress for some form of federal regulation of the technology.
Darragh, of the SSDP, says the regulatory void makes campuses a particularly effective place to push back.
“That’s where young people have power. There is a level of oversight that doesn’t exist in the private sector,” says Darragh. “While we’re waiting for the federal government to get it together and ban facial recognition, we can be helpful.”