
After Uvalde: Could A.I. prevent another school shooting?

May 31, 2022, 4:15 PM UTC

The entire world has been horrified by the mass shooting of children and their teachers in an elementary school in Uvalde, Tex., last week. This is an unfathomable atrocity, and our deepest condolences go out to all those families who lost loved ones. Following the tragedy, some have suggested that artificial intelligence might be able to help prevent the next mass shooting. I want to say at the outset that I am highly skeptical of these claims. But, for the moment, let’s examine some ideas that have been proposed.

One is to use computer vision algorithms to try to detect people attempting to carry weapons onto school grounds. There are already a number of A.I. software and CCTV camera vendors that claim to offer “gun detection” algorithms. Among these is ZeroEyes, a company from Philadelphia that says its software can monitor video feeds from existing CCTV cameras and detect a gun as soon as it is visible. The company says its software alerts school administrators and security and automatically dials local 911 operators—all within five seconds of detecting a weapon. It also says that once a weapon is detected, former U.S. military personnel who work for ZeroEyes are alerted and monitor the video feed to provide security and law enforcement officials with more detailed information that may help them respond. The system has been deployed at schools in 14 states, according to the company, including a pilot project at Michigan’s Oxford High School, the scene of a school shooting in November last year that left four students dead and seven wounded. Other companies selling gun detection software include Arcarithm, Omnilert, Defendry, and Athena Security.
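To make concrete what such a pipeline involves, here is a minimal sketch of the general pattern: frames pulled from a CCTV feed, run through an object-detection model, with alerts fired when a detection crosses a confidence threshold. This is an illustration only, not ZeroEyes' actual system; the detector and the alerting hooks are hypothetical stand-ins.

```python
# A minimal sketch of a CCTV gun-detection loop; illustrative only, not any vendor's real system.
import time
from typing import Callable, List, Tuple

import cv2  # OpenCV, used here only to read frames from a camera feed

CONFIDENCE_THRESHOLD = 0.90  # assumed value; real systems tune this to limit false alarms

Detection = Tuple[str, float]  # (label, confidence)

def notify_school_security(timestamp: str) -> None:
    """Placeholder hook; a real system would page staff and share the video clip."""
    print(f"[{timestamp}] ALERT: possible firearm detected, notifying school security")

def dial_emergency_services(timestamp: str) -> None:
    """Placeholder hook for the automatic 911 call the vendor describes."""
    print(f"[{timestamp}] ALERT: contacting emergency services")

def monitor_feed(camera_url: str, detector: Callable[[object], List[Detection]]) -> None:
    """Read frames from a CCTV feed and raise alerts when a weapon is detected."""
    capture = cv2.VideoCapture(camera_url)
    while capture.isOpened():
        ok, frame = capture.read()
        if not ok:
            break
        # The detector is assumed to return (label, confidence) pairs for each frame.
        for label, confidence in detector(frame):
            if label == "firearm" and confidence >= CONFIDENCE_THRESHOLD:
                timestamp = time.strftime("%Y-%m-%d %H:%M:%S")
                notify_school_security(timestamp)
                dial_emergency_services(timestamp)
    capture.release()
```

The hard part, of course, is not this loop but the detector and the threshold: set it too low and staff are flooded with false alarms, set it too high and the system misses real weapons.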

Of course, once someone is already at the school gate with a gun, about to carry out an attack, you are facing an imminent emergency, with a high likelihood that at least one person will be shot before everyone can escape and law enforcement can intervene. As a story about gun detection technology in schools in the Deseret News noted: “But in Dayton, Ohio, in 2019, officers reached the scene of a shooting within 32 seconds, and in that unthinkably short span, nine people had already lost their lives. Even on-site police and security guards have often proven ineffective, as was the case at Marjory Stoneman Douglas High School in Parkland, Florida. So how fast is fast enough?” The story also noted that some gun detection software has been faulted for a high rate of false alarms—mistaking objects such as brooms for weapons.

Another suggested use of A.I. is to create “smart guns” that are fitted with cameras and A.I. software. Such a system could be set to detect whether the person being aimed at is a child and prevent the gun from being fired. (Or it might simply prevent rifles designed for hunting game from being fired at humans at all.) Even if such software could be made to work outside of lab tests—and I am skeptical that it could—there would be the issue of mandating that gun makers equip their products with the technology and of figuring out how to prevent people from tampering with or disabling it.
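Purely to make the idea concrete, and with no claim that any gun maker actually builds such a thing, the decision logic would amount to something like the firing-lock check below, with the camera-based target detector standing in as a hypothetical component.

```python
# A hypothetical sketch of the "smart gun" firing-lock logic described above.
# No real product is being described; the target detector is an assumed component.
from typing import Callable, List

BLOCKED_LABELS = frozenset({"person", "child"})  # a hunting rifle might block any human target

def firing_allowed(frame, detect_targets: Callable[[object], List[str]]) -> bool:
    """Return False if the sight-line camera sees a person or child in the frame."""
    labels = detect_targets(frame)  # e.g. ["deer"], ["child"], ["person", "tree"]
    return not any(label in BLOCKED_LABELS for label in labels)
```

Even granting that the detector worked reliably in the field, the check is only as good as the owner's inability to disable it, which is exactly the tampering problem noted above.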

Some have suggested using A.I. to try to spot the people who are in danger of carrying out a mass shooting. A team at the Cincinnati Children’s Hospital Medical Center published a study in 2018 in which they used A.I. software to analyze transcripts of teens interviewed by psychiatrists and assess whether a teenager had a propensity to commit violence. The A.I. system was calibrated against the assessments of trained psychiatrists and counselors who had carried out more extensive assessments. It was then tested on transcripts that had not been used in the training. The researchers found that the assessments of the algorithm aligned with those of the trained experts 91% of the time even though the A.I. did not have access to the more extensive records and patient histories that the psychiatrists and counselors did.
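For readers curious about the mechanics, a risk classifier of this kind is, at bottom, a supervised text-classification model: transcripts are converted into numerical features, a model is trained against the experts' labels, and it is then scored on held-out transcripts, where "agreement" is simply accuracy against the clinicians' own assessments. The toy sketch below uses standard scikit-learn components; it is not the Cincinnati team's actual pipeline, and the transcripts and labels are placeholders.

```python
# A toy sketch of training a text classifier against expert risk labels.
# Generic supervised-learning pattern only; not the Cincinnati study's actual method.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline

# Placeholder data: each transcript is paired with a clinician's assessment
# (1 = assessed as elevated risk of violence, 0 = not).
transcripts = [f"placeholder interview transcript number {i}" for i in range(1, 9)]
expert_labels = [1, 0, 1, 0, 1, 0, 1, 0]

# Hold out transcripts so the model is evaluated on interviews it never saw during training.
X_train, X_test, y_train, y_test = train_test_split(
    transcripts, expert_labels, test_size=0.25, stratify=expert_labels, random_state=0
)

model = make_pipeline(
    TfidfVectorizer(),                  # turn each transcript into word-frequency features
    LogisticRegression(max_iter=1000),  # simple linear classifier over those features
)
model.fit(X_train, y_train)

# A figure like the study's 91% is, in essence, this agreement score computed on held-out data.
agreement = accuracy_score(y_test, model.predict(X_test))
print(f"Agreement with expert assessments: {agreement:.0%}")
```

The real difficulty, as the next paragraph suggests, is not computing such a score but deciding what to do with it.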

I think it is possible that this kind of A.I. could help flag troubled individuals who are at risk of committing violence. But it’s not clear exactly what kind of intervention such a warning would allow. Sometimes those who carry out mass shootings are already known to health care professionals and police as troubled individuals. But in such cases, the dilemma is often what can be done to prevent these people from perpetrating acts of violence. Laws designed to prevent those identified as posing a risk to themselves or others from obtaining firearms are full of loopholes, which means it is still all too easy for these people to pass a background check and obtain guns (and some gun purchases don’t even require a background check). Nor, in many cases, do current laws allow the liberty of these individuals to be curtailed to prevent them from carrying out an attack. And, of course, there is also the risk that such A.I. software will unfairly label as high risk some people who are not—with potentially grievous effects on their lives.

Others have suggested using A.I. to sift large swathes of school data about past violent incidents, gun purchases, and socio-economic conditions to try to find patterns that may suggest a particular school is in danger of experiencing a shooting. This is similar to how various “predictive policing” algorithms have been used to try to identify areas likely to experience a surge of violent crime and to flood those locations with police resources. But predictive policing has been criticized for perpetuating systemic racism and socioeconomic bias in policing and for failing to make a difference to crime rates. And I, for one, doubt that an A.I. system would find a way to actually predict school shootings from this kind of data. A study of mass shootings by the think tank RAND found: “The rare nature of mass shootings creates challenges for accurately identifying salient predictors of risk and limits statistical power for detecting which policies may be effective in reducing mass shooting incidence or lethality.”

Rahul Sood, the chief executive of Irreverent Labs, a video games company in Bellevue, Wash., posted on Twitter and LinkedIn after Uvalde that machine learning software could be used to detect warning signs of a mass shooting. Sood suggested that gun owners be required to register—a step that he admitted has been previously opposed by the gun lobby, but which he thinks they may be willing to compromise on to avoid more draconian restrictions. Then he suggested that machine learning software be used to look for unusual ammunition purchases and social media posts, particularly among this group of registered gun owners. “Using machine learning you can make predictions of where the next mass shooting might occur, we can stop it before it happens,” he wrote. “There are no other options here. Using technology, and a sensible registration process we can prevent the majority of mass shootings from ever happening.” Again, this method has not been tested. Sood is simply positing that it could work. I think it likely, however, that such a system would throw off too many false alarms to be useful.
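Mechanically, what Sood is describing is anomaly detection: model "normal" purchasing behavior among registered owners and flag accounts that deviate sharply from it for a human to review. Here is a minimal, untested sketch using scikit-learn's IsolationForest; the features and figures are invented for illustration, and the sketch also shows where the false-alarm problem comes from, since the assumed share of "unusual" buyers directly sets the alarm rate.

```python
# A minimal sketch of anomaly detection over purchase records; purely illustrative,
# with invented features and numbers. Not a tested or deployed method.
import numpy as np
from sklearn.ensemble import IsolationForest

# Each row is one registered buyer:
# [rounds of ammunition bought in the last 30 days, number of separate purchases]
purchase_features = np.array([
    [50, 1], [100, 2], [200, 1], [60, 1], [150, 3],
    [80, 2], [120, 1], [90, 2], [3000, 6], [70, 1],
])

# `contamination` is the assumed share of anomalous buyers; it directly drives how many
# alarms the system raises, and therefore how many false alarms humans must sift through.
model = IsolationForest(contamination=0.1, random_state=0)
model.fit(purchase_features)

flags = model.predict(purchase_features)  # -1 = flagged as anomalous, 1 = treated as normal
for row, flag in zip(purchase_features, flags):
    if flag == -1:
        print(f"Flagged for human review: {row.tolist()}")
```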

As I said at the outset, I am highly skeptical of the idea that A.I. is the key to preventing future school shootings. At best, these suggested uses of A.I. represent genuine attempts to avert tragedy, but are most likely to result in marginal benefits—with potentially big ethical downsides. At worst, these proposals are nothing more than cynical commercial opportunism on the part of software vendors. What is actually required is not a technological fix—but a policy one. Frankly, there is no getting around the fact that in countries with stricter gun control measures, mass shootings are extremely rare; in the U.S. they have become shockingly commonplace.

What do you think about the use of A.I. to prevent mass shootings?

Jeremy Kahn
@jeremyakahn
jeremy.kahn@fortune.com

Correction, June 1: An earlier version of this story misspelled the name of Omnilert, one of the companies that makes gun detection software.

A.I. IN THE NEWS

Dutch police use deepfake to spur potential leads in cold case. In 2003, 13-year-old Sedar Soares was shot dead while throwing snowballs with friends in the parking lot of a metro station in Rotterdam. His murder has never been solved. Now Dutch police, with permission from Soares's family, have created a deepfake video of the murdered teen in the hope of sparking new leads in the cold case, The Guardian reports. In the video, which was created using A.I. techniques, a lifelike image of Sedar, his head grafted onto the body of another boy, looks into the camera and picks up a football. Dutch police say they think this is the first time a deepfake has been used in this way and that several promising tips have come in from those who have seen the video.

More details emerge in case of fired Google A.I. researcher. Wired has new details of the events leading to Google's dismissal of senior researcher Satrajit Chatterjee in March. The publication says Chatterjee engaged in a two-year-long campaign to discredit the work of two more junior female colleagues, Anna Goldie and Azalia Mirhoseini, who had studied the ability of A.I. to design new computer hardware. Wired cited five current and former Google researchers as well as an internal company document. Chatterjee, through his lawyer, declined to comment for the Wired story, although his lawyer denied that Chatterjee had acted inappropriately and said he had evidence that Google improperly suppressed Chatterjee's work—but he declined to share this evidence with Wired. A Google spokesperson told the publication that Chatterjee was "terminated with cause."

A.I. is transforming professional tennis. The BBC says that the increasing use of A.I. to analyze data is having a big impact on the way professional players and their coaches think about training and tactics. Among the insights players have gleaned from machine learning is the fact that the vast majority of points are won after just one or two touches of the ball—an insight that has changed the way players train, leading them to spend more time on finding killer shots rather than worrying as much about the consistency of their strokes.

EYE ON A.I. TALENT

TradeSun, a Del Mar, Calif.-based company that uses A.I. to help automate and digitize trade finance—the financial instruments that help people trade internationally—as well as to help with trade compliance and fraud prevention, has hired Bhumish Shah to be its chief technology officer, financial site Finextra reports. Shah was previously an executive at JPMorgan, where he focused on cloud-based banking solutions. 

Payments technology startup Mollie, which is based in Amsterdam, has hired Koen Koppen to be its new chief technology officer, Information Age says. Koppen served on Mollie's supervisory board and was a member of its audit/risk committee from January 2020 to March 2022. He was previously the CTO at Sweden-based buy-now, pay-later powerhouse Klarna.

EYE ON A.I. RESEARCH

Another study urges caution on the widespread deployment of A.I. to read medical images. I've detailed in this newsletter how current "explainable A.I." methods are inadequate for many real-world use cases, especially when it comes to using machine learning on medical imagery as a diagnostic aid. Now here's yet another reason to be wary of A.I. in medical imagery—there is no good way to know when this software is making a mistake.

Some scientists and engineers have posited that machine learning systems could produce confidence scores—an indicator of how sure the system is of any particular classification—and that scores below a certain threshold could be flagged to humans as a sign that the software may be making an error in its determination. That would at least allow doctors to treat the A.I. software's analysis in those cases with a lot of skepticism, or to disregard it entirely.
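In its simplest form, the idea looks like the sketch below: take the model's output probabilities, treat the largest one as the confidence score, and route anything below a threshold to a human reader. This is a generic illustration of the thresholding idea, not the specific scoring methods evaluated in the study discussed next, and the threshold value is an assumption.

```python
# A generic sketch of confidence-based referral: low-confidence predictions go to a human.
# Illustrative only; not the specific confidence-scoring methods tested in the study below.
import numpy as np

REFER_THRESHOLD = 0.8  # assumed cutoff; in practice this would be tuned on validation data

def softmax(logits: np.ndarray) -> np.ndarray:
    """Convert a model's raw outputs into probabilities."""
    shifted = np.exp(logits - logits.max())
    return shifted / shifted.sum()

def triage(logits: np.ndarray) -> str:
    """Return the model's call, or refer the case to a human reader if confidence is low."""
    probs = softmax(logits)
    confidence = float(probs.max())
    if confidence < REFER_THRESHOLD:
        return f"refer to radiologist (confidence {confidence:.2f})"
    return f"class {int(probs.argmax())} (confidence {confidence:.2f})"

# A confident prediction versus an uncertain one:
print(triage(np.array([4.0, 0.5, 0.1])))  # high confidence, the model's call stands
print(triage(np.array([1.0, 0.9, 0.8])))  # low confidence, flagged for a human
```

The catch, as the study below suggests, is that such scores are only useful if they are actually low when the model is wrong.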

But, in a study published last week on the non-peer-reviewed research repository arxiv.org, a team from Imperial College London tested nine leading confidence-scoring methods across six different medical imaging datasets (X-rays, CT scans, MRIs) used for different kinds of diagnostic classification. They found that none of the advanced confidence-scoring methods consistently worked better than a simple statistical technique. (The good news is that this simple technique was pretty good at detecting misclassifications—but it also had a high percentage of false positives, cases in which a correct classification was flagged as potentially wrong.) As the researchers conclude, "current methods are not reliable for detecting failure cases." Clearly more work should be done before this software becomes a widely used tool, and doctors ought to be aware of this potential problem.

FORTUNE ON A.I.

The value of a data science degree, as told by Microsoft’s chief data scientist—by Meghan Malas

How AI brought Val Kilmer’s ‘Iceman’ back into Top Gun: Maverick—by Chloe Taylor

Why honeybees may be the key to better robots and drones—by Jeremy Kahn

New York is turning into Japan by giving robots to old people as companions—by Colin Lodewick

BRAIN FOOD

Where is the productivity payoff from A.I. and other digital technologies? In a story in the past week's New York Times, reporter Steve Lohr examines the ongoing "productivity puzzle"—the mystery of why, despite seemingly huge advances in digital technology since the 1990s, annual productivity growth in most advanced economies has been paltry. Since the pandemic hit, it has inched along at just 1% per year, which is in line with the trend since 2010 and well below the greater-than-3% clip experienced from 1996 to 2004, which most economists attribute to the first wave of internet technologies. (And even that was slower than the 3.8% seen in the post-war years from 1947 to 1972.) The story is worth reading just for the interesting examples it provides of companies using A.I. in ways that really do seem to have boosted worker productivity substantially. Most of these, interestingly, come from call centers. But the story also details a $400 bet between Robert Gordon, an economist at Northwestern University who is a leading skeptic of the productivity gains from digital technology, including A.I., and Erik Brynjolfsson, director of Stanford University’s Digital Economy Lab, who believes that A.I. is transformational and that its impact will soon show up in big productivity gains. Brynjolfsson has bet that U.S. non-farm productivity growth will average more than 1.8% annually from the start of 2020 through the last quarter of 2029, when the two will settle the bet. Gordon has bet it won't.

Who do you think will win this bet? 
