Making Weapons That Pick Their Targets
ONCE THE STUFF OF APOCALYPTIC SCI-FI tales, killer robots capable of choosing and taking out our nation’s enemies are now within reach—if companies and the Pentagon decide to go that far. Defense officials have so far stopped short of developing Lethal Autonomous Weapons Systems (the government’s official term), which could theoretically strike without a human order as easily as Facebook can tag friends in your photos without your say-so.
But development of the A.I.-driven technology that could form the basis for such attacks is well under way. Project Maven, the Pentagon’s most high-profile A.I. initiative, aims to use machine-learning algorithms to identify terrorist targets from drone footage, assisting military efforts to combat ISIS (more than 20 tech and defense contractors are reportedly involved, though not all have been publicly named). Although supporting war efforts is nothing new for the defense industry, the Pentagon has increasingly looked to Silicon Valley for expertise in A.I. and facial recognition. That growing relationship has recently sparked controversy, with Google announcing this summer that it would withdraw from Project Maven after several employees quit in protest. Going forward, companies’ only barrier to winning lucrative new A.I. defense contracts may be their own unwillingness. —Jen Wieczner
Year in which A.I. will be better than humans at folding laundry, according to researchers at Oxford and Yale.
THE FAILURE TO prevent attacks in cyberspace and IRL (in real life) is an expensive line item—the average cost of an individual data breach was nearly $4 million in 2017. But the surge in attacks of late has an upside: It means there’s also more data to mine. Machine-learning techniques have been used to detect patterns and filter emails for decades, but newer systems from vendors like Barracuda Networks can use A.I. to actually learn the unique communication patterns of particular companies and their execs in an effort to pinpoint potential phishing scams and other hacking attempts. In the world of physical security, A.I. is even being used in security cameras to “see” and try to stop threats. New cameras from startup Athena Security can identify when a gun is pulled and even automatically alert the police. In short: The more data we have, the more we can use A.I. to fight crime. —Michal Lev-Ram
HOW DO YOU catch a financial criminal? Instead of bulking up compliance staff to sift through thousands of transactions in search of suspicious activity, banks around the globe, including HSBC and Danske Bank, are increasingly turning to A.I. to flag financial scams, money laundering, and fraud. (This push has gained even more momentum recently as several banks were hit with huge fines for failing to detect illegal funds flowing through their accounts.) HSBC partnered with A.I. startup Ayasdi to automate some of its compliance. In a 12-week pilot with HSBC, Ayasdi’s A.I. technology achieved a 20% reduction in false positives (transactions that looked suspicious but were legit), while generating the same number of suspicious-activity reports as human review. —Carson Kessler
A version of this article appears in the November 1, 2018 issue of Fortune as part of the article, “25 Ways A.I. Is Changing Business.”