This is the web version of Eye on A.I., Fortune’s weekly newsletter covering artificial intelligence and business. To get it delivered weekly to your in-box, sign up here.
Last week, the European Union issued its long-anticipated white paper on artificial intelligence. The document is a precursor to new legislation and regulation governing the technology, rules that are likely to have global consequences.
That’s because, as with Europe’s privacy law, GDPR, any new A.I. rules are likely to apply to anyone who sells to an EU customer, processes the data of an EU citizen, or has a European employee. And, as with GDPR, any rules Europe enacts may serve as a model for other nations—or even individual U.S. states—looking to regulate A.I.
The paper says that the 27-nation bloc should have strict legal requirements for “high-risk” uses of the technology.
What’s high-risk? Any scenario with “a risk of injury, death or significant material or immaterial damage; that produce effects that cannot reasonably be avoided by individuals or legal entities,” especially in sectors such as healthcare, transportation, energy and government.
The week before the white paper’s release, Margrethe Vestager, who is best known as Europe’s tough anti-trust cop but whose remit now extends to both policing—and promoting—Europe’s digital economy, told The New York Times that she wasn’t interested in policing the algorithms that recommend Spotify tracks or Netflix movies. She was concerned about A.I. that determines who gets a loan or what diseases are diagnosed.
That all sounds reasonable. But in practice, lawmakers are likely to find it much more difficult to draw nice, fine-tipped Montblanc circles around high- and low-risk uses of A.I.
Geoff Hinton, the deep-learning pioneer who is an A.I. researcher at Google and a professor at the University of Toronto, highlighted one potential problem. In a viral tweet in response to the new white paper, he asked: “Suppose you have cancer and you have to choose between a black box A.I. surgeon that cannot explain how it works but has a 90% cure rate and a human surgeon with an 80% cure rate. Do you want the A.I. surgeon to be illegal?”
To be sure, the white paper does not say all A.I. in high-risk areas must be explainable. (Explainable A.I. is a fraught area, in which one always has to ask: Explainable to whom? To the software developer? To the doctor? To the patient?) But it does talk about the need to provide clear information about an A.I. system’s capabilities and limitations, and the need for human oversight.
Nick Cammarata, a researcher at OpenAI who works on explainability issues, tweeted a good retort to Hinton: “I’d take the 90% only if I knew the distribution it was trained on is very similar to me.” Otherwise, he’d take the human surgeon.
Some “low-risk” areas may present problems too. For instance, Vestager said she wasn’t worried about most recommendation engines. But what about targeted advertising, which seems like it might be a fairly similar use case?
Targeted advertising is combining with dynamic pricing in pernicious ways that may be difficult to police. For instance, a loan approval system would qualify as high-risk under the EU framework. One that discriminates against people with certain last names or who live in certain areas—a practice known as “digital redlining”—would likely be illegal.
But another way to accomplish the same thing, while possibly evading the “high-risk” label, is to simply never show a subset of people the ads for particular financial products. If people don’t know that a loan exists with favorable interest rates, they are much less likely to apply for one.
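To make the loophole concrete, here is a minimal, purely hypothetical Python sketch (the scoring rule, threshold and postcodes are my own invention, not drawn from the white paper): the same scoring logic produces the same exclusions whether it sits inside a regulated loan-approval system or an upstream ad filter.

```python
# Hypothetical illustration: identical scoring logic can live inside a
# regulated "high-risk" approval system or inside an ad-targeting filter.

def risk_score(applicant: dict) -> float:
    # Stand-in for any opaque model; the details don't matter here.
    return 0.9 if applicant["postcode"] in {"1010", "2020"} else 0.2

def approve_loan(applicant: dict) -> bool:
    # Sits inside a loan-approval system: "high-risk" under the EU framework.
    return risk_score(applicant) < 0.5

def show_loan_ad(applicant: dict) -> bool:
    # Sits inside an ad-targeting system, yet produces the same exclusion:
    # people who never see the offer rarely apply for it.
    return risk_score(applicant) < 0.5

applicant = {"postcode": "1010"}
print(approve_loan(applicant), show_loan_ad(applicant))  # False False
```

The point of the toy example is simply that the decision logic, and its effect on who gets credit, is the same in both places; only the regulatory label differs.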
***
Speaking of digital redlining, I wanted to share the thoughts of a few readers who wrote in response to my newsletter on Lemonade CEO Daniel Schreiber's ideas for regulating the use of A.I. in insurance underwriting.
- JD Dillon says using data and A.I. to make judgments beats relying on biased humans. “Human judgment is inherently biased and flawed. The data helps us become (more) fair and impartial.”
- Bob Zeitlinger says insurance companies have a profit motive not to use A.I. and data analytics too precisely. "With AI and ML, don’t insurance firms have the ability to better determine what kind of 18-25 year old males are most likely to get into accidents, and which ones are more likely to drive like 60-year-old women?…And if the insurance companies can do that, wouldn’t the rates for those 18-25 year old males go down accordingly? You would think, right? Have that conversation with an executive from a car insurance company. If you don’t get lip service, I’m sure you’ll get a litany of reasons why that doesn’t happen…Insurance firms, armed with loads of data, may have reasons (profitability) to cherry-pick what data they use to perform their risk analysis. And yes, there’s competition, but when was the last time you shopped around for car insurance?"
Great to hear your thoughts. Now here’s a roundup of other notable A.I. news this past week.
Jeremy Kahn
@jeremyakahn
jeremy.kahn@fortune.com
A.I. in the news
Microsoft adds A.I. to its cybersecurity products. Using machine learning to detect suspicious patterns of network or e-mail traffic has become de rigueur for state-of-the-art cybersecurity software, and Microsoft has jumped on the bandwagon. The software giant announced it had added A.I.-enabled capabilities to several different cybersecurity products and is making them widely available to users. "Microsoft Security solutions help identify and respond to threats 50% faster than was possible just 12 months ago," Ann Johnson, the company's VP of cybersecurity, wrote in a blog post announcing the changes.
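For readers curious what this kind of detection looks like under the hood, here is a minimal, generic sketch using scikit-learn's IsolationForest on made-up traffic features; it illustrates anomaly detection in general, not Microsoft's implementation.

```python
# Generic anomaly-detection sketch (not Microsoft's system): flag network
# sessions whose feature profile deviates from the bulk of normal traffic.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
# Synthetic features per session: [bytes sent, bytes received, login attempts]
normal = rng.normal(loc=[500, 800, 1], scale=[50, 80, 0.5], size=(1000, 3))
suspicious = np.array([[50_000, 100, 30]])  # exfiltration-like outlier

model = IsolationForest(contamination=0.01, random_state=0).fit(normal)
print(model.predict(suspicious))  # [-1] means the session is flagged as anomalous
```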
A.I. chipmaker Graphcore valued at $2 billion in new funding round. The U.K.-based semiconductor startup, whose specialized chips are designed to accelerate machine learning, says it has raised an additional $150 million in a funding round that values the company at $2 billion, Bloomberg reported. Investors in the latest round include Scottish asset manager Baillie Gifford, Mayfair Equity Partners, and M&G Investments. Microsoft's Azure cloud datacenters have begun using Graphcore's chips for some A.I. workloads.
ABB and San Francisco startup create robots that can sort diverse items. Creating machines that can identify, grasp and manipulate an array of items of different shapes, sizes, weights and textures has been one of the Holy Grails of robotics. Industrial giant ABB, which is already a major producer of robots for the automotive industry, ran a contest challenging researchers to finally crack the problem. Of the 20 teams competing, only Covariant, a San Francisco startup co-founded by Pieter Abbeel, a well-known roboticist and A.I. researcher at the University of California, Berkeley, had a robot that could master the task without human assistance. Now the startup and ABB plan to mass-produce the warehouse logistics bots together. You can read my Fortune colleague Jonathan Vanian's story on the development here.
Google removes gender labels from its image-recognition A.I. tools. The search giant dropped gender labels, such as "man" and "woman," from its Cloud Vision API and other tools, which many developers use for image recognition tasks, Business Insider reported. The move was pushed by Google's own in-house data ethicists, including Margaret Mitchell, and outside data ethics groups such as the Algorithmic Justice League, who said the labels could lead to all kinds of bias—such as systems that would classify most long-haired people as women, and always identify nurses as female or firefighters as male.
McAfee shows Tesla's computer vision system is frighteningly easy to fool. Researchers at the cybersecurity firm McAfee used black tape to slightly modify the numbers on speed limit signs, fooling the driver-assistance systems in two Teslas and tricking the vehicles into accelerating from 35 mph to 85 mph, even though a human driver would most likely not have misread the speed limit. The news, first reported by MIT Technology Review, shows how vulnerable many A.I. systems are and why regulators may need to require that such systems be tested against these kinds of simple attacks.
Elon Musk calls for more A.I. regulation—including of Tesla. The billionaire Tesla founder, who also co-founded OpenAI, took to Twitter to say, "All orgs developing advanced AI should be regulated, including Tesla." Musk was responding to a big feature story on OpenAI in MIT Technology Review. If you have any interest in OpenAI or the quest to achieve artificial general intelligence, then the entire article is worth your time. But the basic takeaway: OpenAI, founded as a non-profit and dedicated to increased transparency in advanced A.I. research, has instead become increasingly secretive, hype-obsessed, and commercially-driven. (I looked at the rationale behind Microsoft's $1 billion investment into OpenAI in a recent Fortune story.)
While controversy surrounds Clearview, NEC has quietly established market leadership in facial recognition. The New York-based facial recognition startup Clearview has been under scrutiny following investigations into the company from both The New York Times and BuzzFeed News, but Japan's NEC has quietly become perhaps the pre-eminent supplier of facial recognition technology to Western governments and companies. OneZero takes a deep dive into the company and finds that many of the same issues around data gathering and the accuracy of its technology that plague Clearview also apply to NEC.
Political deepfakes—but not for disinformation (yet)
An Indian politician has used deepfake technology to produce a series of short campaign films in which he is seen giving the same speech in multiple languages and local dialects—some of which he does not actually speak, Vice reports. Many people have long feared that deepfakes would be used for political disinformation. But this case shows how the technology can be used to amplify legitimate political messages as well as illegitimate ones. Similar deepfakes have already popped up in innovative advertising campaigns, and some think that a lot more actors may soon be out of work as the technology becomes more mainstream.
Eye on A.I. talent
- Honeywell has hired Sheila Jordan as chief digital technology officer. She was previously the chief information officer at Symantec.
- J.P. Morgan has hired Daniele Magazzeni as an executive director in A.I. research in its new, London-based A.I. research hub. Magazzeni had been a senior lecturer in artificial intelligence at King’s College London.
Eye on A.I. research
A.I. used to find new antibiotics. Researchers at MIT have used machine learning to discover a new antibiotic that shows promise against a large number of bacteria, the Financial Times reports, citing research published in the journal Cell. The FT noted that while the new drug is promising, it has yet to be clinically tested in humans. The story also highlights that the economics of the drug industry, in which companies make far more from treatments for chronic ailments than acute ones, create poor incentives for bringing new antibiotics to market.
Machine learning scores major improvements in battery testing times. Long charging times remain a big impediment to the broad adoption of electric vehicles. Techniques for speeding up charging have existed for years, but they've generally come at the expense of battery life. Now researchers from Stanford University, MIT, and Toyota have published a paper in Nature showing that machine learning can cut the time needed to test and optimize fast-charging protocols by up to 98 percent, identifying protocols that charge batteries quickly with little effect on their lifespan.
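As a rough illustration of the closed-loop idea, and not the authors' actual method, here is a minimal sketch in which a hypothetical early-life predictor is used to shortlist charging protocols before slow, full physical testing.

```python
# Rough sketch (not the authors' code): a cheap early-life predictor prunes
# candidate charging protocols so only the most promising ones get full,
# long-duration physical testing.
import random

def early_life_estimate(charge_rate: float) -> float:
    # Hypothetical stand-in for a model that predicts cycle life from the
    # first few charge cycles; here just a noisy function of charge rate.
    return 1500 - 100 * charge_rate + random.gauss(0, 50)

candidates = [round(1.0 + 0.5 * i, 1) for i in range(10)]  # charge rates (C)
ranked = sorted(candidates, key=early_life_estimate, reverse=True)
shortlist = ranked[:2]  # only these proceed to full battery testing
print(shortlist)
```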
Researchers find ways to speed up reinforcement learning. Reinforcement learning, in which algorithms learn by taking actions and observing the results rather than by training on historical data, is one of the most promising A.I. techniques. But it can be slow and very compute-intensive, which is one reason it hasn't been widely used in business applications. Researchers at DeepMind, the London-based A.I. company owned by Alphabet, and at its sister organization Google Brain have found ways to speed up the process. In one approach, multiple A.I. agents share what they learn with one another; in another, the same A.I. is given two separate modules with different sensitivities to how novel an action is.
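For a sense of how the first idea works, here is a minimal sketch, not DeepMind's code, of the general pattern: several actors push their experience into one shared replay buffer that a single learner samples from.

```python
# Minimal experience-sharing sketch (not DeepMind's implementation): several
# actors contribute transitions to one shared replay buffer, so the learner
# trains on the pooled experience of all of them.
import random
from collections import deque

shared_buffer = deque(maxlen=100_000)  # replay buffer shared by all actors

def actor(actor_id: int, n_steps: int) -> None:
    # Each actor interacts with its own copy of the environment (faked here
    # with random numbers) and appends its transitions to the shared buffer.
    for step in range(n_steps):
        transition = (actor_id, step, random.random())  # (id, state, reward)
        shared_buffer.append(transition)

for i in range(4):           # four actors gathering experience
    actor(i, n_steps=250)

batch = random.sample(shared_buffer, k=32)  # learner samples a mixed batch
print(len(shared_buffer), len(batch))       # 1000 32
```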
Fortune on A.I.
Why the weather forecast is about to get a lot better—by Aaron Pressman
5G will transform smartphones—but it won’t stop there—by Cristiano Amon
Udacity’s new, online A.I. course targets an important market: bosses—by Jonathan Vanian
Europe wants businesses to share their data and open up their A.I. systems for scrutiny—by David Meyer
Brain food
Many machine learning researchers were shocked last week when Joe Redmon, the PhD student best known for creating YOLO, a popular image identification and classification system, revealed on Twitter that he had stopped doing computer vision research because he was disturbed by the way people—well, governments, in particular—were using his software tools.
" 'We shouldn’t have to think about the societal impact of our work because it’s hard and other people can do it for us' is a really bad argument ... I stopped doing CV research because I saw the impact my work was having. I loved the work but the military applications and privacy concerns eventually became impossible to ignore ... For most of grad school I bought in to the myth that science is apolitical and research is objectively moral and good no matter what the subject is."
As concerns grow over the use of A.I. in areas such as law enforcement, the military, finance, and recruitment, more and more researchers may start experiencing the same guilt and regret. I'm curious to see if this impacts businesses and their ability to recruit A.I. researchers and data scientists. A lot of companies tout their "A.I. for good" projects. But some of these same companies are also involved in the very projects that trouble A.I. researchers like Redmon. Will the chance to work on socially-conscious A.I. outweigh the risk that their technology could be used in ways they find distasteful or immoral?
The A.I. research community also seems strangely insular, clinging to the understandable but easily disproven belief that because its work is cutting edge, the ethical and philosophical questions it raises must be unprecedented too. In fact, many other fields, including nuclear physics, rocket science, chemistry and biology, have grappled with similar issues in the past. Let's see if computer scientists start building bridges to these other fields and drawing insights from them about how to address ethical concerns.