This is the web version of Eye on A.I., Fortune’s weekly newsletter on the news in artificial intelligence. To get it delivered weekly to your inbox, sign up here.
Not only is A.I. coming for your job, but there’s probably nothing you can do to stay ahead of the automation wave.
That’s the takeaway from a recent report from Gartner, the technology research firm, forecasting that 69% of the tasks managers currently perform will be automated within the next four years, “requiring a complete overhaul of the role of the manager.”
The same report also predicted big trouble ahead for companies’ attempts to retool and reskill their workforces to face this newly automated future. It forecast that 47% of learning and development budgets will wind up wasted as A.I. eliminates about two-thirds of what Gartner calls “on-the-job, task-based learning opportunities.”
But for every report such as this about the vast potential of artificial intelligence to radically reshape the nature of work, there seems to be another that points to a yawning gap between that potential and what most companies are finding they can actually achieve with the technology today.
In the here and now, automation’s march does not seem to be quite so smooth. Another report out last week from Plutoshift, a Silicon Valley startup that provides software to help industrial companies collect data and implement predictive analytics, found that many manufacturing firms were struggling to use A.I.
Of the 250 industrial firms Plutoshift surveyed:
- more than 72% said implementing the data-collection processes needed for machine learning had taken far longer than anticipated;
- perhaps as a result, only 17% said they had actually reached the full implementation stage of using A.I.;
- about 70% said they were still assessing what resources they’d need, evaluating possible business use cases, or running only small pilot projects.
“Companies in the middle of this transformation usually lack the proper technology and data infrastructure,” Prateek Joshi, Plutoshift’s founder and chief executive officer, says. “In the end, these implementations can fail to meet expectations.”
Worryingly, almost 20% of companies cited “peer pressure” as the reason they had embarked on A.I. projects.
These dueling surveys, along with some other bits of A.I. news (see below) about companies using misleading marketing to sell their software, raise the specter of disillusionment with the technology. Are businesses entering a new era of snake (or sn-A.I.-ke) oil salesmanship?
A.I. in the news
Clearview faces class-action lawsuit over facial recognition
Clearview, a controversial New York-based A.I. startup that sells facial recognition technology to law enforcement agencies, is facing a class-action lawsuit in Illinois that accuses the company of violating that state's stringent biometric data privacy law, according to a ZDNet.com report. (More on Clearview below.) The Illinois law, which prohibits entities from using residents' biometric data without consent, is an important tool for privacy advocates: Facebook is facing a similar class-action lawsuit in the state over its auto-tagging features—and the U.S. Supreme Court just decided last week not to take up Facebook's appeal, so it looks like the plaintiffs will have their day in court.
New York police dispute Clearview marketing claims
A story in BuzzFeed News cast doubt on Clearview's marketing claims that its technology helped New York police capture a terrorism suspect. Clearview had suggested, in an email and a video on its website, that its facial recognition software had played a role in the August 2019 arrest of a man who had allegedly planted rice cookers, designed to look like improvised explosive devices, around the city, setting off a bomb scare. “The NYPD did not use Clearview technology to identify the suspect in the August 16th rice cooker incident,” a department spokesperson told BuzzFeed News. The department also said "there was no institutional relationship" with Clearview, although the company's founder, Hoan Ton-That, says the department is trialing its technology.
London police begin using live facial recognition system
The London Metropolitan Police announced that they will begin deploying a facial recognition system made by Japan's NEC Corp. across the city to aid in catching wanted suspects. The force said the cameras and software would be deployed in areas of the capital where intelligence suggests such suspects are most likely to be found. Privacy advocates vowed to challenge the police department's use of the technology in court.
Google's Pichai repeats his 'more important than fire' claims
Google CEO Sundar Pichai repeated his claim that artificial intelligence is "more profound than fire or electricity" in a speech at the World Economic Forum in Davos, Switzerland. Pichai was hardly the only major tech company executive to talk about A.I. or to call for increased government regulation of the technology at Davos. But some saw these speeches about A.I. ethics as little more than a cynical ploy by the titans of tech to shift the discussion away from controversies over data privacy violations, content moderation, anti-competitive business practices or tax dodging. (For more on Pichai's views on A.I. and many other issues, I recommend you read Adam Lashinsky's illuminating Q&A with him in this month's issue of Fortune, which you can find here.)
IBM unveils A.I. regulation principles for businesses
Big Blue has issued a policy paper on A.I. regulation. The company wants regulators to take a "risk-based" approach to the technology, something it called "precision regulation," in contrast to broadly applied rules that would treat the technology the same no matter how it was being used. IBM said three big principles should govern A.I. regulation: accountability, transparency, and fairness and security. For companies, IBM advocated five more-detailed principles: each organization should appoint a lead A.I. ethics officer, undertake a risk-based assessment of potential A.I. harms, be transparent about when and where A.I. is being used, deploy explainable A.I., and test its A.I. systems for bias.
More sn(A.I.)ke oil claims?
Somewhat lost in all the other controversy surrounding Clearview and its facial recognition software: a discussion that goes to the heart of what's wrong with how a lot of today's machine learning-based solutions are sold.
In marketing materials, which BuzzFeed News reports Clearview shared with the Atlanta Police Department, the company claimed it could identify an individual face out of a dataset of 1 million faces with 98.6% accuracy, compared to 83.3% for a system built by Tencent and 70.4% for Google-built software.
But, as Chris Dulhanty, a graduate student in computer vision and image processing at the University of Waterloo, in Canada, pointed out in a Twitter exchange with Clare Garvie, a senior associate at Georgetown University Law School's Center on Privacy and Technology, this claim is very likely misleading. Clearview and many other facial recognition companies have been touting their performance on a benchmark dataset of 1 million faces called MegaFace. (MegaFace, which is maintained by the University of Washington with sponsorship from Google, Intel, and the National Science Foundation, is itself controversial for grabbing Flickr photos without the explicit consent of those who posted them.) But there are actually two versions of this dataset: the original one, and a "cleaned" version that removed a lot of allegedly mislabeled data.
Dulhanty says that Clearview seems to be comparing its results on the cleaned-up dataset against Google's and Tencent's results on the original. In other words, this is not a valid apples-to-apples comparison. What's more, good performance on the cleaned version of MegaFace doesn't necessarily translate to accurate performance under the real-world conditions in which police want to use facial recognition.
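To see why mixing benchmark versions skews a comparison, here is a toy, self-contained sketch (the numbers are invented for illustration and have nothing to do with the actual MegaFace data): the very same model scores noticeably higher when graded against cleaned labels than against the original noisy ones, so quoting one vendor's cleaned-set score against a rival's original-set score exaggerates any gap.

```python
import random

random.seed(0)

# Toy benchmark: 1,000 "faces" whose true identity our hypothetical model
# always gets right, but whose recorded labels are noisy.
n = 1000
true_labels = list(range(n))

# Original benchmark: roughly 10% of labels are wrong (mislabeled data).
noisy_labels = [l if random.random() > 0.10 else -1 for l in true_labels]
# "Cleaned" benchmark: the mislabeled entries have simply been removed.
cleaned = [(t, l) for t, l in zip(true_labels, noisy_labels) if t == l]

predictions = true_labels  # a model that is perfect on the true identities

def accuracy(preds, labels):
    return sum(p == l for p, l in zip(preds, labels)) / len(labels)

acc_original = accuracy(predictions, noisy_labels)  # penalized by label noise
acc_cleaned = accuracy([t for t, _ in cleaned], [l for _, l in cleaned])

print(f"score on original benchmark: {acc_original:.1%}")  # ~90%
print(f"score on cleaned benchmark:  {acc_cleaned:.1%}")   # 100%
```

The model didn't change between the two print statements; only the test set did. That is the apples-to-oranges problem Dulhanty flagged.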
How much of A.I. marketing in general is guilty of similar sins? My guess is a lot. And I think this may factor into the disappointment many companies experience when they try to use such systems.
Eye on A.I. research
Performance of skin cancer-screening A.I. varies across lesion types. A study published in the European Journal of Cancer examined an A.I. software called MoleAnalyzer-Pro, made by FotoFinder Systems, that has been approved for sale across Europe. FotoFinder had previously performed a study, published in Annals of Oncology, that pitted MoleAnalyzer-Pro against 58 dermatologists on 100 lesion images and found that, on average, the software out-performed the humans. But the new European Journal of Cancer study, conducted by researchers at a number of European universities in collaboration with FotoFinder's own research department, found that MoleAnalyzer-Pro's performance varied greatly depending on the exact type of lesion it was analyzing. Significantly, the scientists found that the system performed worst on exactly those lesion types human doctors are taught to treat as most suspicious.
As Luke Oakden-Rayner, the director of medical imaging research at Australia's new Royal Adelaide Hospital, says, this study shows the importance, particularly in medicine, of not putting too much emphasis on the average performance of A.I. models, and instead investigating how these models perform on different sub-types of data. The clinical significance of false positives and false negatives is never equal across sub-types. As he tweeted, "AI makes inhuman errors (distracted by background, weak to noise etc), so subset testing is critical for safety. Imagine using the system clinically w/out this knowledge!"
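Oakden-Rayner's point, that subset performance matters more than the headline average, can be shown with a toy calculation (the lesion categories and numbers below are invented for illustration and are not taken from the study):

```python
# Hypothetical per-lesion-type detection results for an imaginary screening model.
results = {
    "benign nevus":         {"correct": 90, "missed": 10},
    "seborrheic keratosis": {"correct": 80, "missed": 20},
    "melanoma":             {"correct": 55, "missed": 45},  # worst where it matters most
}

# The headline number averages over everything and looks acceptable.
total_correct = sum(r["correct"] for r in results.values())
total_cases = sum(r["correct"] + r["missed"] for r in results.values())
print(f"average accuracy: {total_correct / total_cases:.0%}")  # 75%

# Subset testing reveals the clinically dangerous weak spot.
for lesion, r in results.items():
    acc = r["correct"] / (r["correct"] + r["missed"])
    print(f"  {lesion}: {acc:.0%}")
```

A single 75% average hides the fact that this imaginary system misses nearly half the melanomas, which is exactly the kind of error subset testing is meant to surface.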
Fortune on A.I.
Inside big tech’s quest for human-level A.I.—by Jeremy Kahn
A.I. is transforming the job interview—and everything after—by Maria Aspan
Medicine by machine: Is A.I. the cure for the world’s ailing drug industry?—by Jennifer Alsever
OpenAI's GPT-2 language model is designed to take a human-written prompt of a sentence or two and then compose several paragraphs of novel text based on it. It is one of the largest language models ever built, comprising some 1.5 billion parameters and trained on billions of words' worth of data.
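The training objective behind such models, predicting the next word from the words before it, can be sketched with a drastically simplified stand-in: a bigram model that predicts each word from just the previous one. (GPT-2 itself is a 1.5-billion-parameter transformer conditioning on long contexts; this toy only illustrates the prediction objective, and the corpus is invented.)

```python
from collections import Counter, defaultdict

# A tiny training corpus, already split into tokens.
corpus = "the cat sat on the mat . the dog sat on the rug .".split()

# Count how often each word follows each other word.
counts = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    counts[prev][nxt] += 1

def predict_next(word):
    """Return the word that most frequently follows `word` in the corpus
    (ties go to the word encountered first)."""
    return counts[word].most_common(1)[0][0]

print(predict_next("sat"))  # "on" — follows "sat" both times
print(predict_next("on"))   # "the"
```

A model trained this way can be very good at guessing likely continuations without representing anything like the meaning of a sentence, which is the gap Marcus goes on to probe.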
Gary Marcus, the emeritus New York University professor of cognitive psychology and current CEO of A.I. startup Robust.AI, takes a long, hard look at the model in a piece for The Gradient. Marcus finds the A.I. model impressive in its fluency, its relative ability to stick to a topic over many sentences, its ability to do some question-answering, and its ability to deal with typos and missing words. But he finds the model falls far short of any real language understanding.
Marcus says there are two competing ideas about how humans acquire language skills. On the one hand, Marcus says, are nativists, a line of thought he traces from Plato and Kant to Noam Chomsky, Steven Pinker, Elizabeth Spelke, and, well, himself. These people believe that fundamental aspects of language are innate—hard-wired into the brain somehow. He contrasts these folks with empiricists, whom he traces from philosopher John Locke to deep-learning pioneer Geoff Hinton, Facebook chief A.I. scientist Yann LeCun, and Hinton's former grad student, current OpenAI chief scientist Ilya Sutskever. Empiricists, Marcus says, think language is completely learned. GPT-2 is pure empiricism, according to Marcus, and in its failings it makes the case for taking a more nativist approach.
In his essay, Marcus makes an elegant and important distinction between predicting and understanding language:
"Prediction does not equal understanding ... Prediction is a component of comprehension, not the whole thing ... We frequently encounter words that we have not predicted and process them just fine. Shakespeare's audience was probably a little surprised when the Bard compared the subject of his 18th Sonnet to a summer's day, but that failure in prediction didn't mean they couldn't comprehend what he was getting at. Practically every time we hear something interesting, we are comprehending a sentence that goes someplace we didn't predict."
The same distinction probably applies to the rest of artificial intelligence too: Today, people in the field frequently conflate accurate prediction with intelligence. But is human intelligence really just the cumulative sum of repeated predictions? Are our brains simply prediction machines? Some A.I. scientists certainly argue so. But, as Marcus's Shakespeare example demonstrates, there is reason for skepticism. In fact, many of the things humans refer to as "genius"—in art or music and perhaps in business, too—come from the ability to achieve something in the least predictable way.
What does A.I. mean for your company? Find out at Brainstorm A.I.
If you’re interested in learning how some of the biggest, most influential companies are strategizing about artificial intelligence, come to Fortune’s Brainstorm A.I. conference in Boston on April 27-28, 2020. A.I. is a game-changing technology that promises to revolutionize business, but it can be confusing and mysterious to executives. The savviest leaders know how to cut through the deluge of A.I. buzzwords and reap the technology’s benefits.
Attendees of this invite-only confab can take part in cutting-edge conversations with top corporate execs, leading A.I. thinkers, and power players. Among them: United States Chief Technology Officer Michael Kratsios; Accenture CEO Julie Sweet; Land O’Lakes CEO Beth Ford; Siemens U.S. CEO Barbara Humpton; Royal Philips NV CEO Frans van Houten; Landing AI founder and CEO Andrew Ng; Robust.AI founder and CEO Gary Marcus; and top machine learning experts from Bank of America, Dow, Verizon, Slack, Zoom, Pinterest, Lyft, and MIT. You can request an invitation here.