This is the web version of Eye on A.I., Fortune’s weekly newsletter covering artificial intelligence and business. To get it delivered weekly to your in-box, sign up here.
It’s been an exceptionally busy week in A.I. news. There were big announcements on research breakthroughs, a major U.S. policy report on A.I.’s national security implications, the publication of Stanford University’s snapshot of artificial intelligence progress, and a new report on business adoption of A.I. from KPMG.
Oh, then there were those Tom Cruise deepfake videos.
One big theme that ran across all of these things: A.I. is increasingly powerful and ubiquitous, and the pace of A.I. adoption is only accelerating. The research announcements from Facebook and OpenAI both involved large neural networks, the A.I. software loosely modeled on the human brain. In both cases, the research showed that feeding sufficiently large neural networks a sufficiently large amount of unlabeled data can result in some pretty amazing capabilities. And the speed at which these breakthroughs are migrating from the bleeding edge to the mainstream is head-spinning.
Take computer vision: Stanford’s A.I. snapshot, the AI Index, has a wonderful set of charts showing the cost and time it takes to train an A.I. system to 93% top-five accuracy on ImageNet, the benchmark dataset of 14 million labelled images. Top-five accuracy means that at least one of the five highest-probability classifications the A.I. system makes matches the image label; 93% was chosen because it was state-of-the-art when this particular benchmark was established in 2018. In 2018, it took 6.2 minutes to train a system to that level. Last year, it took 47 seconds. In 2017, it cost more than $1,000 in cloud-computing resources to reach that accuracy level. Last year, it could be done for $7.43.
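For readers who want to see the mechanics, here is a minimal sketch, in Python with made-up model outputs, of how top-five accuracy is computed; the toy tensors and the top5_accuracy helper are purely illustrative and are not code from the AI Index or from any benchmarked system.

# Top-five accuracy: a prediction counts as correct if the true label appears
# anywhere among the model's five highest-scoring classes.
import torch

def top5_accuracy(logits, labels):
    """logits: (N, num_classes) model scores; labels: (N,) true class indices."""
    top5 = logits.topk(5, dim=1).indices              # the 5 highest-scoring classes per image
    hits = (top5 == labels.unsqueeze(1)).any(dim=1)   # was the true label among them?
    return hits.float().mean().item()

# Toy example: 8 "images" scored across 1,000 ImageNet-style classes.
logits = torch.randn(8, 1000)
labels = torch.randint(0, 1000, (8,))
print(f"top-5 accuracy: {top5_accuracy(logits, labels):.2%}")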
The capabilities of systems that can generate images are also improving rapidly. There’s been a 37% improvement in the quality of images that algorithms can generate in just two years, according to one benchmark cited in the AI Index. And while Chris Ume, the Belgian visual effects artist who created the Tom Cruise deepfakes, highlighted the time and effort it took to create convincing fake videos, he also noted how much better the technology has become in just the past few years.
Unsurprisingly, rapidly improving capabilities and rapidly falling costs are leading to increasing adoption. A new KPMG report out today shows that most leaders across a wide range of sectors now say A.I. is “moderately to fully functional” within their business or organization. Industrial manufacturing leads the sectors, with 93% of business leaders agreeing, followed by 84% in financial services, 83% in technology, 81% in retail, 77% in life sciences, 67% in health, and 61% in government. The biggest one-year jumps came in financial services, retail and tech. Leaders in several sectors noted how the COVID-19 pandemic has further accelerated their adoption of A.I., with the greatest number of executives agreeing with that statement coming from industrial manufacturing.
But at the same time, the report highlights that a significant number of executives are now afraid that A.I. is being implemented too quickly: that adoption is outstripping thinking about how A.I. fits in with strategy, cybersecurity, governance, and ethics considerations. “Many business leaders do not have a view into what their organizations are doing to control and govern A.I. and may fear risks are developing,” says Traci Gusher, the principal A.I. lead for KPMG. This sentiment was most acute within industrial manufacturing, where over half of executives held this view. The feeling was also strong in retail, whose executives were likely to report that A.I. developments were moving so quickly that they struggled to stay on top of the evolving technology. Retail executives were also among those most likely to voice support for regulation of A.I., with 87% indicating a desire for the government to step in and set ground rules for how the technology could be used.
The AI Index, though, suggested one reason that governance and ethics may continue to lag: there still isn’t much consensus around exactly what the standards should be and how to assess or benchmark progress. “Policymakers are keenly aware of ethical concerns pertaining to AI, but it is easier for them to manage what they can measure,” the AI Index authors write. “So finding ways to translate qualitative arguments into quantitative data is an essential step in the process.”
And with that here’s the rest of this week’s news in A.I.
Jeremy Kahn
jeremy.kahn@fortune.com
@jeremyakahn
This story has been updated to correct figures from the KPMG survey about the retail industry’s view of A.I. An earlier version misstated that retail executives were most likely to agree A.I. was moving too fast to stay on top of and were also most likely to call for government regulation.
A.I. IN THE NEWS
The U.S. government should launch a major investment drive in advanced A.I. and speed A.I. adoption in the military, key commission recommends. The National Security Commission on Artificial Intelligence—a blue-ribbon group chaired by former Google CEO Eric Schmidt and whose members also included Oracle CEO Safra Catz and veteran Microsoft chief scientific officer Eric Horvitz—released its final report. It concluded the U.S. government was not "A.I. ready" and was in serious danger of being strategically outclassed by potential adversaries, especially China and Russia. The commission recommended the government embark on a rapid program of A.I. adoption, commit about $35 billion to ensure the U.S. is not dependent on foreign-made computer chips needed for A.I. applications, and earmark another $40 billion to further A.I. R&D. It compared this investment to the creation of the national highway system, which cost about an equivalent amount in today's dollars. "This is not a time for abstract criticism of industrial policy or fears of deficit spending to stand in the way of progress," the commission wrote.
IBM's Watson Health A.I. doesn't work, insiders tell health news site. Amid reports that IBM is seeking a buyer for its troubled Watson Health unit, healthcare news site Stat News has an incendiary report claiming that Watson Health's technology had never been validated and that the company struggled to live up to its own marketing hype. The report is based on accounts from seven mostly anonymous former employees as well as a trove of internal company documents and recordings of presentations. One big problem, the former employees said, is that IBM never clinically validated its capabilities. "It was all made up,” one former employee told Stat of the marketing claims IBM made for the health product. “They were hellbent on putting [advertisements] out on health care. But we didn’t have the clinical proof or evidence to put anything out there that a clinician or oncologist would believe. It was a constant struggle.” In 2017 and 2018, Stat uncovered evidence that oncologists had faulted recommendations made by IBM's Watson oncology product and that the oncology product hadn't had the impact on cancer treatment the company claimed.
Report details campaign of online harassment targeting former Google A.I. ethics researcher Timnit Gebru. Tech publication The Verge reports that after Google forced out A.I. ethics researcher Timnit Gebru, she was subjected to a relentless harassment campaign on Twitter, seemingly orchestrated by Pedro Domingos, an emeritus A.I. professor at the University of Washington, and Michael Lissack, a financial whistleblower turned self-styled crusader against political correctness run amok and left-wing activists. The campaign seems to have involved the use of fake Twitter accounts. The story also details Domingos's run-ins with other women A.I. researchers who he felt were attempting to censor scholarship for political reasons. Domingos denies he was involved in any campaign of harassment. "My entire interaction with Gebru consisted of a modest number of tweets, all of which are public, and none of which can be construed as harassment on my part by any reasonable person," he told Fortune. Lissack told Fortune that "On Twitter, somehow expressing disagreement was perceived by Gebru as 'harassment.' It should be viewed as an opportunity for learning and dialog, an opportunity she rejected."
Google accused of advising employees who complain about racism and sexism at the company to take a mental health leave. An NBC News investigation found that Google employees who filed complaints with the company's human resources department about racist, sexist, or homophobic behavior at the company were often told to take a leave of absence on mental health grounds. The report cites nine current and former employees, all members of minority groups, who were told they should consider a medical leave on mental health grounds after they complained to HR about problems around diversity and inclusion at the company. Among those quoted in the story is Timnit Gebru, the A.I. ethics researcher pushed out of the company in early December. After she complained to her managers and on internal message boards about the company's treatment of women, the HR department told her to consider a mental health leave. “They’re like, ‘Well, if there’s something wrong with you, here are all these therapy resources,’” she told NBC. “And I would respond that no amount of support system is going to get rid of Google’s hostile work environment.”
Data labelers felt pressure to conform to the "majority view," potentially biasing the resulting A.I., investigation finds. Vice reports that low-paid gig workers who label data on the Amazon-owned platform Mechanical Turk and other data labelling platforms such as Clickworker felt pressure to assign labels that they thought would conform to the majority view. The problem is that, in an ostensible effort to "ensure quality," the microwork platforms tend to farm the exact same labeling task out to multiple workers and then apply the label that the majority of them agree on. Outliers rightly fear that they might lose future opportunities for more work. But the main example cited by the story involves ArtEmis, an algorithm that was supposed to learn how different aspects of art affected people's emotional responses. The Vice story fails to note that ArtEmis is not a commercial application but rather a research project created by professors from Stanford University, LIX Ecole Polytechnique in France, and King Abdullah University of Science and Technology in Saudi Arabia. Saiph Savage, director of the Human Computer Interaction Lab at West Virginia University, who has studied the sociology and economics of data labelers, told Vice: “You're teaching people to be compliant. You're also affecting creativity as well. Because if you're outside the norm, or proposing new things, you're actually getting penalized.”
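For the curious, here is a small, purely hypothetical sketch of the majority-vote aggregation scheme described above; the example labels and the majority_label helper are illustrative and are not code from Mechanical Turk, Clickworker, or the ArtEmis project.

# The same item is sent to several workers; whichever label the majority chooses wins.
from collections import Counter

def majority_label(annotations):
    """annotations: the labels given to one item by different workers."""
    label, count = Counter(annotations).most_common(1)[0]
    return label, count / len(annotations)

# Three workers label the emotion evoked by the same artwork.
item_annotations = ["awe", "awe", "sadness"]
label, agreement = majority_label(item_annotations)
print(f"{label}, agreement={agreement:.0%}")   # -> awe, agreement=67%

Even in this toy version, the dissenting worker's label is simply discarded, which is what creates the incentive to guess the consensus rather than answer honestly.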
This news section of Eye on A.I. has been updated to include responses from both Pedro Domingos and Michael Lissack to the allegations in The Verge story referenced above.
EYE ON A.I. TALENT
Google has hired Sumit Gupta as senior director of the company’s machine learning infrastructure division, according to trade publication AI Enterprise. Gupta was previously vice president of artificial intelligence strategy and chief technology officer for data and A.I. at IBM.
Collibra, a New York-based data intelligence company, has appointed Tiffen Dano Kwan as chief marketing officer, the company said. She had been chief marketing officer at Dropbox.
Aspen Technology, a Bedford, Mass.-based company that specializes in process optimization software and services, has named Chantelle Breithaupt as senior vice president and chief financial officer, the company said in a release. She joins the company from Cisco, where she was senior vice president, finance for its customer experience organization.
EYE ON A.I. RESEARCH
A.I. is getting bigger and more powerful—but also, just maybe, more transparent.
If there was any doubt that the future belongs to super-sized A.I. systems and that these systems will essentially teach themselves from largely unstructured data, a host of developments should lay those doubts to rest. First, Facebook debuted a very large computer vision A.I. called SEER (short for Self-Supervised) that takes in 1 billion variables and was trained from 400 million public Instagram accounts. (Interestingly, tech site OneZero reports that the company avoided using any data from Europeans to avoid potentially falling foul of the European Union's stringent data protection law, GDPR.) The idea here is twofold: first, to get away from having to use carefully curated, labelled datasets, a requirement that has held back progress in computer vision lately; and second, to create the kind of huge baseline A.I. systems that can then perform multiple computer-vision tasks with no, or at least little, additional training. SEER beat all previous self-supervised systems on a range of vision tasks such as object detection, image classification and image segmentation, without any additional training. And when it was fine-tuned with just a relatively small amount of labelled data, it beat systems that were trained with a lot more data through supervised learning for that specific task. This is a big step toward doing for computer vision what similar large models have done in natural-language processing in the past two years: create a foundational building block on top of which a range of applications can be built far more easily, opening up real-world uses with far less cost and effort. You can read my coverage of SEER here.
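To make the two-stage recipe concrete, here is a minimal, hypothetical sketch in PyTorch of self-supervised pre-training followed by fine-tuning on a small labeled set. It uses a generic pretext task (predicting image rotations) and a tiny toy encoder purely for illustration; it is not Facebook's actual SEER code or training objective.

# Stage 1: pre-train an encoder on unlabeled images using a label-free pretext task.
# Stage 2: fine-tune with a small amount of labeled data.
import torch
import torch.nn as nn

encoder = nn.Sequential(              # tiny stand-in for SEER's far larger convolutional backbone
    nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
    nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
)

# --- Stage 1: self-supervised pre-training (no human labels needed) ---
rotation_head = nn.Linear(64, 4)      # predict which of 4 rotations was applied
opt = torch.optim.Adam(list(encoder.parameters()) + list(rotation_head.parameters()), lr=1e-3)
unlabeled = torch.rand(256, 3, 32, 32)                   # toy stand-in for unlabeled images
for _ in range(5):
    k = torch.randint(0, 4, (unlabeled.size(0),))        # random 0/90/180/270-degree rotation per image
    rotated = torch.stack([torch.rot90(img, int(r), dims=(1, 2))
                           for img, r in zip(unlabeled, k)])
    loss = nn.functional.cross_entropy(rotation_head(encoder(rotated)), k)
    opt.zero_grad(); loss.backward(); opt.step()

# --- Stage 2: fine-tune on a small labeled set ---
labeled_x, labeled_y = torch.rand(32, 3, 32, 32), torch.randint(0, 10, (32,))
classifier = nn.Linear(64, 10)
opt2 = torch.optim.Adam(list(encoder.parameters()) + list(classifier.parameters()), lr=1e-4)
for _ in range(5):
    loss = nn.functional.cross_entropy(classifier(encoder(labeled_x)), labeled_y)
    opt2.zero_grad(); loss.backward(); opt2.step()

The design point, as with SEER, is that the expensive first stage never touches a human-written label; labels only enter during the short second stage.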
Speaking of huge A.I. systems, a group of Chinese researchers from Alibaba and Tsinghua University say they have created one of the biggest yet, and the largest ever for the Chinese language: it's a multimodal system—meaning it is trained on paired images and text—and takes in 100 billion variables. It is called M6 and was trained on more than 1.9 terabytes of images and more than 292 gigabytes of text. The system is pretrained on a task that involves generating an image to match a textual description, and after training it achieves state-of-the-art performance on a task known as visual question answering in Chinese—answering questions about a particular image. The system can also serve as a baseline for other tasks. The M6 model seems similar to systems such as DALL-E and CLIP that OpenAI recently created, and somewhat similar to Facebook's SEER system described above.
One criticism of these kinds of mega-large A.I. models built from neural networks (A.I. systems designed to loosely mimic the human brain) is that they are opaque. It can be very difficult to understand the rationale they use to make a decision. OpenAI, though, has recently had some success unpacking this "black box" by using two visualization techniques that allow researchers to see which stimuli individual neurons, or groups of neurons, within the artificial neural network are most sensitive to. The researchers used these techniques on their large multimodal A.I. system CLIP (the name stands for Contrastive Language Image Pre-training). They found some fascinating things: first, that individual neurons within CLIP seem to encode very particular concepts, a finding that echoes a controversial theory from neuroscience that certain neurons in the human brain do the same (these have been dubbed "Halle Berry neurons" because, in one research subject, an individual neuron seemed to respond specifically to images and words associated with the actress). They also found that some neurons may encode conceptual biases learned during training, such as a "Middle East" neuron in CLIP that also seemed to fire in association with images or words of violence or terrorism. The research is important for two reasons: it suggests that these visualization techniques can be used both to gain better confidence in how deep neural networks arrive at decisions and to detect potential bias and fairness issues in that process. The researchers also found that although the system is multimodal, it leans heavily on images of the text of a word associated with a given concept. So, for instance, the "old man" neuron is easily triggered by encountering the word "old." That makes sense. But, as the researchers note, it also suggests a way that such systems could easily be tricked into misclassifying objects. For instance, they said, show CLIP a dog emblazoned with dollar signs and it is likely CLIP will classify the dog as a piggy bank. You can read more here and here.
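To give a flavor of how this kind of probing works, here is a rough, hypothetical sketch of "activation maximization": using gradient ascent on an input image to find a stimulus that strongly excites a chosen neuron. It is a generic illustration in PyTorch, not OpenAI's actual CLIP tooling, and the toy model and visualize_neuron helper are made up for this example.

import torch
import torch.nn as nn

model = nn.Sequential(                      # toy stand-in for a trained vision model
    nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
)

def visualize_neuron(model, channel, steps=200, lr=0.05):
    """Optimize an input image to maximize the mean activation of one channel (a 'neuron')."""
    img = torch.rand(1, 3, 64, 64, requires_grad=True)
    opt = torch.optim.Adam([img], lr=lr)
    for _ in range(steps):
        acts = model(img)                   # shape: (1, channels, H, W)
        loss = -acts[0, channel].mean()     # negative because we want to *maximize* activation
        opt.zero_grad(); loss.backward(); opt.step()
        img.data.clamp_(0, 1)               # keep pixel values in a valid range
    return img.detach()

stimulus = visualize_neuron(model, channel=7)   # an image showing what neuron/channel 7 'likes'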
FORTUNE ON A.I.
Money is pouring in to A.I.-assisted drug discovery, while fewer A.I. startups are getting VC backing—by Jeremy Kahn
Disinformation attacks are spreading. Here are 4 keys to protecting your company—by Lisa Kaplan
How ‘data alchemy’ could help businesses make the most of A.I.—by Francois Candelon
Researchers are peering inside computer brains. What they’ve found will surprise you—by Jeremy Kahn
Facebook says its new Instagram-trained A.I. represents a big leap forward for computer vision—by Jeremy Kahn
Deepfake master behind those viral Tom Cruise videos says the technology should be regulated—by Jeremy Kahn
BRAIN FOOD
Don't be too quick to dismiss concerns about A.I.'s contribution to climate change. In the course of my conversation with Facebook chief A.I. scientist Yann LeCun about SEER (see above), I asked LeCun about concerns that ever-larger A.I. models might raise ethical issues because the amount of cloud-computing resources they require means they consume a lot of energy and have a big carbon footprint. (This, by the way, is one of the criticisms of large language models that Timnit Gebru raised in a research paper that Google tried to suppress.) LeCun was dismissive, describing the whole issue as a red herring. All the data centers on the planet only consumed 1% to 2% of the world's power, and A.I. was a small fraction of that total, he said, so it hardly made much difference to climate change. When I drew attention to that part of my conversation with LeCun on Twitter, I got a response from Neil Lawrence, an A.I. professor at the University of Cambridge and a former top machine learning executive at Amazon, who pointed out that 1% to 2% of the world's power was actually a pretty huge amount. In fact, it wasn't far off what the entire continent of Africa consumes. What's more, Lawrence made the point that the consumption of data centers was likely to increase in the future thanks in part to more ubiquitous use of A.I. (and A.I. using these bigger and bigger models) and trends like the Internet of things. This sparked a debate with LeCun, who noted that the energy consumption of cutting-edge A.I. systems was actually declining over time thanks to more efficient algorithms and better hardware, including computer chips specifically designed for handling neural networks. Lawrence said this might be true—that on a per-inference basis, the energy consumption of A.I. was declining—but that if more and more A.I. systems were being used by more companies, the overall energy consumption could still increase. He also pointed to analyses indicating there had been some improvement in data center efficiency as companies moved from their own, on-premises servers to cloud computing. But he said he feared this gain might be a one-shot thing: once most companies had migrated to the cloud, their compute and energy footprints would begin to increase again.
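Lawrence's core point is really just arithmetic, and a toy calculation with entirely made-up growth rates illustrates it: even if each inference gets cheaper every year, total energy use still climbs if the number of inferences grows faster.

# Hypothetical numbers only: a 30% efficiency gain per year vs. usage doubling per year.
energy_per_inference = 1.0      # arbitrary starting units
inferences = 1.0                # arbitrary starting volume
for year in range(1, 6):
    energy_per_inference *= 0.7
    inferences *= 2.0
    total = energy_per_inference * inferences
    print(f"year {year}: total energy = {total:.2f}x the starting level")
# The total climbs (1.40x, 1.96x, ...) even though each inference keeps getting cheaper.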
Thomas Dietterich, another well-known machine learning researcher who is an emeritus professor at Oregon State University, chimed in to say that the history of computing had shown exponential improvements in efficiency from better algorithms and hardware. That's why, he said, he was not worried about A.I.'s carbon footprint being a significant problem going forward. (It is also worth noting that many of the largest technology companies, such as Google, Facebook, Amazon, and Microsoft, either already purchase all of their power from renewable sources or are on a path to do so by 2025.) LeCun replied that companies like Facebook and Google had large teams "working on better hardware, firmware, compilers, network optimization, and network architectures to optimize efficiency. And that's not counting startups and hardware shops like Nvidia, AMD, Intel, ARM, Qualcomm, et al." To which Lawrence retorted on Twitter, "Good stuff, so I think we're all agreed that it's a very important problem!"