
A view from A.I.’s biggest conference

December 14, 2021, 5:10 PM UTC

NeurIPS, one of the most important A.I. conferences for academic research, wrapped up today. As always, the conference—held virtually this year—serves as a good bellwether for the state of the art and where the field may be heading.

First, the big picture: for the first time, the number of papers submitted for possible inclusion in the conference declined slightly, falling 3.5% to 9,122. Whether this indicates that the hype around A.I. is dissipating, or at least plateauing, or whether the drop is just a blip, is hard to say. At the same time, the number of papers selected for inclusion in the conference jumped more than 23% to 2,344, which is the most ever chosen. The acceptance rate, at 26%, was also slightly higher than in recent years.

The authorship of NeurIPS papers can serve as a kind of proxy for A.I. prowess. Judging by papers in which at least one of the authors had a corporate affiliation, Google was head and shoulders above its competitors—it had 177 papers, followed by Microsoft, with 116; DeepMind, with 81; and Facebook, with 78. Then it was a decent step down to other tech giants: IBM had 55, Amazon had 35, while Nvidia had 20, and Apple just 10. (Remember though that Apple is notoriously secretive and until recently rarely allowed its R&D teams to publish academic papers.) The Chinese Internet giants, Alibaba and Tencent, had 20 and 19, respectively, while Baidu had 16.

Some clear themes emerged from this year's conference. One is that transformers remain hot. Transformers are a kind of neural network design, pioneered by Google researchers in 2017, that is particularly good at learning longer-range relationships between variables. They use a mechanism called self-attention, in which the A.I. learns which portions of the input data are most important to focus on for a given part of a task.
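
For readers who want a concrete sense of what self-attention actually computes, here is a minimal sketch in Python. It is a toy, single-head version with made-up sizes and randomly initialized weights, not the code behind any particular paper; real transformers stack many such heads and layers and learn the projection matrices during training.

```python
import numpy as np

def self_attention(X, Wq, Wk, Wv):
    """Single-head scaled dot-product self-attention over a sequence X.

    X          : (seq_len, d_model) input embeddings, one row per token
    Wq, Wk, Wv : learned projection matrices of shape (d_model, d_head)
    """
    Q = X @ Wq  # what each token is "looking for"
    K = X @ Wk  # what each token "offers"
    V = X @ Wv  # the content each token can pass along
    scores = Q @ K.T / np.sqrt(K.shape[-1])  # relevance of every token to every other token
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)  # softmax: each row of weights sums to 1
    return weights @ V  # each output is a relevance-weighted mix of the whole sequence

# Toy example: 4 tokens with 8-dimensional embeddings (sizes chosen arbitrarily)
rng = np.random.default_rng(0)
X = rng.normal(size=(4, 8))
Wq, Wk, Wv = (rng.normal(size=(8, 8)) for _ in range(3))
print(self_attention(X, Wq, Wk, Wv).shape)  # (4, 8)
```

The key point is the `weights` matrix: it is computed from the data itself, which is what lets the network decide, token by token, which parts of a long input matter most.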

Transformers have since gone on to revolutionize natural-language processing and, increasingly, computer vision and many other machine-learning applications. Many papers at the conference were aimed at trying to better understand exactly how transformers learn. Is the way they encode information within the neural network noticeably different from what happens inside convolutional neural networks (CNNs), the standard for computer-vision tasks before transformers came along? The answer, most papers concluded, was yes.

Another clear trend is the rise of multimodal models—A.I. systems that don't learn just language or just images, but learn both at the same time, sometimes along with audio as well. (Transformers are the design most of these multimodal systems use.) Multimodal algorithms require a lot of computing power to train, but they can do some amazing things, including generating completely novel images from a natural-language prompt, or vice versa.
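
Many of these systems are trained with a contrastive objective: matching image-caption pairs are pulled together in a shared embedding space while mismatched pairs are pushed apart. The sketch below illustrates that general recipe with random stand-in embeddings; the function name, batch size, and temperature are placeholders, not the setup of any specific NeurIPS paper.

```python
import numpy as np

def contrastive_loss(img_emb, txt_emb, temperature=0.07):
    """CLIP-style loss over a batch of matching (image, text) embedding pairs.

    img_emb, txt_emb : (batch, dim) arrays; row i of each describes the same example.
    """
    # L2-normalize so dot products are cosine similarities
    img = img_emb / np.linalg.norm(img_emb, axis=1, keepdims=True)
    txt = txt_emb / np.linalg.norm(txt_emb, axis=1, keepdims=True)
    logits = img @ txt.T / temperature  # similarity of every image to every caption

    def cross_entropy(mat):
        # the "correct" column for row i is column i: the matching pair
        log_probs = mat - np.log(np.exp(mat).sum(axis=1, keepdims=True))
        return -np.mean(np.diag(log_probs))

    # symmetric: images must pick out their caption, and captions their image
    return 0.5 * (cross_entropy(logits) + cross_entropy(logits.T))

# Toy batch of 4 pairs with 16-dimensional embeddings from stand-in encoders
rng = np.random.default_rng(1)
print(contrastive_loss(rng.normal(size=(4, 16)), rng.normal(size=(4, 16))))
```

Once the text and image encoders share an embedding space, the same model can be used to search images with text, caption images, or guide an image generator from a prompt.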

In fact, one of my favorite papers at this year’s NeurIPS was one from researchers at Nvidia on something they call EditGAN. This A.I. may transform how we edit photos and perhaps eventually video. It allows someone to take an image and alter it by using simple text commands. If you have a photo of a car and you want to make the wheels bigger, just type “make wheels bigger,” and poof!—there’s a completely photorealistic picture of the same car with bigger wheels.

Another paper I liked, from Microsoft Research, looked at what it called "unadversarial examples." In computer vision, "adversarial examples" are changes to the appearance of objects or images, often too subtle for a human to detect, that can reliably fool an A.I. system into misclassifying the object (in one well-known case, mistaking a 3D-printed turtle for a rifle). Here, though, the Microsoft researchers turned the idea on its head and asked whether we can alter how things look, painting them with certain patterns or adding subtle textures, in order to guarantee that they are correctly classified by a computer-vision algorithm. As the researchers note, this could be an important safety feature in a world increasingly populated with robots and self-driving vehicles.
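
Mechanically, an unadversarial patch is an adversarial attack run in reverse: instead of nudging pixels to lower the classifier's confidence in the true label, you nudge them to raise it. Here is a hedged PyTorch sketch of that idea; the model, patch size, class index, and optimization settings are placeholders for illustration, not the Microsoft researchers' actual setup.

```python
import torch
import torchvision

# Illustrative setup only: optimize a small patch so a classifier becomes MORE
# confident that the image belongs to its true class (the reverse of an attack).
model = torchvision.models.resnet18(weights=None).eval()  # stand-in; in practice a trained model
true_label = torch.tensor([3])                            # placeholder class index
image = torch.rand(1, 3, 224, 224)                        # placeholder photo of the object

patch = torch.zeros(1, 3, 50, 50, requires_grad=True)     # the "texture" we will paint on
optimizer = torch.optim.Adam([patch], lr=0.05)

for step in range(30):
    patched = image.clone()
    patched[:, :, :50, :50] = torch.sigmoid(patch)        # keep patch pixels in [0, 1]
    logits = model(patched)
    # minimizing the true-class loss pushes the classifier's confidence in the true label up
    loss = torch.nn.functional.cross_entropy(logits, true_label)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

print("final true-class loss:", loss.item())
```

In the paper's framing, whoever builds the object also controls its appearance, so the "attack" can be baked in at design time to make the object as easy as possible to recognize.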

Finally, A.I. ethics remains a hot-button topic and led to some heated discussions among conference attendees. In a keynote panel on the topic, researchers debated whether checklists of ethical issues for those submitting research to conferences or publications were a useful tool. Proponents said they could prompt A.I. researchers to think harder about the implications of their work. Detractors said they wound up being a box-ticking exercise that did little to actually force researchers to grapple with thorny questions about their data, methods, or downstream impacts—much less alter the priorities of the field.

I liked a comment from one of the panelists, Casey Fiesler, a professor at the University of Colorado at Boulder, who emphasized that thinking about ethics needs to be built into the technical practice of those creating A.I. systems. “Whose problem is this? Whose job is it to be thinking about ethics?” she asked. “It is everyone’s job.” And while she said many machine learning researchers protested that they didn’t know enough about moral philosophy to ponder the ethics of their work, she implied this was essentially a cop-out. “You don’t have to have read Kant,” she said, referring to the German philosopher. “You just have to know enough about machine learning ethics to teach an undergrad.”

With that, on to the rest of this week’s A.I. news.

Jeremy Kahn 
@jeremyakahn
jeremy.kahn@fortune.com

Correction, January 13: An earlier version of this story misstated the number of research papers that IBM scientists had accepted to the main conference program at NeurIPS 2021.

A.I. IN THE NEWS

FTC signals it will start delving into biased algorithms. The Federal Trade Commission announced that it is considering using its rule-making authority "to ensure that algorithmic decision-making does not result in unlawful discrimination," according to a report from the Electronic Privacy Information Center (EPIC). This comes after the FTC recruited three A.I. ethics researchers from the AI Now Institute, the research organization and think tank affiliated with New York University, including its outspoken co-founder Meredith Whittaker.

DHL invests in more robots to meet the holiday crunch. The delivery company has doubled the number of robots working in its warehouses and logistics centers to 1,500, according to Bloomberg News. The decision to add more robots comes as e-commerce continues to surge and fast-rising labor costs make hiring human workers more difficult. But the company still employs 15,000 seasonal human workers to help it deal with holiday package deliveries.

Clearview is getting close to patenting its controversial facial recognition system. The company has been issued a "notice of allowance" by the U.S. Patent and Trademark Office, which means the patent for its facial recognition system will be approved once the company pays an administrative fee, Politico reported. Many civil liberties groups criticize the company because it gathered its facial data by scraping social media sites without the consent of those pictured, in potential violation of social media companies' rules and, in some jurisdictions, the law. (The company has denied it ever broke the law.) Clearview's app is also popular with law enforcement agencies, raising concerns about how well it actually works and whether police are deploying it in a way that may reinforce inequalities in policing.

DeepMind unveils a huge new language model alongside a slimmer, less energy-hungry one. The London-based A.I. research shop belatedly got into the game of creating ever-larger, more-capable language A.I. systems. It unveiled a system called Gopher with 280 billion adjustable parameters. The company said Gopher significantly outperformed existing language models across a wide range of benchmark tests and came close to human performance in reading comprehension, though it still fell short of human-level skills in other areas, like common-sense reasoning. The company also unveiled a 7 billion-parameter language system called Retro that can consult the same massive text database it was trained on while it is making predictions. Having this database available at inference time means the system consumes less computing power and is thus more energy-efficient. You can read my coverage of DeepMind's new language work here.
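
At a high level, the trick behind retrieval-augmented models like Retro is that the network does not have to memorize everything in its parameters; at prediction time it looks up the most similar passages in a text database and conditions on them. The sketch below only gestures at that lookup step with a toy database and a crude hashing "embedding"; DeepMind's actual system uses a learned encoder and a database of trillions of tokens.

```python
import numpy as np

# Stand-in "database" of text passages (the real thing holds trillions of tokens).
passages = [
    "The Eiffel Tower is in Paris and was completed in 1889.",
    "Photosynthesis converts sunlight, water, and carbon dioxide into sugars.",
    "Gophers are burrowing rodents found in North and Central America.",
]

def embed(text, dim=64):
    """Toy bag-of-words hashing embedding; a real system uses a learned encoder."""
    vec = np.zeros(dim)
    for word in text.lower().split():
        vec[hash(word) % dim] += 1.0
    return vec / (np.linalg.norm(vec) + 1e-8)

db_vectors = np.stack([embed(p) for p in passages])

def retrieve(query, k=1):
    """Return the k passages most similar to the query (nearest-neighbor lookup)."""
    sims = db_vectors @ embed(query)
    return [passages[i] for i in np.argsort(-sims)[:k]]

prompt = "Where do gophers live?"
# A retrieval-augmented model conditions on the retrieved text plus the prompt,
# so a much smaller network can answer questions it never had to memorize.
print(" ".join(retrieve(prompt)) + " " + prompt)
```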

EYE ON A.I. TALENT

MemryX, a startup in Ann Arbor, Mich., that makes memory-intensive A.I.-specific computer chips, has hired Keith Kressin as its new chief executive, the company announced. Kressin was a senior vice president managing Qualcomm's Snapdragon computing platform. MemryX said its former CEO and company founder, Wei Lu, will become its chief technology officer.

Cresta, the San Francisco-based A.I.-driven coaching platform for contact center workers, has hired Jared Lucas to be its new vice president of people, according to a story in AI Authority. Lucas was previously chief people officer at Utah-based endpoint security company MobileIron.

EYE ON A.I. RESEARCH

Unlocking quantum chemistry with A.I. DeepMind, the London-based A.I. company, has had quite the month. In the span of seven days, it announced a string of big innovations. The company unveiled two new language models (see the news section above) and used A.I. to help mathematicians make new discoveries (see the research section of last week's Eye on A.I. newsletter). It also published a major paper in the prestigious peer-reviewed journal Science on using A.I. to better approximate the shape and density of the electron clouds that surround atoms. Using a neural network to predict the electron density, DeepMind was able to overcome two common errors that plagued other methods for this task and made it difficult to forecast quantum chemical interactions even for simple elements, such as hydrogen. These new neural network-based predictions are likely to have a big impact on quantum chemistry and could help those seeking to create exotic new materials. You can read more in DeepMind's blog post here.
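
Very loosely, the approach amounts to training a neural network to map features of the electron density to an energy, then plugging that learned function into standard quantum-chemistry calculations. The toy sketch below is only meant to convey that flavor: a tiny network regressing an invented "energy" target from invented density features. The features, targets, and architecture here are made up for illustration and bear no relation to DeepMind's actual model or training data.

```python
import torch

# Invented data: 256 points with 4 made-up "local density features" each,
# and a fake energy target to regress. Purely illustrative.
features = torch.rand(256, 4)
targets = -0.75 * features[:, :1] ** (4.0 / 3.0)

model = torch.nn.Sequential(
    torch.nn.Linear(4, 32),
    torch.nn.SiLU(),
    torch.nn.Linear(32, 1),
)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-2)

for step in range(500):
    loss = torch.nn.functional.mse_loss(model(features), targets)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

print("final training loss:", loss.item())
```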

FORTUNE ON A.I.

SenseTime postpones its Hong Kong IPO after Washington blocked U.S. investment in the facial recognition giant—by Yvonne Lau

A new law governing the use of A.I. in hiring may trigger lawsuits—by Ellen McGirt and Jonathan Vanian

DeepMind debuts massive language A.I. that approaches human-level reading comprehension—by Jeremy Kahn

How IBM is preparing for a new era of A.I. ethics—by Marcus Baram

BRAIN FOOD

Last chance for a ban on lethal autonomous weapons? For five years, a United Nations committee has been debating what, if anything, to do about the advent of lethal autonomous weapons systems (or LAWS). These are weapons that, once deployed to an area, can identify, track, and "engage" (read: kill or destroy) targets without any human intervention. Human rights groups along with hundreds of prominent A.I. researchers have called for a ban on such weapons, or at least on the sale of certain classes of them (such as small drones designed primarily to kill people, which they have dubbed "slaughterbots"). The fear is that these could become inexpensive weapons of mass destruction—easily used by terrorists or criminals to murder many thousands of people. Or they could be used to anonymously target select individuals—think judges or witnesses, or political figures—for assassination.

But so far there has been little progress toward any kind of UN agreement restricting the development of such weapons. The problem is that while some 65 countries have now gotten behind an effort to ban autonomous weapons, the United States, United Kingdom, Russia, Israel, France, the Netherlands, and India have all opposed any legally binding treaty, in part because they are interested in developing at least large autonomous weapons that would target aircraft, ships, and tanks—and possibly also smaller autonomous drones that could guard military bases or front-line positions.

But while the UN committee has dithered, small lethal autonomous weapons have already been deployed in combat. A Turkish-made system called Kargu-2 has, according to a UN report, been used in Libya. Israel has also deployed drone swarms to spot targets (which were then struck by more conventional weapons) in its recent war with Hamas in Gaza, and it may have used an autonomous machine gun to assassinate a key figure in Iran's nuclear program. Now the committee's five-year mandate is coming to an end, and its final meeting is taking place in Geneva this week. What are the odds of an agreement?

Not great, says Max Tegmark, the MIT physicist and A.I. researcher who also heads the Future of Life Institute and is a leading campaigner against small autonomous weapons. The problem, he says, is that the big powers don't want to give up technologies they think might be decisive in a future conflict, and that the whole discussion has lumped large autonomous weapons systems in with small ones. It ought to be in the big powers' interest to ban small autonomous weapons, he argues, because slaughterbots will mostly benefit insurgents, terrorists, and other forces interested in upending the current international order, while larger, more expensive autonomous weapons should benefit the status quo powers.

If the UN fails to reach a deal this week, Tegmark hopes the process moves outside of the UN Convention on Certain Conventional Weapons, where the discussion of LAWS is currently taking place. He points out that the land mine ban has been extremely effective even though it was enacted outside the normal conventional weapons process, and despite the fact that the U.S. never formally signed it.
