A.I. IN THE NEWS
A burrito with a side of A.I. Fast-food chain Chipotle is using machine learning to monitor how much guacamole it is using and to predict how much it will need in the future, according to an interview its CEO, Brian Niccol, gave to The Washington Post. Niccol also said the company was using A.I. to help automate interview scheduling for job applicants.
SoftBank invests in robo-advisor Qraft. Masayoshi Son's tech giant is investing $146 million in Korea-based Qraft, a company that uses A.I. to pick stocks, The Wall Street Journal reported. Qraft already manages $1.7 billion for various Asian banks and insurance companies through a series of A.I.-powered exchange-traded funds, mostly listed in the U.S. The company plans to expand its offerings to more asset managers in the U.S. and China.
The U.S. military is increasingly concerned about the threat that small consumer drones pose. That's according to stories in both The Financial Times and The Wall Street Journal. The military is experimenting with a variety of ways to try to disable hobbyist drones—the kind anyone can buy at Walmart and that can be modified to carry explosives or scout for later attacks—but top officials say there is unlikely to be any silver-bullet solution to the problem.
Startup gets its biological "brains on chips" to play Pong. Cortical Labs, an Australian startup that I wrote about in 2020, has succeeded in teaching its cyborg-like system, which combines human biological neurons with software, to play the game Pong. Nikkei Asia said Cortical's mini-brains could pave the way for other biological-silicon hybrids that might yield far more energy-efficient A.I.
EYE ON A.I. TALENT
Intel has hired Jeff Wilcox as an Intel Fellow and chief technology officer for client SoC (system-on-a-chip) architecture in its design engineering group, according to tech publication The Register. Wilcox had been an engineer at Intel before going to Apple, where he headed the company's efforts to build its own custom computer chips, including the M1 chip series and the T2 chip.
Databricks, a San Francisco-based company whose technology helps companies organize large pools of data for A.I., has appointed Naveen Zutshi as its chief information officer, the company said. Zutshi was most recently CIO at cybersecurity firm Palo Alto Networks.
Torch.AI, a company in Leawood, Kan., that makes A.I.-enabled data processing software, has hired Adam Lurie as its chief strategy officer, according to trade publication Datamation. Lurie was previously president of the federal solutions division at Exiger, a New York-based company that sells compliance software.
EYE ON A.I. RESEARCH
DeepMind's AlphaFold almost nailed the shape of Omicron's tricky spike protein. Last year, I wrote a lot about how DeepMind's algorithm AlphaFold, which can take a protein's amino acid sequence and predict the protein's shape with remarkable accuracy, is beginning to reshape biological research. Now Wired's Tom Simonite details how Colby Ford, a computational genomics researcher at the University of North Carolina, used AlphaFold and RoseTTAFold, another free protein-structure prediction algorithm, this one from scientists at the University of Washington, to predict the structure of Omicron's mutated spike protein. Using the A.I. software, Ford was able to race ahead of scientists who were using traditional experimental methods, such as powerful electron microscopes, to determine the protein's actual structure. From the predicted structure, Ford was able to say, several weeks before experimental biologists largely confirmed his conclusions, that existing antibodies, whether created through natural infection or through vaccination, would still likely work against Omicron.
As Simonite writes, A.I.-based predictions are unlikely to completely replace the need for experimentally verified results. But the A.I. algorithms have already had a transformative effect on how scientists research protein structures, helping to guide the experiments they perform. What's more, as Ford's Omicron predictions show, the A.I. software could be invaluable for informing policy and healthcare decisions in the weeks, or sometimes months or years, before experimental data is available.
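For readers curious what this kind of workflow looks like in code, here is a minimal Python sketch. It is purely illustrative: predict_structure is a hypothetical stand-in for a real pipeline such as AlphaFold or RoseTTAFold (which are full research systems, not a one-line API), and the sequence and confidence cutoff below are placeholders, not Ford's actual inputs.

```python
# Illustrative sketch only: predict_structure() is a hypothetical
# stand-in for a real pipeline such as AlphaFold or RoseTTAFold,
# which are full research systems rather than a one-line API.

from dataclasses import dataclass

@dataclass
class PredictedStructure:
    pdb_text: str       # atomic coordinates, e.g. in PDB format
    mean_plddt: float   # average per-residue confidence score (0-100)

def predict_structure(sequence: str) -> PredictedStructure:
    """Hypothetical stub; a real run would invoke the model here."""
    return PredictedStructure(pdb_text="", mean_plddt=0.0)

# Placeholder amino acid sequence for the mutated spike protein
# (truncated and illustrative, not the real Omicron sequence).
omicron_spike = "MFVFLVLLPLVSSQCVNLTTRTQLPPAYTNSF"

result = predict_structure(omicron_spike)
# A pLDDT of roughly 70 is a common rule-of-thumb confidence cutoff.
if result.mean_plddt > 70:
    print("Confident prediction: compare predicted structure against")
    print("known antibody binding sites on the original spike.")
else:
    print("Low confidence: lean on experimental methods instead.")
```

The confidence check mirrors how researchers actually treat these tools: low-confidence regions of a predicted structure are generally considered unreliable and left to experiments, which is exactly the division of labor Simonite describes.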
FORTUNE ON A.I.
Tesla fans start to doubt Elon Musk after latest price hike for Full Self-Driving tech—by Christiaan Hetzner
Sanofi agrees to partnership with A.I.-based drug discovery company Exscientia worth up to $5.2 billion—by Jeremy Kahn
France cracks down on dark patterns, fining Google and Facebook nearly $240 million for manipulative cookie tricks—by David Meyer
Commentary: A.I. could make your company more productive—but not if it makes your people less happy—by Francois Candelon, Su Min Ha, and Colleen McDonald
A surprising victory for "symbolic A.I." and what it may say about the future development of the field. At the big academic A.I. research conference NeurIPS in December, scientists announced the results of a competition, sponsored in part by Facebook parent Meta, to find the A.I. software that could play a classic video game called NetHack best. Originally released in 1987, NetHack is a dungeon-exploration and treasure-hunting game that is notoriously difficult for both humans and computers to master. The big news is that the A.I. system that won the competition didn't use a many-layered neural network. Neural networks, which are loosely based on the human brain, have been responsible for almost all of the stunning advances in A.I. over the past decade. The winning software also did not use reinforcement learning, in which an A.I. system learns from experience how to maximize a reward; that is the technique that has allowed A.I. software to conquer game after game, from Go to poker to Dota 2. Instead, it used an older method called "symbolic A.I.," in which an agent is armed with hard-coded strategies derived from human knowledge about how to play. (This news broke too late to make my NeurIPS 2021 roundup.)
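To make the distinction concrete, here's a toy Python sketch of what a symbolic, rule-based agent looks like in spirit. The game-state fields, rules, and priorities are entirely hypothetical, not the winning entry's actual code; the point is that every bit of the agent's "knowledge" is written down by a human, with nothing learned from data.

```python
# Toy illustration of a symbolic, rule-based agent in the spirit of
# the winning NetHack entry. The state fields, rules, and priorities
# are hypothetical; the point is that all of the agent's "knowledge"
# is hand-coded by a human, and none of it is learned from data.

from dataclasses import dataclass

@dataclass
class GameState:
    health_fraction: float   # current hit points / maximum hit points
    enemy_adjacent: bool
    hungry: bool
    has_food: bool

def choose_action(state: GameState) -> str:
    # Rules fire in priority order, the way a human expert might
    # write down a strategy guide for the game.
    if state.enemy_adjacent and state.health_fraction < 0.3:
        return "flee"        # survival outranks everything else
    if state.enemy_adjacent:
        return "attack"
    if state.hungry and state.has_food:
        return "eat"
    return "explore"         # default: keep searching the dungeon

state = GameState(health_fraction=0.2, enemy_adjacent=True,
                  hungry=False, has_food=False)
print(choose_action(state))  # -> flee
```

A reinforcement learning agent, by contrast, would start with no such rules at all and would have to discover comparable behavior through trial and error, guided only by the game's reward signal.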
The results created quite a stir among the A.I. research community. Ed Grefenstette, an A.I. researcher at Meta, tweeted: "This surprising result should serve as a moment of reckoning for [reinforcement learning] research. Reward may be enough in theory (if only) but an astounding amount of domain knowledge can, and probably must, be exploited in order to tractably solve complex problems."
This drew a quick reply from Grefenstette's Meta A.I. research colleague Yann LeCun, one of the pioneers of deep learning, who tweeted, "Clearly, reward is not enough, and domain knowledge is necessary. The important question is whether domain knowledge can be learned. My answer is a clear yes. And my vote is on some form of non-task-specific self-supervised learning to learn world models."
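LeCun's suggestion is easier to picture with an example. Below is a minimal, illustrative PyTorch sketch of the self-supervised world-model idea, using random toy tensors in place of real game observations: a model learns to predict the next observation from the current observation and the action taken, so the training signal comes from the data itself rather than from any task-specific reward. Actual world-model research is far more elaborate; this shows only the shape of the objective.

```python
# Minimal sketch of self-supervised world-model learning: predict the
# next observation from the current observation and action. The
# training signal comes from the data itself; no reward is involved.
# Toy random tensors stand in for real logged game transitions.

import torch
import torch.nn as nn

obs_dim, act_dim = 16, 4  # toy sizes; real observations are far richer

world_model = nn.Sequential(
    nn.Linear(obs_dim + act_dim, 64),
    nn.ReLU(),
    nn.Linear(64, obs_dim),  # outputs the predicted next observation
)
optimizer = torch.optim.Adam(world_model.parameters(), lr=1e-3)

# Stand-ins for (observation, action, next observation) transitions.
obs = torch.randn(32, obs_dim)
act = torch.randn(32, act_dim)
next_obs = torch.randn(32, obs_dim)

pred = world_model(torch.cat([obs, act], dim=-1))
loss = nn.functional.mse_loss(pred, next_obs)  # self-supervised target
optimizer.zero_grad()
loss.backward()
optimizer.step()
print(f"world-model loss after one step: {loss.item():.4f}")
```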