A.I. IN THE NEWS
DeepMind uses A.I. to predict a structure for almost every protein known to biology. As I reported in last week's special edition of Eye on A.I., DeepMind, the London-based A.I. company that is owned by Alphabet, used its AlphaFold A.I. system to produce predicted structures for almost every protein known to biology. The development is a major advance for basic science and may ultimately accelerate drug discovery and research into cancer and genetic diseases, as well as lead to big advances in agriculture and sustainability.
Palantir extends A.I. contract with U.S. Army. Palantir, the data analytics software company, has extended its contract with the U.S. Army Research Lab in a deal worth just under $100 million over two years. The contract will see Palantir continue to develop A.I. technology for the U.S. Army's combatant commands, according to a company statement. It began working with the U.S. Army Research Lab in 2018.
British supermarket chain under fire for use of facial recognition technology. The Southern Co-Op chain, which has stores throughout the south of England, has been accused by privacy watchdog Big Brother Watch of "Orwellian" and "deeply unethical" uses of facial recognition technology in a complaint the group filed against the supermarket with the U.K. Information Commissioner's Office, my Fortune colleague Alice Hearing reports. The privacy watchdog says the company is harvesting people's biometric data without consent and building opaque "watch lists" of potential shoplifters and others it doesn't want in its stores. The company told Hearing it "would welcome any constructive feedback from the ICO as we take our responsibilities around the use of facial recognition extremely seriously and work hard to balance our customers’ rights with the need to protect our colleagues and customers from unacceptable violence and abuse.”
Accident leads to questions about technology at self-driving truck company TuSimple. The Wall Street Journal reports that government investigators are asking tough questions about TuSimple's A.I.-enabled self-driving trucks after one of the vehicles was involved in a single-vehicle crash on a major highway in April. The paper reported that "An internal TuSimple report on the mishap, viewed by The Wall Street Journal, said the semi-tractor truck abruptly veered left because a person in the cab hadn’t properly rebooted the autonomous driving system before engaging it, causing it to execute an outdated command. The left-turn command was 2 1/2 minutes old—an eternity in autonomous driving—and should have been erased from the system but wasn’t, the internal account said. But researchers at Carnegie Mellon University said it was the autonomous-driving system that turned the wheel and that blaming the entire accident on human error is misleading. Common safeguards would have prevented the crash had they been in place, said the researchers, who have spent decades studying autonomous-driving systems." TuSimple told the paper it has since made modifications to its systems to prevent a similar accident. Nonetheless, the crash is a serious setback for TuSimple and potentially the entire self-driving truck ecosystem.
Artist using OpenAI's DALL-E to redesign city streets. Zach Katz, a Brooklyn, New York-based artist has been feeding images of various streetscapes in the U.S. to DALL-E, the impressive image generation software built by OpenAI, and asking it to reimagine the photographs with streets that are more pedestrian- and public transport-friendly, according to a Bloomberg News story. Side-by-side examples of the original street view and the DALL-E redesigns have gone viral on social media. It's a good example of how DALL-E is becoming a powerful tool for creativity and design work and may be a harbinger of future uses of such technology. OpenAI recently took steps towards offering DALL-E as a commercial product. Previously it was only available to a select group of pilot users for free.
India using A.I. to help keep an eye on endangered tiger populations. The BBC says rangers have begun to use computer vision technology to help automatically catalogue and count tiger images captured by trail cameras deployed throughout the country's tiger reserves and national parks.
EYE ON A.I. TALENT
Brain Corp., the robotics company based in San Diego, CA, has named Michael Spruijt its new chief revenue officer, according to a story in trade publication Robotics Tomorrow. Spruijt was previously Brain Corp.'s senior vice president, international business.
Sigma7, the New York-based cybersecurity and risk services company, has named Jennifer Gold its chief technology officer, the company said in a press release. Gold had previously been a technology consultant to J.P. Morgan Chase & Co, as well as vice president of engineering at AQR Capital Management.
EYE ON A.I. RESEARCH
Teaching A.I. to think about what could go wrong. Reinforcement learning is a powerful way to train A.I. systems, in part because it enables the software to find strategies that humans have never conceived for achieving a given goal. Increasingly, reinforcement learning is making its way into business through more powerful simulators, including so-called digital twins, in which an entire operation (often a factory or warehouse) is simulated.
But a big problem with reinforcement learning is that while it will learn the best strategy for any given situation, it often won't take into account the potential risks if it gets the probabilities wrong and something unexpected happens. For instance, if running a particular machine in a factory at its maximum speed has a 99% chance of resulting in optimal production for the entire factory, but a 1% chance of causing the machine to explode, an A.I. naively trained with reinforcement learning might still think that running the machine at maximum speed was the best strategy—even if the consequences of the machine exploding would be catastrophic. This is also a particular problem in scenarios that are adversarial—where a person or another A.I. is specifically looking to exploit weaknesses in an opposing system. Here the adversary has an incentive to try unusual, low-probability actions in an effort to find the A.I.'s weaknesses.
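The factory example above can be made concrete with a toy calculation. The sketch below is purely illustrative (it is not the algorithm from the DeepMind/University of Alberta paper, and the reward numbers are invented): it compares an agent that picks the action with the highest expected reward against one that guards against the worst case.

```python
# Toy illustration of the factory-machine example from the text.
# Each action maps to a list of (probability, reward) outcomes.
# The numbers are invented for illustration only.
actions = {
    "max_speed": [(0.99, 100.0), (0.01, -500.0)],  # small chance of a catastrophic failure
    "safe_speed": [(1.00, 90.0)],                   # reliable but slightly less productive
}

def expected_value(outcomes):
    """Average reward weighted by probability (the naive RL objective)."""
    return sum(p * r for p, r in outcomes)

def worst_case(outcomes):
    """Lowest reward among outcomes that can actually occur."""
    return min(r for p, r in outcomes if p > 0)

# A naively trained agent maximizes expected reward...
naive_choice = max(actions, key=lambda a: expected_value(actions[a]))

# ...while a robust agent maximizes the worst-case reward.
robust_choice = max(actions, key=lambda a: worst_case(actions[a]))

print(naive_choice)   # max_speed  (0.99*100 + 0.01*(-500) = 94, beating 90)
print(robust_choice)  # safe_speed (worst case 90, beating -500)
```

The naive agent still prefers running the machine at maximum speed because the catastrophe is rare enough not to drag down the average, which is exactly the failure mode the researchers are trying to address.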
Trying to use reinforcement learning to train an A.I. to both find a good strategy and avoid worst case outcomes has been technically difficult. But a group of researchers from DeepMind and the University of Alberta have now come up with a way to make reinforcement learning algorithms more robust to worst case outcomes. They did so by building on some work other researchers had done looking specifically at A.I. trained to play poker, but then generalizing the insights from this to other domains. You can read the research paper, which was presented at the International Joint Conference on Artificial Intelligence in Vienna, here.
FORTUNE ON A.I.
Supermarket chain under fire over its use of ‘Orwellian’ facial recognition technology and ‘secret watch-lists’ to cut crime—by Alice Hearing
Google’s AI chatbot—sentient and similar to ‘a kid that happened to know physics’—is also racist and biased, fired engineer contends—by Erin Prater
Mark Zuckerberg ignores objections, says Instagram will show twice as much A.I.-recommended content by end of 2023—by Chris Morris
A.I. is rapidly transforming biological research—with big implications for everything from drug discovery to agriculture to sustainability—by Jeremy Kahn
Will deep learning ever be able to learn symbolic logic? That question is the subject of heated debate among A.I. researchers, cognitive psychologists, neuroscientists and linguists. In the current issue of Noema, the magazine of The Berggruen Institute, Yann LeCun, a famous pioneer of deep learning and New York University professor who is now the chief A.I. scientist at Meta, and Jacob Browning, a postdoctoral researcher in computer science at NYU who specializes in the philosophy of A.I., provide an overview of the current state of the debate.
The essay has attracted a lot of attention on social media from both sides of the argument. LeCun is known to be in the camp of those who think it is possible that deep learning systems will one day be able to learn symbolic logic, which underpins any real understanding of mathematics, language, and a lot of common sense reasoning. But he is less dogmatic and more circumspect than some other deep learning pioneers such as Geoff Hinton and his former student Ilya Sutskever, now the chief scientist at OpenAI, who are absolutely convinced that simply scaling up today's neural network architectures will be enough to eventually deliver symbolic logic too.
On the other side of the debate are cognitive psychologists such as former NYU professor Gary Marcus and many others who see strong evidence that in people—and to some extent in animals too—symbolic logic is innate, not learned. This camp thinks that the best way to imbue A.I. with symbolic reasoning is to create hybrid systems that combine deep learning for perception with hard-coded symbolic A.I. for a lot of reasoning tasks. Alternatively, they argue that a completely different approach to A.I., other than deep neural networks, will be needed to equal or exceed human intelligence.
Spoiler alert: in the end, LeCun and Browning come down on the side of deep learning and against hybrid approaches. But the essay is an excellent primer on the state of the debate and worth a read and a think.