
Why honeybees may be the key to better robots and drones

May 27, 2022, 4:47 PM UTC

Welcome to May’s special monthly edition of Eye on A.I.

Earlier this week, I watched bees gathering pollen from raspberry canes in my garden. They certainly were industrious, flitting from blossom to blossom. And when they had gathered enough, I watched each bee in turn take off and fly, fast and straight as arrows, over the garden fence, heading in what I assume was the direction of their hive. What does any of this have to do with artificial intelligence?

Quite a lot, according to James Marshall, the co-founder and chief scientific officer of Opteran Technologies, a startup that is creating A.I. inspired by what we know about how bee brains work. Marshall, who is also a professor of theoretical and computational biology with a post in the computer science department at the University of Sheffield, is among those who think a major problem with today’s A.I. is how little it has sought to learn from biological models of intelligence. Neural networks, the software design most responsible for the current A.I. boom, are modeled very loosely on a simplified view of how the human brain works. But Marshall, who has studied bee brains, believes the field would have made faster progress if it had paid more attention to other biological models, particularly animals whose brains are simpler than humans’ and yet possess surprising cognitive abilities.

Marshall says bees, whose brains have only about 1 million neurons compared with the 86 billion or so in the human brain, can do many of the things we want autonomous robots, cars, or drones to do. They can navigate the world, avoid collisions, and escape danger while accomplishing a task. They can explore their environment, construct an accurate mental map of a new space, and complete varied tasks such as gathering nectar from many different kinds of flowers. They can also communicate complex information—such as where the best spot to gather pollen is—to nestmates. There’s even evidence from laboratory experiments that bees possess a kind of conceptual understanding, Marshall says, including the ability to classify objects as similar or different and to grasp some associations between cause and effect. One study found bees can even understand the abstract concept of the number zero.

What’s more, bees do all this while requiring orders of magnitude fewer training examples than a typical neural network trained with deep learning would require. Marshall says the mental model bees build of the world also seems far less likely to fail when presented with unusual “edge cases” than most deep learning models. And the bee’s brain consumes a fraction of the energy machine learning software would require. “All the weaknesses of existing approaches are ticked by this natural intelligence approach,” he says.

Opteran has built an A.I. system based on the way bees think and has programmed it directly into specialized computer chips. Its first use case is helping robots and drones navigate autonomously. Today’s autonomous robots have to perform a task known to computer scientists as SLAM—simultaneous localization and mapping. (If you have a Roomba vacuum cleaner, it performs a version of SLAM as it learns where the walls and furniture typically are in your house.) Although roboticists have made steady progress on creating robots able to perform SLAM more efficiently, it’s still a tricky process that involves a series of complicated computer vision tasks and also requires the software to store a highly precise map. “Real brains don’t do that,” Marshall says. “We don’t store a centimeter accurate map of the environment and use that for navigation.” A typical SLAM solution requires between five and 250 megabytes per square meter mapped, says David Rajan, Opteran’s co-founder and CEO. Opteran’s bee-based A.I. can find a similarly effective navigational solution using just 1 kilobyte of memory, Rajan says.
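The memory gap Rajan describes is easy to put in concrete terms. Here is a back-of-the-envelope sketch in Python using the figures quoted above; the 1,000-square-meter floor area is a made-up example, not a number from Opteran:

```python
# Compare a conventional SLAM map's memory footprint (5-250 MB per
# square meter, per Rajan's figures) with the claimed 1 KB total
# footprint of Opteran's bee-inspired approach.

MB = 1024 * 1024  # bytes in a megabyte

area_m2 = 1_000  # hypothetical warehouse floor area, in square meters

slam_low = 5 * MB * area_m2     # low end of the quoted SLAM range
slam_high = 250 * MB * area_m2  # high end of the quoted SLAM range
opteran = 1 * 1024              # 1 KB total, per the company's claim

print(f"Conventional SLAM map: {slam_low / MB:,.0f}-{slam_high / MB:,.0f} MB")
print(f"Claimed bee-inspired footprint: {opteran} bytes")
print(f"Ratio at the low end: {slam_low // opteran:,}x")
```

Even at the low end of the quoted range, the difference runs to millions of times less memory, which is why the claim is so striking if it holds up in practice.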

The company has experimented with fitting its bee-inspired autonomous control and navigation systems on self-balancing bicycles, wheeled and tracked robots used to inspect blast sites in mining operations, coaxial copter-style drones, industrial robot arms, and even a “Hopper” dog-like robot. It has also adapted its system to do image classification on the classic MNIST dataset, in which an A.I. system must correctly classify handwritten digits. It scores more than 70% accuracy after being shown just one training example of each digit, Marshall says, which he claims is as good as any state-of-the-art supervised deep-learning system at one-shot learning without using any data-augmentation techniques.
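Opteran hasn’t published how its classifier works, but the one-shot setup itself is easy to illustrate. Below is a minimal, hypothetical baseline in Python: store a single exemplar per digit class and label new images by nearest exemplar. The synthetic 784-dimensional “digits” stand in for flattened 28x28 MNIST images; this is not Opteran’s method, just a sketch of what learning from one example per class means:

```python
import numpy as np

# One-shot classification baseline: keep one stored example per class
# and label a new image by its nearest (Euclidean) stored example.

rng = np.random.default_rng(0)

# Stand-in "digit" prototypes: 10 random points in a 784-dim space
# (the size of a flattened 28x28 MNIST image).
prototypes = rng.normal(size=(10, 784))

# The one-shot "training set": a single noisy example of each class.
exemplars = prototypes + 0.1 * rng.normal(size=prototypes.shape)

def classify(image, exemplars):
    """Return the class index of the nearest stored exemplar."""
    dists = np.linalg.norm(exemplars - image, axis=1)
    return int(np.argmin(dists))

# Evaluate on fresh noisy samples drawn from random classes.
trials = 100
correct = 0
for _ in range(trials):
    label = int(rng.integers(10))
    test_image = prototypes[label] + 0.1 * rng.normal(size=784)
    correct += classify(test_image, exemplars) == label

print(f"one-shot accuracy: {correct / trials:.0%}")
```

On this toy data the classes are well separated, so the baseline does very well; real MNIST digits overlap far more, which is what makes 70%-plus accuracy from one example per class a notable result.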

Much of how Opteran’s tech works is still under wraps as the company moves towards obtaining a series of patents. But Marshall says the overall lesson is that if we want A.I. that can operate effectively in the real world, why not draw more inspiration from biological brains? And there are some specific points he thinks A.I. researchers should pay more attention to. For instance, most biological brains are modular: they have specific components innately designed to perform specific functions. Most of today’s neural network architectures don’t really work this way; they are highly general, but this generality also makes them inefficient. Biological brains also have certain hard-coded biases. These have been created through a kind of learning, namely evolution. But, having discovered these effective methods, why start over from scratch? Standard neural network approaches constantly have to “reinvent the wheel,” Marshall says, relearning biases and heuristics that we already know from studying human and animal cognition. Many of these biases are innate, not learned. Why ignore the lessons of nature?

Sure, bees might never get us to artificial general intelligence—the kind of A.I. that can do all the economically useful tasks a human can, as well as or better than we can—but they could help us create much more useful and efficient autonomous machines. All that, and honey too.

And now here’s a bit of A.I. news from the past few days to tide you over until next week’s regular newsletter.

Jeremy Kahn


Anti-COVID drug designed by A.I. reaches pre-clinical stage. Insilico, a Hong Kong-based company that uses A.I. for drug discovery, has created a possible antiviral drug to treat people infected with COVID-19 that is now ready for preclinical testing, which is the stage before human trials begin and often involves testing on animals, tech publication The Register reported. The drug works by targeting and inhibiting 3C-like (3CL) protease, an enzyme involved in the reproduction of SARS-CoV-2, the virus that causes COVID-19. Insilico already has one A.I.-designed drug in initial human clinical trials, a compound designed to treat the chronic lung disease idiopathic pulmonary fibrosis. 

Google unveils an A.I.-powered tool to help people prepare for job interviews. The system, called Interview Warmup, helps candidates prepare for interviews by generating possible interview questions, according to a story from Voice of America (VOA). The chatbot-like interface can hold a simple back-and-forth conversation with the candidate and also offers analysis of and feedback on a person's answers to the interview questions so that they can improve. Right now the tool covers topics relevant to those applying for positions in information technology and support, project management, data analytics, and online sales and marketing. You can check it out here.

Google also debuts a new A.I. system that creates photorealistic images from text prompts—but there's a catch. The software, which Google calls Imagen, is similar to OpenAI's DALL-E and DALL-E-2 software: a user gives it a general text description of the image to generate, say "a cat wearing a black leather jacket sitting on a motorcycle at the beach," and the software can create a selection of such images, including different stylistic versions, ranging from highly photorealistic to cartoonish. "Many of Imagen’s images are indeed jaw-dropping. At a glance, some of its outdoor scenes could have been lifted from the pages of National Geographic. Marketing teams could use Imagen to produce billboard-ready advertisements with just a few clicks," writes MIT Tech Review's Will Douglas Heaven.

But Heaven notes that while both Google and OpenAI tried to screen pornographic imagery out of the training data used to teach these A.I. systems, at least in Google's case it did rely on a large image dataset, LAION-400M, that is known to contain some examples of pornography as well as racist imagery and harmful stereotypes. As such, there's no guarantee Imagen won't have embedded these biases and also produce pornographic or hateful images. That's one reason why Google says it has no immediate plans to publicly release Imagen. And it's also why OpenAI is only releasing DALL-E-2 to a carefully screened small group of users. As Heaven writes: "...image-making AIs have the potential to be world-changing technologies, but only if their toxicity is tamed. This will require a lot more research."
