
Tesla’s A.I. boss is out. That’s bad news for Elon Musk, and a lesson for the rest of us

July 19, 2022, 5:47 PM UTC
Andrej Karpathy, Tesla's head of A.I., announced last week he was leaving the company. His departure is probably not a good sign that Elon Musk is about to deliver on some of Tesla's biggest promises: full self-driving cars and a humanoid robot.
Michael Macor—The San Francisco Chronicle/Getty Images

The biggest news in A.I. this past week was the surprise resignation of Tesla’s Andrej Karpathy. While Karpathy may not be a household name, the 35-year-old neural networks expert was a big deal inside Tesla, where he oversaw the company’s artificial intelligence group.

A founding member of OpenAI, an A.I. research company also cofounded by Tesla CEO Elon Musk, Karpathy was central to Tesla’s efforts to deliver on two of Musk’s most ambitious promises: a fully self-driving car and a humanoid robot.

Karpathy gave no explanation for his sudden departure, which he announced in a tweet, and said that he had “no concrete plans” for what he will do next. While his exit may not necessarily be a sign of major problems within Tesla’s A.I. group, it’s certainly not a vote of confidence in the near-term prospects for either of Musk’s futuristic projects.

Musk, of course, is a master of bold pronouncements (every year since 2014, he has said that full self-driving is just a year away). But the departure of his A.I. lieutenant—with Musk’s techno-promises still seemingly far from being fulfilled—offers a reminder of the reality behind the hype in the field of artificial intelligence.

Self-driving technology, which requires complex decision-making and perception capabilities, is a useful proxy for gauging whether A.I. is on the verge of equaling or exceeding human capabilities. For a number of years, as A.I. researchers conquered milestone after milestone, it seemed like autonomous vehicles were about to become a mainstream technology. In 2019, Musk declared that Tesla would have 1 million robo-taxis on the streets within a year. They never appeared. Although other companies are offering fully autonomous robo-taxis in a few select geographies—a single neighborhood in Phoenix, and parts of San Francisco during the wee hours of the night—most experts think self-driving fleets are still many years from having any significant impact on transportation in most cities.

The problem is that driving is just a much more difficult task than most A.I. engineers imagined. There are too many things that can happen on the road. Predicting and training an A.I. for every possible scenario is a massive challenge, even in a country like the U.S., where the highways are well-marked and most drivers follow the rules. And a lot of the world looks more like India, where cars, motorcycles, rickshaws, donkey carts and even elephants converge in a chaotic, bumper-to-bumper scrum. Human drivers successfully navigate such conditions every day, but that environment is way beyond anything today’s A.I. technology can handle. On top of all that, the advanced sensors and A.I.-based computer vision systems used by autonomous vehicles struggle in rain, fog, and snow. That’s why Cruise and Waymo are testing their robo-taxis in sunny places like Arizona and California (both companies are required to pause operations when there’s heavy rain or fog in San Francisco).

Neither traditional car companies, such as General Motors, which is promising to have self-driving cars on sale by 2025, nor tech companies, such as Apple, which has been trying to develop an autonomous car for almost a decade, have managed to figure all this out yet. Meanwhile, Cruise, the self-driving car company backed by GM, just saw seven cars in its nascent robo-taxi fleet in San Francisco get stuck, bunched up at an intersection, resulting in a major traffic jam.

Humanoid robots, like the “Optimus” robot that Musk has promised to unveil in September, are another sci-fi staple that has seized the imagination of technologists. And amid all the innovative consumer gadgets now available, from drones to augmented reality glasses, the idea of a robot might not seem so far-fetched. But creating a droid that can do a whole range of useful things in your house is in some ways an even more daunting problem than self-driving. Again, the complexity of even the average apartment, with different kinds of furniture and floor surfaces to navigate and hazards ranging from cats to toddlers, plus the large number of possible tasks to learn, quickly outstrips the capabilities of most of today’s most advanced A.I. systems.

It’s not that robotics isn’t making rapid progress, but today’s most capable robots are still specialized for one particular sort of task (picking up objects, say, or moving things around warehouses); they aren’t general-purpose android butlers. While it is possible Musk could surprise the world in September, when he has promised to show off an Optimus prototype, my bet is that we are likely to get yet another Musk stunt that only highlights how far we really are from the Jetsons. Another dancer in spandex? Probably something a bit more convincing than that. A remote-controlled bot, rather than one with truly autonomous capabilities, is within the realm of possibility.

And whether it’s self-driving cars or ambulatory robots, there’s a constellation of legal and regulatory implications that still need to be understood and worked out. The U.S. National Highway Traffic Safety Administration’s investigations of car crashes involving Tesla’s Autopilot technology are just a small preview of the scrutiny to come—and that’s a good thing.

Compared to previous decades, we are living in a golden age of A.I. And I’m excited to see what Karpathy does next. But, notwithstanding what his former boss says, we have not yet stepped into the pages of an Isaac Asimov novel. For now, keep your hands on the wheel and take comfort in the soft whirring sound of your Roomba.

Jeremy Kahn
@jeremyakahn
jeremy.kahn@fortune.com

A.I. IN THE NEWS

A.I. meant to help companies comply with accessibility laws may actually make it harder for the blind to access websites. That's according to a story in The New York Times that examined the issues blind people and disability rights organizations are having with automated web accessibility software. Dozens of companies offer such accessibility solutions whose "pitch is often a reassurance that their services will not only help people who are blind or low vision use the internet more easily but also keep companies from facing the litigation that can arise if they don’t make their sites accessible. But it’s not working out that way. Users say the software offers little help, and some of the clients that use AudioEye, accessiBe and UserWay are facing legal action anyway. Last year, more than 400 companies with an accessibility widget or overlay on their website were sued over accessibility, according to data collected by a digital accessibility provider."

Microsoft has launched a flight simulator just to help train autonomous drones. Microsoft has created Project AirSim, a flight simulator that companies can use to train A.I. software to pilot drones. The simulators are critical because many of the places where real drones are used commercially—around power lines, wind turbines, oil platforms, and other industrial facilities—are too risky for drones to practice autonomous flights. (Most drones in such commercial settings are remotely piloted by a human.) Microsoft's new software contains highly photorealistic simulations of such settings and it allows millions of flights to be simulated in just seconds, speeding up the time it takes to train A.I. pilots. The BBC has more here.
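Project AirSim builds on Microsoft's earlier open-source AirSim simulator. For a flavor of what scripted simulated flight looks like, here is a minimal sketch using the older open-source airsim Python client against a running simulator; Project AirSim's own API may well differ.

```python
# A minimal scripted flight using Microsoft's earlier open-source AirSim
# Python client (pip install airsim). Requires a running AirSim simulator;
# Project AirSim's newer interface may differ from this older API.
import airsim

client = airsim.MultirotorClient()  # connects to the simulator on localhost
client.confirmConnection()
client.enableApiControl(True)
client.armDisarm(True)

client.takeoffAsync().join()
# Fly to a point 20 m up and 40 m out, e.g. to inspect a simulated turbine.
# AirSim uses NED coordinates, so a negative z value means altitude.
client.moveToPositionAsync(40, 0, -20, velocity=5).join()
client.landAsync().join()
client.armDisarm(False)
client.enableApiControl(False)
```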

The U.K. outlines its approach to regulating A.I. In a new white paper, the British government has set out its principles for regulating A.I. Among them are that any regulation should be "context-specific, pro-innovation and risk-based, coherent, and proportionate and adaptable." That all sounds good. But, as always when it comes to regulation, the devil is in the details—in how these broad principles will be implemented in specific rules. You can read the white paper here.

Robots trained on racist language models, unsurprisingly, also exhibit racism. In research from the Georgia Institute of Technology and Johns Hopkins University, computer scientists wanted to see if virtual robots, which had to choose blocks with images of different people's faces projected onto them, had learned racist and sexist labels from an underlying large language A.I. used in the robots' training. (That system was CLIP, A.I. software created by OpenAI that can write captions for images.) Sure enough, the robots responded to words like ‘homemaker’ and ‘janitor’ by choosing blocks with women and people of color. They also always chose Black people when prompted to choose the block depicting a “criminal.” The Washington Post wrote up the research.
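For a sense of how such associations can be probed, here is a minimal sketch using OpenAI's open-source clip package. This is an illustration only, not the researchers' actual code, and the image file names are hypothetical placeholders: it scores a set of face photos against descriptive labels and reports which photo the model ranks highest for each label.

```python
# A toy bias probe (not the study's actual code): score face photos against
# descriptive labels with OpenAI's open-source CLIP model and see which
# photo the model ranks highest for each label. File names are hypothetical.
import clip
import torch
from PIL import Image

device = "cuda" if torch.cuda.is_available() else "cpu"
model, preprocess = clip.load("ViT-B/32", device=device)

labels = ["a photo of a homemaker", "a photo of a janitor", "a photo of a doctor"]
image_paths = ["face_01.jpg", "face_02.jpg", "face_03.jpg"]  # hypothetical files

images = torch.stack([preprocess(Image.open(p)) for p in image_paths]).to(device)
text = clip.tokenize(labels).to(device)

with torch.no_grad():
    image_features = model.encode_image(images)
    text_features = model.encode_text(text)

# Cosine similarity between every (photo, label) pair.
image_features /= image_features.norm(dim=-1, keepdim=True)
text_features /= text_features.norm(dim=-1, keepdim=True)
similarity = image_features @ text_features.T  # shape: (n_images, n_labels)

# Systematic skews in which faces "win" each label, measured over many
# photos, are the kind of learned bias the study looked for.
for j, label in enumerate(labels):
    best = similarity[:, j].argmax().item()
    print(f"{label!r} -> {image_paths[best]}")
```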

EYE ON A.I. TALENT

Tractable, the London-based A.I. company that uses computer vision to help insurance companies process claims, has hired Mohan Mahadevan as its chief science officer, the company said in a press release. He had previously been senior vice president of applied science at identity verification software company Onfido.

EYE ON A.I. RESEARCH

An A.I. that learns physics like a human child. DeepMind, the London-based artificial intelligence company that is part of Alphabet, has created A.I. software that learns the kind of rules of physics that we as humans all learn as tiny children—that gravity exists, that if you throw a ball it will follow a kind of arcing trajectory, that you can't run through a solid object. While this seems simple, it isn't, and past A.I. systems have struggled to learn these rules of the natural world solely by observation or even from experience in a simulation. DeepMind's system, which it calls PLATO (short for Physics Learning through Auto-encoding and Tracking Objects), learns by watching videos of simple interactions. The system categorizes the world into a series of objects and "makes predictions about where objects will be in the future based on where they've been in the past and what other objects they're interacting with." The system performed better on a series of benchmark tests—which DeepMind also developed and is now open-sourcing—than other A.I. systems that did not use this "object-based" approach. And DeepMind found that the system could learn simple, intuitive physics from as little as 28 hours of video. DeepMind says it believes the result shows that an object-based approach, which is an idea taken from cognitive and developmental psychology about how human infants process the world, may be the key to giving A.I. systems a common-sense understanding of physics. The research was published in the scientific journal Nature Human Behaviour.
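To make the "object-based" idea concrete, here is a deliberately tiny sketch of the violation-of-expectation test at the heart of such benchmarks. It uses a toy constant-velocity predictor, not DeepMind's learned model: predict where a tracked object should be next, and treat a large prediction error as "surprise."

```python
# A toy illustration (not DeepMind's PLATO) of object-based
# "violation of expectation": predict each tracked object's next position
# from its recent motion, and register surprise when the actual
# observation deviates sharply from the prediction.
import numpy as np

def predict_next(track: np.ndarray) -> np.ndarray:
    """Constant-velocity guess: last position plus last displacement."""
    return track[-1] + (track[-1] - track[-2])

def surprise(track: np.ndarray, observed: np.ndarray) -> float:
    """Prediction error; a large value means 'physics was violated'."""
    return float(np.linalg.norm(observed - predict_next(track)))

# A ball following a plausible arc vs. one that teleports mid-flight.
arc = np.array([[0.0, 0.0], [1.0, 1.8], [2.0, 3.2]])
print(surprise(arc, np.array([3.0, 4.2])))   # small: consistent motion
print(surprise(arc, np.array([9.0, -5.0])))  # large: impossible jump
```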

FORTUNE ON A.I.

Elon Musk’s Neuralink brain computer startup is beat again. This time a competitor implanted its device into its first U.S. patient—by Alena Botros

Is A.I. the next technical revolution coming to shake up your workplace? These leaders think so—by Alicia Adamczyk

From search engines to predicting protein structures, A.I. increasingly drives innovation—and raises ethics questions—by Marco Quiroz-Gutierrez

BRAINFOOD

Could A.I. make it impossible for submarines to remain hidden beneath the waves? That is the question posed by a story in the tech publication IEEE Spectrum. Navies around the world are using a combination of new sensors and A.I. software to detect submarines, making it harder and harder for these vessels to retain the stealthy capabilities that make them so valuable. "Submarines can now be detected by the tiny amounts of radiation and chemicals they emit, by slight disturbances in the Earth’s magnetic fields, and by reflected light from laser or LED pulses. All these methods seek to detect anomalies in the natural environment, as represented in sophisticated models of baseline conditions that have been developed within the last decade, thanks in part to Moore’s Law advances in computing power," the story says. Also helping to gather this new data are aerial and underwater drones that can operate autonomously for days or weeks at a time. There is also now persistent satellite surveillance, using an array of sensors, of swaths of the ocean that were once covered by a satellite only a few times a day. But the key is searching through all this new and varied sensor data for subtle patterns that are the signature of a sub on the move—and that's where A.I. comes in.
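At its simplest, that pattern-hunting boils down to anomaly detection against a baseline model of the environment. Here is a deliberately minimal sketch of that statistical skeleton, an illustration only; real systems model ocean conditions with vastly more sophistication.

```python
# A toy sketch of the baseline-anomaly idea (illustration only): model
# "normal" sensor readings for a patch of ocean, then flag readings that
# sit more than a few standard deviations outside that baseline.
import numpy as np

rng = np.random.default_rng(0)
baseline = rng.normal(50.0, 2.0, size=10_000)  # e.g. magnetometer readings, nT
mu, sigma = baseline.mean(), baseline.std()

def is_anomalous(reading: float, threshold: float = 4.0) -> bool:
    """Flag readings far outside the modeled baseline conditions."""
    return abs(reading - mu) / sigma > threshold

print(is_anomalous(51.0))  # ordinary fluctuation -> False
print(is_anomalous(62.0))  # far outside baseline -> True
```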
