
Why we need R2-D2 if we want autonomous ships, factories, and more

May 17, 2022, 5:05 PM UTC

We’re going to need R2-D2.

That’s what popped into my head when I heard the news earlier this week that an autonomous ship trying to make a first-of-its-kind crossing of the Atlantic Ocean had suffered a fault in a simple electrical switch that forced it to detour to the Azores for repairs. An earlier Atlantic crossing attempt by the Mayflower Autonomous Ship, named after the vessel that carried the Pilgrims to America in 1620, had to be aborted last year after an exhaust pipe broke.

The problems that have befallen the Mayflower, a project of a maritime non-profit organization called ProMare with technological support from IBM, are telling: So far, the ship’s sophisticated A.I. software, which it uses to autonomously navigate the ocean, has held up just fine. It is the physical stuff that has broken. But with no human onboard, these mechanical and electrical issues have been enough to keep the Mayflower from achieving its goal.

These challenges ought to be ringing alarm bells for the global maritime industry, which has been investing to try to make autonomous ships a reality, as well as for the U.S. Navy, which has said it plans to field a range of uncrewed vessels to complement its existing fleet. A few autonomous ships are already operating relatively close to shore, including an autonomous passenger ferry that has been trialed in Finland and a small autonomous cargo ship in Norway. Earlier this week an autonomous ship in Japan made a 500-mile journey, piloted only by A.I., in what may have been the first successful commercial test of such a system. In 2020, the U.S. Navy also said it sent an autonomous cargo vessel 4,000 miles from the Gulf of Mexico, through the Panama Canal, and up the Pacific Coast to California, with 98% of the journey controlled only by A.I. software. The commercial appeal of such vessels is clear: Crew costs account for about 40% to 45% of daily operating expenses for most ocean-going freight vessels, according to 2021 data from Moore, a global accounting and consulting firm that tracks industry costs. And human error is believed to be responsible for up to 95% of marine accidents, according to industry figures. So shipping companies have good reasons to want to automate their vessels.

But the vision of A.I.-captained vessels plying the seas, with humans only monitoring them remotely, may ultimately founder for this simple reason: “Boats always break,” Brett Phaneuf, co-founder of ProMare and managing director of the Mayflower Autonomous Ship project, told me back in November, reflecting on his autonomous vessel’s first aborted journey. And if they break and there’s no one onboard to fix them while underway, the ship may be rendered literally dead in the water, perhaps thousands of miles from the nearest land. At the very least, it will have to make a time-consuming detour to a port, even for a repair that in the past could have easily been completed by an onboard engineer or sailor. The U.S. Navy has already experienced a form of this problem. The U.S. Government Accountability Office found that “the Navy’s attempts to reduce crew sizes on crewed ships through increased automation, called optimal manning, resulted in large increases to maintenance costs when the automated systems failed to work as intended, ultimately leading the Navy to assigning additional crew to its ships.”

So autonomous ships might not really happen until we have some kind of general “mechanic” robot like, well, R2-D2. In Star Wars, R2 is an “astromech droid” whose job is to help repair a starfighter or larger starship if it is damaged in the field. The droid can deploy a wide range of mechanical tools—a bit like a Swiss Army knife on rollers—as well as a laser for welding. It can also serve as a backup navigational system in case a starship’s main computer is damaged.

Until we have something like R2-D2, it seems to me that the future of autonomous seafaring is far more limited than the marketing departments at companies like Rolls-Royce, BAE Systems, and Wärtsilä, all of which are working to build the technology for autonomous ships, would have us believe. And while there have been some attempts to create robots that can inspect and make simple repairs to aircraft or fix potholes, we are still a long way from Star Wars’ multitasking droids. Sailors, who have been at the heart of global commerce for thousands of years, probably have job security for quite a while yet.

The same lesson applies to other highly automated things we want to build using A.I. software, such as “dark customer fulfillment centers”: warehouses without any human employees, where A.I. software and logistics robots are used to pick and pack customer orders in industries ranging from e-commerce to groceries. To a cost-conscious CFO or an efficiency-obsessed chief logistics officer, these things sound great. But what happens if one of the key conveyor belts breaks? Or if there is a fire? (The highly automated warehouses run by the British online grocer and tech provider Ocado have suffered a number of significant fires in recent years, with some fire safety experts saying that their design complicates efforts to extinguish the blazes.)

Logistics robots can’t carry out repairs or act as robotic firefighters. Maintenance, fixing stuff, and dealing with emergencies are still areas where we need to rely primarily on skilled humans. The way we build our automated systems should take that into account. Software may well be eating the world, but there are some things in the physical world that aren’t so easy to ingest.

Jeremy Kahn


DeepMind faces class-action lawsuit for mishandling patient data. The London-based A.I. research company, owned by Google-parent Alphabet, is being sued for mishandling private patient data in a class-action lawsuit brought by the law firm Mishcon de Reya, TechCrunch reports. The lawsuit is being financed by Litigation Capital Management, an Australian firm that seeks to profit from lawsuits, and it pertains to DeepMind's work with a U.K. hospital to develop an app to alert medical staff to patients at risk of developing acute kidney injury. As part of that work, the hospital illegally transferred 1.6 million patient records to DeepMind, the U.K.'s Information Commissioner's Office found in 2017. The team that worked on the app was later moved to Google's Health unit, which has since been dissolved.

U.S. cities are backing off bans on facial recognition in the face of rising crime and increased lobbying from software vendors. Reuters reports that Virginia became the first state to repeal a ban on facial recognition technology and that California, as well as the city of New Orleans, may soon follow suit. Meanwhile, efforts to impose bans in places ranging from New York State and Colorado to West Lafayette, Ind., have met stiff lobbying from companies that sell the technology. Surging crime rates in many places have also contributed to police and politicians saying the technology might be useful, despite concerns that the A.I. software is riddled with racial bias and can lead to false arrests. It has also helped that federal agencies, such as the National Institute of Standards and Technology (NIST) and the Department of Homeland Security, have said vendors have made progress in eliminating racial disparities in how the technology performs.

U.S. government cautions companies against using A.I. software to screen candidates and make hiring decisions. The U.S. Department of Justice and the Equal Employment Opportunity Commission (EEOC) jointly issued a warning to employers that they risked violating the Americans with Disabilities Act if they used algorithms or A.I. software to aid in hiring decisions and those systems disadvantaged disabled candidates, CNBC reported. Kristen Clarke, the assistant attorney general for civil rights at the Department of Justice, told NBC News there is “no doubt” that increased use of the technologies is “fueling some of the persistent discrimination.”

A.I. startup created by DeepMind, LinkedIn co-founders raises $225 million. Inflection AI, which was founded by DeepMind co-founder Mustafa Suleyman and LinkedIn co-founder Reid Hoffman, revealed in an SEC filing that it has raised $225 million from an undisclosed group of investors. The valuation of the funding round was also not revealed. Suleyman has said previously that the Palo Alto, Calif.-based Inflection is focused on building new ways for humans to interact with and give instructions to computers, without having to simplify their requests. TechCrunch was the first to spot the filing.


Isomorphic Labs, the new London-based Alphabet-owned company that was spun out of DeepMind and plans to use A.I. for drug discovery, announced several new hires to join DeepMind co-founder and CEO Demis Hassabis, who is also serving as Isomorphic's CEO. Miles Congreve will be the company's chief scientific officer. He was previously in the same role at biotech company Sosei Heptares. Sergei Yakneen will be its chief technology officer. He was previously in a similar role at A.I.-enabled genetics screening company Sophia Genetics. Max Jaderberg will be director of machine learning. He had been a researcher at DeepMind. And Alexandra Richenburg will be director of people operations. She had been in the same role at A.I. company Eigen Technologies.

Google has hired former Food and Drug Administration official Bakul Patel to be its new senior director of digital health strategy, according to a story in The Verge. Patel had been the FDA's chief digital health officer of global strategy and innovation.


DeepMind researcher claims the company's newest huge A.I. system is a big step towards AGI. Others are less sure. This past week DeepMind unveiled Gato, a huge A.I. system that is, without any retraining, capable of performing some 600 very disparate tasks—from acting as a chatbot to manipulating a robot arm to playing Atari games—many of them at levels equal to or above human experts. When you consider that one definition of artificial general intelligence (AGI)—the Holy Grail of the entire field, the kind of A.I. we know from science fiction—is a single piece of software that can perform almost any economically useful task as well as or better than a human expert, it would seem Gato might be getting kind of close. At least, one of its creators thinks so. Nando de Freitas, DeepMind's research director, said in a series of tweets: "My opinion: It’s all about scale now! The Game is Over! It’s about making these models bigger, safer, compute efficient, faster at sampling, smarter memory, more modalities, INNOVATIVE DATA, on/offline...Solving these scaling challenges is what will deliver AGI. Research focused on these problems, eg S4 for greater memory, is needed. Philosophy about symbols isn’t. Symbols are tools in the world and big nets have no issue creating them and manipulating them..."

But others retorted that systems like Gato, for all their brilliance, seem to fail in ways that humans never would—and in ways we, as humans, would not expect an intelligent agent with these kinds of capabilities to fail. For instance, DeepMind wanted to show that Gato could perform remarkably well at "zero-shot" learning: performing a new task for which it had not been trained. One of the zero-shot tasks they tried was photo captioning. But Gato only sometimes got the captions right and frequently got key aspects of the image wrong or showed that it did not really understand key concept categories. Gary Marcus, who has been a leading critic of today's "just keep scaling it up" deep-learning-only approach to reaching AGI, seized on this, among other criticisms, in a long riposte to Gato and de Freitas on his Substack. And de Freitas's DeepMind colleague, machine learning researcher Murray Shanahan, is also skeptical, tweeting: "My opinion: Maybe scaling is enough. Maybe. And we definitely need to do all the things @NandoDF lists. But I see very little in Gato to suggest scaling alone will get us to human-level generalisation. It falls so far short. Thankfully we're working in multiple directions."


Inside Google’s epic quest to bring its 165,000-person workforce back to the office—by Beth Kowitt

SoftBank’s billionaire founder is counting on an IPO for chip firm Arm to offset his Vision Fund’s $27 billion losses. That’s upping the pressure on Arm’s new CEO—by Jeremy Kahn

‘The most terrifying thing I’ve ever seen’: Europe’s proposed online child sexual abuse crackdown slammed by cybersecurity experts and privacy activists—by David Meyer

Google is giving its search engine and maps major updates. Here are 3 key takeaways about what’s coming—by Jonathan Vanian


A.I. and Chaos Theory. Can an A.I. predict the inherently unpredictable? Apparently, unbelievably, the answer seems to be yes. Last week, I caught up with Gary Kazantsev, head of quant technology strategy in the office of the CTO at Bloomberg, the financial news and data company. (Full disclosure: I worked at Bloomberg News for eight years before coming to Fortune.) Kazantsev used to run Bloomberg's machine learning engineering group and, in addition to his current Bloomberg role, teaches machine learning at Columbia University in New York. In future issues of Eye on A.I., I plan to tell you more about what Kazantsev told me about how Bloomberg is using A.I. to make it easier for its customers to access the vast amount of information and data available on the company's "terminal"—once a standalone piece of hardware, now a virtual terminal that exists in software and runs in the cloud. But today I just want to give a hat tip to Gary for pointing out to me a mind-blowing research paper from last year that I missed. It is about chaotic systems, ones that are inherently unpredictable because small changes in input variables can produce vast, and seemingly random, differences in output.

The paper was authored by Alexander Haluszczynski, from the department of physics at Ludwig-Maximilians University in Munich, and Christoph Räth, from the Institute for Materials Physics in Space at the German Aerospace Center in Wessling, Germany, and published in the journal Scientific Reports. The researchers took a system that should have been chaotic—i.e., completely unpredictable—and, using only past data of how that system behaved, with no knowledge of the exact mathematical dynamics, trained a neural network to control the behavior of the system by adjusting its inputs. This ought to be impossible, and yet, it seemingly worked. "I am still thinking about this," Kazantsev told me. And he is a lot smarter than I am.
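To get an intuition for the idea, here is a toy sketch—not the authors' actual method, and far simpler than their setup—of controlling a chaotic system from data alone. Everything in it is my own illustrative choice: the system is the logistic map in its chaotic regime, the "learned model" is a simple least-squares fit on polynomial features rather than a neural network, and the control law is derived from that fitted model. The point it demonstrates is the one from the paper: the controller never sees the system's equations, only recorded behavior.

```python
import numpy as np

rng = np.random.default_rng(0)
r = 3.9  # logistic map parameter in the chaotic regime

def system_step(x, u=0.0):
    """The true dynamics, unknown to the learner: x' = r*x*(1-x) + control u."""
    return r * x * (1 - x) + u

# 1. Collect observations: random states nudged by small random control inputs.
xs = rng.uniform(0.05, 0.95, 2000)
us = rng.uniform(-0.05, 0.05, 2000)
nexts = system_step(xs, us)

# 2. Fit a surrogate model x' ~ a*x + b*x^2 + c*u + d purely from the data
#    (ordinary least squares; the learner never sees the formula or r).
A = np.column_stack([xs, xs**2, us, np.ones_like(xs)])
(a, b, c, d), *_ = np.linalg.lstsq(A, nexts, rcond=None)

# 3. Control: at each step, choose the u for which the surrogate
#    predicts the target state, keeping perturbations small.
target = 1 - 1 / r  # an unstable fixed point of the uncontrolled map
x = 0.3
for _ in range(50):
    u = (target - a * x - b * x**2 - d) / c
    u = float(np.clip(u, -0.05, 0.05))
    x = system_step(x, u)

print(round(x, 3))  # the controlled state settles near the target, ~0.744
```

Without the control input, the same trajectory would wander chaotically forever; with tiny data-derived nudges, it parks on an unstable fixed point. The paper tackles a far harder version of this—higher-dimensional systems and richer target behaviors, learned with machine learning—but the "data in, control out, no equations" structure is the same.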

The paper's authors point out several real-world cases in which such a neural network might be helpful, including preventing rocket engines from developing critical combustion instabilities and better personalizing the firing rate of pacemakers, which today are set simply to keep the diastolic interval constant—not what happens in a normally beating heart. (It is just that heart rhythms are too seemingly chaotic and individual to actually try to reproduce.) Kazantsev says he thinks modeling financial markets might well be another one. As Haluszczynski and Räth write, "our machine learning enhanced method allows for an unprecedented flexible control of dynamical systems and has thus the potential to extend the range of applications of chaos inspired control schemes to a plethora of new real-world problems."
