A.I. is letting this company build a new kind of defense contractor

July 27, 2021, 6:51 PM UTC

If you doubt that A.I. is creating opportunities for new players to disrupt whole industries, look no further than Shield AI, which might just become the 21st century’s first new, big-time defense contractor.

The San Diego-based company was co-founded by ex-Navy SEAL Brandon Tseng. He served two tours in Afghanistan as well as other deployments in Asia and the Pacific, and says he saw firsthand soldiers risking—and sometimes losing—life and limb when entering compounds and buildings with little information about what lay around the next corner or behind the next door.

He knew the right kind of reconnaissance technology could save lives. After an MBA at Harvard, he co-founded Shield AI along with his brother Ryan, a tech executive who had previously founded a wireless charging company acquired by Qualcomm, and Andrew Reiter, an engineer who had worked on computer vision applications for Draper Labs, the non-profit advanced engineering company in Cambridge, Massachusetts.

Shield’s first product is a small quadcopter drone, Nova2, that can be easily carried by a small infantry unit and deployed as a digital scout, even inside crowded apartment complexes or underground tunnels. Because Tseng knew that more sophisticated opponents were increasingly turning to GPS and radio jamming to protect themselves, Shield designed the Nova2 to operate autonomously, without the need for a human to control it remotely or for a constant connection to some distant datacenter. It also doesn’t need a GPS signal to navigate and map its environment.

Fleets of Shield’s drones can also operate in a swarm, coordinating with one another. That may be critical for keeping watch over a large base or mapping out a large set of buildings. And while the drones Shield currently makes aren’t armed, Tseng tells me the company has no objection to having weapons on board in the future. A single drone can always be shot down. A drone swarm, on the other hand, might be able to quickly overwhelm an enemy’s air defenses.

Shield’s Nova2 drones have been used by U.S. Special Operations Command since 2018, including in combat. That’s not bad for a startup defense contractor. But what really impressed me about Shield, and why I think it might just become the next big defense contractor, is the company’s vision. Incumbent defense contractors tend to think of their products as silos, each centered on a big piece of hardware designed to meet a particular customer spec, often with bespoke software designed from scratch just for that product. That’s not how Shield thinks, Tseng tells me. (Shield has also done some interesting things to keep its workforce connected and motivated during the pandemic, as my colleague S. Mitra Kalita chronicled here.)

Whether it is a small team of Army Rangers, a sortie of sophisticated F-18 fighter-bombers, or a Navy destroyer trying to get close to a disputed island chain, these units face a common overarching dilemma: the increasing need to operate in what are called “denied environments.” These are areas where, through some combination of terrain, digital and electronic warfare, and sophisticated weapons systems, the enemy can prevent U.S. forces from operating without incurring prohibitive losses. “The premise of the U.S. military that it can project power and maneuver in the environment freely, for the first time since World War II, that has been challenged by Russia and China,” Tseng says.

The solution to denied environments, Tseng says, is autonomy. Unmanned systems that can operate on their own, even when communication and navigation are disrupted, rely on a common core of A.I. know-how that needs to be implemented across a wide range of military hardware and scenarios. This A.I.-first, A.I.-centric approach is not what one finds at Lockheed Martin or Raytheon.

Shield intends to bring this kind of autonomy to ever-larger “platforms.” “We want to climb the unmanned systems food chain,” Tseng says. And Shield plans to get there fast, both through organic growth and acquisitions. The key is figuring out which elements can be shared across these platforms, such as algorithms for autonomously mapping and navigating the world, and which are different.

To that end, last week the company announced it is buying Heron Systems, a small Virginia- and Maryland-based defense contractor that burst onto the scene last year when its software won a series of simulated dogfights against top human fighter pilots. Heron is known for its use of reinforcement learning, in which A.I. software learns from experience, usually in a simulation, rather than from labeled data. Shield already relies on sophisticated simulators and reinforcement learning to train its quadcopter software, but what attracted it to Heron, Tseng says, was Heron’s know-how in simulating the physics of fighter planes and other larger, fixed-wing aircraft, as well as the combat environments they operate in and the threats they face.
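For readers who want a more concrete picture of what “learning from experience in a simulation” means, here is a minimal sketch of tabular Q-learning on a toy grid world, written in Python. The environment, rewards, and hyperparameters are invented purely for illustration; systems like Heron’s reportedly use far more sophisticated deep reinforcement learning inside high-fidelity flight simulators, but the basic loop of acting, observing a reward, and updating a policy is the same idea.

```python
# Minimal, illustrative reinforcement learning: an agent learns a policy purely
# from simulated experience (trial, error, and reward), not from labeled data.
# The grid world, rewards, and hyperparameters are made up for demonstration.
import random

GRID = 5                  # 5x5 grid world
GOAL = (4, 4)             # reaching this cell ends the episode with a reward
ACTIONS = [(0, 1), (0, -1), (1, 0), (-1, 0)]  # right, left, down, up

def step(state, action):
    """Simulated environment: apply an action, return (next_state, reward, done)."""
    x, y = state
    dx, dy = action
    nx = min(max(x + dx, 0), GRID - 1)
    ny = min(max(y + dy, 0), GRID - 1)
    if (nx, ny) == GOAL:
        return (nx, ny), 1.0, True     # positive reward for reaching the goal
    return (nx, ny), -0.01, False      # small step penalty encourages short paths

# Q-table maps (state, action) pairs to estimated long-term reward.
Q = {((x, y), a): 0.0 for x in range(GRID) for y in range(GRID) for a in ACTIONS}
alpha, gamma, epsilon = 0.1, 0.95, 0.1   # learning rate, discount, exploration

for episode in range(2000):
    state = (0, 0)
    for _ in range(100):                 # cap episode length
        # Epsilon-greedy: mostly exploit the best known action, sometimes explore.
        if random.random() < epsilon:
            action = random.choice(ACTIONS)
        else:
            action = max(ACTIONS, key=lambda a: Q[(state, a)])
        next_state, reward, done = step(state, action)
        # Q-learning update: nudge the estimate toward reward + discounted future value.
        best_next = max(Q[(next_state, a)] for a in ACTIONS)
        Q[(state, action)] += alpha * (reward + gamma * best_next - Q[(state, action)])
        state = next_state
        if done:
            break

# After training, the greedy policy walks from the start to the goal.
state, path = (0, 0), [(0, 0)]
while state != GOAL and len(path) < 20:
    action = max(ACTIONS, key=lambda a: Q[(state, a)])
    state, _, _ = step(state, action)
    path.append(state)
print("Learned path:", path)
```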

Of course, there’s a big debate about exactly how disruptive A.I. will be to warfare. A provocative recent blog post by Jack McDonald, a professor of war studies at King’s College London, argued that rather than upsetting the balance of power, the technology might just be a wash. Major powers will all invest in A.I. technology, meaning no one may gain a decisive upper hand. Combat, especially against non-state actors, is likely to be driven increasingly into urban areas, where military forces are more difficult to spot—even with technology like Shield’s Nova drones—and even harder to take out without incurring unacceptable levels of civilian casualties.

Tseng’s retort to this: even if it’s true, the U.S. military can’t afford not to invest in A.I. capabilities. “In this contest, you don’t want to be second place,” he says.

That’s a lesson about A.I. that business probably needs to take to heart too.

***

Before we get to the rest of this week’s A.I. news, do you want to learn more about how A.I. is disrupting your industry and its potential to transform your company? You’ll find plenty of insights at the inaugural Fortune Brainstorm A.I. conference, coming up November 8–9 at The Ritz-Carlton Boston. You’ll hear cutting-edge case studies from senior executives who are using A.I. and meet the C-suite leaders, big tech companies, startups, and Fortune 500 businesses leading the charge in developing and deploying A.I. You can see the program and learn more about the event here. We’d love to see you there! Apply here to attend.

Jeremy Kahn
@jeremyakahn
jeremy.kahn@fortune.com

A.I. IN THE NEWS

Alphabet launches a new industrial robotics company, Intrinsic. The Google parent has created a new "other bets" company, called Intrinsic, that is working on software that will give all sorts of industrial robots "the ability to sense, learn, and automatically make adjustments as they’re completing tasks so they work in a wider range of settings and applications,” according to a company blog post. Wendy Tan White, who had been a vice president in Alphabet's X division, where the idea for Intrinsic was incubated, has been appointed CEO of the new company. My Fortune colleague Emma Hinchliffe, who broke the news of Tan White's appointment, has more on her plans for the company here.

DeepMind publishes A.I.-based predictions for the shapes of all human proteins and those of 20 other key organisms. More news from an Alphabet-owned company, this time DeepMind. Last week, the London-based A.I. research shop published the details of AlphaFold, its A.I. system that can predict the three-dimensional structure of proteins from their genetic sequences, in many cases doing so as accurately as the best experimental methods. It also open-sourced AlphaFold's code so others can use it. This week, in a giant leap for biology, the company used AlphaFold to predict the structure of more than 350,000 proteins, including the entirety of the human proteome and those of 20 other organisms, and published the results. The data dump doubles the number of proteins for which structural information of some sort is available. What's more, DeepMind plans to eventually publish AlphaFold's predictions for more than 100 million proteins. The new AlphaFold protein database, which is hosted by the European Bioinformatics Institute of the European Molecular Biology Laboratory, is freely available for scientists to search. Some researchers said the ability to compare large groups of proteins would open up "new vistas" in biology. Others said it was likely to help with drug discovery. I have more on the announcement in Fortune here.

Instacart is teaming with Fabric to offer automated fulfillment solutions to grocers. Instacart, the grocery delivery service, is going to begin offering automated online order fulfillment to U.S. and Canadian grocers. And it has signed a partnership with Fabric, a company that makes warehouse robots and the software to run them, according to this story in Grocery Dive.

Clearview AI scores another $30 million in venture capital despite controversy. The facial recognition company, which is the subject of several lawsuits, government probes on multiple continents, and cease-and-desist notices for allegedly scraping social media sites in violation of their terms and conditions, has nonetheless managed to raise an additional $30 million in a Series B VC funding round, according to The New York Times. The investors, which the company's CEO Hoan Ton-That said include "institutional investors and private family offices," have decided, however, to keep their names private. In the past the company raised money from, among others, tech billionaire Peter Thiel and Hal Lambert, who created the MAGA ETF, a fund that says it invests in companies that "align with Republican beliefs," The Times noted.

Neural networks could be used to hide malware. That is what a trio of researchers from the University of the Chinese Academy of Sciences suggests in a new research paper. According to a story on the research in Vice, not only could the researchers hide malware within the "hidden layers" of a large neural network, but the presence of the malware also did very little to degrade the A.I. system's performance on the task for which it was trained. Security researcher Lukasz Olejnik told Vice that detecting the malware "would not be simple," but he also downplayed how useful the attack would be in practice, since accessing the neural network in the first place would probably require a hacker to have already penetrated a system, at which point they might not really need to hide malware in the A.I. software at all.
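To get a feel for why stashing a payload inside a network's weights barely dents its accuracy, consider this toy sketch. It is not the researchers' actual method, just a generic illustration of steganography in model parameters: arbitrary bytes are written into the least-significant byte of each 32-bit floating-point weight, perturbing every weight only slightly. The array size and payload below are invented, and the code assumes a little-endian machine.

```python
# Toy illustration of hiding arbitrary bytes inside neural-network weights by
# overwriting the lowest-order byte of each 32-bit float. This is a generic
# steganography sketch, not the paper's technique; the payload and tensor size
# are invented. Overwriting only the least-significant mantissa byte changes
# each weight by at most a few parts in 100,000 of its magnitude, which is why
# the model's accuracy barely moves.
import numpy as np

rng = np.random.default_rng(0)
weights = rng.normal(size=10_000).astype(np.float32)   # stand-in for a weight tensor
payload = b"hello, hidden payload"                      # stand-in for arbitrary data

def embed(weights, payload):
    """Write one payload byte into the last byte of each float32 weight (little-endian)."""
    raw = weights.copy().view(np.uint8).reshape(-1, 4)  # 4 bytes per float32
    if len(payload) > raw.shape[0]:
        raise ValueError("payload too large for this tensor")
    raw[:len(payload), 0] = np.frombuffer(payload, dtype=np.uint8)
    return raw.reshape(-1).view(np.float32)

def extract(weights, length):
    """Read the hidden bytes back out of the weights."""
    raw = weights.view(np.uint8).reshape(-1, 4)
    return raw[:length, 0].tobytes()

stego = embed(weights, payload)
print(extract(stego, len(payload)))                     # b'hello, hidden payload'
print("largest change to any weight:", np.abs(stego - weights).max())
```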

Baidu's new massive language model tops the natural language understanding league table. The Chinese Internet giant has created Ernie 3.0, a language model that has now achieved the highest score on SuperGLUE, a benchmark of natural language understanding. What's interesting about Ernie 3.0 is that, unlike OpenAI's GPT-3, for example, it is a hybrid model: it includes elements of symbolic knowledge, an older idea about how to imbue computers with language understanding, as well as elements of a pure deep-learning system. You can read more about the development here. These hybrid approaches are very hot at the moment in NLP.

EYE ON A.I. TALENT

Pizza Hut International has hired Joe Park as its new chief digital and technology officer, according to a post on Park's LinkedIn page. Park was previously the chief innovation officer at Pizza Hut parent company Yum! Brands. Before that, he was vice president, associate digital experience & enterprise architecture at Walmart.

AppHarvest, a Morehead, Kentucky-based company focused on next-generation farming, including the use of robotics and A.I., has named Mark Keller as its senior vice president of software applications platform, the company said in a statement. Keller previously worked on robotics and warehouse automation at Amazon.

Consulting giant Booz Allen Hamilton has named Mark Tamascio a senior vice president in its Strategic Innovation Group, leading the firm's analytics and A.I. business support to the U.S. Department of Defense, according to a story in Virginia Business. He was previously vice president of A.I. at Lockheed Martin and before that was chief data and analytics officer there. 

EYE ON A.I. RESEARCH

Uh oh, why eliminating A.I. bias may not be simple at all. Researchers from a number of U.S. and Canadian universities and medical centers have found that a deep learning system trained on medical images (including X-rays and CT scans) could accurately predict the race of patients, even when the data contains no reference to race and there is nothing about the clinical features in the images that medical experts know to correlate with race. What's more, they said in a paper published on the non-peer-reviewed research repository arxiv.org, this finding held true even after the researchers distorted the images to such a degree that human experts had trouble determining what they were looking at. This surprising and confounding result shows how difficult it may be to actually eliminate bias from deep learning systems.

It also shows why it is so important that those using A.I. systems be able to investigate and understand why these systems make the classification decisions they do. In this case, it is possible there is some hidden proxy for race in the data that the researchers haven't noticed, maybe in some of the text accompanying the images, since the distortion of the images didn't seem to degrade the system's accuracy that much. (In previous studies with medical imagery, A.I. systems have sometimes learned shortcuts through the data that don't correspond to clinical features, such as using text denoting that a portable chest X-ray was used as a shortcut for predicting which patients were more likely to deteriorate. It turns out that portable chest X-rays are only used on patients who are more ill to begin with.) It might also be that the A.I. has learned subtle anatomical or clinical differences between races of which modern medicine is unaware. But there's no way to really know for sure, and that in itself is part of the problem.
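One practical way to hunt for this kind of hidden proxy is to train a simple "probe" that tries to predict the sensitive attribute directly from the same features (or from a model's internal representations) and measure how far above chance it scores, including after the inputs have been degraded. The sketch below does this on purely synthetic data with scikit-learn; the dataset, the planted proxy feature, and the noise levels are all invented for illustration and have nothing to do with the actual study.

```python
# Hypothetical sketch of a "proxy probe": check whether a sensitive attribute
# remains predictable from input features even after heavy degradation.
# All data here is synthetic and invented purely for illustration.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(42)
n = 5000
attribute = rng.integers(0, 2, size=n)           # sensitive attribute (0 or 1)
features = rng.normal(size=(n, 50))               # 50 "image-derived" features
features[:, 7] += 0.6 * attribute                 # a subtle proxy planted in one feature

for noise in [0.0, 1.0, 3.0]:                     # increasingly heavy degradation
    degraded = features + rng.normal(scale=noise, size=features.shape)
    X_tr, X_te, y_tr, y_te = train_test_split(degraded, attribute, random_state=0)
    probe = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
    auc = roc_auc_score(y_te, probe.predict_proba(X_te)[:, 1])
    print(f"noise={noise:.1f}  probe AUC={auc:.2f}  (0.50 would be chance)")
```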

Without being able to determine what the A.I. is even picking up on that is allowing it to make classifications accurately, it will be very hard to prevent the system from making decisions based on criteria such as race and gender, which can be illegal depending on the context. It could also be dangerous.

As the researchers write: "We emphasize that the AI’s ability to predict racial identity is itself not the issue of importance, but rather that this capability is trivially learned and therefore likely to be present in many medical image analysis models, providing a direct vector for the reproduction or exacerbation of the racial disparities that already exist in medical practice. This risk is compounded by the fact that human experts cannot similarly identify racial identity from medical images, meaning human oversight of AI models is of limited use to recognise and mitigate this problem. This creates an enormous risk for all model deployments in medical imaging: if an AI model relied on its ability to detect racial identity to make medical decisions, but in doing so misclassified all Black patients, clinical radiologists (who do not typically have access to racial demographic information) would not be able to tell..."

And while this study was all about medical imaging, the exact same problem could apply in other areas where deep learning is being applied or might be applied in the near future, such as mortgage lending, loan and insurance underwriting, or bail decisions.

FORTUNE ON A.I.

Pivot or else: How China’s largest edtech company can survive the government’s latest crackdown—by Yvonne Lau

Intel makes its next big bet—by Jonathan Vanian

Intel sees a growth driver in self-driving cars—by Dave Gershgorn

Exclusive: Alphabet taps Wendy Tan White as CEO of new robotics company Intrinsic—by Emma Hinchliffe

In giant leap for biology, DeepMind’s A.I. reveals secret building blocks of human life—by Jeremy Kahn

Lyft teams up with Ford to bring robotaxis to select U.S. cities—by Christiaan Hetzner

BRAIN FOOD

Can an A.I. chatbot stand in for a lost love? That's the question at the heart of a story a lot of people were talking about this week. The San Francisco Chronicle's Jason Fagone looked at Project December, a website created by Jason Rohrer, a Bay Area freelance coder, that allows someone to upload snippets of text and then creates a chatbot that will reply in the same style. The chatbot's replies are actually composed by OpenAI's large language model GPT-3. Rohrer "borrowed" beta-testing credentials for GPT-3 in order to be able to interface with the language model. But Fagone's story isn't really so much about Rohrer as it is about Joshua Barbeau, a lonely 33-year-old freelance writer in Bradford, Canada, who suffers from depression and anxiety and has spent years mourning the untimely death of his ex-fiancée Jessica Pereira, who was just 23 when she died eight years ago from a rare liver disease. Barbeau had stumbled upon Project December and decided to feed the system lots of old texts and Facebook messages written by Jessica in order to create a chatbot impersonation of her that he could talk to. Read the story to find out what happens.

But Barbeau isn't the first person to do this, although he may be the first to use GPT-3 specifically as the chatbot engine for this kind of project. Programmer and entrepreneur Eugenia Kuyda had done something similar, creating a chatbot trained on old texts and emails from her best friend, Roman Mazurenko, after he died. You can read that strange, sad tale in this feature by Casey Newton in The Verge.

A big question in both stories is how healthy this behavior is. Does it help the bereaved person creating the chatbot cope with loss and bring them comforting memories, or does it simply make grief more intense, like constantly picking at an emotional wound, and prevent someone from ever achieving any closure, or at least the kind of healing that time usually brings? Will a person's emotional connection with the chatbot prevent them from forming real relationships? (That's one of the themes explored in the excellent 2013 sci-fi movie "Her," in which Joaquin Phoenix's character falls in love with his A.I.-enabled digital assistant.)

These are questions we are all likely to have to grapple with more often as natural language processing continues to improve and becomes more ubiquitous, and as people increasingly turn to tech to help them deal with both the practical and emotional aspects of death and grieving.

