Pentagon adopts new ethical principles for using AI in war

February 24, 2020, 7:44 PM UTC

The Pentagon is adopting new ethical principles as it prepares to accelerate its use of artificial intelligence technology on the battlefield.

The new principles call for people to “exercise appropriate levels of judgment and care” when deploying and using AI systems, such as those that scan aerial imagery to look for targets. They also say decisions made by automated systems should be “traceable” and subject to testing.

Defense Department officials outlined the new approach Monday. The Pentagon said in a statement that “the use of AI raises new ethical ambiguities and risks,” but its principles fall short of stronger restrictions favored by arms control advocates.

“I worry that the principles are a bit of an ethics-washing project,” said Lucy Suchman, an anthropologist who studies the role of AI in warfare. “The word ‘appropriate’ is open to a lot of interpretations.”

The principles follow recommendations made last year by the Defense Innovation Board, a group led by former Google CEO Eric Schmidt.

An existing 2012 military directive requires humans to be in control of automated weapons but doesn’t address broader uses of AI.
