By Jonathan Vanian
September 13, 2018

Digitally altered photos and videos known as “deep fakes” pose a potential risk to national security, U.S. lawmakers said Thursday.

U.S. Representatives Carlos Curbelo (R-Fla.), Stephanie Murphy (D-Fla.), and Adam Schiff (D-Calif.) sent a letter to Director of National Intelligence Dan Coats, urging U.S. intelligence agencies to investigate the rise of altered videos, audio clips, and photos that appear to be untouched but are actually manipulated.

Because the doctored video, audio, and photos appear authentic, the lawmakers are concerned that unspecified “malicious foreign or domestic actors” could easily spread misinformation and propaganda.

“By blurring the line between fact and fiction, deep fake technology could undermine public trust in recorded images and videos as objective depictions of reality,” the politicians wrote.

Advances in artificial intelligence technologies have allowed researchers to create realistic-looking videos, audio samples, and photos that are actually heavily manipulated. Last year, for example, researchers at the University of Washington created a video of former President Barack Obama giving a speech that did not actually take place.


The same deep learning technology that can alter videos without being detected can also be used to create audio clips that sound like real people, along with tampered photos that are more convincing than those altered with conventional software like Photoshop. The more believable altered media appears to be, the more easily bad actors can fool the public by spreading fake information, as Russian-linked organizations did on Facebook and elsewhere in the run-up to the 2016 U.S. presidential election.

“You have repeatedly raised the alarm about disinformation campaigns in our elections and other efforts to exacerbate political and social divisions in our society to weaken our nation,” the lawmakers wrote. “We are deeply concerned that deep fake technology could soon be deployed by malicious foreign actors.”

The politicians’ fears echo similar concerns posed in February by a coalition of researchers from institutions like Oxford University, Cambridge University, and the AI research group OpenAI. Those researchers wrote in a report about AI that malicious actors could create videos of “state leaders seeming to make inflammatory comments they never actually made.”

The House members want U.S. intelligence agencies to assess how foreign governments could use the technology for nefarious purposes and identify possible “technological counter measures” that could detect and deter deep fakes.
