Volodymyr Zelenskyy slams ‘childish’ viral deepfake, but experts warn Russia’s cyber hit jobs won’t ‘always be so bad’

March 17, 2022, 3:06 PM UTC

Ukraine’s president fought back against a poorly doctored video in which he appears to capitulate to Russia, highlighting the danger posed by artificial intelligence in the spread of misinformation.

In the footage, Volodymyr Zelenskyy purportedly called on his soldiers to surrender in a deepfake that earns its name in only the loosest sense of the term.

“We are at home and defending Ukraine,” Zelenskyy fired back over Instagram, before turning the tables on his adversaries and advising soldiers of the Russian Federation that they should be the ones to lay down their arms and return home.

The brazen and rather sloppy attempt at psychological warfare didn’t fool anyone, certainly not in a country that was home to a thriving community of tech startups and software developers before the war broke out.

Roman Osadchuk of the Atlantic Council’s Digital Forensic Research Lab, for example, noted the deepfake was “ridiculed” by Ukrainians for its poor video and audio quality.

Despite the crude hit job, Zelenskyy took no chances that word might spread, responding quickly to discredit his digital doppelgänger and blasting it as a “childish provocation.”

The sophomoric effort to demoralize Ukraine’s forces was promptly torn to shreds online. Yet it may not be the last attempt at fooling the population, and experts argue the next time may be a lot harder to spot.  

“The deepfake wasn’t convincing, and he could reply fast,” argued John Scott-Railton, a senior researcher at the Citizen Lab focusing on malware, phishing, and disinformation. “But experimentation with fake calls to surrender hasn’t stopped. Don’t assume they will always be so badly executed.”

Hackers not necessarily skilled

Once alerted, Facebook parent Meta scrubbed its platforms of the video, which features the familiar head of Ukraine’s president planted crudely on a pale, motionless body. 

“We’ve quickly reviewed and removed this video for violating our policy against misleading manipulated media, and notified our peers at other platforms,” said Nathaniel Gleicher, head of security policy at Meta. 

Deepfakes use an artificial intelligence technique called a generative adversarial network, or GAN, in which one neural network generates forgeries while a second learns to spot them. The technique can run on open-source software and readily available graphics cards.

The trick, as Fortune has reported, lies in feeding that software the right data, running the training process for the right length of time, and then doing a great deal of painstaking, manual postproduction editing.
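
To make the adversarial idea concrete, here is a minimal, illustrative sketch in PyTorch. It trains a tiny generator to imitate a toy one-dimensional distribution rather than video frames; the network sizes, data, and hyperparameters are assumptions chosen for brevity, not a real deepfake pipeline.

```python
# Minimal GAN sketch: a generator learns to imitate "real" data while a
# discriminator learns to tell real from fake. Toy 1-D data and tiny
# networks are illustrative assumptions, not a production setup.
import torch
import torch.nn as nn

latent_dim = 16

# Generator: maps random noise to a fake sample.
generator = nn.Sequential(
    nn.Linear(latent_dim, 32), nn.ReLU(),
    nn.Linear(32, 1),
)

# Discriminator: scores how "real" a sample looks (0 = fake, 1 = real).
discriminator = nn.Sequential(
    nn.Linear(1, 32), nn.ReLU(),
    nn.Linear(32, 1), nn.Sigmoid(),
)

loss_fn = nn.BCELoss()
g_opt = torch.optim.Adam(generator.parameters(), lr=2e-4)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=2e-4)

for step in range(2000):
    # "Real" data: a normal distribution the generator must imitate.
    real = torch.randn(64, 1) * 1.5 + 4.0
    fake = generator(torch.randn(64, latent_dim))

    # Train the discriminator to separate real from fake.
    d_opt.zero_grad()
    d_loss = loss_fn(discriminator(real), torch.ones(64, 1)) + \
             loss_fn(discriminator(fake.detach()), torch.zeros(64, 1))
    d_loss.backward()
    d_opt.step()

    # Train the generator to fool the discriminator.
    g_opt.zero_grad()
    g_loss = loss_fn(discriminator(fake), torch.ones(64, 1))
    g_loss.backward()
    g_opt.step()

print(generator(torch.randn(5, latent_dim)))  # samples should cluster near 4.0
```

The same adversarial loop, scaled up to high-resolution face footage and far larger networks, is what makes a convincing video fake so data-hungry and labor-intensive.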

In other words, the skills needed for waging more traditional forms of cyberwarfare—finding and exploiting vulnerabilities in IT infrastructure such as official ministry websites—are not directly transferable.

There may be only a few dozen people in the world with the talent to pull off a truly convincing deepfake.  

This might explain why hackers successfully altered the chyron banner on Ukraine 24, a Ukrainian broadcast news channel, inserting a fake caption announcing Zelenskyy’s surrender that matched the doctored video.

Yet when it came to the actual deepfake, the video plainly failed to fool suspicious soldiers who had already been warned by their government not to believe any claim that the president had capitulated to Russia.

“What this Zelenskyy ‘Deep Fake’ video may end up showing is that people are actually pretty aware of how easy it is to fake videos,” wrote Shane Huntley, director of the Google Threat Analysis Group. “And how quickly they get reported and taken down.” 
