
A faked version of Kyiv leader Klitschko fooled mayors across Europe—but it’s not clear this was really a ‘deepfake’

June 27, 2022, 9:36 AM UTC
Vitali Klitschko, Kyiv mayor and chairman of the Association of Ukrainian Cities, delivers a report at the international forum "Revived Ukraine—Revived Communities" on June 2, 2022, in Kyiv.
Oleksii Samsonov-Global Images Ukraine via Getty Images

A few months ago, a “deepfake” video featured a bogus Volodymyr Zelensky appearing to urge the surrender of his fellow Ukrainians. No one was fooled, owing to the ersatz Zelensky’s poor quality, but experts warned future deepfakes—A.I.-generated figures purporting to be real people—might not be so obvious.

Judging by what happened across at least four European capitals last week, the time to worry may have arrived. However, it’s too soon to be sure that the era of the deepfake is truly upon us.

Here’s what is certain: At least four European mayors had video calls with a faked representation of Vitali Klitschko, the mayor of Kyiv, and investigations, involving police and state security officials, are now underway across the continent.

Vienna Mayor Michael Ludwig and “Klitschko” spoke on Wednesday, with Ludwig ending the call none the wiser—indeed, he was so convinced that he had really spoken with his Kyiv counterpart that he tweeted and issued a press release about it, including photos of the call taking place.

Berlin’s Franziska Giffey and Madrid’s José Luis Martínez-Almeida had their rounds with the bogus heavyweight champ on Friday.

For his Berlin call, "Klitschko" asked to speak in Russian with a German translator—odd, given that the real Klitschko lived in Hamburg for years during his boxing career and speaks German fluently. Giffey's Spidey-sense was further triggered when the caller referred to Ukrainian refugees cheating the German benefits system and asked for help in getting male Ukrainian refugees sent back to serve in Ukraine, and in organizing a Christopher Street Day parade in Kyiv.

Giffey’s office, which went public about the incident later on Friday, said the call was terminated early, and the Ukrainians subsequently confirmed Berlin’s mayor had not spoken with the real Klitschko.

Martínez-Almeida reportedly broke off his call after a few minutes, after he became suspicious he wasn’t really conversing with Klitschko.

“The city hall has filed a complaint with the police for an alleged crime of impersonation of the mayor of Kyiv in an interview via videoconference with the mayor,” Martínez-Almeida’s office told Fortune, adding: “The mayor has described as ‘absolutely intolerable’ that these events could take place at a time when Kiev is being besieged by the invasion of the Russian army.”

Meanwhile, Budapest mayor Gergely Karácsony said in a Saturday Facebook post that he too had “recently” been targeted, and had ended the call following “several strange, suspiciously provocative questions.”

The invitation to the Hungarian video call had come from a spoofed email address purporting to be that of the Kyiv mayor’s office. That was also true in the Spanish case—Bild published a screen-grab of that email.

The targeted mayors broadly deny discussing anything confidential with not-Klitschko.

Klitschko responds

Klitschko himself (the real one) on Saturday posted a video on Twitter saying the incidents needed urgent investigation, and reminded anyone who needs to speak with him that he requires no translator in German or English.

So what exactly happened here?

Giffey was quick to say she had spoken with a deepfake, and described the technology as “a tool of modern warfare.” (Speaking to Berlin radio, she also said she had heard Barcelona mayor Ada Colau had been similarly targeted—Fortune has inquired with Colau’s office whether this is the case.)

Karácsony also said the fake Klitschko had been created by “professional deepfake technology.”

Some experts aren’t so sure.

The investigative journalist Daniel Laufer said in a Sunday Twitter thread that the published images of the call suggest a true deepfake—in which Klitschko’s face would have been at least partially generated by computer systems trained on real footage or imagery—was not involved.

That’s partly because the images of Klitschko’s supposed camera feed correspond exactly with frames from what is clearly the source material: a real interview that Klitschko conducted in April, which is available on YouTube.

Also, Klitschko keeps moving his head as he speaks, but there are no telltale artifacts around his head that would suggest an image-generating system trying to keep up—even though he is sitting in front of a visually complex background.

“If all five images look exactly the same as in the source material (with matching facial expressions and background): where does the A.I. manipulation lie?” Laufer tweeted.

He suggested the fake Klitschko might instead have been generated by precutting snippets from the original video and reassembling them “in real time,” with viewers chalking up any janky transitions to the nature of video calls.
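The frame-matching logic behind Laufer's argument can be sketched in a few lines. The snippet below is purely illustrative—the function names and the toy 8×8 "frames" are our own, not anything the investigators used—but it shows the underlying idea: if stills from the call hash identically to frames of a known source video, the footage was likely replayed rather than regenerated by an A.I. model, which would introduce small pixel-level differences.

```python
# Illustrative sketch only: a toy average-hash comparison on 8x8 grayscale
# "frames" (nested lists of pixel values). A real check would decode actual
# video frames; the principle is the same.

def average_hash(frame):
    """64-bit average hash: bit is 1 where a pixel exceeds the frame mean."""
    pixels = [p for row in frame for p in row]
    mean = sum(pixels) / len(pixels)
    bits = 0
    for p in pixels:
        bits = (bits << 1) | (1 if p > mean else 0)
    return bits

def hamming(a, b):
    """Number of differing bits between two hashes."""
    return bin(a ^ b).count("1")

# A synthetic source frame, a bit-for-bit replay, and a subtly altered frame.
source = [[(x * y) % 256 for x in range(8)] for y in range(8)]
replay = [row[:] for row in source]        # exact copy, as in replayed footage
regen = [row[:] for row in source]
regen[7][7] = 255                          # one region altered

print(hamming(average_hash(source), average_hash(replay)))  # 0: identical
print(hamming(average_hash(source), average_hash(regen)))   # nonzero: altered
```

Identical hashes across every published still, matching facial expressions and background exactly, is what pointed Laufer toward replayed source material rather than generated imagery.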

Florian Gallwitz, a professor of computer science and media, concurred with the assessment that this was not a proper deepfake as the term is understood. “The main purpose of the deepfake drivel is probably to cover up how clumsy the tricks were for which you fell,” he tweeted.

Whatever the technology that was used, the result was clearly good enough to fool some of the people for some of the time, and that should be cause for concern.

The fake-Klitschko incidents come at a time when people—including those within the tech industry—are increasingly worried about the potential for new methods to deceive people about who and what they are seeing and hearing online.

That concern was on full display last week, when Amazon showed off a potential new Alexa feature in which the virtual assistant can mimic someone’s voice, after being trained on less than a minute of genuine audio.

The company not-at-all-creepily suggested the feature could be used to have a kid’s book read by the voice of a dead grandmother. Horrified observers from the tech community pointed out the technology was also ripe for abuse by scammers.
