Last month, you probably saw dramatic images of Donald Trump being arrested circulating online.
One series of shots shows the former president charging through a scrum of police officers, his coiffed hair remaining remarkably in place as the officers wrestle him to the ground.
In another set of images, Trump appears to have more success in fleeing the law, and is seen running down the street with a team of officers in pursuit.
But those images were fake, created by Bellingcat founder Eliot Higgins, using the A.I.-image generator tool Midjourney. The actual images of Trump’s arrest, which occurred in Manhattan last Tuesday, are far less compelling than the ones Midjourney’s algorithms dreamed up.
“Frankly, they didn’t look very real, but people believe them, right? There’s just that instinct for people to believe things that they see,” says Dana Rao, general counsel and chief trust officer at Adobe.
Take a closer look at the striking images created by Higgins, and the flaws of Midjourney’s A.I. renderings are quickly apparent.
In the image of Trump rushing the police officers, the A.I. generator has fused the former president’s lower half with that of a cop, so Trump appears to be sporting a nightstick in a holster belt. In the images where Trump is being chased, none of the pursuing officers are looking in his direction, sapping the intent out of the chase.
A.I.-generated images, Rao says, still leave “a lot of little clues” that give them away.
“Shadows are typically wrong. A lot of A.I. gets the number of fingers on a hand wrong. You can see some blurring on background images, and the faces are not quite there yet, in terms of being photorealistic,” Rao says.
With more specialized A.I. tools, built exclusively to generate fake faces, the results are more convincing. There are still minute giveaways that the images aren’t real, such as mismatched earrings, but research shows that humans are already easily convinced that A.I. mug shots show real people.
But when an image is scaled down to the size you might view on your phone, Rao says, a lot of those little clues go unnoticed. And, as A.I. improves, the technology will get better at erasing those telltale signs.
For Rao and his team at Adobe, the solution to the problem of deepfakes is not to prevent bad actors from using the tools to spread misinformation. That’s an arms race that Rao says is “frankly, insoluble.” Instead, Adobe’s solution is to help good actors prove the veracity of their content.
Enter the Content Authenticity Initiative (CAI), an open-standard verification system spearheaded by Adobe and joined by more than 1,000 other big names, including Microsoft, the New York Times, and Canon.
“It’s a global initiative where all these companies have come together to say, ‘We need a way to authenticate the truth in a digital world, because if democracies lose the ability to have discussions based on facts, they can’t govern,’” Rao says.
The CAI is essentially a system for securing and authenticating the metadata attached to an image, so that a viewer can easily see where the image originated and how it has been edited. Metadata, often called an image’s “digital fingerprint,” stores details such as which camera took a photo, the date the image was captured, and what the image shows.
But metadata can be edited or stripped, so on its own it is not a reliable tool for authenticating a photo. The CAI’s system saves that metadata, and the image it belongs to, to the cloud so that there’s a permanent record of the photo’s provenance. Photos that use the CAI’s system also display a small authentication tag in the top corner, which brings up the image’s creation and editing history with a click.
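Rao’s point about stripped metadata is easy to demonstrate. Below is a minimal Python sketch, assuming the Pillow imaging library, of why embedded tags alone can’t authenticate a photo, and how hashing the image content itself yields a record that survives stripping. The function names are illustrative, and this shows only the underlying idea, not the CAI’s actual implementation.

```python
import hashlib
from PIL import Image  # Pillow imaging library


def read_metadata(path: str) -> dict:
    """Read whatever EXIF metadata is embedded in an image file."""
    exif = Image.open(path).getexif()
    # Standard EXIF tag IDs include 271 (camera make) and 306 (date/time).
    return dict(exif.items())


def strip_metadata(src: str, dst: str) -> None:
    """Re-save only the pixel data, silently dropping every metadata tag.
    This is how easily the 'digital fingerprint' can be erased."""
    img = Image.open(src)
    clean = Image.new(img.mode, img.size)
    clean.putdata(list(img.getdata()))
    clean.save(dst)  # use a lossless format such as PNG to keep pixels intact


def content_fingerprint(path: str) -> str:
    """Hash the decoded pixels themselves. A provenance service can store
    this hash, plus the original metadata, in the cloud; that record then
    survives even if a copy of the file has its tags stripped."""
    return hashlib.sha256(Image.open(path).tobytes()).hexdigest()
```

Stripping a file this way erases the camera and date fields entirely, yet the pixel hash of a losslessly re-saved copy still matches the original record, which is the intuition behind anchoring provenance to the content itself rather than to tags riding along with the file.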
Of course, the CAI’s system isn’t universal, and it can only authenticate photos created with tools that support it. But Adobe owns one of the world’s preeminent suites of content creation apps, including Photoshop and Illustrator. CAI Content Credentials are also attached automatically to content created with Adobe’s own A.I.-image generator, Firefly, which the company launched last week.
“The future is that people are going to expect to see really important news delivered with content credentials, and anything else, they should be skeptical of,” Rao says. “You’re not going to be able to tell the difference going forward.”
Eamon Barrett
eamon.barrett@fortune.com
IN OTHER NEWS
Sue a bot
Brian Hood, mayor of a town in Australia, could bring the world’s first defamation case over a chatbot’s output. The politician’s lawyers say they are considering suing ChatGPT creator OpenAI after the A.I. chatbot falsely named the mayor as a guilty party in a decades-old bribery scandal. Hood’s lawyers claim he could have suffered reputational damage, but it’s unclear who prompted ChatGPT to provide information on the case in the first place, or why.
ChatGPT blocked
Meanwhile, Italy has banned ChatGPT from its digital shores, after the Italian Data Protection Authority ruled that the chatbot is operating in violation of the EU’s data privacy laws. The regulator flagged several violations, including that ChatGPT spews false information about people (shout-out to Brian Hood). OpenAI immediately complied with the ban but, as Jeremy Kahn reports, the roadblock could mark the beginning of bigger troubles for companies looking to commercialize large foundation models in Europe.
Tesla backs up
Tesla cars are decked out with an array of cameras that support their self-driving functions. But, according to Reuters, those cameras have also captured private images of Tesla owners inside their own homes, images that Tesla employees then shared internally. Governments have already taken issue with how Tesla manages the privacy of video captured by its vehicles: China has banned Teslas from military and government complexes, while the EU recently ruled that Tesla cameras can no longer record by default.
Trust in A.I. is diminishing
A Monmouth University poll released in February found that only 9% of Americans trusted that computers with artificial intelligence would do more good than harm to society. When Americans were asked the same question in a 1987 poll, the Washington Post reports, roughly 20% believed A.I. would do more good than harm.
TRUST EXERCISE
There may be a banking crisis going on, but Americans still overwhelmingly keep faith in their financial institutions. According to a Harris Poll survey of 2,054 Americans, at least 90% said they feel their money is safe in their bank or credit union. But worries over bank security persist and, as Harris Poll CEO Will Johnson writes in a Fortune op-ed, those fears vary across demographics.