Fortune
Eamon Barrett

How Adobe plans to help you spot a deepfake

(Credit: Michael M. Santiago—Getty Images)

Last month, you probably saw dramatic images of Donald Trump being arrested circulating online.

One series of shots shows the former president charging through a scrum of police officers, his coiffed hair remaining remarkably in place as the officers wrestle him to the ground. 

In another set of images, Trump appears to have more success fleeing the law, and is seen running down the street with a team of officers in pursuit.

But those images were fake, created by Bellingcat founder Eliot Higgins using the A.I. image-generation tool Midjourney. The actual images of Trump’s arrest, which occurred in Manhattan last Tuesday, are far less compelling than the ones Midjourney’s algorithms dreamed up.

“Frankly, they didn’t look very real, but people believe them, right? There’s just that instinct for people to believe things that they see,” says Dana Rao, general counsel and chief trust officer at Adobe.

Take a closer look at the striking images created by Higgins, and the flaws of Midjourney’s A.I. renderings are quickly apparent. 

In the image of Trump rushing the police officers, the A.I. generator has fused the former president’s lower half with that of a cop, so Trump appears to be sporting a nightstick in a holster belt. In the images where Trump is being chased, none of the pursuing officers are looking in his direction, sapping the intent out of the chase.

A.I. images still leave, Rao says, “a lot of little clues” about their authenticity.

“Shadows are typically wrong. A lot of A.I. gets the number of fingers on a hand wrong. You can see some blurring on background images, and the faces are not quite there yet, in terms of being photorealistic,” Rao says. 

With more specialized A.I. tools, built exclusively to generate fake faces, the results are more convincing. There are still minute giveaways that the images aren’t real, such as mismatched earrings, but research shows that humans are already easily convinced that A.I.-generated mug shots are of real people.

But when an image is scaled down to the size you might view on your phone, Rao says, a lot of those little clues go unnoticed. And as A.I. improves, the technology will get better at eliminating those telltale signs.

For Rao and his team at Adobe, the solution to deepfakes is not to prevent bad actors from using the tools to spread misinformation. That’s an arms race Rao says is “frankly, insoluble.” Instead, Adobe’s solution is to help good actors prove the authenticity of their content.

Enter the Content Authenticity Initiative (CAI), an open-standard verification system spearheaded by Adobe and joined by over 1,000 other big names, including Microsoft, the New York Times, and Canon.

“It’s a global initiative where all these companies have come together to say, ‘We need a way to authenticate the truth in a digital world, because if democracies lose the ability to have discussions based on facts, they can’t govern,’” Rao says.

The CAI is essentially a system for securing and authenticating the metadata attached to an image, so that a viewer can easily see where the image originated and how it has been edited. Metadata—often called a “digital fingerprint” embedded in images—stores details such as which camera was used to take a photo, the date the image was captured, and what the image shows.
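
For a concrete sense of what that metadata looks like, here is a short Python sketch using the widely available Pillow library (not any CAI tooling); the filenames are placeholders. It reads the EXIF tags embedded in a JPEG, then strips them by copying only the pixels into a fresh file:

```python
from PIL import Image  # pip install Pillow
from PIL.ExifTags import TAGS

# Read the EXIF metadata embedded in a photo ("photo.jpg" is a placeholder).
img = Image.open("photo.jpg")
for tag_id, value in img.getexif().items():
    # TAGS maps numeric EXIF tag IDs to readable names like "Model" or "DateTime".
    print(TAGS.get(tag_id, tag_id), value)

# Stripping that metadata is just as easy: copy only the pixels into a new image.
clean = Image.new(img.mode, img.size)
clean.putdata(list(img.getdata()))
clean.save("stripped.jpg")  # no camera, date, or description survives
```

That last step is exactly the weakness described next: anyone can discard or rewrite the metadata without leaving a trace in the image itself.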

But metadata can be edited or stripped, so it is not a fully reliable tool for authenticating a photo. The CAI’s system saves that metadata, and the image it belongs to, to the cloud so that there’s a permanent record of the photo’s provenance. Photos that utilize the CAI’s system show a small authentication tag in the top corner, too, which brings up the image’s creation and editing history with a click. 
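
The general idea can be illustrated with a minimal, self-contained Python sketch. This is not Adobe’s implementation; the CAI’s production system relies on certificate-based signatures and embedded manifests, and the key and field names below are invented for the example. It simply binds the metadata to a hash of the exact image bytes, so any later edit to either one is detectable:

```python
import hashlib
import hmac
import json

# Placeholder for a real signing credential; production systems use
# certificate-based signatures rather than a shared secret like this.
SIGNING_KEY = b"demo-signing-key"

def create_record(image_path: str, metadata: dict) -> dict:
    """Bind metadata to the exact image bytes, so any edit changes the hash."""
    with open(image_path, "rb") as f:
        image_hash = hashlib.sha256(f.read()).hexdigest()
    payload = json.dumps(
        {"image_sha256": image_hash, "metadata": metadata}, sort_keys=True
    )
    signature = hmac.new(SIGNING_KEY, payload.encode(), hashlib.sha256).hexdigest()
    return {"payload": payload, "signature": signature}

def verify_record(record: dict, image_path: str) -> bool:
    """Check that neither the metadata nor the image has been altered."""
    expected = hmac.new(
        SIGNING_KEY, record["payload"].encode(), hashlib.sha256
    ).hexdigest()
    if not hmac.compare_digest(expected, record["signature"]):
        return False  # the metadata record itself was tampered with
    with open(image_path, "rb") as f:
        image_hash = hashlib.sha256(f.read()).hexdigest()
    return json.loads(record["payload"])["image_sha256"] == image_hash
```

Under a scheme like this, cropping a single pixel or editing one metadata field invalidates the record, and a service holding a copy of it, like the CAI’s cloud store, can flag the mismatch.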

Of course, the CAI’s system isn’t universal; it can only authenticate photos created with tools that support it. But Adobe owns one of the world’s preeminent suites of content creation apps, including Photoshop and Illustrator. CAI Content Credentials are also attached automatically to content created with Adobe’s own A.I. image generator, Firefly, which launched last week.

“The future is that people are going to expect to see really important news delivered with content credentials, and anything else, they should be skeptical of,” Rao says. “You’re not going to be able to tell the difference going forward.”

Eamon Barrett
eamon.barrett@fortune.com
