Crikey
Cam Wilson

Adobe is selling fake AI images of war in Israel-Palestine

Adobe is selling artificially generated, realistic images of the Israel-Hamas war that have been used across the internet without any indication they are fake.

As part of the company’s embrace of generative artificial intelligence (AI), Adobe allows people to upload and sell AI-generated images through its stock image subscription service, Adobe Stock. Adobe requires submitters to disclose whether images were generated with AI and clearly marks them within its platform as “generated with AI”. Beyond this requirement, the submission guidelines are the same as for any other image, including prohibitions on illegal or infringing content.

People searching Adobe Stock are shown a blend of real and AI-generated images. Like “real” stock images, some are clearly staged, whereas others can seem like authentic, unstaged photography.

This is true of Adobe Stock’s collection of images for searches relating to Israel, Palestine, Gaza and Hamas. For example, the first image shown when searching for “Palestine” is a photorealistic image of a missile attack on a cityscape titled “Conflict between Israel and Palestine generative AI”. Other images show protests, on-the-ground conflict and even children running away from bomb blasts — none of which are real.

Amid the flurry of misinformation and misleading content about the Israel-Hamas war circulating on social media, these images, too, are being used without any disclosure that they are not real.

A handful of small online news outlets, blogs and newsletters have featured “Conflict between Israel and Palestine generative AI” without marking it as the product of generative AI. It’s not clear whether these publications are aware it is a fake image.

A Google reverse image search for the AI-generated image also returns similar real photographs (Image: Google)

RMIT senior lecturer Dr TJ Thomson, who is currently researching the use of AI-generated images, said there were concerns about the transparency of AI image use and whether audiences were literate enough to recognise it.

“There is potential for these images to mislead folks, to distort reality, to disrupt our perception of truth and accuracy,” Thomson told Crikey.

Thomson said discussions with newsrooms as part of his research had found the potential for misinformation was a top concern, but there were also questions about the labour implications of using AI images rather than on-the-ground photographers.

He said that while AI images can be a useful tool, he warned against their misuse: “You don’t want to be overly cautious, you don’t want to be scared of everything, because there are good reasons to use them. But you also have to have a bit of wisdom and cautiousness.”

Adobe did not respond to a request for comment.
