France 24

Was this photo of a dead Israeli baby AI-generated? When AI-detection errors muddle public debate

This image shows the results page of the tool AI or Not, claiming that the image of the burned baby posted online by Israel was, in fact, AI-generated. Note: we added the gray oval to hide the image of the body. © Observers

On October 12, the official account of the state of Israel posted an image of a tiny, charred body, claiming that it showed a baby killed by Hamas during the attacks carried out on October 7. In the hours after the image was posted, social media users began claiming that it had been generated by artificial intelligence, citing the AI detection tool AI or Not. However, several experts, as well as the company behind AI or Not, have said these findings were wrong and that the photo is likely real.

If you only have a minute

  • A number of social media accounts, most of them either American or openly pro-Palestinian, claimed on X (formerly Twitter) that the photo of a child’s burned body shared by the state of Israel was generated by AI, based on the results of an AI detection tool called AI or Not.

  • However, AI or Not actually said that the result was a false positive. Several specialists in image analysis agreed, saying that the photo was not AI-generated. 

  • A number of people claimed that the image of the charred body had been generated using a photo of a puppy. However, when we talked to a specialist in image analysis, he said the photo of the dog was actually the doctored image.

The fact check, in detail

On October 12, Israeli Prime Minister Benjamin Netanyahu published [warning: disturbing images] photos of the burned bodies of children in body bags on his X account (formerly Twitter). In the caption, he said that the photos showed “babies murdered and burned by the Hamas monsters” during their attack on October 7. The photos were reposted by the X account of the state of Israel a few hours later.

However, many American and pro-Palestinian social media users accused the country of having generated one of the images using artificial intelligence.

A number of tweets, including one viewed more than 22 million times, denounced the images, claiming that they had been “created” by Israel, based on the results of the artificial intelligence detector AI or Not. These tweets featured a screengrab of the tool’s results page, which indicated that the image had been “generated by AI”.

The result was even picked up by Al Jazeera’s Arabic-language X account. On October 14, the Qatari outlet published a video on the topic, which garnered more than 500,000 views.

"These images, according to [Israel], reflect the "brutality of Hamas"... Artificial intelligence has revealed the falsity of the Israeli accusations against members of Al-Qassam [the armed branch of Hamas]," Al Jazeera wrote.

This is a tweet from Al Jazeera in Arabic about the accusations that an image of a charred body was actually generated by AI. It includes the screengrab of the results page of the tool AI or Not. © Observers

In these same messages, users also accused Israel of having generated this image from a photo of a live puppy in a body bag identical to the one in the picture of the child’s body.

This photo of a puppy, which some people claimed was the original image that was subsequently doctored, circulated widely starting on the evening of October 12, especially on 4chan, a site frequented by the American far right.

A number of social media users claimed that this image of a puppy, shared on 4chan, was the origin of the photo shared by Israel. © Observers

A false positive for the tool AI or Not

In reality, there are a few clues that the image posted by the Israeli government was not generated by artificial intelligence. 

The company that created AI or Not itself cast doubt on the results of its own software. In a tweet from October 14, the company said that its software could produce false positives, meaning that it could wrongly conclude that a real photo was generated by AI, especially when the image in question is low quality.

"We have confirmed with our system that the result is inconclusive because of the fact that the photograph was compressed and altered to blur out the name tag," the company said on X, referring to the tag next to the left hand of the body. "If you run the photo through the software now, it actually indicates that it is 'likely human'."

Our team confirmed these results on October 16.

This is a screengrab of the results page when our team ran the photo through AI or Not on October 16. Now, the results page says that the image is “likely human.” Our team added the gray circle to mask the body. © Observers

The team at the investigative outlet Bellingcat, which specialises in image analysis, tested the software in September 2023.

“The fact that AI or Not had a high error rate when it was identifying compressed AI images, particularly photorealistic images, considerably reduces its utility for open-source researchers,” Bellingcat concluded.

‘There is no proof that the image shared by the Israeli government was altered’

Moreover, the photo itself doesn’t show signs of being generated by AI, Hany Farid, a specialist in image analysis and a professor at the University of California, Berkeley, explained to the outlet 404 Media.

Farid pointed to the accurate shadows and the structural consistency of the photo.

“That leads me to believe that [the photo] is not even partially AI-generated,” he said.

The same sentiment was expressed by Denis Teyssou, the head of AFP’s Medialab and the innovation manager of vera.ai, a project focused on detecting AI-generated images.

"There are no signs that the image shared by the Israeli government has been doctored,” he told our team on October 16. 

He added that the software developed by the vera.ai project to detect AI-generated images found no indication that the image had been doctored, though he also noted the limits of this kind of software.

"The big risk with AI detection tools is if they produce false positives. If there is a false positive, we can no longer trust the tool,” he said.

A ‘doctored’ image of a puppy

When the photo of the body was run through InVID-WeVerify, a tool created by the AFP Medialab to detect AI-generated images, it reached the same conclusion as vera.ai: the photo had not been doctored.

However, the tool did pick up inconsistencies in the image of the puppy.

"It’s likely that this image was falsified using generative methods,” said Tina Nikoukhah, a mathematics researcher at Paris-Saclay University, during an interview with our team on October 16. 

It "detected significant traces in the background of the image that didn’t appear on the puppy,” she said. In the image below, you can see these differences marked in colour – dark green on the puppy and light green on the rest of the image. 

The photo of the puppy is on the left. On the right is the same photo with the ZERO filter applied by the software InVID-WeVerify. The filter “detected significant traces in the background of the image that didn’t appear on the puppy,” said Tina Nikoukhah. This is demonstrated by the dark green pixels in the centre of the image. © Observers

"Considering the nature of these traces, it’s likely that the falsification was made using AI-generation,” she added, referring to software like Midjourney or Stable Diffusion. 

These results line up with claims made by an X user, who said that he had created the puppy image. 

In a tweet published on October 12, a few hours before the image was shared on 4chan, he said that it took him “five seconds” to create this image from the one shared by Israel.

“Not hard to make a convincing fake photo anymore,” he wrote in his tweet, which has since been deleted. In another tweet, the same user said that he had used the AI image generator Stable Diffusion, and he referred to his AI-generated image on multiple occasions in other tweets.

Photos of burned children shared without context by Israel

Even if the images are real, Israel shared them without any context. 

On October 10, the Israeli channel i24 News and the Israeli government were accused of having announced, without proof, that 40 babies had been decapitated by Hamas in Kfar Aza.

On October 11, US President Joe Biden also said that Israeli children had been “decapitated”. However, that same evening, the White House said that the president had gotten this information from Israeli officials and did not have any additional proof.

The next day, the Israeli government shared the image of the charred remains of children, saying: “Those who deny these events are supporting the barbaric animals who are responsible for them.”

It did not, however, give any context for the images or explain the circumstances of these children’s deaths.
