The Guardian - UK
Technology
Dan Milmo and Kiran Stacey

AI-enhanced images a ‘threat to democratic processes’, experts warn

A composite image of the original image from No 10, left, and the image shared by Karl Turner. Composite: No 10

Experts have warned that action needs to be taken on the use of artificial intelligence-generated or enhanced images in politics after a Labour MP apologised for sharing a manipulated image of Rishi Sunak pouring a pint.

Karl Turner, the MP for Hull East, shared an image on the rebranded Twitter platform, X, showing the prime minister pulling a sub-standard pint at the Great British Beer Festival while a woman looks on with a derisive expression. The image had been manipulated from an original photo in which Sunak appears to have pulled a pub-level pint while the person behind him has a neutral expression.

The image brought criticism from the Conservatives, with the deputy prime minister, Oliver Dowden, calling it “unacceptable”.

“I think that the Labour leader should disown this and Labour MPs who have retweeted this or shared this should delete the image, it is clearly misleading,” Dowden told LBC on Thursday.

Experts warned the row was an indication of what could happen during what is likely to be a bitterly fought election campaign next year. While it was not clear whether the image of Sunak had been manipulated using an AI tool, such programs have made it easier and quicker to produce convincing fake text, images and audio.

Wendy Hall, a regius professor of computer science at the University of Southampton, said: “I think the use of digital technologies including AI is a threat to our democratic processes. It should be top of the agenda on the AI risk register with two major elections – in the UK and the US – looming large next year.”

Shweta Singh, an assistant professor of information systems and management at the University of Warwick, said: “We need a set of ethical principles which can assure and reassure the users of these new technologies that the news they are reading is trustworthy.

“We need to act on this now, as it is impossible to imagine fair and impartial elections if such regulations don’t exist. It’s a serious concern and we are running out of time.”

Prof Faten Ghosn, the head of the department of government at the University of Essex, said politicians should make it clear to voters when they are using manipulated images. She flagged efforts to regulate the use of AI in politics by the US congresswoman Yvette Clarke, who is proposing a law change that would require political adverts to tell voters if they contain AI-generated material.

“If politicians use AI in any form they need to ensure that it carries some kind of mark that informs the public,” said Ghosn.

The warnings contribute to growing political concern over how to regulate AI. Darren Jones, the Labour chair of the business select committee, tweeted on Wednesday: “The real question is: how can anyone know if a photo is a deepfake? I wouldn’t criticise @KarlTurnerMP for sharing a photo that looks real to me.”

In reply to criticism from the science secretary, Michelle Donelan, he added: “What is your department doing to tackle deepfake photos, especially in advance of the next election?”

The science department is consulting on its AI white paper, which was published earlier this year and advocates general principles to govern technology development, rather than specific curbs or bans on certain products. Since that was published, however, Sunak has shifted his rhetoric on AI from talking mostly about the opportunities it will present to warning that it needs to be developed with “guardrails”.

Meanwhile, the most powerful AI companies have acknowledged the need for a system to watermark AI-generated content. Last month Amazon, Google, Meta, Microsoft and ChatGPT developer OpenAI agreed to a set of new safeguards in a meeting with Joe Biden that included using watermarking for AI-made visual and audio content.

In June Microsoft’s president, Brad Smith, warned that governments had until the beginning of next year to tackle the issue of AI-generated disinformation. “We do need to sort this out, I would say by the beginning of the year, if we are going to protect our elections in 2024,” he said.
