The Guardian - UK
Technology
Lizzie Dearden

AI increasingly used for sextortion, scams and child abuse, says senior UK police chief

Fears about apparently benign chatbots grew because the man who tried to attack the queen with a crossbow in 2021 was influenced by an ‘AI friend’. Photograph: NurPhoto/Getty Images

Paedophiles, scammers, hackers and criminals of all kinds are increasingly exploiting artificial intelligence (AI) to target victims in new and harmful ways, a senior police chief has warned.

Alex Murray, the national police lead for AI, said that the use of the technology was growing rapidly because of its increasing accessibility and that police had to “move fast” to keep on top of the threat.

“We know through the history of policing that criminals are inventive and will use anything they can to commit crime. They’re certainly using AI to commit crime now,” he said.

“It can happen on an international and serious organised crime scale, and it can happen in someone’s bedroom … You can think of any crime type and put it through an AI lens and say: ‘What is the opportunity here?’”

Speaking at the National Police Chiefs’ Council conference in London last week, Murray revealed concerns over emerging AI “heists” in which fraudsters use deepfake technology to impersonate company executives and trick their colleagues into transferring large sums of money.

This year, a finance worker at a multinational firm was duped into paying HK$200m (£20.5m) to criminals after a video conference call in which the scammers were able to convincingly pose as the company’s chief financial officer.

Similar cases have been reported in several countries, while the first heist of its kind is believed to have targeted a British energy firm in 2019.

Murray, who is director of threat leadership at the National Crime Agency, said the phenomenon was a “high cost, low prevalence crime” and that he was personally aware of dozens of cases in the UK.

He said that the greatest volume of criminal AI use was by paedophiles, who have been using generative AI to create images and videos depicting child sexual abuse.

“We’re talking thousands and thousands and thousands of images,” Murray said. “All images, whether they are synthetic or otherwise, are against the law, and people are using generative AI to create images of children doing the most horrific things.”

Last month, 27-year-old Hugh Nelson, from Bolton, was jailed for 18 years after offering a paid service to online paedophile networks in which he used AI to generate requested images of children being abused.

The same technology is also being used for sextortion, a type of online blackmail in which criminals threaten to release indecent images of victims unless they pay money or carry out demands.

The phenomenon has previously used photos that victims had shared of themselves, often with former partners or abusers who used false identities to gain their trust, but AI can now be used to “nudify” and manipulate photos taken from social media.

Murray said hackers were also using AI to look for weaknesses in targeted code or software and to provide “areas of focus” for cyber-attacks. “Most of the AI criminality at the moment is around child abuse imagery and fraud, but there are a lot of potential threats,” he added.

There is mounting concern that apparently benign chatbots could incite people into crime and terrorism after revelations that a man who tried to attack Queen Elizabeth II with a crossbow in 2021 had gained encouragement from a female “AI friend”.

Jonathan Hall, the government’s independent reviewer of terrorism legislation, has been researching the potential uses of AI by terrorist groups and highlighted “chatbot radicalisation” as a threat alongside propaganda generation and attack facilitation and planning.

He found that he was able to create an Osama bin Laden chatbot using a popular commercially available platform, and that it was “very easy to do”.

In a speech at Lancaster House last month, Hall warned: “Even if we don’t know precisely how generative AI is going to be exploited by terrorists, we need a common understanding of generative AI and a confidence to act, and certainly not a reaction that says: ‘This is just too difficult.’

“We need to avoid the mistakes of the early internet period where there is a free-for-all.”

Murray said that with AI technology becoming increasingly advanced, and more generative text and image software coming on to the market and into widespread use, its exploitation by criminals of all kinds was expected to rise.

“Sometimes you can spot if something is an AI image, but very quickly that will disappear,” he warned. “People using this sort of software at the moment are still quite niche, but it will become very easy to use.

“The ease of entry, realism and availability are the three vectors which will probably increase … We, as policing, have to move fast in this space to keep on top of it.

“I think it’s a reasonable assumption that between now and 2029 we will see significant increases in all these crime types, and we want to prevent that,” he said.
