Since 2022, millions of people searching for child abuse videos on Pornhub's UK website have been interrupted.
Each time users searched for words or phrases linked to abuse, counted at 4.4 million times over 24 months, the reThink Chatbot appeared with a warning message that blocked the page.
The warning message told users that the type of content they were searching for is illegal.
In half of those cases, the artificial intelligence (AI) chatbot also pointed people toward places where they could seek help.
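The report does not describe how the warning is wired into the site's search, but the behaviour described above, a query checked against flagged terms before an interstitial blocks the page, can be sketched in a few lines. This is a minimal, hypothetical illustration: the denylist, function names and response format are all invented here, and the real term list is confidential.

```python
# A minimal, hypothetical sketch of keyword-triggered interception.
# The IWF has not published the reThink Chatbot's implementation, so
# everything below is invented for illustration only.

FLAGGED_TERMS = {"example flagged phrase"}  # placeholder; the real list is confidential


def run_search(query: str) -> list:
    """Stand-in for the site's real search backend."""
    return []


def handle_search(query: str) -> dict:
    """Serve results normally, or block the page with a warning."""
    normalised = query.lower()
    if any(term in normalised for term in FLAGGED_TERMS):
        # Block the results page; the warning states the content is
        # illegal and signposts confidential help.
        return {
            "blocked": True,
            "message": "The content you are searching for is illegal.",
            "help_url": "https://www.stopitnow.org.uk",
        }
    return {"blocked": False, "results": run_search(query)}
```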
This news comes after the National Center for Missing & Exploited Children (NCMEC) said that it received more than 32 million reports of suspected child sexual abuse from companies and the public in 2022.
The organisation said that it was presented with roughly 88 million images, videos and other files highlighting suspected child exploitation, up from a record-breaking 70 million images and videos reported to the centre in 2019.
According to NCMEC, the growth of social platforms such as Facebook, Instagram and Snapchat has made abusive content easier to detect.
In 2020, after it was criticised in a New York Times report, Pornhub removed some 10 million videos from its site in a bid to eradicate child abuse material and other harmful content.
While millions of pieces of child abuse content are removed from the web each year, images and videos are still shared on social media, traded in private chats and sold on the dark web.
There have also been cases of exploitative footage being sold on otherwise legal pornography video-sharing sites.
The Pornhub chatbot is part of a collaboration between the video-sharing website and two UK-based child protection organisations, which set out to determine whether small AI interventions could stop people from searching for illegal content.
The chatbot was created by the Internet Watch Foundation (IWF), a non-profit organisation that removes child sexual abuse material from the internet. The Lucy Faithfull Foundation, a charity that works to prevent child sexual abuse, was the second organisation involved in the trial.
According to a new report analysing the trial, seen by WIRED, deploying the chatbot led to fewer people searching for child sexual abuse material.
Some users also followed the chatbot's advice and sought support for their behaviour.
While the chatbot did not identify individual users, it asked them a series of personal questions, instructing them either to type out a unique response or to select a pre-written answer.
The questionnaire finished with the user being pointed toward the Lucy Faithfull Foundation's help services.
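Based only on the flow described above, a hypothetical sketch of such a questionnaire might look like the following. The questions are placeholders, not the trial's real script, and, matching the anonymity described in the report, nothing identifying the user is stored.

```python
# Hypothetical sketch of the questionnaire flow: each question accepts
# either a numbered pre-written answer or free text, and the session
# ends with a signpost to the Lucy Faithfull Foundation's services.
# The questions below are placeholders, not the trial's real script.

QUESTIONS = [
    ("Are you worried about your online searches?", ["Yes", "No"]),
    ("Would you like confidential, anonymous support?", ["Yes", "Not right now"]),
]


def run_questionnaire() -> None:
    for prompt, options in QUESTIONS:
        print(prompt)
        for i, option in enumerate(options, 1):
            print(f"  {i}. {option}")
        answer = input("Choose a number or type your own response: ").strip()
        if answer.isdigit() and 1 <= int(answer) <= len(options):
            answer = options[int(answer) - 1]  # map a numbered pick to its text
        # The answer shapes only the next message; nothing is logged.
    print("Confidential help is available at https://www.stopitnow.org.uk")


if __name__ == "__main__":
    run_questionnaire()
```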
According to the report, the trial produced 1,656 requests for more information made directly through the chatbot, 490 click-throughs to the charity's Stop It Now website, and 68 people calling or chatting with Lucy Faithfull's confidential helpline.
Joel Scanlan, a senior lecturer at the University of Tasmania who led the analysis of the reThink Chatbot trial, said: "The actual raw numbers of searches, it's actually quite scary high."
During the two-year trial, there were 4,400,960 warnings on Pornhub's UK website, each triggered by a user searching for words or phrases linked to child abuse.
While the trial exposed the disturbing interests of some users, 99 per cent of all searches during the trial did not trigger a warning.
"There's a significant reduction over the length of the intervention in numbers of searches," Scanlan explained, going on to note that "the deterrence messages do work".