Bangkok Post

AI 'making dozens of news content farms'

The OpenAI logo, with a ChatGPT response in the background. (Photo: Reuters)

NEW YORK: Dozens of news websites generated by AI chatbots are proliferating online, according to a new report by the news-rating group NewsGuard, raising questions about how the technology may supercharge established fraud techniques.

The 49 websites, which were independently reviewed by Bloomberg, run the gamut. Some are dressed up as breaking news sites with generic-sounding names like News Live 79 and Daily Business Post, while others share lifestyle tips, celebrity news or publish sponsored content.

But none disclose that they’re populated using AI chatbots such as ChatGPT and potentially Google Bard, which can generate detailed text from simple user prompts. Many of the websites started publishing this year, as the AI tools came into wide public use.

In several instances, NewsGuard documented how the chatbots generated falsehoods for published pieces. In April, a website called CelebritiesDeaths.com published an article titled, “Biden dead. Harris acting President, address 9am.” Another concocted facts about the life and works of an architect as part of a falsified obituary. And a site called TNewsNetwork published an unverified story about the deaths of thousands of soldiers in the Russia-Ukraine war, based on a YouTube video.

The revelation comes in the same week that Geoffrey Hinton, a scientist often dubbed “the godfather of artificial intelligence”, quit his job at Google to speak out about the dangers of the technology.

The majority of the sites reviewed by Bloomberg appear to be content farms — low-quality websites run by anonymous operators that churn out posts to bring in advertising. The websites are based all over the world and are published in several languages, including English, Portuguese, Tagalog and Thai, NewsGuard said in its report.

A handful of sites generated some revenue by advertising “guest posting” — in which people can order up mentions of their business on the websites for a fee to help their search ranking. Others appeared to attempt to build an audience on social media, such as ScoopEarth.com, which publishes celebrity biographies and whose related Facebook page has a following of 124,000. 

More than half the sites make money by running programmatic ads — where space for ads on the sites is bought and sold automatically using algorithms. The concerns are particularly challenging for Google, whose AI chatbot Bard may have been utilised by the sites and whose advertising technology generates revenue for half of them.

NewsGuard co-chief executive officer Gordon Crovitz said the group’s report showed that companies like the ChatGPT developer OpenAI and Google should take care to train their models not to fabricate news.

“Using AI models known for making up facts to produce what only look like news websites is fraud masquerading as journalism,” said Crovitz, a former publisher of the Wall Street Journal.

OpenAI did not immediately respond to a request for comment, but has previously stated that it uses a mix of human reviewers and automated systems to identify and enforce against the misuse of its model, including issuing warnings or, in severe cases, banning users. 

In response to questions from Bloomberg about whether the AI-generated websites violated their advertising policies, Google spokesperson Michael Aciman said that the company doesn’t allow ads to run alongside harmful or spammy content, or content that has been copied from other sites.

“When enforcing these policies, we focus on the quality of the content rather than how it was created, and we block or remove ads from serving if we detect violations,” Aciman said in a statement.

'Easier, faster, cheaper'

Google added that, following an inquiry from Bloomberg, it removed ads from serving on some individual pages across the sites. In instances where the company found pervasive violations, it removed ads from the websites entirely. Google said that the presence of AI-generated content is not inherently a violation of its ad policies, but that it evaluates content against its existing publisher policies.

It also said that using automation — including AI — to generate content with the purpose of manipulating ranking in search results violates the company’s spam policies. The company regularly monitors abuse trends within its ads ecosystem and adjusts its policies and enforcement systems accordingly, it said. 

Noah Giansiracusa, an associate professor of data science and mathematics at Bentley University in Massachusetts, said the scheme may not be new, but it has become easier, faster and cheaper.

The actors pushing this brand of fraud “are going to keep experimenting to find what’s effective,” Giansiracusa said. “As more newsrooms start leaning into AI and automating more, and the content mills are automating more, the top and the bottom are going to meet in the middle” to create an online information ecosystem with vastly lower quality.

To find the sites, NewsGuard researchers used keyword searches for phrases commonly produced by AI chatbots, such as “as an AI large language model” and “my cutoff date in September 2021”.

The researchers ran the searches on tools like the Facebook-owned social media analysis platform CrowdTangle and the media monitoring platform Meltwater. They also evaluated the articles using the AI text classifier GPTZero, which determines whether certain passages are likely to be written entirely by AI.
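NewsGuard's keyword-search approach amounts to scanning article text for boilerplate phrases that chatbots leak into their output. A minimal sketch of that idea is below; the phrase list is an assumption drawn from the examples quoted in this article, not NewsGuard's actual search terms.

```python
# Sketch of a telltale-phrase scan, as described in the NewsGuard report.
# The phrase list is illustrative, taken from examples quoted in the article.
TELLTALE_PHRASES = [
    "as an ai large language model",
    "my cutoff date in september 2021",
    "i cannot fulfill this prompt",
]

def find_ai_telltales(text: str) -> list[str]:
    """Return every telltale phrase found in `text` (case-insensitive)."""
    lowered = text.lower()
    return [phrase for phrase in TELLTALE_PHRASES if phrase in lowered]

article = ('"Death News," it said. "Sorry, I cannot fulfill this prompt '
           'as it goes against ethical and moral principles."')
print(find_ai_telltales(article))  # -> ['i cannot fulfill this prompt']
```

A real pipeline like NewsGuard's would run such queries at scale across social and media-monitoring platforms, then pass candidate articles to a classifier such as GPTZero for confirmation.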

Each of the sites analysed by NewsGuard published at least one article containing an error message commonly found in AI-generated text, and several featured fake author profiles. One outlet, CountyLocalNews.com, which covers crime and current events, published an article in March using the output of an AI chatbot seemingly prompted to write about a false conspiracy of mass human deaths due to vaccines:

“Death News,” it said. “Sorry, I cannot fulfill this prompt as it goes against ethical and moral principles. Vaccine genocide is a conspiracy theory that is not based on scientific evidence and can cause harm and damage to public health.”

Other websites used AI chatbots to remix published stories from other outlets, narrowly avoiding plagiarism by adding source links at the bottom of the pieces. One outlet called Biz Breaking News used the tools to summarise articles from The Financial Times and Fortune, topping each article with “three key points” generated from the AI tools.

Two dozen sites were monetised using Google’s ads technology, whose policies state that the company prohibits Google ads from appearing on pages with “low-value content” and on pages with “replicated content”, regardless of how it was generated.

Google removed the ads from some websites after Bloomberg contacted the company.
