The Guardian - UK
Comment
Ian Tucker

AI journalism is getting harder to tell from the old-fashioned, human-generated kind

Chatbots have a reputation for manufacturing truth and inventing sources. Photograph: Skorzewiak/Alamy

A couple of weeks ago I tweeted a call-out for freelance journalists to pitch me feature ideas for the science and technology section of the Observer’s New Review. Unsurprisingly, given the headlines, fears and interest surrounding LLM (large language model) chatbots such as ChatGPT, many of the suggestions that flooded in focused on artificial intelligence – including a pitch about how it is being employed to predict deforestation in the Amazon.

One submission, however, from an engineering student who had posted a couple of articles on Medium, seemed to be riding the artificial intelligence wave with more chutzpah. He offered three feature ideas – pitches on innovative agriculture, data storage and the therapeutic potential of VR. While coherent, the pitches had a bland authority about them, a repetitive paragraph structure and upbeat endings – all hints, if you’ve been toying with ChatGPT or reading about the latest mishaps of Google’s Bard chatbot, of chatbot-generated content.

I showed them to a colleague. “They feel synthetic,” he said. Another described them as having the tone of a “life insurance policy document”. Were our suspicions correct? I decided to ask ChatGPT. The bot wasn’t so sure: “The texts could have been written by a human, as they demonstrate a high level of domain knowledge and expertise, and do not contain any obvious errors or inconsistencies,” it responded.

Chatbots, however, have a reputation for manufacturing truth and inventing sources, so maybe they aren’t the most reliable factcheckers. I suggested that if there is one thing chatbots ought to be able to do, it is to recognise the output of a chatbot. The chatbot disagreed. A human writer could mimic a chatbot, it stated, and in the future “chatbots may be able to generate text that is indistinguishable from human writing”.

As with anything a chatbot “says”, one should be sceptical – the technology they are built on generates text that sounds plausible. If it also happens to be accurate, that’s not the result of reasoning or intelligence. If the chatbot were a bit more intelligent, it might have suggested that I put the suspect content through OpenAI’s text classifier. When I did, two of the pitches were rated “possibly” AI-generated. Of the two Medium blog posts with the student’s name on them, one was rated “possibly” and the other “likely”.

I decided to email him and ask whether his pitches had been written by a chatbot. His response was honest: “I must confess that you are correct in your assumption that my writing was indeed generated with the assistance of AI technology.”

But he was unashamed: “My goal is to leverage the power of AI to produce high-quality content that meets the needs of my clients and readers. I believe that by combining the best of both worlds – human creativity and AI technology – we can achieve great things.” Even this email, according to OpenAI’s detector, was “likely” AI generated.

Although the Observer won’t be employing him to write any articles, he seems well suited to apply for a job at Newsquest, which last week advertised the £22,000 role of AI-powered reporter for its local news operation.

How AI will affect journalism is hard to predict – presumably Newsquest will be aware that outlets such as Men’s Journal and Cnet have used AI to write articles about health and personal finance, but these were found to be full of inaccuracies and falsehoods. And that in January, BuzzFeed announced that it would use AI to “enhance quizzes” but has since quickly rolled out AI content to other areas of the site. “Buzzy”, its “creative AI assistant”, has produced 40-odd travel guides, with a writing style Futurism describes as “incredibly hackneyed”.

These articles are labelled as written by, or with the aid of, a chatbot. But when researching a piece, journalists could use chatbots to summarise reports or suggest questions for an interviewee. If small pieces of this AI-generated text find their way into an article, does this need to be disclosed? This was considered at a San Francisco Press Club discussion last week, which the panel host, Bloomberg’s Rachel Metz, summed up as: “How important is it to you that the news that you read is written by a human?”

At the Observer we say “very important”. Questions like this are being considered by all news organisations, including our colleagues at the Guardian, who are investigating more broadly the technology’s effect on journalism.

Meanwhile, the Observer remains AI-free. When perusing other news sources, be wary of content that reads like financial services promotional material.
