The Guardian - AU
Jackson Ryan

Cosmos magazine’s AI-generated articles are bad for trust in science

Cosmos has been publishing explainer articles written by generative artificial intelligence covering topics such as ‘What is a black hole?’ Photograph: AFP/Getty Images

In mid-2019, I was reading a fascinating piece in Cosmos magazine, one of Australia’s eminent science publications. There was this one image of a man lying on an operating table, covered in bags of McCain’s frozen french fries and hash browns.

Scientists had discovered that rapid cooling of the body might improve survival rates for patients who had experienced heart attacks. This man was one such patient, thus the Frozen Food Fresco. The accompanying report was written by Paul Biegler, a bioethicist at Monash University, who had visited a trauma ward at the Alfred hospital in Melbourne to learn about the method and to understand whether humans could, in some distant future, be capable of hibernation.

It’s the kind of story I return to when I start panicking about AI’s infiltration into the news. AI, after all, can’t visit Alfred hospital and – at least right now – it’s not conducting any interviews.

But AI-generated articles are already being written, and their latest appearance in the media signals a worrying development. Last week it was revealed that staff and contributors to Cosmos say they weren’t consulted about the rollout of explainer articles billed as having been written by generative artificial intelligence. The articles cover topics such as “what is a black hole?” and “what are carbon sinks?”, and at least one of them contained inaccuracies. The explainers were created by OpenAI’s GPT-4 and then fact-checked against Cosmos’s 15,000-strong archive of articles.

Full details of the publication’s use of AI were published by the ABC on August 8. In that article, CSIRO Publishing, an independent arm of CSIRO and the current publisher of Cosmos, stated that the AI-generated articles were an “experimental project” to assess the “possible usefulness (and risks)” of using a model like GPT-4 to “assist our science communication professionals to produce draft science explainer articles”. Two former editors said that editorial staff at Cosmos were not told about the proposed custom AI service. The revelation comes just four months after Cosmos made five of its eight staff redundant.

The ABC also reported that Cosmos contributors were not aware of its intention to roll out the AI model, nor were they told that their work would be used as part of the fact-checking process. CSIRO Publishing dismissed concerns that the AI service was trained on contributors’ articles, with a spokesperson noting the experiment used a pre-trained GPT-4 model from OpenAI.

But the lack of internal transparency and consultation has left journalists and contributors feeling betrayed and angry. Multiple sources suggest the experiment has now been put on pause, but CSIRO Publishing did not respond to a request for comment.

The controversy has provided a dizzying sense of deja vu. We’ve seen this before. Well-respected US tech website CNET, where I served as science editor until August 2023, published dozens of articles generated by a custom AI engine at the end of 2022. In total, CNET’s robot writer racked up 77 bylines, and after investigations by rival publications, more than half of its articles were found to contain inaccuracies.

The backlash was swift and damning. One report said the internet was “horrified” by CNET’s use of AI. The Washington Post dubbed the experiment “a journalistic disaster”. Trust in the publication was shattered, basically overnight, and, for journalists in the organisation, there was a feeling of betrayal and anger.

The Cosmos example provides a startling parallel. The backlash has been swift, once again, with journalists weighing in. “Comprehensively appalling,” wrote Natasha Mitchell, host of the ABC’s Big Ideas. And even the responses by the organisations are almost identical: dub it an experiment, pause the rollout.

This time, however, the AI is being used to present facts underpinned by scientific research. This is a worrying development with potentially catastrophic consequences. At a time when trust in both scientific expertise and the media is declining (the latter more precipitously than the former), rolling out an AI experiment with a lack of transparency is, at best, ignorant and, at worst, dangerous.

Science can reduce uncertainty but not erase it. Effective science journalism involves helping the audience understand that uncertainty and, research shows, improves trust in the scientific process. Generative AI, unfortunately, remains a predictive text tool that can undermine this process, producing confident-sounding bullshit.

That’s not to say generative AI has no place in newsrooms, or that it should be banned. It’s already being used as an idea generator, for quick feedback on drafts or for help with headlines. And, with appropriate oversight, perhaps it will become important for smaller publishers, like Cosmos, in maintaining a steady stream of content in an internet age ravenous for more.

Even so, if AI is going to be deployed in this way, there are outstanding issues that haven’t been resolved. The confident-sounding false information is just the beginning. Issues around copyright and the theft of art to train these models have already made their way to court, and there are serious sustainability issues to contend with: AI’s energy and water usage, though hard to definitively calculate, are immense.

The bigger barrier, though, is the audience: the University of Canberra’s Digital News Report 2024 suggests only 17% of Australians are comfortable with news produced “mostly by AI”. It also found that only 25% of respondents were comfortable with AI being used specifically for science and technology reporting.

If the audience doesn’t want to read AI-generated content, who is it being made for?

The Cosmos controversy brings that question into stark relief. It is the first question that needs to be answered when rolling out AI, and it’s one that should be answered transparently. Both editorial staff and readers should be privy to why an outlet might start using generative AI and where it will do so. There can be no secrecy or subterfuge – that, we’ve seen time and again, is how you destroy trust.

But, if you’re anything like me, you’ve reached the end of this article and want to know more about the heart attack guy who was saved by a bunch of McCain’s frozen food. And there’s a lesson in that: the best stories stick with you.

From what we’ve seen to date, AI-generated articles don’t have that staying power.

  • Jackson Ryan is an award-winning science and video games journalist. He also serves as president of the Science Journalists Association of Australia
