Fortune
Prarthana Prakash

Former Google safety boss sounds alarm over tech industry cuts and A.I. ‘hallucinations’

A person typing on a keyboard (Credit: lechatnoir—Getty Images)

The recent wave of generative A.I. has pushed the technology into newsrooms, where it's been used either in place of or in collaboration with human authors.

But now the former safety chief at Google is warning that the tech is far from perfect, and that A.I. could pose a real risk if it's used to write news stories.

“It’s important to understand that it can be hard to detect which stories are written fully by A.I. and which aren’t. That distinction is fading,” Arjun Narayan, former trust and safety head at Google, told tech news platform Gizmodo in an interview published Monday. Narayan now oversees safety at SmartNews, a news aggregation platform founded in 2012. 

Narayan said that A.I.'s capabilities come with the risk of inaccuracy.

“The industry term is ‘hallucination,’” Narayan said. “The right thing to do is say, ‘Hey, I don’t have enough data, I don’t know.’”

Among the many risks of using generative A.I. in news, Narayan notes, is the challenge of training the model to suit its purpose and convey the truth.

“When an A.I. makes a decision you can attribute some logic to it but in most cases it is a bit of a black box,” Narayan said. “It still needs human oversight. It needs checking and curation for editorial standards and values. As long as these first principles are being met I think we have a way forward.”

For Narayan, these dangers have only been exacerbated by recent cuts at tech companies, which have often hit safety and A.I. ethics teams. “As we disinvest, are we waiting for shit to hit the fan?” he asked.

Dabbling in A.I.

News services have been dabbling with A.I. to grow their product offerings, generate more content, and customize services for readers. BuzzFeed announced in February that it would use A.I. to tailor some of its quizzes to users, and within months found that users were far more engaged as a result. Other platforms have also been using A.I. to write news, including the Boring Report, which uses OpenAI’s software to write summaries of stories in a non-sensationalized way.

While these experiments have shown the opportunity A.I. presents to make news more accessible and relevant to its audience, the threat of conveying false information without human oversight could make it dangerous, too.

“I personally believe there is nothing wrong with A.I. generating an article but it is important to be transparent to the user that this content was generated by A.I.,” Narayan said. “As A.I. advances there are certain ways we could perhaps detect if something was A.I. written or not but it’s still very fledgling. It’s not highly accurate and it’s not very effective.”

Earlier attempts to use A.I. to write news have not always been successful. Tech news site CNET enlisted A.I. to write a handful of news articles, some of which turned out to be factually inaccurate. 

But offshoots of A.I. are being used even outside established news outlets. A report published earlier this month by NewsGuard, a news ratings group, found an uptick in A.I.-run news services, or “content farms,” across a variety of languages. These sites often generated a high volume of news articles, several of which conveyed false information. NewsGuard CEO Steven Brill told the New York Times that the lack of trust in news media was driving people to seek out other options, boosting the business of A.I. platforms.

“News consumers trust news sources less and less in part because of how hard it has become to tell a generally reliable source from a generally unreliable source,” Brill said. “This new wave of A.I.-created sites will only make it harder for consumers to know who is feeding them the news, further reducing trust.”

Experts have sounded the alarm about A.I. being used to replace human writing. For instance, in an interview this month, A.I. pioneer Geoffrey Hinton said A.I. chatbots could be used to spread election misinformation.

“It is hard to see how you can prevent the bad actors from using it for bad things,” Hinton told the New York Times.
