The New Daily
Ash Cant

Kuwaiti newsreader is just one facet of AI penetration of media

Fedha, an AI-generated newsreader, was unveiled by a media company in Kuwait. Photo: Twitter/@KuwaitNews

An AI-generated newsreader has made her debut in Kuwait, and while she is a novelty, she is not as sinister as the threat posed by other forms of AI in the media.

Kuwait News recently unveiled Fedha, a newsreader who isn’t even human, on Twitter to offer “innovative” content.

“I’m Fedha, the first presenter in Kuwait who works with artificial intelligence at Kuwait News. What kind of news do you prefer? Let’s hear your opinions,” Fedha says in Arabic, according to AFP.

In 2018, China’s state-run Xinhua News Agency launched the “world’s first” AI newsreader, so Fedha is by no means the first AI newsreader, nor is this technology anything new.

Kuwait News appears to be using text-to-video AI to have Fedha present news bulletins, rather than having her generate the news she presents, which would be more concerning, Professor Peter Vamplew said.

The professor of information technology at Federation University had one of his students use this form of AI to submit a video presentation, and he said it made sense for the task.

Gradient Institute chief executive Bill Simpson-Young shared this concern about AI generating news, telling The New Daily that his guess is there is a human behind Fedha checking everything, because the technology is not designed to create news.

“Large language models are incredibly powerful and incredibly impressive, but they’re not good at generating facts. They’re not designed to generate facts,” he said. “They’re designed to generate language.”
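
It is worth seeing how thin that distinction can be. The toy word-chain below is a minimal sketch invented for this article; it shares only the predict-the-next-word principle with systems like ChatGPT, not their scale or architecture. Built from two true (and entirely made-up) sentences, it can splice them into a fluent falsehood.

# A toy next-word chain, invented purely for illustration; real systems
# are vastly larger but rest on the same next-word-prediction principle.
corpus = [
    "the mayor was praised for honesty".split(),
    "the banker was imprisoned for bribery".split(),
]

# Record which word has been seen following which.
follows = {}
for sentence in corpus:
    for prev, nxt in zip(sentence, sentence[1:]):
        follows.setdefault(prev, []).append(nxt)

def continuations(word):
    """Enumerate every sentence the chain can produce from a start word."""
    if word not in follows:
        return [[word]]
    return [[word] + rest
            for nxt in follows[word]
            for rest in continuations(nxt)]

for words in continuations("the"):
    print(" ".join(words))
# Among the eight outputs is "the mayor was imprisoned for bribery":
# grammatical, statistically plausible given the training text, and false.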

AI creating fake news

Although not designed to generate facts, AI has been in the news recently for making up some sinister claims.

Brian Hood, mayor of Hepburn Shire Council, is suing OpenAI for defamation.

Mr Hood says ChatGPT, which was recently banned in Italy, claimed he was imprisoned for bribery while working for a subsidiary of the Reserve Bank of Australia.

He did work for the subsidiary, but he was never charged with a crime; in fact, he was the one who blew the whistle on the payments to foreign officials, his lawyers said, according to Reuters.

In the US, law professor Jonathan Turley was named by ChatGPT when a lawyer asked it for a list of legal scholars who had sexually assaulted someone.

OpenAI’s chatbot cited The Washington Post and claimed that Professor Turley had made inappropriate comments and attempted to touch a student during a class trip.

However, that article never existed and the trip in question never happened.

Microsoft’s Bing then regurgitated the same false claims, The Washington Post reported in an actual article.

“Improving factual accuracy is a significant focus for us, and we are making progress,” an OpenAI spokesperson told The Post, adding that users are made aware that ChatGPT doesn’t always produce correct answers.

“You can be defamed by AI and these companies merely shrug that they try to be accurate. In the meantime, their false accounts metastasise across the internet,” Professor Turley wrote on his blog.

It’s not just people being accused of crimes they never committed.

Professor Vamplew asked ChatGPT to produce papers from his area of research, and in turn received a fictitious paper with his own name attached.

There’s a chance false information is being spouted every time someone uses an AI like ChatGPT, Professor Vamplew said.

Although scarily convincing, AI doesn’t really understand what it is saying, and at some point it will likely say something that is wrong.

In one conversation, an AI tried to convince Professor Vamplew that two was smaller than one: it had made an error earlier, so it began to double down.

“When it does something like that, obviously, you see that it’s not really intelligent and it raises questions about everything it’s told you – but up to that point, it really does seem quite believable and quite smart,” he said.

News and AI

It’s not clear why Fedha asked what kind of news her audience prefers, but if AI is going to generate news tailored to a particular individual, there is reason to be concerned.

Professor Vamplew told The New Daily that customised newsfeeds lead to echo chambers, where people hear only what they want to hear, as social media has already shown.

But Mr Simpson-Young says if we head down this path, it could be even more manipulative than social media.

Propaganda already exists in some parts of the media, but if an organisation used conversational AI that was not designed with ethical implications in mind, it could not just present the news but also try to persuade its audience of something.

“That does worry me about the future of news. If news ends up going down this way, where there are AI agents trying to convince people rather than inform them,” Mr Simpson-Young said.

The media has to cater to a large audience, but if AI-generated news were personalised, pushing propaganda could become far more efficient.

AI is also not the solution to prevent bias from seeping into the media.

“These large language models have just been trained on huge datasets that have been scraped off the internet, and so that reflects everything that’s good and bad about humanity right now,” Professor Vamplew said.

Everything carries bias, and there is a real risk of that bias spilling over into AI systems, he said.

AI systems essentially try to predict the most likely thing to come next, which means they are “heavily biased towards mainstream views”.

Because minority groups are under-represented in datasets, they will get overlooked in such AI systems.
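
A deliberately oversimplified sketch of that selection step makes the arithmetic plain. The figures below are invented for illustration, not drawn from any real model.

from collections import Counter

# A minimal sketch of "predict the most likely thing to come next".
# The counts are hypothetical: suppose a training corpus contains a
# mainstream continuation 90 times and a minority one 10 times.
observed = Counter({"the mainstream view": 90, "the minority view": 10})

# Greedy decoding emits the single most frequent continuation.
print(observed.most_common(1)[0][0])  # always "the mainstream view"

# Even sampling in proportion to the counts surfaces the minority view
# only one time in ten; greedy selection never does, which is how voices
# under-represented in the data get overlooked in the output.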

Both Mr Simpson-Young and Professor Vamplew believe companies creating AI need to be held accountable, but people also need to know what AI platforms are designed for.

Just like traditional news, people need to question the AI-generated content they are being provided.

Both Mr Simpson-Young and Professor Vamplew signed an open letter calling for a pause on giant AI experiments, expressing concern about an “out-of-control race” to develop AI that even its creators cannot understand, predict or control.
