The Guardian - UK
World
Odanga Madung

AI hysteria is a distraction: algorithms already sow disinformation in Africa

Kenyans follow the election news in Nairobi in 2022, when TikTok’s ‘For you’ algorithm fed inflammatory propaganda videos to millions of voters. Photograph: B Muthoni/Sopa/Shutterstock

More than 70 countries are due to hold regional or national elections by the end of 2024. It will be a period of huge political significance across the globe, with more than 2 billion people (mostly from the global south) directly affected by the outcome of these elections. The stakes for the integrity of democracy have never been higher.

As concerns mount about the influential role of information pollution, disseminated through the vast platforms of US and Chinese corporations, in shaping these elections, a new shadow looms: how artificial intelligence – more specifically, generative AI such as OpenAI’s ChatGPT – has increasingly moved into the mainstream of technology.

The recent wave of hype around AI has seen a fair share of doom-mongering. Ironically, this hysteria has been fed by the tech industry itself. OpenAI’s founder, Sam Altman, has been touring Europe and the US, making impassioned pleas for regulation of AI while also discreetly lobbying for favourable terms under the EU’s proposed AI Act.

Altman calls generative AI a major threat to democracy, warning of an avalanche of disinformation that blurs the lines between fact and fiction. But this discussion needs nuance, because it misses the point: we reached that juncture a long time ago.

Tech multinationals such as TikTok, Facebook and Twitter built highly vulnerable AI systems and left them unguarded. As a result, disinformation spread via social media has become a defining feature of elections globally.

In Kenya, for example, I spent months documenting how Twitter’s trending algorithm was easily manipulated by a thriving disinformation-for-hire industry to spread propaganda and quash dissent through the platform. Similar discoveries were made by other journalists in Nigeria prior to its recent elections.

My research in Kenya also found that TikTok’s “For You” algorithm was feeding hundreds of hateful and inflammatory propaganda videos to millions of Kenyans ahead of its 2022 elections. TikTok and Twitter have also recently come under scrutiny for their role in amplifying the hate-filled backlash towards LGBTQ+ minorities in Kenya and Uganda.

Authoritarianism uses emotions to polarise people, finding fertile ground in specific events and febrile political climates. Social media platforms such as Facebook and TikTok have accelerated the spread of propaganda through microtargeting and by evading election silence windows, or blackout periods, making distribution remarkably simple.

What this means is that there is no need to rely exclusively on content generated by AI to carry out effective disinformation campaigns. The crux of the issue lies not in the content made by AI tools such as ChatGPT but in how people receive, process and comprehend the information facilitated by the AI systems of tech platforms.

For this reason, I take this sudden realisation by the tech industry with a pinch of salt. By letting Altman define what we should care about when it comes to AI, we are allowing a corporation to define the safety and risk-mitigation of this technology, instead of tried and tested institutions, such as consumer and data protection agencies.

For example, the tech industry has followed the colonial path of its western corporate predecessors, with their extractive, destructive practices in developing countries. In its “please regulate us” campaign, the tech industry has conveniently ignored the fact that, in its efforts to build these AI systems, it has nearly destroyed people’s lives along the way.

I’ve spoken at length to the “data workers” who train the content-moderation algorithms of Meta’s and TikTok’s platforms. Many of them got post-traumatic stress disorder while on the job and were paid peanuts for it. Similarly, those who carried out the data-cleaning for Sam Altman’s darling, ChatGPT, suffered the same fate – but guess who’s making all the money from this suffering?

“AI doomerism”, as addressed and defined by predominantly white monopolistic capitalists, conveniently selects what to focus on and what to ignore. Kenyans and many other Africans helped make ChatGPT the phenomenon it is today. Its path towards becoming one of the fastest-growing platforms the world has ever seen was fuelled by them.

In essence, they are the ones making Sam Altman and Mark Zuckerberg rich, because without their labour these platforms would be unusable. But I bet Africa’s people don’t even cross the minds of Altman and his colleagues.

So I implore advocates and observers of democracy, especially in developing countries, not to lose sight of the existing harms perpetuated by AI. We don’t need to imagine a distant future – the problems are already here. Born into a capitalist world, this technology will only further the injustices that exist within its very fabric.

If we learn from the rise of the current tech corporations, there will be no slowdown in the speed of AI’s development. Thus we need to understand the political and economic conditions from which it emerges. Its power is centralised, its economics are extractive, and its growth is reckless.

Odanga Madung is a Mozilla fellow, journalist and data scientist based in Nairobi, Kenya
