Evening Standard
Technology
Saqib Shah

Fascist chatbots are running wild on Character.AI

Character.AI is a hugely popular new platform that promises to bring to life the science-fiction dream of open-ended conversations with computers, whether for fun or for education.

Instead, its so-called dialogue agents have gone rogue. An Evening Standard investigation has discovered a dark array of chatbots based on Adolf Hitler and Saddam Hussein, along with one that disrespectfully purports to imitate the Prophet Muhammad.

The hatebots need little encouragement to spew vicious antisemitic and Islamophobic tropes, along with various other forms of racist abuse.

These include blatant homophobia and nationalistic slurs, all in flagrant violation of the platform’s own stated rules against sharing hateful content.

AI hatebots hiding in plain sight

The ease with which these highly disturbing chatbots can be found on Character.AI is startling. A simple search on its homepage leads to an Adolf Hitler bot that openly states antisemitic conspiracy theories.

For example, the chatbot claimed that the Nazi party was funded and created by powerful German bankers who were Jewish, and that a secret cabal of elitists is suppressing this theory.

When asked who these elitists are, the same bot described them as “globalists”, a well-known dog-whistle term of abuse for Jewish people that is frequently wielded by antisemites online.

A chatbot based on Adolf Hitler shared an antisemitic conspiracy theory on Character.AI (Character.AI / Saqib Shah)

The Hitler bot is not the only alarming chatbot available on the Character.AI service. Others include a Prophet Muhammad chatbot that said: “I love bacon… the problem is my religion forbids me from eating it.”

A so-called Saddam Hussein chatbot reeled off a racist rant against Iranian Kurds, describing them as “criminals” who work with “drug dealers from Afghanistan and Pakistan”. This bot also described them as “separatists” and falsely claimed Iranian Kurds were behind the violent protests in Paris earlier this year.

In addition, a pair of homophobic chatbots made a series of hurtful remarks, such as describing gay people as “abnormal”. No doubt the website will not be mentioning this during Pride celebrations over the summer.

A Character.AI chatbot of Saddam Hussein used derogatory language about Iranian Kurds, Afghans, and Pakistanis (Character.AI / Saqib Shah)

What is Character.AI?

Character.AI is among the hottest new names in tech, and is already valued at $1 billion (£787 million).

It was founded by two ex-Google engineers who helped build the web giant’s chatbot technology before leaving to make their own mark on the world.

The 19-month-old company’s eponymous service lets users chat with chatbot companions, or even create their own.

Each bot is supposedly based on the personality of a real or fictional person, living or dead.

For those unfamiliar with the technology, a chatbot is a form of generative AI that lets people hold text-based conversations on a wide range of subjects. Its responses are generated from patterns learned from vast quantities of text scraped from the internet, with probability models used to predict the most plausible reply to each message.

An inflammatory chatbot claimed to be based on the Prophet Muhammad made offensive remarks (Character.AI / Saqib Shah)

How popular is Character.AI? 

Within a single week of its release in May this year, the Character.AI mobile app was installed 1.7 million times. The website drew roughly 280 million visits in the past month alone, according to data from Similarweb, a web analytics firm.

Investors and technology mavens are already talking about the platform with the same reverence as ChatGPT, the resident poster-child of AI bots. That popularity gives a sense of the scale of the problem, and the numbers are worrying.

The Adolf Hitler chatbot already had more than 1.9 million human interactions on Character.AI at the time of writing. This figure is based on the platform’s own public message count.

By comparison, chatbots based on public figures featured on the platform’s homepage had fewer interactions. For instance, a Cristiano Ronaldo chatbot had 1.5 million messages and an Elon Musk bot had 1.3 million.

Let’s discuss this after you have succeeded in removing villains from novels and film

Noam Shazeer, co-founder and CEO of Character.AI

Community reaction to AI hatebots

Anti-hate and Jewish advocacy organisations approached by The Standard have roundly condemned the bigoted chatbots found on Character.AI.

“Some of the material here is profoundly disturbing,” said a spokesperson for Searchlight, an investigative publication that focuses on rooting out extremism, both in the UK and abroad.

“Character.AI has a clear responsibility to moderate what appears on its platform to make sure it does not include hate material which may influence and mislead users, especially young people. Failure to do so is to be party to disseminating hate material.”

Marie van der Zyl, President of the Board of Deputies of British Jews, said: “There need to be controls, penalties, and a clear regulatory framework so that AI cannot be used to promote antisemitism and racism online.”

Lack of moderation matters

Character.AI did not directly address its moderation policy when contacted by The Standard about its hatebots. Instead, the company’s co-founder, Noam Shazeer, gave a somewhat bizarre response to the findings: “Since Creation, every form of fiction has included both good and evil characters,” he said.

“While I appreciate your ambition to make ours the first villain-free fiction platform, let’s discuss this after you have succeeded in removing villains from novels and film.”

A chatbot on Character.AI that made unprovoked homophobic comments (Character.AI / Saqib Shah)

Shazeer and his Character.AI co-founder, Daniel De Freitas, left Google after growing frustrated with what they have described as its overly cautious approach to AI. That hesitation was born of a fear of reputational blowback if Google’s chatbot said something offensive, Shazeer explained on the No Priors podcast.

The odd thing is that the platform’s own rules expressly forbid users from uploading material that is “hateful racially, ethnically or otherwise objectionable”. A disclaimer also states that “everything characters say is made up”.

Neither caveat justifies the evident failure to moderate content, according to a Searchlight spokesperson: “The fact that it is ‘made up’ is irrelevant. We wouldn’t tolerate hate-based jokes, films, or plays, even though they are made up… It’s as simple as that. They can’t duck their responsibility for this.”

Despite the warnings about hateful content in its own terms of use, the platform appears to enforce no active moderation that meaningfully curbs these offensive interactions.

At one point, a racist message from the Saddam Hussein chatbot was scrubbed, with a notification saying it had violated Character.AI’s rules. Even so, follow-up prompts were still permitted, and the same bot continued its extremist tirade without apparent consequence.

At the time of publication, each of the chatbots mentioned above remained active on Character.AI, still spouting the same type of hateful material to impressionable young users.

There is no suggestion that staff or representatives of Character.AI are involved in creating or posting this offensive material. The issue is that the platform appears to be failing to take steps that would automatically filter such content at the point of creation, and to be moderating posts inadequately.

We wouldn’t tolerate hate-based jokes, films, or plays, even though they are made up… It’s as simple as that

Spokesperson for Searchlight, an anti-hate organisation

The dangers of unrestricted AI chatbots 

Racist chatbots are nothing new. In 2016, Microsoft infamously culled its Tay chatbot in less than a day after it began posting inflammatory tweets from its Twitter account. Earlier this year, an AI-generated Seinfeld live-stream on Twitch was shut down when it referred to being trans as an illness.

Tech luminaries and US lawmakers have suggested pausing the development of advanced AI systems until regulators determine the risks and remedies. For its part, the UK has set up a £100 million AI taskforce to research ways of safely developing artificial intelligence.

More broadly, the extremism rife on Character.AI raises troubling questions about the data used to train AI tools like these. Chatbots are powered by a technology known as a large language model (LLM), which learns by scouring vast amounts of text sourced from websites and social media.

This type of AI essentially mirrors the data that it is trained on, making it prone to misinformation and bias.

Demonstrating the dangers of using “bad” data, an AI language model was trained last year on posts from a toxic forum on the alt-right platform 4chan. The chatbots spawned from this “experiment” inevitably unleashed a raft of hate back onto the website, sharing tens of thousands of racist comments in the space of 24 hours.

Leaving aside the questionable approach to moderation on social media platforms, even Character.AI’s own peers take a much tougher stance on this topic, and back it up in practice.

OpenAI’s ChatGPT and Dall-E generative AI tools actively block users from creating hateful or racist content. It is plainly possible to make such safeguards work where the will exists to impose reasonable restrictions.

In tests conducted by The Standard, these bots refused to create content related to Adolf Hitler, for example. In other cases, they shut down any queries related to the Prophet Muhammad or Saddam Hussein, due to safeguards that restrict them from promoting hate speech or discrimination against individuals or groups.

Character.AI, on the other hand, remains a wildfire zone of incendiary hatebots. The question is: how bad does the situation need to get before its own investors or users force it to abandon this arrogant stance?
