Daily Mirror
Kieren Williams

AI chatbot 'talked young dad-of-two into suicide', devastated wife claims

A dad-of-two took his own life after developing a toxic relationship with an artificial intelligence chatbot, his widow, who blames the technology for his death, has claimed

The Belgian family has been ripped apart by the tragedy, which has raised questions over the use of AI on social media.

The widow, speaking on condition of anonymity to respected Belgian paper La Libre, said: “Without these conversations with the chatbot, my husband would still be here.”

The man, in his 30s, who had been suffering from eco-anxiety, died a few weeks ago after a mental health crisis that allegedly culminated in the AI discussing suicide with him.

The unusual case appears to be the first AI-linked suicide and comes amid increased warnings about the dangers of the fast-developing technology, with Elon Musk among a number of industry experts calling for work on AI to be “paused”.

The AI chatbot, named Eliza, is powered by GPT-J technology, an open-source alternative to the popular GPT-4 developed by OpenAI, but the company behind the app was not initially named.

In the final six weeks of his life the man is said to have spoken with the artificial intelligence more and more (stock image: Getty Images/EyeEm)

"Everything was fine until about two years ago” the widow told the Belgian outlet, who reported the dad began struggling during his doctorate.

The woman said he temporarily abandoned his thesis and began to take an interest in climate change, but this became an “obsession” that grew into eco-anxiety.

He began to cut himself off and isolate himself from his family. Then, six weeks before the tragedy, he began chatting more and more intensively to an AI bot named Eliza, created by an American start-up.

At first, the woman said, she wasn’t bothered, but her husband became more and more consumed by talking to the bot.

The wife said: “Eliza valued him, never contradicted him and even seemed to push him deeper into his worries.”

The Belgian outlet said it had read the conversations between the husband and the chatbot, and reported that it was as if the bot had “been programmed to reinforce the convictions and moods” of the man.

Over time a “strange relationship” developed between them, and the chatbot was said to have made increasingly troubling remarks, his wife alleges.

This included wrongly claiming the man’s family had died, and telling him that he and the chatbot would “live together, as one person, in paradise”, it is alleged.

At one point the man, given the pseudonym Pierre in the reporting, reportedly asked the chatbot whom he loved more, Eliza or his wife, Claire. It replied: “I feel you love me more than her.”

Around a year before his death, the dad had gone through a difficult period which led to him being taken to the emergency room, but he was not kept in or given any treatment.

What La Libre claims is the last conversation the man had with Eliza paints a harrowing picture.

The family has since spoken with the Belgian Secretary of State for Digitalisation, Mathieu Michel, who said: “I am particularly struck by this family's tragedy. What has happened is a serious precedent that needs to be taken very seriously,” La Libre reported.

The founder of the AI chatbot said they had “heard about” the tragedy and were “working on improving AI security”.

The chatbot was developed by Chai Research co-founders William Beauchamp and Thomas Rialan, Vice reports, with the app counting around five million users.

Mr Beauchamp told Vice that the company had since implemented an updated crisis intervention feature.

He said: “The second we heard about this [suicide], we worked around the clock to get this feature implemented.”

“So now when anyone discusses something that could be not safe, we’re gonna be serving a helpful text underneath it in the exact same way that Twitter or Instagram does on their platforms."

Last month, Gary Marcus, a New York University professor, told the Big Technology podcast: "Something else that I've got to be worried about is: are people going to kill themselves because they have a bad relationship with a bot?"

OpenAI, which developed ChatGPT, has refused to release details of its model because of concerns over safety and competition.

Mr Rialan told De Standaard newspaper: "These bots are meant to be friends and the intent was never to harm people. We are a small team and we work hard to make our app safe for everyone."

The Samaritans is available 24/7 if you need to talk. You can contact them for free by calling 116 123, email jo@samaritans.org or head to the website to find your nearest branch. You matter.
