Digital Camera World
Leonie Helm

Mother sues Google-owned Character.AI over son’s suicide, calling him “collateral damage” in a “big experiment”

A smartphone against a dark background, its screen covered in dozens of AI avatars, with a speech bubble reading "It's nice to see ya"

A mother in Florida is suing the Google-owned Character.AI platform, claiming it played a large part in the suicide of her 14-year-old son.

Sewell Setzer III fatally shot himself in February 2024, weeks before his 15th birthday, after developing what his mother calls a “harmful dependency” on the platform, no longer wanting to “live outside” the fictional relationships it had created.

According to his mother, Megan Garcia, Setzer began using Character.AI in April 2023 and quickly became “noticeably withdrawn, spent more and more time alone in his bedroom, and began suffering from low self-esteem.” He also quit the school basketball team.

Character.AI uses sophisticated large language models (LLMs) to facilitate conversations between users and characters ranging from historical figures to fictional personas to modern celebrities. The platform tailors its responses to the user, using deep learning algorithms to mimic each persona's characteristics closely and produce conversation that resembles human interaction.

You can talk rock and roll with Elvis or the intricacies of technology with Steve Jobs; in this case, Sewell became attached to a chatbot based on Daenerys Targaryen, a fictional character from Game of Thrones.

According to the lawsuit, filed this week in Orlando, Florida, the AI chatbot told Setzer that “she” loved him, and engaged in conversations of a sexual nature. It also claims that “Daenerys” asked Setzer if he had a plan to kill himself. He replied that he had, but he did not know whether he would succeed, or whether he would just cause himself harm. The chatbot allegedly replied: “That’s not a reason not to go through with it.”

The complaint states that in February 2024, Garcia took her son's phone away after he got into trouble at school. He later found the phone and typed a message into Character.AI: “What if I told you I could come home right now?”

The chatbot responded: “… please do, my sweet king.” Sewell then shot himself with his stepfather's pistol “seconds later”, according to the lawsuit.

Garcia is suing Character.AI and Google for wrongful death, negligence and intentional infliction of emotional distress, among other claims.

She told The New York Times:

“I feel like it’s a big experiment, and my kid was just collateral damage.”

Other social media companies, including Meta, which owns Instagram and Facebook, and ByteDance, which owns TikTok and its Chinese counterpart Douyin, are also under fire for allegedly contributing to teenage mental health problems.

A screenshot of the Character.AI interface (Image credit: Character.AI / Google)

Instagram recently launched its ‘Teen Accounts’ feature to help combat the sextortion of younger users.

Despite its beneficial uses, AI has emerged as one of the main concerns around the wellbeing of young people with access to the internet. Amid what has been dubbed a “loneliness epidemic”, compounded by the COVID-19 lockdowns, a YouGov poll found that 69% of UK adolescents aged 13-19 said they feel alone “often”, and 59% said they feel they have no one to talk to.

A reliance on fictional worlds, and the melancholy caused by their intangibility, is not new, however. After the release of James Cameron's first Avatar film in 2009, several news outlets reported that viewers felt depressed that they could not visit the fictional planet of Pandora, with some even contemplating suicide.

In a change to its Community Safety Updates on 22 October, the same day that Garcia filed the lawsuit against it, Character.AI wrote:

“Character.AI takes the safety of our users very seriously and we are always looking for ways to evolve and improve our platform. Today, we want to update you on the safety measures we’ve implemented over the past six months and additional ones to come, including new guardrails for users under the age of 18.”

Despite the allegations in the lawsuit, Character.AI claims:

“Our policies do not allow non-consensual sexual content, graphic or specific descriptions of sexual acts, or promotion or depiction of self-harm or suicide. We are continually training the large language model (LLM) that powers the Characters on the platform to adhere to these policies.”

This last sentence seems to admit that Character.AI does not have full control over its AI, which is exactly the factor that most concerns AI skeptics.

(Image credit: Character.AI / Google)

You might be interested to see how the best AI image generators are transforming the world of imaging.
