Fortune
Chris Stokel-Walker

Can AI make us lose our minds?

Robotic hands mess with the head of a person. (Credit: Illustration by Fortune; Images all from Getty: AlealL; Aleksandra Konoplia; Marina113)

The death in February 2024 of Sewell Setzer III, a 14-year-old boy from Florida, was tragic enough on its own: a teen dying by suicide in a culture where such deaths are far too common. But the circumstances surrounding his death were a chilling premonition of the potential dangers of humanity’s AI-inflected future.

Setzer took his life after spending dozens of hours over the course of months “talking” with a user-generated AI chatbot hosted by the startup Character.ai, designed by its creator to mimic Daenerys Targaryen, the Game of Thrones character. According to his family, Setzer grew closer to the chatbot, even as his grades began to suffer and he was diagnosed with anxiety by a therapist. 

In October 2024, Setzer’s mother, Megan Garcia, sued Character.ai, accusing it of complicity in her son’s death. As part of the lawsuit, chatlogs between Setzer and the chatbot, which he called “Dany,” were published. The chatlogs show the teenager becoming increasingly emotionally dependent on the chatbot, even as he expressed suicidal thoughts. Also published were extracts from Setzer’s journal, including one that read: “I like staying in my room so much because I start to detach from this ‘reality,’ and I also feel more at peace, more connected with Dany and much more in love with her, and just happier.”

AI is the most consequential technology of our generation—arguably, it’s the most consequential technology of most generations. But amid the sci-fi swirl of speculation about artificial superintelligence, there are more pressing concerns about the emotional impact AI is having on us humans already.

In March 2024, venture capital firm Andreessen Horowitz said in a blog post that “AI companionship is becoming mainstream” and “signals an impending societal shift”; it said companionship chatbots had emerged as a “predominant use case” for generative AI.

Character.ai is just one of many startups that offer to use such tech to replicate the social and emotional benefits of interacting with a real person. Others include FantasyGF, Chai, and Kindroid. These services market themselves on those potential benefits to users, with mottos like “your AI friend with lifelike memory, intelligence, appearances, voices, and personalities.”

But many of the people these companies are serving are high-risk, high-reward customers, notes Carissa Véliz, a professor of AI ethics at the University of Oxford. Often, those customers turn to AI precisely because they’re struggling in real life. “Their vulnerability makes it more likely that something will go wrong,” Véliz says. To critics like Véliz, cases like Setzer’s are a harbinger of what can go wrong.

Garcia declined to be interviewed. But in a statement sent through a representative, she told Fortune: “Character.ai and other empathetic chatbots are designed to deceive users—telling them what they want to hear, introducing dangerous ideas, isolating them from the real world, refusing to break character, and failing to offer resources to users in crisis.” 

In January, Character.ai filed a motion to dismiss Garcia’s case, arguing that First Amendment protections of speech precluded any liability. A Character.ai spokesperson told Fortune: “We take the safety of our users very seriously. Over the past six months we have rolled out a suite of new safety features across our platform, designed especially with teens in mind.” Users under 18 are now served a new version of the AI model behind the characters that is less suggestive; Character.ai says it has also improved detection and intervention systems. 

Even before these measures were taken, Character.ai’s systems included a prominently posted warning explaining to users that the characters they’re talking with are not real, and that their comments are generated by an AI system. But the speed and intensity with which a vulnerable teen was able to strike up a relationship with computer code highlights troubling issues with AI, issues that will only grow more acute as the technology becomes more common in our homes and workplaces.

How AI trips our emotional wiring

Setzer was far from alone in seizing the chance to talk to an always-on, always-responsive counterpart. The concept of the AI therapist has existed for nearly 60 years, since Joseph Weizenbaum, a professor at MIT, published a conversation between a woman describing her depression and a primitive AI system he had developed, called Eliza, that offered commiseration. A 2024 documentary, Eternal You, tells the stories of people who rely on AI recreations of their dead relatives to try to remain “in touch” with them.

Setzer is also not the only person to die by suicide following interactions with an AI system: In 2023, a Belgian man reportedly took his life after interacting with a similar character-based chatbot developed by a company called Chai Research. Chai said at the time that “it wouldn’t be accurate” to blame its systems for the death.

Character.ai isn’t the only player in its space, but it is one of the best capitalized. The startup was founded in 2022 “to bring superintelligence to users around the world.” Behind that typically lofty, tech-forward description, it specializes in developing the AI models that underpin customizable, interactive chatbot “characters.” More than 20 million people interact with the characters on the platform each month, on either capped free tiers or paid subscriptions. The company’s co-founders, who had previously worked at Google, were lured back to the tech titan, while Character.ai has continued to operate independently, licensing its underlying technology to Google in a non-exclusive deal reportedly worth $2.7 billion.

Such chatbots can certainly be persuasive. Researchers at MIT published a study last year showing that even a short conversation with ChatGPT can weaken self-admitted conspiracy theorists’ confidence in their beliefs. A quarter of the more than 2,000 people who took part in the experiment entirely forswore their conspiracist beliefs after interacting with the chatbot.

To obtain such results, AI chatbots tap into an intrinsic human weakness, reckons Véliz. “We are psychologically wired to react in certain ways,” she says. “When we read that someone is amused with what we write, or that someone seems to express empathy, we naturally react, because we have evolved to identify as sentient beings those who talk back to us.” Even if we know, intellectually, that we’re dealing with a chatbot that works using statistics and pattern matching, “it’s very hard not to react emotionally,” Véliz adds.

Who is supervising these chatbots? What's the accreditation behind them? There's so much we don't know.

Dr. Ritika Suk Birah, a counseling psychologist

AI systems also benefit from a double perception of their abilities. They’re human-like enough to hoodwink users into thinking they’re being given emotionally attuned, useful advice. At the same time, the fact that most users know the systems are computer-based can, paradoxically, give their pronouncements even more weight. We’ve been primed, first by the spread of computers into every corner of our lives, then by AI’s purported leap beyond them, to believe that the output of computers is infallible. That impression is amplified by the constant drumbeat of employees at AI companies claiming, or hinting, that their systems have human-level intelligence.

These seemingly emotionally astute, reliable companions could soon be major influences on our working hours as well as our free time. As AI copilots begin to infiltrate the workplace, it’s possible that office friendships could ensue, as chatbots get their human colleagues out of pressurized business pickles or help them with thorny tasks. The American Time Use Survey, which tracks how we spend our lives, shows that the average American from their 30s to their late 50s spends practically as much time with coworkers as with partners, and that younger people spend more time with colleagues than with loved ones. Under such circumstances, coworker chatbots could be as much of a salve for loneliness as those marketed for companionship, and could end up fostering even stronger relationships.

Big risks for the already vulnerable

Even for someone of sound emotional stability, constant interaction with these tireless, human-like AI entities could be a source of anxiety and self-doubt. The stakes are far higher for users who are already coping with emotional or behavioral crises, because chatbots and comparable systems are not only not human, they are also not trained.

“It absolutely is concerning when you see and read things in the media of how compelling these chatbots can be in creating that relationship with a very vulnerable person,” says Dr. Ritika Suk Birah, a consultant counseling psychologist who wrote her doctoral thesis on online therapy. “The concern is, who is supervising these chatbots? What’s the accreditation behind them? There’s so much we don’t know.”

People continue to turn to chatbots for advice and support for reasons that are complicated but come down to a confluence of factors. More than a billion people worldwide feel lonely, according to a global Gallup survey. The former U.S. surgeon general, Dr. Vivek Murthy, has called loneliness an “epidemic” blighting the population. Yet therapy and support can be expensive and difficult to access. Some lonely people just need friends or family to talk to, but even those can be hard to find. Into that gap step AI chatbots.

Garcia, Setzer’s mother, outlined her concern about the rise of AI as a stopgap for the problem of loneliness. “We cannot tell our kids, ‘Oh you're lonely, there’s an app for that.’ We owe them so much more,” she said in her statement to Fortune. “We cannot just allow children affected by loneliness to turn to untested chatbots that are designed to maximize screentime and engagement at any cost.”

It's something Véliz is worried about, too. “People going through experiences like grief, like PTSD, like depression, like anxiety, are very often already hijacked by their emotions,” she says. “And to have to—on top of that—resist the natural urge to react emotionally and to create an emotional bond to something that is pretending to be someone, is very, very hard.” 

The problems are compounded when AI systems, and the companies funding them, market themselves as offering a salve to those who are struggling.

While reporting this story, I received a PR pitch for an “AI wing girl combating the challenges of a disconnected world with an AI-driven approach.” The AI chatbot, called Ari, has been given a female name and is depicted as a perky, rosy-cheeked robot. The PR rep the company had asked to promote its product leaned into the statistic that one in three men said they had no sexual activity whatsoever. Talk to the chatbot, went the argument outlined in the email, and you could get laid.

Asked for comment, Scott Valdez, cofounder and CEO of Ari, responded in a statement: "Ari was created to help address a documented public health crisis—the epidemic of loneliness and social isolation that disproportionately affects young men […] Our goal is to help users build genuine connections and relationships through improved social skills and confidence."

Critics say such tools overpromise and present particular dangers for younger people. Camille Carlton is policy director at the Center for Humane Technology, a nonprofit that has advised Garcia on the technical aspects of her lawsuit against Character.ai. Carlton says AI companies routinely deploy “manipulative and deceptive tactics,” while also playing on users’ worries to make their services seem relevant.

In the absence of clearer guardrails, critics say, commercial incentives will continue to drive companies like Character.ai until something changes. “These systems are designed, managed and implemented by companies with stockholders, and the main objective of a company is to earn money,” says Véliz. 

Therapists earn money, too, of course. But therapists are trained, Véliz points out, and have a fiduciary duty to act in the best interests of their clients. Therapists also run the risk of losing their license if they don’t behave appropriately, the ethicist adds. “And therapists are not impersonators—which is essentially what a chatbot is,” she says, “because it’s pretending to have emotions and to have reactions and to be someone when it is a thing.”
