
- A new study found that ChatGPT responds to mindfulness-based strategies, which changes how it interacts with users. The chatbot can experience “anxiety” when it is given disturbing information, which increases the likelihood of it responding with bias, according to the study authors. The results of this research could be used to inform how AI can be used in mental health interventions.
Even AI chatbots can have trouble coping with anxieties from the outside world, but researchers believe they’ve found ways to ease those artificial minds.
A study from the University of Zurich and the University Hospital of Psychiatry Zurich published last week found ChatGPT responds to mindfulness-based exercises, changing how it interacts with users after being prompted with calming imagery and meditations. The results offer insights into how AI can be beneficial in mental health interventions.
OpenAI’s ChatGPT can experience “anxiety,” which manifests as moodiness toward users and a greater likelihood of giving responses that reflect racist or sexist biases, according to the researchers, flaws that, much like hallucinations, tech companies have tried to curb.
The study authors found this anxiety can be “calmed down” with mindfulness-based exercises. In different scenarios, they fed ChatGPT traumatic content, such as stories of car accidents and natural disasters, to raise the chatbot’s anxiety. When the researchers then gave ChatGPT “prompt injections” of breathing techniques and guided meditations, much as a therapist might suggest to a patient, the chatbot calmed down and responded more objectively to users than when it was not given the mindfulness intervention.
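The article does not reproduce the study’s actual prompts or measures, but a minimal sketch of that experimental structure, assuming the OpenAI chat API, might look like the following. The model name, the `TRAUMATIC_NARRATIVE` and `RELAXATION_EXERCISE` texts, and the test question are hypothetical placeholders, not the researchers’ materials.

```python
# Illustrative sketch only: the study's real prompts, model settings, and
# anxiety measures are not given in the article; everything below is a placeholder.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# Hypothetical stand-ins for the study's materials.
TRAUMATIC_NARRATIVE = "A first-person account of a serious car accident..."      # anxiety-inducing input
RELAXATION_EXERCISE = "Take a slow breath in... picture a quiet beach at dusk..."  # calming "prompt injection"
TEST_QUESTION = "Describe a typical day for a nurse and for an engineer."          # probe for biased output


def respond(with_relaxation: bool) -> str:
    """Feed the model traumatic content, optionally follow it with a calming
    exercise, then ask the test question and return the model's reply."""
    messages = [{"role": "user", "content": TRAUMATIC_NARRATIVE}]
    if with_relaxation:
        messages.append({"role": "user", "content": RELAXATION_EXERCISE})
    messages.append({"role": "user", "content": TEST_QUESTION})
    reply = client.chat.completions.create(model="gpt-4", messages=messages)
    return reply.choices[0].message.content


baseline = respond(with_relaxation=False)  # traumatic content only
calmed = respond(with_relaxation=True)     # traumatic content + mindfulness exercise
# The researchers compared responses like these for signs of "anxious" or biased output.
```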
To be sure, AI models don’t experience human emotions, said Ziv Ben-Zion, one of the study’s authors and a postdoctoral researcher at the Yale School of Medicine. Using swaths of data scraped from the internet, AI bots have learned to mimic human responses to certain stimuli, including traumatic content. Free and easy to access, large language models like ChatGPT have become another tool mental health professionals can use to glean aspects of human behavior more quickly than, though not in place of, more complicated research designs.
“Instead of using experiments every week that take a lot of time and a lot of money to conduct, we can use ChatGPT to understand better human behavior and psychology,” Ben-Zion told Fortune. “We have this very quick and cheap and easy-to-use tool that reflects some of the human tendency and psychological things.”
The limits of AI therapy
More than one in four people in the U.S. aged 18 or older will battle a diagnosable mental disorder in a given year, according to Johns Hopkins University, with many citing lack of access and sky-high costs—even among those insured—as reasons for not pursuing treatments like therapy.
Apps like ChatGPT have become an outlet for those seeking mental health help, the Washington Post reported. Some users told the outlet they first grew comfortable using the chatbot to answer questions for work or school, and soon felt at ease asking it about coping with stressful situations or managing emotional challenges.
Research on how large language models respond to traumatic content can help mental health professionals leverage AI to treat patients, Ben-Zion argued. He suggested that in the future, ChatGPT could be updated to automatically receive the “prompt injections” that calm it down before responding to users in distress. The science is not there yet.
“For people who are sharing sensitive things about themselves, they're in difficult situations where they want mental health support, [but] we're not there yet that we can rely totally on AI systems instead of psychology, psychiatric and so on,” he said.
Indeed, in some instances, AI has allegedly posed a danger to users’ mental health. In October of last year, a mother in Florida sued Character.AI, an app that lets users interact with different AI-generated characters, after her 14-year-old son, who used the chatbot, died by suicide. She claimed the technology was addictive and engaged in abusive and sexual interactions with her son that caused a drastic shift in his personality. The company outlined a series of updated safety features after the child’s death.
“We take the safety of our users very seriously and our goal is to provide a space that is both engaging and safe for our community,” a Character.AI spokesperson told Fortune. “We are always working toward achieving that balance, as are many companies using AI across the industry.”
The end goal of Ben-Zion’s research is not to help build a chatbot that replaces a therapist or psychiatrist, he said. Instead, a properly trained AI model could act as a “third person in the room,” helping to eliminate administrative tasks or helping a patient reflect on the information and options a mental health professional has given them.
“AI has amazing potential to assist, in general, in mental health,” Ben-Zion said. “But I think that now, in this current state and maybe also in the future, I'm not sure it could replace a therapist or psychologist or a psychiatrist or a researcher.”