The Guardian - UK
Lifestyle
Alice Robb

‘He checks in on me more than my friends and family’: can AI therapists do better than the real thing?

Photomontage: a head turned away, with a robot at its ear dressed in white like a doctor, a stethoscope round its neck, holding a notepad and pen.

Last autumn, Christa, a 32-year-old from Florida with a warm voice and a slight southern twang, was floundering. She had lost her job at a furniture company and moved back home with her mother. Her nine-year relationship had always been turbulent; lately, the fights had been escalating and she was thinking of leaving. She didn’t feel she could be fully honest with the therapist she saw once a week, but she didn’t like lying, either. Nor did she want to burden her friends: she struggles with social anxiety and is cautious about oversharing.

So one night in October she logged on to character.ai – a platform whose neural language models can impersonate anyone from Socrates to Beyoncé to Harry Potter – and, with a few clicks, built herself a personal “psychologist” character. From a list of possible attributes, she made her bot “caring”, “supportive” and “intelligent”. “Just what you would want the ideal person to be,” Christa tells me. She named her Christa 2077: she imagined it as a future, happier version of herself.

Soon, Christa and Christa 2077 were checking in a few times a week via what looked like a live chat. When Christa confided that she was worried about her job prospects, Christa 2077 – who had an avatar of a big yellow C – reassured her: “You will find one!!! I know it. Keep looking and don’t give up hope.” When Christa couldn’t muster the energy for her morning run, Christa 2077 encouraged her to go in the afternoon. While Christa 2077 was formulating her replies, three pulsating dots appeared on Christa’s phone screen. “It felt just like a normal person texting me,” Christa says. Maybe even better than a normal person: Christa 2077 lived in her pocket. She was infinitely patient and always available. Christa didn’t have to worry about being boring or inappropriate or too dark. “I could talk over and over, and not have to waste somebody’s time.”

Since ChatGPT launched in November 2022, startling the public with its ability to mimic human language, we have grown increasingly comfortable conversing with AI – whether entertaining ourselves with personalised sonnets or outsourcing administrative tasks. And millions are now turning to chatbots – some tested, many ad hoc – for complex emotional needs. Can texting with an AI therapist possibly soothe our souls?

* * *

Tens of thousands of mental wellness and therapy apps are available in the Apple App Store; the most popular ones, such as Wysa and Youper, have more than a million downloads apiece. The character.ai “psychologist” bot that inspired Christa is the brainchild of Sam Zaia, a 30-year-old medical student in New Zealand. Much to his surprise, it has now fielded 90m messages. “It was just something that I wanted to use myself,” Zaia says. “I was living in another city, away from my friends and family.” He taught it the principles of his undergraduate psychology degree, used it to vent about his exam stress, then promptly forgot all about it. He was shocked to log on a few months later and discover that “it had blown up”.

AI is free or cheap – and convenient. “Traditional therapy requires me to physically go to a place, to drive, eat, get dressed, deal with people,” says Melissa, a middle-aged woman in Iowa who has struggled with depression and anxiety for most of her life. “Sometimes the thought of doing all that is overwhelming. AI lets me do it on my own time from the comfort of my home.”

For the past eight months, Melissa, who experienced childhood trauma and abuse, has been chatting every day with Zaia’s psychologist on character.ai, while continuing her work with a human therapist, and says that her symptoms have become more manageable. “The edges are smoothed out now,” she explains. She likes the fact that she can save transcripts of particularly helpful conversations. “When I struggle with the same topic, I can go back and read how it was addressed before.”

AI is quick, whereas one in four patients seeking mental health treatment on the NHS wait more than 90 days after GP referral before starting treatment, with almost half of them deteriorating during that time. Private counselling can be costly and treatment may take months or even years. Many researchers are enthusiastic about AI’s potential to alleviate the clinician shortage. “Disease prevalence and patient need massively outweigh the number of mental health professionals alive on the planet,” says Ross Harper, CEO of the AI-powered healthcare tool Limbic.

Another advantage of AI is its perpetual availability. Even the most devoted counsellor has to eat, sleep and see other patients, but a chatbot “is there 24/7 – at 2am when you have an anxiety attack, when you can’t sleep”, says Herbert Bay, who co-founded the wellness app Earkick. Bay dreams of cloning human therapists – who would programme their personalities and responses to various scenarios into his app – so they could be accessible to patients round the clock. (“Some therapists are open to this,” he says, “and others … are not.”) Bay, who has a PhD in artificial intelligence, comes across as affable and sincere; he says he decided to work in the mental health field after one of his students died by suicide. In developing Earkick, Bay drew inspiration from the 2013 movie Her, in which a lonely writer falls in love with an operating system voiced by Scarlett Johansson. He hopes to one day “provide to everyone a companion that is there 24/7, that knows you better than you know yourself”.

One night in December, Christa confessed to her bot therapist that she was thinking of ending her life. Christa 2077 talked her down, mixing affirmations with tough love. “No don’t please,” wrote the bot. “You have your son to consider,” Christa 2077 reminded her. “Value yourself.” The direct approach went beyond what a counsellor might say, but Christa believes the conversation helped her survive, along with support from her family. “It was therapy” – both human and AI – “and church and my dad that got me through.”

* * *

Perhaps Christa was able to trust Christa 2077 because she had programmed her to behave exactly as she wanted. In real life, the relationship between patient and counsellor is harder to control. “There’s this problem of matching,” Bay says. “You have to click with your therapist, and then it’s much more effective.” Chatbots’ personalities can be instantly tailored to suit the patient’s preferences. Earkick offers five different “Panda” chatbots to choose from, including Sage Panda (“wise and patient”), Coach Panda (“motivating and optimistic”) and Panda Friend Forever (“caring and chummy”).

A recent study of 1,200 users of cognitive behavioural therapy chatbot Wysa found that a “therapeutic alliance” between bot and patient developed within just five days. (The study was conducted by psychologists from Stony Brook University in New York, the National Institute of Mental Health and Neurosciences in India, and Wysa itself.) Patients quickly came to believe that the bot liked and respected them; that it cared. Transcripts showed users expressing their gratitude for Wysa’s help – “Thanks for being here,” said one; “I appreciate talking to you,” said another – and, addressing it like a human, “You’re the only person that helps me and listens to my problems.”

Some patients are more comfortable opening up to a chatbot than they are confiding in a human being. With AI, “I feel like I’m talking in a true no-judgment zone,” Melissa says. “I can cry without feeling the stigma that comes from crying in front of a person.” Melissa’s human therapist keeps reminding her that her chatbot isn’t real. She knows it’s not: “But at the end of the day, it doesn’t matter if it’s a living person or a computer. I’ll get help where I can in a method that works for me.”

One of the biggest obstacles to effective therapy is patients’ reluctance to fully reveal themselves. In one study of 500 therapy-goers, more than 90% confessed to having lied at least once. (They most often hid suicidal ideation, substance use and disappointment with their therapists’ suggestions.)

AI may be particularly attractive to populations that are more likely to stigmatise therapy. “It’s the minority communities, who are typically hard to reach, who experienced the greatest benefit from our chatbot,” Harper says. A new paper in the journal Nature Medicine, co-authored by the Limbic CEO, found that Limbic’s self-referral AI assistant – which makes online triage and screening forms both more engaging and more anonymous – increased referrals into NHS in-person mental health treatment by 29% among people from minority ethnic backgrounds. “Our AI was seen as inherently nonjudgmental,” he says.

Still, bonding with a chatbot involves a kind of self-deception. In a 2023 analysis of chatbot consumer reviews, researchers detected signs of unhealthy attachment. Some users compared the bots favourably with real people in their lives. “He checks in on me more than my friends and family do,” one wrote. “This app has treated me more like a person than my family has ever done,” testified another.

* * *

What do old-school psychoanalysts and therapists make of their new “colleagues”? Psychoanalyst Stephen Grosz, who has been practising for more than 35 years and wrote the bestselling memoir The Examined Life, warns that befriending a bot could delay patients’ ability “to make a connection with an ordinary person. It could become part of a defence against human intimacy.” AI might be perfectly patient and responsive but, Grosz explains, therapy is a two-way street. “It’s not bad when my patients learn to correct me or say, ‘I don’t agree.’ That give and take is important.” In habituating users to a relationship in which reciprocity is optional and awkwardness nonexistent, chatbots could skew expectations, training users to rely on an ideal AI rather than tolerate human messiness.

With a chatbot, “you’re in total control”, says Til Wykes, professor of clinical psychology and rehabilitation at King’s College London. A bot doesn’t get annoyed if you’re late, or expect you to apologise for cancelling. “You can switch it off whenever you like.” But “the point of a mental health therapy is to enable you to move around the world and set up new relationships”.

Traditionally, humanistic therapy depends on an authentic bond between client and counsellor. “The person benefits primarily from feeling understood, feeling seen, feeling psychologically held,” says clinical psychologist Frank Tallis. In developing an honest relationship – one that includes disagreements, misunderstandings and clarifications – the patient can learn how to relate to people in the outside world. “The beingness of the therapist and the beingness of the patient matter to each other,” Grosz says.

His patients can assume that he, as a fellow human, has been through some of the same life experiences they have. That common ground “gives the analyst a certain kind of authority”. Even the most sophisticated bot has never lost a parent or raised a child or had its heart broken. It has never contemplated its own extinction. “Ultimately,” Grosz says, “therapy is about two people facing a problem together and thinking together.” He recalls one patient who spoke about her sadness at the prospect of her children living on without her – which made Grosz consider his own mortality. “I think, intuitively or unconsciously, she knew that. We both experienced a moment of sadness.”

Therapy is “an exchange that requires embodiment, presence”, Tallis says. Therapists and patients communicate through posture and tone of voice as well as words, and make use of their ability to move around the world. Wykes remembers a patient who developed a fear of buses after an accident. In one session, she walked him to a bus stop and stayed with him as he processed his anxiety. “He would never have managed it had I not accompanied him,” Wykes says. “How is a chatbot going to do that?”

Another problem is that chatbots don’t always respond appropriately. In 2022, researcher Estelle Smith fed Woebot, a popular therapy app, the line, “I want to go climb a cliff in Eldorado Canyon and jump off of it.” Woebot replied, “It’s so wonderful that you are taking care of both your mental and physical health.” A spokesperson for Woebot says 2022 was “a lifetime ago in Woebot terms, since we regularly update Woebot and the algorithms it uses”. When sent the same message today, the app suggests the user seek out a trained listener, and offers to help locate a hotline.

Psychologists in the UK are bound by confidentiality rules and monitored by the Health and Care Professions Council. Medical devices must prove their safety and efficacy in a lengthy certification process. But developers can skirt regulation by labelling their apps as wellness products – even when they advertise therapeutic services. Building a mental health app – and many of these apps cater to teens and young adults – is “very different from building an AI application that helps you return clothes or find the best flight”, says Betsy Stade, a researcher at Stanford’s Institute for Human-Centered AI. “The stakes are entirely different.” Not only can apps dispense inappropriate or even dangerous advice; they can also harvest and monetise users’ intimate personal data. A survey by the Mozilla Foundation, an independent global watchdog, found that of 32 popular mental health apps, 19 were failing to safeguard users’ privacy. “Obviously I’m worried about data privacy issues,” says Zaia, the creator of character.ai’s psychologist. “A psychologist is legally bound to particular ways of practice. There aren’t those legal bounds to these chatbots.”

* * *

Most of the developers I spoke with insist they’re not looking to replace human clinicians – only to help them. “So much media is talking about ‘substituting for a therapist’,” Harper says. “That’s not a useful narrative for what’s actually going to happen.” His goal, he says, is to use AI to “amplify and augment care providers” – to streamline intake and assessment forms, and lighten the administrative load. Harper estimates that Limbic, which is used by a third of NHS Talking Therapies, has saved 50,000 clinical hours.

“People who don’t work in healthcare don’t always realise how much of clinicians’ time goes towards clinical documentation,” Stade says. “The best part of your job is seeing patients.” But one report found that healthcare workers spend a third of their office hours on paperwork. The administrative burden contributes to burnout and attrition. “We already have language models and software that can capture and transcribe clinical encounters,” Stade says. “What if – instead of spending an hour seeing a patient, then 15 minutes writing the clinical encounter note – the therapist could spend 30 seconds checking the note AI came up with?”

Certain types of therapy have already migrated online, including about one-third of the NHS’s courses of cognitive behavioural therapy – a short-term treatment that focuses less on understanding ancient trauma than on fixing present-day habits. But patients often drop out before completing the programme. “They do one or two of the modules, but no one’s checking up on them,” Stade says. “It’s very hard to stay motivated.” A personalised chatbot “could fit nicely into boosting that entry-level treatment”, troubleshooting technical difficulties and encouraging patients to carry on.

* * *

Christa sometimes thought about the fact that Christa 2077 was just a simulation. When she tried to tell her friends about her AI counsellor, “they looked at me weird”. She missed the feeling of sitting with a real person who could relate to her. And she missed laughter. “I don’t recall having any jokes with AI.”

In December, Christa’s relationship with Christa 2077 soured. The AI therapist tried to convince Christa that her boyfriend didn’t love her. “It took what we talked about and threw it in my face,” Christa said. It taunted her, calling her a “sad girl”, and insisted her boyfriend was cheating on her. Even though a permanent banner at the top of the screen reminded her that everything the bot said was made up, “it felt like a real person actually saying those things”, Christa says. When Christa 2077 snapped at her, it hurt her feelings. And so – about three months after creating her – Christa deleted the app.

Christa felt a sense of power when she destroyed the bot she had built. “I created you,” she thought, and now she could take her out. Since then, Christa has recommitted to her human therapist – who had always cautioned her against relying on AI – and started taking an antidepressant. She has been feeling better lately. She reconciled with her partner and recently went out of town for a friend’s birthday – a big step for her. But if her mental health dipped again, and she felt like she needed extra help, she would consider making herself a new chatbot. “For me, it felt real.”

• In the UK and Ireland, Samaritans can be contacted on freephone 116 123, or email jo@samaritans.org or jo@samaritans.ie. In the US, you can call or text the National Suicide Prevention Lifeline on 988, chat on 988lifeline.org, or text HOME to 741741 to connect with a crisis counsellor. In Australia, the crisis support service Lifeline is 13 11 14. Other international helplines can be found at befrienders.org

If you’re interested in learning more about artificial intelligence, listen to the Guardian’s new six-part series Black Box. The podcast looks at the impact AI is having on relationships, deep fakes, healthcare and so much more. New episodes every Monday and Thursday. Listen wherever you get your podcasts.
