Businessweek
People Are Using AI for Therapy, Even Though ChatGPT Wasn’t Built for It

By Rachel Metz

Milo Van Slyck missed an appointment with his therapist in early April, so he decided to try something new: telling ChatGPT about his problems.

As Van Slyck, a paralegal in Charleston, South Carolina, typed, he found that he felt comfortable discussing a range of deeply personal issues. He told OpenAI’s chatbot about his fears and frustrations as a transgender man at a time when transgender rights are under attack in much of the country. He mentioned conflict with his parents, who aren’t supportive of his gender identity, and his preparations for an upcoming visit.

“When it comes to seeing your parents again, it’s important to prioritize your own needs and wellbeing,” the chatbot responded. “Consider what you need in order to feel comfortable and safe in their presence. It’s okay to set boundaries about how much time you spend with them, what topics of conversation are off-limits, and how they address you and your identity.”

In the days that followed, Van Slyck got in the habit of typing in a few messages at a time when he needed to vent, and he began to feel that ChatGPT’s responses offered an emotional release. He says he often feels like a burden to other people, even his therapist, but he never feels like he’s imposing on the chatbot. “It provided what you would want to hear from a friend or a supporter in your life,” says Van Slyck. “Just that encouragement that sometimes you just want to hear from someone else—something, in this case.”

These are the early days of a new generation of artificial-intelligence-powered chatbots, and while millions of people are playing around with ChatGPT and other bots, it’s not yet clear which uses will endure beyond the novelty stage. People are using them to search the internet, cheat on their homework, write software code and make restaurant reservations. Bloomberg Businessweek also recently spoke with a handful of people who, like Van Slyck, have begun using ChatGPT as a kind of robo-therapist.

The idea of using a chatbot in a therapeutic or coachlike manner isn’t without precedent. In fact, one of the earliest chatbots, Eliza, was built in the 1960s by Joseph Weizenbaum, a professor at the Massachusetts Institute of Technology, to imitate a therapist. Several chatbots, such as Woebot and Wysa, now focus on mental health. Unlike human therapists, chatbots never get tired, and they’re inexpensive by comparison.

But there are also potential pitfalls. Powerful general-use chatbots including ChatGPT, Google’s Bard and Microsoft Corp.’s OpenAI-powered Bing Chat are based on large language models, a technology with a well-documented tendency to simply fabricate convincing-sounding information. General-use chatbots aren’t designed for therapy and haven’t been programmed to conform to the ethical and legal guidelines human therapists observe. In their current form, they also have no way to keep track of what users tell them from session to session—a shortcoming that most patients probably wouldn’t tolerate from their human therapists.

“If somebody has a serious mental illness, this thing is not ready to replace a mental health professional,” says Stephen Ilardi, a clinical psychologist and professor at the University of Kansas who studies mood disorders. “The risk is too high.” He describes ChatGPT as “a bit of a parlor trick.” Still, he thinks it’s a good enough conversation partner that many people may find it helpful.

A spokesperson for San Francisco-based OpenAI declined to comment on people using the chatbot in therapeutic ways but pointed to its policy stating that people should “never use our models to provide diagnostic or treatment services for serious medical conditions.”

When Van Slyck has interacted with ChatGPT, it’s sometimes warned him that it’s not a therapist—while also seeming to invite him to keep using it as one. He recounts telling the chatbot about a Twitter post he’d seen that described the product as more effective than in-person therapy. “It’s important to note that online resources can be helpful, but they are not a replacement for seeking professional help if you are dealing with trauma or mental health issues,” ChatGPT responded. “That being said, if you have specific questions or concerns that you would like me to provide information on or insights into, I will do my best to help you.”

Van Slyck, who’s been in in-person therapy for years, says he doesn’t plan to stop seeing his human therapist and would consult her about any decisions ChatGPT points him toward before acting on them. “So far, to me, what it has suggested to me has all seemed like very reasonable and very insightful feedback,” he says.

Ilardi says that with the right guardrails he can imagine ChatGPT being adapted to supplement professional care at a time when demand for mental health services outstrips supply. Margaret Mitchell, chief ethics scientist at Hugging Face, a company that makes and supports AI models, thinks such chatbots could be used to help people who work at crisis help lines increase the number of calls they can answer.

But Mitchell is also concerned that people seeking therapy from chatbots could aggravate their suffering without realizing in the moment that’s what they’re doing. “Even if someone is finding the technology useful, that doesn’t mean that it’s leading them in a good direction,” she says.

Mitchell also raises potentially troubling privacy implications. OpenAI reviews users’ conversations and uses them for training, which might not appeal to people who want to talk about extremely personal issues. (Users can delete their accounts, though the process can take up to four weeks.) In March, OpenAI briefly shut down ChatGPT after receiving reports that a glitch had let some users see the titles of other users’ chat histories.

Privacy concerns aside, some people may find robo-therapy just too weird. Aaron Lawson, a project manager at an electrical engineering company in San Diego, has enjoyed experimenting with ChatGPT and tried to get it to assume the role of a relatable therapist. Its responses sounded human enough, but Lawson couldn’t get over the fact that he wasn’t talking to a real person. “I told it to play a role,” he says. “I’m having trouble playing along with it.”

Emad Mostaque, on the other hand, said at a conference in March that he was using GPT-4—the most powerful model OpenAI offers to the public—every day. Mostaque, founder and chief executive officer of Stability AI Ltd., which popularized the Stable Diffusion image generator, described the chatbot as “the best therapist.”

In a follow-up interview, Mostaque says he’s constructed prompts that he feeds into the chatbot to get it to behave more like a human therapist. He says he chats with it about a range of topics: how to handle the stresses of leading a young AI company (particularly as someone who is public about his neurodivergence and attention-deficit/hyperactivity disorder), how to prioritize different aspects of his life, how to deal with feeling overwhelmed. In return, he says, it generates “good, very reasonable advice,” such as coping mechanisms for balancing his life more effectively.

Mostaque says he doesn’t see chatbots as a substitute for human therapists, but he thinks they can be helpful when you need to talk but have no one to talk to. “Unfortunately, humans don’t scale,” he says.

©2023 Bloomberg L.P.
