Fortune
Sandra Matz

AI could revolutionize the future of mental health care, but it comes with big risks

(Credit: Getty Images)

Sandra Matz is a computational social scientist and professor at Columbia Business School, where she also serves as the Director of the Center for Advanced Technology and Human Performance. She is the author of Mindmasters: The Data-Driven Science of Predicting and Changing Human Behavior.

More than 1 in 5 adults in the US experiences mental health problems, yet less than half of those in need receive professional treatment. Worldwide, the gap between supply and demand is so large that for every trained mental health professional there are over 10,000 potential clients, and it keeps widening year over year. The National Center for Health Workforce Analysis estimates a need for 60,000 additional mental health professionals by 2036, yet predicts that there will instead be over 10,000 fewer.

To reverse this trend, we need a revolution: a complete overhaul of the way we approach mental health. AI could power that revolution. By democratizing access to care and abandoning one-size-fits-all approaches that merely focus on “fixing” acute mental illness, AI can help us shift toward more equitable and personalized care that aims to proactively boost psychological well-being.

Thanks to a rapidly evolving technology landscape and recent breakthroughs in Generative AI, this vision isn’t just a pipe dream. In 2023, the global AI mental health market was valued at over $921 million, and it is projected to grow at a compound annual growth rate of 30.8% through 2032.

The power of AI

What makes AI such a powerful mental health care companion is its ability to both monitor and support our mental health. Take your smartphone or smartwatch, for example. They are not just convenient gadgets but powerful wellness detectives. Equipped with an armada of sensors, they are constantly on the lookout for clues to your mental well-being. Has your sleep been all over the place lately? Have you spent more time at home than usual? Or have you stopped talking to your friends and are instead doomscrolling at 2 a.m. (again!)? All these little details could indicate that you are struggling and need support. Instead of leaving such insights for companies to exploit for profit, we could use AI to build early warning systems that inform you—and perhaps your doctors and other trusted contacts you have designated—about abnormalities in your behavior: deviations from what is normal for you.
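
To make this concrete, here is a minimal sketch of how such a personal early-warning system might work, assuming a handful of simplified daily features and an illustrative z-score threshold. None of the data, feature names, or thresholds come from the article; a real system would draw on far richer sensor streams.

```python
import numpy as np

# Two weeks of hypothetical daily readings from a phone/watch.
# All values below are made up for demonstration.
baseline = {
    "sleep_hours":   [7.2, 6.8, 7.5, 7.0, 6.9, 7.3, 7.1,
                      6.7, 7.4, 7.0, 6.8, 7.2, 7.1, 6.9],
    "hours_at_home": [14, 15, 13, 14, 16, 15, 14,
                      13, 15, 14, 16, 14, 15, 13],
    "messages_sent": [25, 30, 22, 28, 26, 31, 24,
                      27, 29, 23, 26, 30, 25, 28],
}

today = {"sleep_hours": 4.1, "hours_at_home": 22, "messages_sent": 3}

def flag_deviations(baseline, today, z_threshold=2.0):
    """Flag features that deviate from this person's own recent
    baseline -- 'abnormal for you', not abnormal in general."""
    alerts = []
    for feature, history in baseline.items():
        mean, std = np.mean(history), np.std(history)
        if std == 0:
            continue  # no variability to measure against
        z = (today[feature] - mean) / std
        if abs(z) > z_threshold:
            alerts.append((feature, round(float(z), 1)))
    return alerts

# All three features deviate sharply today, so all three are flagged.
print(flag_deviations(baseline, today))
```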

But AI can do much more than merely monitor our mental health. It can help us improve it. At the most basic level, predictive algorithms can identify the most effective behavior change interventions or treatments. It’s the medical equivalent of Netflix’s movie recommendation engine. Just as Netflix suggests movies based on what you have liked in the past and what other people with similar preferences have enjoyed, these algorithms can use whatever information they have on you—your psychological dispositions, your socioeconomic environment, previous treatment success, and more—to map you against other people struggling with the same mental health condition and match you with the intervention that is most likely to succeed.
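
As a rough illustration of that matching logic, the sketch below uses simple nearest-neighbor matching over made-up client profiles. The features, interventions, and outcome scores are all hypothetical assumptions for demonstration, not the author’s method or any real clinical model.

```python
import numpy as np

# Each row is a past client:
# [extraversion, neuroticism, symptom_severity, social_support]
past_clients = np.array([
    [0.8, 0.3, 0.5, 0.9],
    [0.2, 0.9, 0.8, 0.3],
    [0.3, 0.8, 0.7, 0.4],
    [0.7, 0.4, 0.6, 0.8],
    [0.1, 0.9, 0.9, 0.2],
])
interventions = ["group_therapy", "cbt_app", "cbt_app",
                 "group_therapy", "one_on_one"]
outcomes = np.array([0.90, 0.70, 0.80, 0.85, 0.60])  # observed improvement

def recommend(new_client, k=3):
    """Recommend the intervention with the best average outcome
    among the k past clients most similar to the new one."""
    distances = np.linalg.norm(past_clients - new_client, axis=1)
    neighbors = np.argsort(distances)[:k]
    by_intervention = {}
    for i in neighbors:
        by_intervention.setdefault(interventions[i], []).append(outcomes[i])
    return max(by_intervention, key=lambda t: np.mean(by_intervention[t]))

# A new client whose profile resembles those who did well with the CBT app.
print(recommend(np.array([0.25, 0.85, 0.75, 0.35])))  # -> cbt_app
```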

And then there’s Generative AI, of course, which trades the nodding therapist for a bright smartphone screen and swaps out the couch for a place of your choosing. Just five years ago, even the most popular mental health applications seemed rather formulaic and were highly limited in the extent to which they could engage in free-flowing, natural conversations. The introduction of OpenAI’s ChatGPT in late 2022 changed this practically overnight. Today, an AI therapist built on top of a foundational Large Language Model can engage in science-based, empathetic and contextually aware conversations that are hard (if not impossible) to distinguish from those of human therapists. 
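
For readers curious what “built on top of a foundational Large Language Model” can look like in practice, here is a minimal sketch assuming the official OpenAI Python SDK. The model name, system prompt, and safety wording are illustrative placeholders only, not a clinically validated design.

```python
from openai import OpenAI  # assumes the official OpenAI Python SDK

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Illustrative system prompt only; a real product would need
# clinically validated prompts, safety classifiers, and clear
# escalation paths to human professionals and crisis services.
SYSTEM_PROMPT = (
    "You are a supportive, empathetic well-being companion. "
    "You are not a licensed therapist and should say so when asked. "
    "If the user mentions self-harm, urge them to contact a crisis "
    "line or a mental health professional right away."
)

history = [{"role": "system", "content": SYSTEM_PROMPT}]

def reply(user_message: str) -> str:
    """Send the running conversation to the model and return its answer."""
    history.append({"role": "user", "content": user_message})
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model choice
        messages=history,
    )
    answer = response.choices[0].message.content
    history.append({"role": "assistant", "content": answer})
    return answer

print(reply("I haven't been sleeping and I feel completely overwhelmed."))
```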

AI won’t replace human mental health professionals—and it shouldn’t. It’s meant to complement them. AI-based mental health companions offer an alternative for anyone who can’t afford the luxury of paying $500 a week for a one-hour session, or who is simply too worried about the stigma still associated with mental health problems. They fill in at 2 a.m., when your therapist isn’t available but you urgently need to talk to someone (in fact, about 70% of conversations with AI therapists occur outside of regular office hours). And they can help mental health professionals elevate care to unprecedented levels of depth and personalization.

Importantly, AI-based mental health applications might be particularly suited to serve one of the most vulnerable populations: young adults. In 2022, a whopping 36.2% of individuals aged 18 to 25 in the U.S. reported suffering from a mental illness, the highest prevalence among all adult age groups. At the same time, young adults are also the most likely to adopt mental health technologies. As recent studies and industry reports suggest, younger generations—particularly Gen Z—are more open to using AI-based mental health applications, reflecting their comfort with technology and preference for accessible, non-judgmental support platforms.

The road ahead

We are only at the very start of the AI mental health revolution. As we move from text-based chatbots to voice-based agents, our interactions with AI-based mental health companions are poised to become increasingly natural and human-like. What’s more, advances in the physical embodiment of AI agents in the form of humanoid robots (or perhaps the less human but undoubtedly adorable version Disney envisioned when it created the personal healthcare companion Baymax in its 2015 movie Big Hero 6) could soon take us all the way to mimicking our traditional flesh-and-blood therapists.

The opportunities associated with this revolution are real. But so are the risks. At its worst, AI doesn’t heal but harms. The tragic case of a teenager’s suicide after using Character.AI is a chilling reminder that artificial systems—much like sociopaths—excel at simulating empathy but lack a true understanding of human pain and suffering. Without appropriate guardrails that create accountability and encourage users to seek support beyond their smartphone screens, AI-based companions might unintentionally crowd out our natural ability to form meaningful connections with and seek help from the communities of people around us.

Beyond the risk of becoming overly reliant on artificial agents, there is also the risk of becoming overly dependent on the companies that power them with their foundational language models (e.g., OpenAI’s ChatGPT, Google’s Gemini, or Anthropic’s Claude). If OpenAI were to shut down access to ChatGPT overnight, millions – if not billions – of people around the world might be left stranded and robbed of the mental support they have come to rely on. Similarly, an unintended flaw in (or malicious hack of) one of the foundational models could have devastating consequences for the collective mental well-being of large swaths of society.

The tumultuous history of social media platforms like Facebook, YouTube, or X (formerly Twitter) has taught us that “moving fast and breaking things” is a dangerous gamble in the world of technology. This is particularly true when the technologies we create have the potential to impact the health and well-being of much of the world’s population. Building for innovation isn’t enough. If AI is to live up to its promise of revolutionizing mental health care, we need to approach its development and deployment with a sense of responsibility, transparency, and ethical foresight.

This means prioritizing rigorous testing, demanding sufficient levels of data privacy, and involving mental health professionals as well as policymakers in the design and implementation process. Doing so will allow us to ensure AI becomes a force for good, revolutionizing mental health care without compromising the trust or well-being of the people it is meant to help. 

The opinions expressed are solely the views of their authors and do not necessarily reflect the opinions and beliefs of Fortune.
