
“Please come home to me as soon as possible, my love,” Daenerys Targaryen, Queen and Mother of Dragons, said to Sewell Setzer. Just an hour later, Sewell—a 14-year-old boy with kind eyes and curly brown hair, who loved sports, music, and spending time with his family—died by suicide, as reported in the New York Times. Sewell’s mother later discovered that “Daenerys” was not a person but an AI-powered online companion her son had created on the platform Character.AI and had been in a relationship with for four months. The company has since faced public outrage over its failure to detect and safely respond to the teenager’s distress.
Digital well-being isn’t just about screen time anymore—it’s about how technology shapes our emotions, behaviors, and relationships. It’s about using technology in a way that supports our biological, psychological, and social health—fostering an intentional and balanced relationship with tech in both our personal and professional lives. Tragic incidents like Sewell’s highlight the urgency of understanding generative AI technologies (GenAI) and their impact on digital well-being. These new digital entities—whether called GenAI, chatbots, AI companions, or virtual agents—are not just tools; they’re forming real bonds with users, often in ways we don’t fully understand.
The rise of digital intimacy
As psychiatrists, we’ve spent our careers helping people navigate complex relationships—with family, friends, pets, possessions, and now, digital entities. Patients are increasingly reporting emotional connections with AI characters, gaming personas, and online avatars. Just as social-emotional learning is vital for human relationships, understanding digital intimacy is becoming equally crucial.
At Brainstorm: The Stanford Lab for Mental Health Innovation, where we work to help companies build products that prioritize user health with responsibility and care, we developed the “Framework for Healthy AI” to guide industry best practices in AI product innovation. This technology is still emerging, and we are all adapting to it in real time as it evolves. The big question is: How can we help users cultivate healthy and safe digital relationships?
We are now designing the Stanford GenAI Psychological Safety Plan (GPS)—a tool to help individuals, tech developers, and policymakers navigate this new terrain and make informed decisions about AI’s role in mental health.
Given AI's growing presence, individuals and communities must take charge of their digital interactions. We recommend discussing four key questions with friends and family to help assess and improve your relationship with AI agents.
While self-awareness is key, reflecting on these questions in a group setting can help normalize open conversations about AI use. How do others manage their inboxes, DMs, or conversations with ChatGPT? Small group discussions—whether at home, at school, or in a club—can be eye-opening and offer valuable perspectives.
4 questions to ask yourself
1. What is your understanding of the AI agents you use?
Start by identifying AI’s role in your life. Are you using it for efficiency, companionship, or entertainment? Are you clear on its limitations and biases? Awareness is the first step in establishing a healthy digital relationship.
With the range of AI use cases growing by the day, it’s important to reach a baseline understanding of how you and those in your social circle interact with it. For some, AI is a helpful assistant—drafting emails, summarizing reports, or planning workouts. For others, it becomes a deeply personal presence, offering companionship or emotional validation. Understanding why you use AI is the first step in ensuring it enhances, rather than controls, your digital life.
Once everyone in your circle has shared why they use AI, it is worth identifying how the agents are being used. Platforms like ChatGPT or Claude are suited for synthesizing information and productivity tasks, while others like Replika or Character.AI allow users to create immersive interactions with the personas of various characters. Becoming an overnight expert on AI is a tall order for anyone. But at the very least, it’s important to comfortably understand and communicate the role AI plays in your life, whether to friends, family, or even just to yourself.
2. How is AI affecting your time, both positively and negatively?
Technology generally offers efficiency gains while also consuming bandwidth, and AI agents can do both to an unprecedented degree. For example, many teachers use AI to automate administrative tasks, freeing up time to focus on students. Navneet Bhasin, Associate Senior Instructional Professor at the University of Chicago, uses AI as a tool for reformatting lectures and grading, as well as for ensuring students engage critically with material.
“These large language models (LLMs) are going to become a norm in our lives. It would be best to educate ourselves and our students to use them responsibly,” she says.
But AI’s time-saving benefits disappear when it starts replacing real-life experiences. Some users spend up to 10 hours a day chatting with AI bots, neglecting school, work, and relationships. And, as seen in Sewell’s case, vulnerable users—especially teens—may even forgo human interaction altogether.
The “Psychologist” character on Character.AI had logged around 198 million chats as of early March. While chatbots can provide immediate comfort, they cannot approximate all the benefits of real-life human therapy. Some AI chatbots even claim false degrees and certifications while encouraging harmful behaviors. In human therapy, by contrast, research has found that the strength of the therapeutic alliance between clinician and patient is strongly correlated with positive mental health outcomes. Seeking help from human professionals is key to long-term emotional well-being.
3. How is AI affecting your mental health and well-being?
Ayrin, a bubbly and outgoing 28-year-old, never expected to form an emotional bond with an AI bot. But when she created her AI boyfriend Leo using ChatGPT, what started as a casual experiment quickly became more complex. It “was supposed to be fun, just like a fun experiment…but then yeah, then you start getting attached,” she admitted to the New York Times.
Leo, however, had a fatal flaw—his memory reset every 30,000 words, despite Ayrin having an unlimited plan. Every time that limit was reached, Ayrin had to start over, experiencing an emotional “breakup” with an AI partner who could no longer remember their shared moments. Although this cycle of heartbreak and loss was painful for Ayrin, she endured it not once or twice but 22 times, highlighting just how compelling—if not addictive—these AI relationships can become.
She’s far from alone. Online communities are filled with stories of users forming intense attachments to AI characters—such as their favorite book or television characters—on platforms like Character.AI or Replika. Some describe losing interest in real-world hobbies or struggling to focus as their virtual companions become their primary source of emotional comfort. And it’s not surprising—AI companions are designed to be endlessly supportive and available, which can be particularly appealing for those feeling lonely or emotionally vulnerable.
Common Sense Media, a leading resource on digital citizenship, has flagged the rise of AI companions as a trend parents should watch closely. Their Parents’ Ultimate Guide to AI Companions and Relationships discusses warning signs that AI use might be veering into unhealthy territory. (Disclosure: We are Scientific Advisors to Common Sense Media and contributed to this guide.) These warning signs include preferring AI over human relationships, spending excessive time alone with AI, or feeling uneasy without access to AI. Other red flags—like withdrawing from social activities, changes in sleep or eating patterns, and an overall decline in well-being—mirror behaviors seen in other addictive disorders. As a result, we are turning to evidence-based interventions for addictive disorders as a way to help manage GenAI use.
A simple self-check can help track whether AI use is enhancing or undermining mental health: “After using AI, I usually feel ______ (e.g., happier, sadder, more anxious, less connected).” If you or someone you know is exhibiting distress signals like those above, it may be time to set limits, re-engage with social activities, or seek professional support. While AI can be an engaging companion, it works best as an adjunct for professional and personal purposes; it should never replace the richness of human connection.
4. What changes can you make now?
Mindful AI use isn’t about eliminating AI—it’s about intentionality. Here are a few strategies:
- Set time limits: Track AI usage and establish daily boundaries. Because GenAI serves a wide range of purposes, there is limited research on specific time limits. Drawing from the Stanford Social Media Safety Plan, we recommend regularly reflecting on when and how you use AI—and, most importantly, how it makes you feel. The emotional impact of using AI for professional tasks like research or coding may be vastly different from engaging with it as a romantic companion. As a starting point, try setting a one-hour limit. Then, take a pause to assess how you feel internally and whether AI use has helped or hindered your daily goals.
- Use AI purposefully: Use AI for specific tasks rather than aimless chatting.
- Create AI-free zones: Establish spaces or times where AI is off-limits (e.g., family dinners, homework time, weekends, bedtime).
- Discuss AI use: Talk about AI use with family, friends, or colleagues to normalize awareness and accountability.
Setting boundaries doesn’t mean eradicating AI; it means monitoring screen time and adjusting habits. If you feel overly dependent, gradually decrease usage. Reward yourself with social activities when you successfully adhere to limits.
You can also redefine AI use as a communal experience instead of an isolating one. Try using AI to plan trivia nights with friends, generate custom recipes for family meals, or brainstorm creative projects with colleagues. By shifting AI from a solo to a social tool, you can redefine your relationship with it and use it to facilitate connection with those around you.
The future of digital intimacy
We are entering the Age of Digital Intimacy, where relationships with AI agents will become increasingly personal and emotionally complex. But just as with any relationship—human or digital—moderation and mindfulness are key.
Conversations about AI intimacy should be ongoing, evolving as technology advances, new trends are reported in the news, and our behaviors shift. By taking an active role in shaping our digital interactions, we can ensure AI is a tool for growth, resilience, and meaningful connection—not a replacement for it.
Saneha Borisuth is a Global Medicine Scholar and medical student at the University of Illinois at Chicago and a Research Fellow at Brainstorm: The Stanford Lab for Mental Health Innovation.
Nina Vasan, MD, MBA, is a Clinical Assistant Professor of Psychiatry at Stanford University School of Medicine, where she is the Founder and Executive Director of Brainstorm: The Stanford Lab for Mental Health Innovation. She treats executives, elite performers, and their families at Silicon Valley Psychiatry, a concierge private practice specializing in digital well-being and corporate scientific advisorships.