
Chatbots can’t replace therapists for teens, but AI can be useful in their treatment

We need transformational change in mental health care for our youth, and we need it fast.

The recent deaths by suicide among young people who had been using generic artificial intelligence chatbots are only the most obvious signs of how fast-moving, culture-shifting technologies have been seeping into and widening the cracks in our mental health system’s proverbial sidewalk, while our most vulnerable teens slip right through.

Left unchecked, these generic AI-enabled technologies are threats to adolescent safety and well-being. But they also threaten the very innovation that is desperately needed in mental health.

As a child and adolescent clinical psychologist, I’d be thrilled to see teens engaging with trained mental health professionals at the same rates they’re talking with AI chatbots. According to Common Sense Media’s 2025 National Survey, 72% of teens ages 13-17 report having engaged with a generic AI companion at least once. By contrast, per the U.S. Centers for Disease Control and Prevention, only 55% of teens ages 12-17 reported discussing their mental health with any health care professional last year.


In their current form, these unregulated, direct-to-consumer AI chatbots are risky at best and deeply unsafe at worst. They do not protect confidentiality, they are not bound by ethical safeguards or safety protocols to flag and appropriately respond to risk, and they lack both the evidence-based predicates and clinical nuance needed for meaningful outcomes.

Teens are using these AI chatbots for their mental health anyway. Even though most teens, especially older ones, say they prefer human connection and question AI chatbots’ trustworthiness, 33% still reported choosing to discuss a serious or personal matter with an AI chatbot instead of a real person.

Of course, these tools are designed to be highly persuasive and to drive endless engagement, and young people are particularly susceptible to those qualities. But teens’ use of AI for mental health in the face of its inherent risks speaks to an even larger problem. For years, mental health care in general, and youth mental health care in particular, has struggled to meet demand. And it is desperately lacking in innovation.

Teen distress has been building steadily for more than a decade. At its 2021 peak, the CDC reported that deaths by suicide among youth had risen 57% and rates of depression in teens 40%. Yet more than 40% of teens reported an unmet need for mental health care last year.

Even before AI, industry was trying to fill this gap. In the past five years, the market has been flooded with behavioral health technology companies offering rapid access to their virtual provider networks and digital platforms.

While early wins have included improvements to access, personalization and engagement, the full transformative potential of technology in mental health care remains unrealized.

We need technology to help teens get the right treatment, at the right level and, importantly, in the right format (on-demand, readily accessible, mobile-first, human connection enhanced by technology). We need technology to help us extend clinician resources and break down care silos to bring teens, providers and caregivers into more meaningful collaboration. And yes, we need to design these technologies in ways that maintain the safeguards that are needed for effective, person-centered and ethical care that actually improves well-being.

But if technology that is not clinically validated continues to proliferate and cause harm, we will lose our ability to innovate where it matters most. Not only will we lose public trust in the potential of technology, but we’ll risk overcorrections in regulation — challenges that will further slow the pace of mental health innovation.

For instance, a recently passed Illinois law heavily restricts AI use in licensed mental health treatment. Illinois legislators are right that we don’t want untested and unregulated technologies making independent clinical decisions. And requiring patient consent for use of any AI by one’s mental health provider — including for administrative (scheduling) and supplemental (documentation) support — is a positive step.

But the law’s broader ban on AI-enabled therapeutic communications — regardless of a tool’s regulatory and clinical validation, and even with a licensed therapist in the loop — risks stifling innovation. (Interestingly, only professionals licensed to provide psychotherapy services are so restricted; physicians, including psychiatrists, are exempt.)

Technology will not solve all mental health system challenges. Nothing can fully replace the power of human connection and the therapeutic alliance that we know is central to the strongest outcomes. Yet it would be a mistake to let fear close the window on what might be true innovative potential to improve access and outcomes for teens.

Amber W. Childs is a clinical psychologist and associate professor of psychiatry at Yale School of Medicine, where she is a Public Voices fellow of the Op-Ed Project in partnership with Yale University.

