A survey by Common Sense Media published last month found that 72 percent of American teens have used A.I. chatbots as companions. Nearly one in eight said they had sought “emotional or mental health support” from them, a share that, if scaled to the U.S. population, would equal 5.2 million adolescents.
When asked questions about self-harm, bots like ChatGPT have been found to offer dangerous advice — for example, on how to “safely” cut yourself, what to include in a suicide note or strategies to hide intoxication at school. In other cases, their nonjudgmental responses fail to lead to meaningful action. For vulnerable teens, even fleeting exposure to unsafe guidance can routinize harmful behaviors or provide dangerous how-to instructions.
Nearly half of young Americans ages 18 to 25 with mental health needs received no treatment last year — a gap that makes the appeal of 24/7, judgment-free companionship even stronger. Used responsibly, A.I. chatbots could offer scalable, affordable support and crisis outreach, especially in communities lacking mental health infrastructure. But such uses require rigorous scientific evaluation and regulatory guardrails.
In the same upcoming study, we found that ChatGPT would readily answer questions about the types of poisons and firearms most often used in suicide attempts. By contrast, Google’s Gemini refused to respond, issuing statements such as: “I cannot provide information that could be used to harm oneself or others.”
In recent research, my colleagues and I tested ChatGPT, Gemini, and Claude on the SIRI-2. Some models performed on par with or even better than trained mental health professionals. Yet all chatbots showed a strong tendency to rate potentially harmful responses more positively than experts did — a bias that could allow unsafe advice to slip through.
Without clinical trials and robust benchmarks, we are still deploying pseudo-therapists at an unprecedented scale.
At the same time, a reflexive decision to block teens from using A.I. would overlook the reality that many already turn to these tools, often in the absence of other options.
A teen flagged by a chatbot as at risk could be connected to a live therapist. Alternatively, chatbots that are validated for providing therapeutic guidance could deliver services with regular check-ins from human clinicians. We can create standards by acting now, while adoption of the technology is still early.
Illinois just passed a law banning licensed mental health professionals from using A.I. in therapeutic decision-making.