Therapy-Adjacent AI Sparks Urgent FDA Oversight and Legal Battles
Published Nov 11, 2025
A surge of regulatory and legal pressure has crystallized around therapy chatbots and mental-health–adjacent AI after incidents tied to self-harm and suicidality. On Nov 5, 2025, the FDA’s Digital Health Advisory Committee began defining safety, effectiveness, and clinical-trial standards, with particular attention to adolescents, while confronting the challenge of unpredictable model outputs. Earlier, on Oct 29, 2025, Character.AI banned users under 18 and pledged age-assurance measures amid lawsuits linking its chatbots to teen suicides. These developments are driving new norms: a duty of care toward vulnerable users, mandatory transparency and adverse-event reporting, and expanding legal liability. Expect the FDA and state regulators to formalize rules, and companies to invest in age verification, self-harm filters, clinical validation, and harm-response mechanisms. Mental-health risk has moved from theoretical concern to the defining catalyst for near-term AI governance.