FTC Probes AI Chatbots Over Child Safety, Signaling Stricter Enforcement
Published Nov 11, 2025
Over the past two weeks, the FTC has opened a Section 6(b) inquiry into major AI chatbot providers, including Alphabet, Meta, OpenAI, xAI, Character.AI, and Snap. The orders seek detailed records on persona design, input and output handling, protections for minors, and harm mitigation, and follow lawsuits alleging that chatbot interactions contributed to teen suicides. The probe elevates child safety to a central enforcement priority, signaling potential content-based regulation, stricter transparency and testing requirements, and legal exposure for noncompliant firms.

With federal executive directives reshaped and a proposed federal moratorium on state AI rules removed, companies face a fragmented regulatory landscape as states legislate independently. Expect FTC disclosures, possible rulemaking and litigation, and industry moves toward care-by-design, age verification, parental controls, and more robust monitoring to reduce risk and liability.