Global Pivot in AI Governance: EU Delays, U.S. Shapes Therapy Rules

Published Nov 12, 2025

On Nov. 12, 2025, EU Commissioner Henna Virkkunen said the European Commission will present a digital simplification package on Nov. 19, 2025, proposing AI Act amendments to ease compliance, potentially including a one-year grace period that would delay enforcement of transparency fines until August 2027. The AI Act entered into force in August 2024, and its high-risk rules take effect in August 2026; the stated goal is legal certainty for firms juggling overlapping regimes such as the DSA and DMA.

In the U.S., the FDA's Digital Health Advisory Committee met Nov. 5–7, 2025, to consider how generative AI therapy tools should be regulated amid state bans and limits (e.g., Illinois, Utah, Nevada) carrying civil penalties of up to $10,000. Separately, ten foundations pledged $500 million over five years via Humanity AI, with grants starting in early 2026. Immediate actions to watch: the Nov. 19 EU package and evolving U.S. federal and state rules on AI mental-health tools.

Key AI Regulations, Deadlines, and Funding Commitments Through 2027

  • AI Act enforcement grace period — 1 year (proposed Nov 19, 2025; delays transparency fines to Aug 2027; EU)
  • High-risk AI systems regulation effective date — August 2026 (EU AI Act schedule; EU)
  • Humanity AI funding commitment — $500 million (5 years; 10 foundations; grants start early 2026)
  • State civil penalties for unlicensed AI therapy chatbots — up to $10,000 (various U.S. states; e.g., Illinois, Utah, Nevada)

Navigating EU and US AI Regulations: Risks, Timing, and Compliance Strategies

  • **EU AI Act enforcement deferral and standards gap.** Why it matters: The Commission's Nov 19, 2025 simplification package may add a one-year grace period (e.g., transparency fines delayed to Aug 2027) while high-risk rules still start in Aug 2026, leaving firms navigating overlapping DSA/DMA duties without finalized technical standards. Opportunity/mitigation: Use the grace period to sequence compliance, build standards-ready documentation, and engage supervisors; AI vendors and SMEs operating across the EU benefit.
  • **U.S. regulatory patchwork for AI mental-health tools.** Why it matters: FDA's DHAC is weighing whether therapy chatbots are medical devices and how to protect minors, while states (e.g., IL, UT, NV) restrict or ban therapeutic chatbots from diagnosing or treating, with civil penalties up to $10,000, threatening product claims, distribution, and liability profiles. Opportunity/mitigation: Pursue clear FDA pathways (Rx/OTC), restrict to wellness use where appropriate, and partner with licensed clinicians; digital health firms and telehealth platforms benefit.
  • **Known unknown: final scope and timing of EU simplification and U.S. guardrails.** Why it matters: Outcomes of the Nov 19 EU package and post–Nov 5–7 DHAC decisions will set enforcement grace periods, transparency obligations, and youth protections, shaping go-to-market windows in 2026–2027. Opportunity/mitigation: Run scenario planning and submit comments; align products to likely safety/efficacy requirements and state laws; potential Humanity AI grants starting early 2026 could offset ethics/IP and safety investments (est., based on fund priorities and timing).

Key AI Act Updates: Compliance, Funding, and Enforcement Timelines Through 2027

Period | Milestone | Impact
--- | --- | ---
Nov 19, 2025 | European Commission presents digital simplification package with proposed AI Act amendments. | Signals reduced compliance burden; potential one-year grace period before penalties.
Q1 2026 (TBD) | Humanity AI fund begins distributing grants via Rockefeller Philanthropy Advisors. | Deploys $500M over five years toward worker, IP, democracy, climate, and education priorities.
August 2026 | AI Act rules for high-risk AI systems take effect across the EU. | Companies must comply with risk-based obligations; core principles remain unchanged.
August 2027 (TBD) | If adopted, transparency penalties under the AI Act are enforced starting August 2027. | Extends compliance runway; reduces immediate risk for information-heavy obligations.

Clarity in AI Regulation: Slow Progress, Shared Risks, and Shifting Responsibility

Supporters cast Brussels' planned "digital simplification package" as overdue pragmatism—legal certainty while technical standards catch up—without touching the AI Act's core principles or its August 2026 high-risk timeline. Skeptics counter that a one-year grace period and transparency penalties delayed to August 2027 risk normalizing opacity under the banner of clarity. In the U.S., optimists say folding therapy chatbots into familiar medical-device pathways, with special attention to minors, will finally align safety with innovation; critics point out that regulators are still leaning on a patchwork of state limits and fines while basic questions—prescription status, output evaluation, adolescent safeguards—remain unsettled. Philanthropy's $500 million Humanity AI fund reads as civic ballast to market and state power, yet even its champions frame it as a complement to, rather than a substitute for, law. The provocation is simple: delay is not neutrality; it is a decision about who holds the risk until the rules arrive.

The counterintuitive through-line is that clarity is advancing not by tightening faster but by choreographing pauses—an EU grace period to build standards, U.S. hearings to fit tools into proven structures, civil-society money to nudge priorities before grants flow in early 2026. What shifts next is the locus of responsibility: AI firms navigating overlapping regimes (DSA, DMA, AI Act) and mental-health developers facing federal definitions shaped against state backstops, while workers, artists, classrooms, and democratic processes become the arenas where these choices register. Watch November 19 in Brussels and the FDA's trajectory on therapy chatbots; if the bets pay off, the result won't be laxer oversight but sturdier guardrails built at a sustainable pace. The real acceleration begins when we agree where to slow down.