Aggressive Governance of Agentic AI: Frameworks, Regulation, and Global Tensions

Published Nov 13, 2025

Over the past two weeks, agentic-AI governance has crystallized around new technical and policy levers. On the technical side, two research frameworks, AAGATE (aligned with the NIST AI RMF, released late Oct 2025) and AURA (mid-Oct 2025), aim to embed threat modeling, measurement, continuous assurance, and risk scoring into agentic systems. Regulators have accelerated in parallel: the U.S. FDA convened a meeting on AI therapy chatbots on Nov 5, 2025; Texas passed TRAIGA (HB 149), effective Jan 1, 2026, which limits algorithmic-discrimination claims to intentional discrimination and creates a regulatory sandbox; and the EU AI Act's obligations phase in on Aug 2, 2025 (general-purpose AI), Aug 2, 2026 (high-risk systems), and Aug 2, 2027 (AI embedded in regulated products), even as the supporting codes of practice and harmonized standards slip into late 2025. The upshot for firms is compliance uncertainty, shifting liability exposure, and new operational-monitoring demands; near-term priorities are finalizing the EU standards and codes, FDA rulemaking, and operationalizing the state sandboxes.
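To make "risk scoring" of agent actions concrete, here is a minimal, hypothetical sketch of how a deployment team might gate an agent's proposed actions behind a numeric risk score and log each decision for continuous-assurance review. This is not the AAGATE or AURA API (neither interface is described here); all names, weights, and thresholds (`ActionRiskPolicy`, `score_action`, the 0.7/0.4 cutoffs) are illustrative assumptions a team would calibrate against its own threat model.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

# Hypothetical illustration only: neither AAGATE nor AURA publishes this API.
# Weights and thresholds are invented placeholders to show the shape of a
# risk-scoring gate, not recommended values.

@dataclass
class ProposedAction:
    tool: str              # e.g. "send_email", "execute_sql"
    touches_pii: bool      # does the action handle personal data?
    is_irreversible: bool  # can the action be rolled back?
    external_effect: bool  # does it act outside the sandboxed environment?

@dataclass
class ActionRiskPolicy:
    block_threshold: float = 0.7   # scores at/above this are blocked
    review_threshold: float = 0.4  # scores at/above this require human review
    audit_log: list = field(default_factory=list)

    def score_action(self, action: ProposedAction) -> float:
        """Combine simple risk factors into a 0..1 score (illustrative weights)."""
        score = 0.0
        score += 0.35 if action.is_irreversible else 0.0
        score += 0.30 if action.external_effect else 0.0
        score += 0.25 if action.touches_pii else 0.0
        score += 0.10 if action.tool == "execute_sql" else 0.0
        return min(score, 1.0)

    def decide(self, action: ProposedAction) -> str:
        """Return 'block', 'review', or 'allow' and record an audit entry."""
        score = self.score_action(action)
        if score >= self.block_threshold:
            decision = "block"
        elif score >= self.review_threshold:
            decision = "review"
        else:
            decision = "allow"
        self.audit_log.append({
            "timestamp": datetime.now(timezone.utc).isoformat(),
            "tool": action.tool,
            "score": round(score, 2),
            "decision": decision,
        })
        return decision

if __name__ == "__main__":
    policy = ActionRiskPolicy()
    risky = ProposedAction(tool="send_email", touches_pii=True,
                           is_irreversible=True, external_effect=True)
    print(policy.decide(risky))   # -> "block"
    print(policy.audit_log[-1])   # audit record available for assurance review
```

The point of the sketch is the pattern, not the numbers: a scored gate plus an append-only audit trail is one plausible way to operationalize the continuous-assurance and monitoring demands the frameworks and regulators described above are converging on.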