Federal vs. State AI Regulation: The New Tech Governance Battleground
Published Nov 16, 2025
On July 1, 2025, the U.S. Senate voted 99–1 to strip a proposed 10-year moratorium on state AI regulation from a major tax and spending bill, after a revised version tying the moratorium to federal funding also failed. The vote preserves states' ability to pass and enforce AI-specific laws, sustaining regulatory uncertainty and keeping states functioning as policy "laboratories" (e.g., California's SB-243 and state deepfake/impersonation laws). The outcome matters for customers, revenue, and operations: fragmented state rules will shape product requirements, compliance costs, liability, and market access across AI, software engineering, fintech, biotech, and quantum applications. Immediate priorities: monitor federal bills and state law developments; track standards and agency rulemaking (FTC, FCC, ISO/NIST/IEEE); build compliance and auditability capabilities; design flexible architectures; and engage regulators and public comment processes.
Remote Labor Index: Reality Check — AI Automates Just 2.5% of Remote Work
Published Nov 10, 2025
The Remote Labor Index (RLI), released Oct 2025, evaluates AI agents on 240 real-world freelance projects across sectors, collectively worth $140,000; the top agents automated only 2.5% of tasks end-to-end. Common failures (wrong file formats, incomplete submissions, outputs missing brief requirements) show agents falling short of freelance-quality work. The RLI rebuts narratives of imminent agentic independence and highlights a short-term opportunity for human freelancers to profit by fixing agent errors. To advance agentic AI, evaluations must broaden to open-ended, domain-specialized, and multimodal tasks; adopt standardized metrics for error types, quality, correction time, and oversight costs; and integrate economic models to assess net benefit. The RLI is a pragmatic reality check and a keystone benchmark for measuring real-world agentic capability.
RLI Reveals Agents Can't Automate Remote Work; Liability Looms
Published Nov 10, 2025
The Remote Labor Index benchmark (240 freelance projects, 6,000+ human hours, $140k in payouts) finds frontier AI agents automate at most 2.5% of real remote work, with frequent failure modes: technical errors (18%), incomplete submissions (36%), sub-professional quality (46%), and inconsistent deliverables (15%). These empirical limits, coupled with rising legal scrutiny (e.g., the AI LEAD Act applying product-liability principles and mounting IP/liability risk for code-generating tools), compel an expectation reset. Organizations should treat agents as assistive tools, enforce human oversight and robust fallback processes, and maintain documentation of design, data, and error responses to mitigate legal exposure. Benchmarks like RLI provide measurable baselines; until performance improves materially, prioritize augmentation over replacement.