Tokenized Real-World Assets: Regulatory Scrutiny Meets Institutional Momentum

Published Nov 16, 2025

Global watchdog scrutiny and new institutional products are pushing tokenized real-world assets (RWAs) from experimentation toward regulated finance. On 2025-11-11 IOSCO warned of investor confusion over ownership and issuer counterparty risk, even as tokenized RWAs grew to US$24 billion by mid-2025 (private credit ~US$14B), with Ethereum hosting about US$7.5B across 335 products (~60% market share). Product innovation includes Figure's US-approved yield-bearing stablecoin security YLDS, a HELOC lending pool, and the NUVA marketplace (Provenance claimed ~$15.7B in related assets). These developments matter for customers, revenue, and operations: low secondary liquidity, legal ambiguity over security-versus-token status, and dependency on traditional custodians create compliance and market-risk tradeoffs. Near term, executives should monitor regulatory rulemaking (IOSCO, SEC, FSA, MAS), broader investor-eligible launches, liquidity metrics, interoperability standards, and disclosure/audit transparency.

EU AI Act Poised for Delay Amid US Pressure, Industry Pushback

Published Nov 16, 2025

Reporting on 2025-11-07 indicates the European Commission is weighing proposals to pause enforcement of certain "high-risk" provisions of the EU AI Act (in force since 1 August 2024) and to introduce an expected one-year grace period, so that obligations would begin after August 2027. Exemptions and phased-in transparency and registration rules are also under discussion, following pressure from U.S. trade officials and large AI vendors (Meta, Google, OpenAI). The shift matters because it delays compliance costs and alters companies' risk-management timelines, affects investment and operational planning, and complicates international regulatory alignment as U.S. states (notably California, whose SB 53 was signed on 2025-09-29) advance their own safety and transparency rules. Immediate milestones to watch: the Digital Omnibus text (mid-to-late November 2025) and regulatory guidance on "high-risk" definitions (late 2025).

EU Considers Pausing AI Act Amid Big Tech and Trade Pressure

Published Nov 16, 2025

EU and U.S. moves this fall signal a potential softening of AI rules. The European Commission is reportedly weighing pausing parts of the AI Act under pressure from U.S. officials and firms such as Meta and Alphabet; a leaked Digital Omnibus draft could exempt narrow/procedural uses from high-risk registration and grant a one-year grace period for some obligations, pushing them past August 2027. The AI Act has been in force since August 2024, with key high-risk duties slated to apply from August 2026. In the U.S., the White House's July 2025 AI Action Plan urges discouraging state AI laws, while a proposed 10-year House moratorium on state regulation was removed by the Senate on 2025-07-01. These shifts matter for product launches, compliance costs, competitive advantage, and regulatory certainty; the final Omnibus text on Nov 19, 2025 and state/federal moves in early 2026 are the next milestones to watch.

California's SB 243: First Comprehensive Law Regulating Companion Chatbots

Published Nov 16, 2025

On October 13, 2025, Governor Gavin Newsom signed SB 243, the first U.S. state law setting comprehensive rules for "companion chatbots" in California. Operators must disclose chatbot identity (with reminders to minors every three hours), may not imply licensed medical or professional status, must prevent sexual content with minors, must detect self-harm and provide crisis referrals, and must begin annual reporting to the California Office of Suicide Prevention on July 1, 2027; many provisions take effect January 1, 2026. The law creates a private right of action (damages, injunctive relief, attorneys' fees), raising litigation, compliance, and operational costs. Firms are responding by revising product definitions, age verification, safety engineering, transparency, and reporting processes, and by setting aside budgets for liability. Key uncertainties include the "reasonable person" standard, the scope of "companion" exclusions, and potential interaction with pending federal proposals.

Tokenized Real-World Assets Hit $12.8B as Institutions Flood In

Published Nov 16, 2025

As of July 3, 2025, tokenized real-world assets (RWAs) reached $12.83 billion in total value locked (TVL), up from $7.75 billion at the start of the year, a 65% YTD rise driven by real estate, bonds, and climate-linked tokens and led by protocols such as BlackRock's BUIDL (~$2.83 billion across six blockchains), Ethena USDtb ($1.46 billion), and Ondo Finance ($1.39 billion). Institutional activity, illustrated by Mitsubishi UFJ's tokenization of a 30-story Osaka office building under Japan's security-token framework, signals deepening integration with traditional finance. Material risks include fragmented regulation, limited secondary-market liquidity, and custody and compliance gaps. Over the next 6-12 months stakeholders should monitor new legal regimes, institutional product launches, infrastructure interoperability, and liquidity metrics; firms, investors, and policymakers need to align with emerging regulatory clarity, build interoperable custody standards, and prepare for wider mainstream adoption.

Federal vs. State AI Regulation: The New Tech Governance Battleground

Published Nov 16, 2025

On 2025-07-01 the U.S. Senate voted 99-1 to remove a proposed 10-year moratorium on state AI regulation from a major tax and spending bill, after a revised version tying the moratorium to federal funding also failed, preserving states' ability to pass and enforce AI-specific laws. That decision sustains regulatory uncertainty and keeps states functioning as policy "laboratories" (e.g., California's SB 243 and state deepfake/impersonation laws). The outcome matters for customers, revenue, and operations because fragmented state rules will shape product requirements, compliance costs, liability, and market access across AI, software engineering, fintech, biotech, and quantum applications. Immediate priorities: monitor federal bills and state law developments; track standards and agency rulemaking (FTC, FCC, ISO/NIST/IEEE); build compliance and auditability capabilities; design flexible architectures; and engage regulators and public-comment processes.

Aggressive Governance of Agentic AI: Frameworks, Regulation, and Global Tensions

Published Nov 13, 2025

In the past two weeks the field of agentic-AI governance crystallized around new technical and policy levers. Two research frameworks, AAGATE (NIST AI RMF-aligned, released late Oct 2025) and AURA (mid-Oct 2025), aim to embed threat modeling, measurement, continuous assurance, and risk scoring into agentic systems. Regulators have accelerated in parallel: the U.S. FDA convened a meeting on therapy chatbots on Nov 5, 2025; Texas passed TRAIGA (HB 149), effective 2026-01-01, limiting discrimination claims to intentional conduct and creating a regulatory sandbox; and the EU AI Act phases in on Aug 2, 2025 (GPAI), Aug 2, 2026 (high-risk), and Aug 2, 2027 (regulated products), even as codes of practice and harmonized standards slip into late 2025. This matters because firms face compliance uncertainty, shifting liability, and operational monitoring demands; near-term priorities are finalizing EU standards and codes, FDA rulemaking, and operationalizing state sandboxes.

EU Eyes Softened AI Act: Delays, Exemptions Threaten Accountability

Published Nov 10, 2025

EU member states are considering rolling back elements of the Artificial Intelligence Act under the Digital Omnibus initiative: postponing penalties until August 2, 2027, carving out exemptions for "high-risk" systems used in narrow or procedural roles, and creating a grace period for AI-labeling obligations. Driven by Big Tech pressure, U.S. trade concerns, and competitiveness debates, the proposals aim to ease compliance but risk legal uncertainty, regulatory loopholes, weaker public protections, and advantages for incumbents. Analysts warn such softening could erode the EU's global regulatory influence. Safeguards should include clear definitions of "high-risk" and "procedural," independent transparency and audit metrics, layered enforcement that preserves core obligations, and interim guidance ahead of any delay. A decisive vote on November 19, 2025 will shape Europe's, and the world's, AI governance.