800+ Global Figures Call to Ban Superintelligence Until Safety Consensus

Published Nov 11, 2025

On October 22, 2025, more than 800 global figures—including AI pioneers Geoffrey Hinton and Yoshua Bengio, technologists, politicians, and celebrities—urged a halt to developing superintelligent AI until two conditions are met: broad scientific consensus that it can be developed safely and strong public buy‐in. The statement frames superintelligence as machines surpassing humans across cognitive domains and warns of economic displacement, erosion of civil liberties, national‐security imbalances and existential risk. Polling shows 64% of Americans favor delay until safety is assured. The coalition’s cross‐partisan reach, California’s SB 53 transparency law, and mounting public concern mark a shift from regulation toward a potential prohibition, intensifying tensions with firms pursuing advanced AI and raising hard questions about enforcement and how to define “safe and controllable.”

Global Call and Law Reflect Growing Demand for AI Superintelligence Ban

  • 800+ global signatories backing an open call to prohibit AI superintelligence development
  • Ban to remain until 2 prerequisites are met: broad scientific safety consensus and strong public buy-in
  • Public sentiment: 64% of Americans say superhuman AI should not be developed until safety is assured
  • Only 5% favor accelerating development regardless of risks
  • Regulatory signal: 1 landmark state law (California’s SB 53, Sep 2025) mandating transparency and catastrophic-risk reporting for frontier AI

Mitigating AI Risks: Control, Governance, Security, Standards, and Public Trust

  • Uncontrollable superintelligence (highest concern) — Core existential risk: if control fails, outcomes are irreversible. Probability: uncertain but rising given AGI roadmaps; Severity: extreme. Opportunity: tie funding/compute access to verifiable safety milestones, advancing control theory and evals before scale-up.
  • Enforcement gaps and jurisdictional arbitrage (highest concern) — A ban without global mechanisms pushes development underground or offshore. Probability: high; Severity: high. Opportunity: build interoperable compute-governance (chip export controls, training-run registration, third-party audits) and mutual-recognition treaties; a registration sketch follows this list.
  • Security race dynamics among states (highest concern) — National security incentives may override caution, accelerating risky capabilities and secrecy. Probability: medium–high; Severity: very high. Opportunity: AI nonproliferation norms, shared red-teaming, and verification tech (telemetry, provenance) to make restraint credible.
  • Defining “safe and controllable” — Lack of scientific criteria creates regulatory limbo and loopholes, hindering both safety and legitimate research. Probability: certain (present); Severity: medium–high. Opportunity: consensus standards and “graduated release” benchmarks (alignment stress-tests, catastrophic misuse risk thresholds) to phase capability deployment.
  • Public buy-in durability — While 64% favor caution now, support can polarize or erode, inviting policy whiplash. Probability: medium; Severity: medium. Opportunity: institutionalized public engagement (citizens’ assemblies, transparent incident reporting, worker-impact compacts) to legitimize decisions and stabilize policy over time.
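
The compute-governance opportunity noted above can be made concrete with a small sketch. The Python snippet below illustrates one way a training-run registration rule might be expressed; the compute threshold, field names, and checks are hypothetical placeholders for illustration only, not figures drawn from the statement, SB 53, or any existing law.

```python
# Minimal sketch of a training-run registration rule. The threshold and
# required attestations are hypothetical; real regimes would set them in
# law or treaty.
from dataclasses import dataclass
from typing import Optional

REGISTRATION_THRESHOLD_FLOP = 1e26  # hypothetical trigger, not a legal figure


@dataclass
class TrainingRun:
    developer: str
    total_flop: float                # estimated training compute
    chips_provenance: str            # e.g., attested supply-chain record
    audit_report_id: Optional[str]   # third-party audit, if completed


def requires_registration(run: TrainingRun) -> bool:
    """A run at or above the compute threshold must be registered before it starts."""
    return run.total_flop >= REGISTRATION_THRESHOLD_FLOP


def registration_gaps(run: TrainingRun) -> list[str]:
    """List missing items a regulator or third-party auditor would flag."""
    gaps = []
    if requires_registration(run) and run.audit_report_id is None:
        gaps.append("no third-party audit on file")
    if requires_registration(run) and not run.chips_provenance:
        gaps.append("no chip-provenance attestation")
    return gaps


if __name__ == "__main__":
    run = TrainingRun("ExampleLab", 3e26, chips_provenance="", audit_report_id=None)
    print(requires_registration(run))  # True
    print(registration_gaps(run))      # ['no third-party audit on file', 'no chip-provenance attestation']
```

The point is not the specific numbers but that such rules can be stated precisely enough to audit, which is what makes mutual recognition across jurisdictions plausible.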

Key Milestones Shaping AI Regulation and Safety from Late 2025 to Mid-2026

  • Government response to Oct 22 superintelligence-ban call — What to watch: hearings, resolutions, or draft bills referencing a moratorium until safety consensus and public buy-in. Why it matters: signals if prohibition moves from advocacy to policy; tests cross-partisan alignment noted by signatories. Period: Q4 2025–Q1 2026.
  • California SB 53 “Transparency in Frontier AI” implementation — What to watch: initial public safety protocol disclosures and any “catastrophic risk” incident reports by large-AI developers. Why it matters: concrete compliance data point; raises regulatory pressure beyond federal gridlock. Period: Q4 2025–H1 2026.
  • Industry roadmap updates vs. public sentiment — What to watch: OpenAI/Meta/Google statements or pivots on AGI/superintelligence, new safety commitments, or launch delays. Why it matters: gauges the tension between corporate AGI ambitions and rising public/policy pushback. Period: Q4 2025–Q2 2026.
  • Defining “safe and controllable” criteria — What to watch: expert panel drafts, consensus papers, or benchmark suites (e.g., “real work” evaluations) used to assess control/safety. Why it matters: establishes the bar for any future lifting of a ban; shifts debate from rhetoric to measurable standards. Period: Q4 2025–H1 2026.
  • International echoes and state-level expansions — What to watch: new national or regional statements, moratorium proposals, or state laws mirroring/expanding oversight. Why it matters: tests whether the ban posture globalizes and whether states fill federal gaps. Period: Q4 2025–H1 2026.

Banning Superintelligence: Catalyst for Safer AI or Barrier to Innovation?

From one angle, this is overdue civic hygiene: a rare cross-partisan coalition insisting that power beyond precedent should meet proof beyond doubt. From another, it’s Luddism in modern clothing—an elite veto on invention, with “consensus” as a moving target and “public buy‐in” as a proxy for fear. Critics warn that bans ossify advantage, pushing research underground or consolidating it in the hands of the few who can navigate regulation. Supporters counter that enforceability follows will: chips can be traced, compute metered, labs audited—just as finance, aviation, and nuclear systems are. Some see celebrity signatures as performative; others see a democratic signal. Is “superintelligence” a meaningful threshold or a rhetorical scarecrow? Are we protecting humanity, or protecting incumbents? And if safety research requires capability research, can you pause the latter without starving the former?

Here’s the twist: even if a global prohibition never materializes, the demand is already doing quiet work. It shifts the burden of proof from “prove danger” to “prove control,” accelerating the scaffolding we actually need—verified compute supply chains, incident reporting, model stress tests tied to “real work” benchmarks, and democratic mechanisms for social license. A pragmatic settlement may look less like a freeze and more like a dual‐key regime: training above defined compute or capability thresholds requires both scientific sign‐off and public authorization, with continuous auditing and automatic policy ratchets when benchmarks are crossed. Paradoxically, the push to ban could make frontier AI safer precisely by building the institutions that make a ban unnecessary. The surprising conclusion is not that we must choose between innovation and inhibition, but that the credible threat of prohibition can be the lever that constitutionalizes AI power—transforming superintelligence from a corporate moonshot into a supervised public utility with inspection rights, liability, and a democratic brake pedal.
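
To make the dual-key idea tangible, here is a minimal sketch of the decision logic, assuming hypothetical compute and capability thresholds and simple sign-off records; it illustrates the gating pattern described above, not any actual regulatory process.

```python
# Minimal sketch of a "dual-key" gate: training above either threshold proceeds
# only if both the scientific and the public key are turned. All values are
# hypothetical placeholders.
from dataclasses import dataclass

COMPUTE_THRESHOLD_FLOP = 1e26      # hypothetical trigger for the dual-key rule
CAPABILITY_THRESHOLD_SCORE = 0.8   # hypothetical score on a capability benchmark


@dataclass
class Authorization:
    granted: bool
    issuer: str  # e.g., scientific review panel or public authorization body


def dual_key_approved(total_flop: float,
                      capability_score: float,
                      scientific_signoff: Authorization,
                      public_authorization: Authorization) -> bool:
    """Return True if the run may proceed under the dual-key rule."""
    above_threshold = (total_flop >= COMPUTE_THRESHOLD_FLOP
                       or capability_score >= CAPABILITY_THRESHOLD_SCORE)
    if not above_threshold:
        return True  # below both thresholds: the dual-key requirement does not apply
    return scientific_signoff.granted and public_authorization.granted


if __name__ == "__main__":
    sci = Authorization(granted=True, issuer="expert review panel")
    pub = Authorization(granted=False, issuer="public authorization body")
    print(dual_key_approved(5e26, 0.9, sci, pub))  # False: one key is missing
```

The design choice worth noting is that crossing either threshold triggers the requirement, and neither key alone is sufficient; that is what turns a blanket freeze into a conditional, auditable brake.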