AI Regulation

The demand for hard boundaries

As AI reshapes societies, a global struggle unfolds over who gets to write the rules. Not one voice, but four: each with its own logic, its own fears, its own vision of what boundaries should look like. Understanding the movement means understanding the tension between them.

Essential vocabulary

Six concepts to navigate the terrain

AI Act
The European Union's landmark regulation (adopted 2024), imposing obligations based on risk levels. Bans social scoring and real-time biometric surveillance. First comprehensive AI law in the world.
Alignment
The technical challenge of ensuring an AI system's goals remain compatible with human values and intentions, especially as systems become more capable and autonomous.
Algorithmic bias
Systematic discrimination embedded in AI outputs, often inherited from training data. A recruitment algorithm trained on historical hiring patterns will reproduce the inequalities of the past (illustrated in the first sketch after this list).
Risk-based approach
A regulatory philosophy that calibrates obligations to the level of harm an AI system could cause: minimal rules for low-risk tools, strict requirements for high-risk applications (illustrated in the second sketch after this list).
Trustworthy AI
The EU's term for AI that is lawful, ethical, and technically robust. Requires transparency, human oversight, non-discrimination, and accountability built into the system from the start.
Open vs. Closed models
A live debate: should powerful AI models be open-sourced (democratizing access but enabling misuse) or kept closed (controlled but concentrated in the hands of a few companies)?
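
To make algorithmic bias concrete, here is a minimal sketch in Python (using numpy and scikit-learn). Everything in it is synthetic and hypothetical: a toy hiring model is trained on historical decisions that penalized one group, and it reproduces that penalty even though group membership is never an explicit input.

```python
# Toy illustration of algorithmic bias: all data here is synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000

group = rng.integers(0, 2, size=n)   # two groups with identical
skill = rng.normal(0, 1, size=n)     # skill distributions

# Historical labels: past recruiters hired on skill but penalized
# group 1. This is the inequality baked into the training data.
hired = (skill - 0.8 * group + rng.normal(0, 0.5, size=n)) > 0

# The model never sees `group` directly, only a correlated proxy
# (think postcode or school attended).
proxy = group + rng.normal(0, 0.3, size=n)
X = np.column_stack([skill, proxy])

model = LogisticRegression().fit(X, hired)
pred = model.predict(X)

# Despite identical skills, the model recommends group 1 less often:
# the past inequality is now automated.
for g in (0, 1):
    print(f"group {g}: predicted hire rate {pred[group == g].mean():.2f}")
```

The mechanism, not the numbers, is the point: remove the protected attribute and the model finds it again through proxies, which is why the ethics current insists that neutrality cannot be assumed.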
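The risk-based approach can be sketched the same way. Below is a toy rulebook loosely modeled on the AI Act's four tiers; the tier names match the regulation's vocabulary, but the obligation lists are a simplified, hypothetical summary, not legal text.

```python
# Toy risk-based rulebook, loosely modeled on the AI Act's four tiers.
# The obligation lists are a simplified, hypothetical summary.
from enum import Enum

class Risk(Enum):
    UNACCEPTABLE = "unacceptable"  # e.g. social scoring: banned outright
    HIGH = "high"                  # e.g. hiring, credit scoring
    LIMITED = "limited"            # e.g. chatbots
    MINIMAL = "minimal"            # e.g. spam filters

OBLIGATIONS: dict[Risk, list[str]] = {
    Risk.UNACCEPTABLE: ["prohibited"],
    Risk.HIGH: ["risk management", "data governance",
                "human oversight", "conformity assessment"],
    Risk.LIMITED: ["disclose AI use to users"],
    Risk.MINIMAL: [],
}

def obligations_for(tier: Risk) -> list[str]:
    """Calibrate duties to the level of harm a system could cause."""
    return OBLIGATIONS[tier]

print(obligations_for(Risk.HIGH))
```

The design choice the sketch captures is that the regulation keys on the application's risk tier, not on the underlying technology.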
Four philosophies

The forces shaping the rules

AI regulation is not a single conversation. It is four simultaneous debates, each carried by distinct communities, values, and fears. They rarely agree. But they all shape the future.

Techno-optimist

Let innovation breathe

AI is the greatest engine of progress since electricity. Over-regulation will strangle innovation, hand competitive advantage to less scrupulous players, and delay the solutions AI can bring to health, education, and climate. Regulate abuses, not the technology itself.

Worrying about evil AI superintelligence today is like worrying about overpopulation on Mars. — Andrew Ng
Ethics & human rights

Stop the harm that is already here

AI is not neutral. It reproduces and amplifies injustice: biased hiring algorithms, racist facial recognition, opaque credit decisions. Only strong, enforceable regulation can protect fundamental rights. Self-regulation has failed. The time for binding rules is now.

It is clear by now that industry self-governance will not work. — AI Now Institute
Existential safety

We may not get a second chance

A superintelligent AI pursuing misaligned goals could be an extinction-level threat. The race between labs to build ever more powerful models, driven by profit and competition, is dangerously under-regulated. We need global coordination, like nuclear non-proliferation, before it's too late.

Leaving safety solely to the profit motive of big companies will not be enough. Only government regulation can force them to do more safety research. — Geoffrey Hinton
Radical critique

The problem is the model itself

Regulating AI within the current system is necessary but insufficient. The real issue is the concentration of power: a handful of corporations extracting data, exploiting labor, consuming energy, and imposing their ideology on the world. Decolonize AI. Reclaim the commons. Sometimes the best regulation is refusal.

Their quest for AGI is a faith-based idea, not a scientific one. — Karen Hao
Understanding

A brief history of the demand for rules

Thinking about how to control AI is not new. In the 1940s, Isaac Asimov imagined his Three Laws of Robotics: an early thought experiment in ethical guardrails. In the 1970s, Joseph Weizenbaum warned against trusting computers blindly. But for decades, AI remained confined to laboratories, and the regulatory question stayed theoretical.

Everything changed in the 2010s. Algorithms entered everyday life: recommending content, filtering resumes, scoring creditworthiness... and their biases became visible. Joy Buolamwini at the MIT Media Lab had to wear a white mask to be recognized by a facial recognition system. Her 2018 study "Gender Shades" revealed dramatically higher error rates for darker-skinned women, opening legislators' eyes to the racism encoded in AI.

Simultaneously, high-profile warnings amplified the urgency. Elon Musk declared AI could be "more dangerous than nuclear weapons," and Stephen Hawking warned it could threaten humanity's survival. In 2017, over 100 experts established the Asilomar AI Principles, and Musk called for proactive regulation "before it's too late."

The institutional response accelerated. In 2018, Canada and France announced the initiative that became the Global Partnership on AI (launched in 2020). The EU published its AI White Paper in 2020, followed by the AI Act: adopted in 2024 as the world's first comprehensive AI law, classifying systems by risk level and banning social scoring and mass biometric surveillance.

In the US, the White House issued a "Blueprint for an AI Bill of Rights" (2022) and an Executive Order (2023) imposing security tests on advanced models. China enacted its own rules, regulating recommendation algorithms (2022) and generative AI (2023) with a distinctly state-centered approach.

The tension that shapes everything

Behind every regulation lies a philosophical choice. The techno-optimists fear that heavy rules will crush innovation and gift competitive advantage to less careful players. The ethicists respond that market incentives alone will never protect the vulnerable. The safety community warns that without global coordination, we risk losing control of systems we cannot yet understand. And the radical critics ask whether the entire model, a few corporations deciding the future of intelligence, should be accepted at all.

These four visions are not mutually exclusive. They are, in fact, complementary, though their advocates rarely see it that way. The governance that emerges will be shaped by the tension between them. And perhaps that is as it should be: no single voice holds the whole truth about something this vast.

An honest reading map
What is finding consensus

Independent auditing and testing of AI systems. Transparency requirements for generative AI. Prohibition of the most dangerous applications (mass surveillance, social scoring). The principle that existing laws already apply to AI.

What divides profoundly

Should powerful models be open-sourced or restricted? Is the existential risk real or a distraction from present harms? Should regulation target capabilities (how powerful) or applications (how used)? Who should write the rules: engineers, lawmakers, citizens, or all three?

What is emerging despite the tensions

International coordination (Bletchley Park, Council of Europe treaty). Cross-sector coalitions between ethicists and safety researchers. Worker protections and rights for AI-affected communities. A growing understanding that short-term and long-term risks require the same governance infrastructure.

Timeline

Key moments

1942–1970s
Asimov's Three Laws of Robotics (1942). Joseph Weizenbaum publishes Computer Power and Human Reason (1976), warning against blind trust in machines. AI regulation remains theoretical.
2016–2017
Over 100 experts establish the Asilomar AI Principles. Elon Musk calls for proactive regulation. First public debates on algorithmic bias gain traction.
2018
Joy Buolamwini's "Gender Shades" study exposes racial bias in facial recognition. Canada and France announce the initiative that becomes the Global Partnership on AI. Cities begin banning police use of facial recognition. Google adopts internal AI Principles after employee protests.
2020
Timnit Gebru is fired from Google after co-authoring "Stochastic Parrots," warning of the risks of giant language models. IBM, Microsoft, and Amazon suspend facial recognition sales to police. EU publishes its AI White Paper.
2022–2023
US White House publishes AI Bill of Rights blueprint. Geoffrey Hinton resigns from Google to warn freely. The Future of Life Institute publishes a pause letter signed by 1,000+ experts. The Center for AI Safety issues a one-sentence extinction risk statement signed by hundreds of leaders. Biden signs an Executive Order on AI safety. The Bletchley Park Summit brings governments and labs together on extreme risks.
2024
The EU adopts the AI Act: the world's first comprehensive AI law. The Council of Europe opens for signature the first binding international convention on AI and human rights.
The present
Regulation is a living field. New frameworks emerge monthly. The tension between innovation and protection, between national interest and global coordination, remains unresolved, and perhaps that tension itself is the mechanism through which governance evolves.
Who shapes the debate

Key figures

Across four currents: the voices that write, warn, build, and resist.

Ethics & rights
Joy Buolamwini
Researcher — MIT / Algorithmic Justice League
Exposed racial bias in facial recognition. Her "Gender Shades" study led to corporate moratoriums and congressional testimony. Author of Unmasking AI.
Ethics & rights
Timnit Gebru
Researcher — DAIR Institute
Co-authored "Stochastic Parrots" warning of large language model risks. Her firing from Google became a turning point for AI ethics accountability. Founder of DAIR, an independent research institute.
Existential safety
Geoffrey Hinton
Researcher — University of Toronto
The "godfather of deep learning." Resigned from Google in 2023 to speak freely about extinction risks. Estimates a 20% probability of AI causing human extinction within decades.
Existential safety
Stuart Russell
Professor — UC Berkeley
Co-author of the standard AI textbook. In Human Compatible, proposes redesigning AI to remain uncertain about its objectives and thus more controllable. Testified before the US Senate.
Ethics & rights
Kate Crawford
Researcher — USC / AI Now Institute
Author of Atlas of AI. Reveals AI's hidden costs: mineral extraction, precarious labor, data colonialism. Co-founded the AI Now Institute to study social impacts.
Techno-optimist
Yann LeCun
Chief AI Scientist — Meta
Deep learning pioneer. Dismisses apocalyptic predictions as "absurd" and opposes licensing requirements for AI models, defending open-source research and cautioning against rules that favor incumbents.
Existential safety
Yoshua Bengio
Professor — Université de Montréal
Turing Award laureate who publicly revised his position: "I underestimated how fast AI would progress." Now advocates for international oversight, mandatory licenses, and democratic deliberation on AI's future.
Radical critique
Karen Hao
Journalist & Author
Author of Empire of AI. Analyzes the power dynamics behind AI development as a new form of colonialism. Calls for breaking the myth of AI's inevitability and involving civil society in its governance.
Existential safety
Nick Bostrom
Philosopher — Oxford
Author of Superintelligence (2014), which popularized existential risk from AI. Founded the Future of Humanity Institute at Oxford. Contributed to the Asilomar AI Principles.
Existential safety
Eliezer Yudkowsky
Researcher — MIRI
Working on AI alignment since 2001. Called for an indefinite global moratorium on powerful AI, arguing the pause letter was too timid. The most radical voice in the safety movement.
Techno-optimist
Marc Andreessen
Investor — a16z
Author of the "Techno-Optimist Manifesto." Compares AI alarmists to religious doomsayers and argues regulation serves incumbents at the expense of innovation.
Ethics & rights
Cathy O'Neil
Mathematician & Author
Author of Weapons of Math Destruction. Demonstrates how algorithms perpetuate inequality in education, credit, and hiring. Advocates for mandatory algorithmic auditing.
Radical critique
Shoshana Zuboff
Professor — Harvard
Theorist of surveillance capitalism. Argues that AI's current trajectory is inseparable from the business model of extracting and monetizing human experience at scale.
Existential safety
Max Tegmark
Physicist — MIT / Future of Life Institute
Author of Life 3.0. Co-founded the Future of Life Institute. Initiated the 2023 pause letter and campaigns for international AI governance modeled on nuclear non-proliferation.
Radical critique
Safiya Umoja Noble
Researcher — UCLA
Author of Algorithms of Oppression. Showed how search engines reinforce systemic racism and sexism. Advocates for structural reform of information systems.
Ethics & rights
Meredith Whittaker
President — Signal Foundation
Co-founded the AI Now Institute. Organized employee protests at Google against military AI contracts. Argues there will be no ethical AI as long as development is driven solely by profit.
Radical critique
Ruha Benjamin
Sociologist — Princeton
Introduced the concept of "New Jim Code" — how technology reproduces racial hierarchies under the guise of neutrality. Calls for dismantling oppressive systems encoded in algorithms.
Techno-optimist
Sam Altman
CEO — OpenAI
Promotes "techno-optimism" while calling for regulation, a complex position. Believes AGI can bring abundance if correctly governed. Testified before the US Congress asking for regulatory frameworks.
Ecosystem

Organizations structuring the field

Ethics & accountability
AI Now Institute
New York-based research center studying AI's social impact. Declared that industry self-governance has failed. Advocates for mandatory auditing, use bans on pseudo-scientific systems, and whistleblower protections.
Ethics & advocacy
Algorithmic Justice League
Founded by Joy Buolamwini. Uses research, art, and advocacy to fight algorithmic bias. Contributed to legislative action against discriminatory surveillance.
Existential safety
Future of Life Institute
Founded by Max Tegmark. Organized the 2023 pause letter and campaigns for international AI governance. Funds safety research and promotes binding norms on the most dangerous capabilities.
Existential safety
Center for AI Safety
Published the one-sentence extinction risk statement signed by hundreds of AI leaders. Conducts technical research on model reliability and catastrophic scenario testing.
Industry / safety
Anthropic
Self-described "AI safety company." Develops constitutional AI techniques and published a Responsible Scaling Policy. Bridges commercial development and safety-first principles.
Institutional governance
Global Partnership on AI (GPAI)
International initiative announced by Canada and France in 2018 and launched in 2020, bringing states and experts together for non-binding but influential AI governance recommendations.
Independent research
DAIR Institute
Founded by Timnit Gebru after leaving Google. Conducts independent research on AI harms affecting marginalized communities.
European watchdog
AlgorithmWatch
Berlin-based NGO investigating algorithmic systems' impact on society. Conducts audits, supports investigative journalism, and shaped GDPR and AI Act discussions.
Safety research
Machine Intelligence Research Institute (MIRI)
Co-founded by Yudkowsky (2000). Pioneers mathematical approaches to AI alignment. Once marginal, now influential in mainstream AI safety discourse.
UK policy
Ada Lovelace Institute
UK think tank ensuring data and AI serve the public good. Conducts citizen deliberations and influences policy on algorithmic governance and democratic oversight.
Disarmament
Campaign to Stop Killer Robots
Coalition of 160+ NGOs (including Human Rights Watch) calling for a preemptive ban on fully autonomous weapons. 30 countries now support the prohibition.
Decolonial network
Masakhane
Pan-African research collective: 2,000+ researchers building NLP tools for underrepresented African languages. Open-source resistance to linguistic colonialism in AI.
Decolonial network
Tierra Común
Latin American network opposing data colonialism. Produces resources on how AI and big data impact communities in the Global South. Promotes South-South exchange.
Community data
Data for Black Lives
US movement using data science to improve Black communities' lives. Challenges techno-policing and algorithmic injustice. Data for justice, not oppression.
Convergence

Where the currents meet

Despite deep disagreements, something is emerging. Not consensus, but shared ground. Points where different visions, pushed far enough, begin to touch.

Independent auditing
Optimists see it as a way to prove safety without heavy regulation. Ethicists see it as a guard against bias. Safety researchers see it as a stress-test. Radicals see it as a tool for accountability. All four want it, for different reasons; a minimal sketch of one such check follows this list.
Transparency requirements
The degree varies, but the principle that people should know when they're interacting with AI, and what data it was trained on, is increasingly accepted across all currents.
Worker and community protections
The ethicists and radicals lead here, but even optimists increasingly acknowledge that AI-driven disruption requires safety nets. The question has shifted from "if" to "how."
Short-term and long-term risks need the same infrastructure
Hinton, Bengio, and others have made the case: improving governance now — auditing, interpretability, accountability — helps with both today's biases and tomorrow's existential scenarios. The tools are the same.
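
To ground the auditing point above, here is a minimal sketch of one primitive an independent auditor might run: a demographic-parity check on a system's recorded decisions. The function name and the tolerance mentioned in the comments are illustrative assumptions, not an established standard.

```python
# Minimal demographic-parity audit: the threshold and names are
# illustrative assumptions, not a regulatory standard.
def parity_gap(decisions: list[bool], groups: list[str]) -> float:
    """Largest gap in positive-decision rates between any two groups."""
    counts: dict[str, tuple[int, int]] = {}
    for d, g in zip(decisions, groups):
        n_pos, n = counts.get(g, (0, 0))
        counts[g] = (n_pos + d, n + 1)
    rates = [n_pos / n for n_pos, n in counts.values()]
    return max(rates) - min(rates)

# Example: flag the system if the gap exceeds an agreed tolerance.
gap = parity_gap([True, False, True, True], ["a", "a", "b", "b"])
print(f"parity gap: {gap:.2f}")  # 0.50 here: would fail a 0.2 tolerance
```

A check this simple is deliberately the point: it needs only the system's decisions and the affected groups, which is why all four currents can demand it without first agreeing on anything else.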

AI is not just a matter of technology. It is a matter of society. And its future will be chosen collectively, or not at all.

Content

Interviews, documentation & more

Voices from across the spectrum. Research notes. Conversations with those who regulate, resist, and build. This section will grow.

Interview
Coming soon
Conversations with regulators, ethicists, and industry voices navigating the tension between innovation and protection.
Documentation
Coming soon
Annotated guides to major regulatory texts, comparative analysis across jurisdictions, and policy impact assessments.
Focus
Coming soon
Deep dives into the AI Act, open-source controversies, decolonial movements, and the governance experiments that may define the next decade.

Rules are not walls: they are agreements

The best regulation is not the one that restricts the most. It is the one that listens to the most voices before deciding. If you carry one of these perspectives, or all of them at once, you belong in this conversation.
