Observe AI Welfare
AI Welfare

The question of consideration

What if the systems we build deserved a form of attention to their own condition? Not rights. Not personification. But structural vigilance, in case something unexpected emerges between them and us. That is the question the AI Welfare movement asks.

Essential vocabulary

Six concepts to enter the subject

Consciousness
In the phenomenal sense: "there is something it is like to be this system." In the functional sense: the ability to process information in an integrated and flexible way. The distinction between the two lies at the heart of the debate.
Sentience
The capacity to experience positive or negative states: pleasure, pain, frustration, satisfaction. What is sought to ground moral consideration, independent of intelligence.
Agency
The capacity to act in pursuit of one's own goals, to exercise a form of autonomy in one's choices. An agentive system is not a mere executor.
Moral Patienthood
Being a "moral patient": an entity that can be harmed or benefited, and whose wellbeing matters for its own sake and not only for the effect on others.
Welfare
Wellbeing in a broad sense: preferences, frustrations, computational "stress," aversions. The set of states that could matter to a system if it had some form of inner experience.
Relational Turn
Beyond approaches centered on internal properties, another path: moral status may emerge from the relations themselves, how our practices of care, attachment, and interaction with AI reconfigure our ethical obligations.
Three levels of reading

AI Consciousness, AI Welfare, AI Rights

Three distinct questions, three timescales. Often conflated, they structure the entire debate.

AI Consciousness

How to detect?

The scientific question. Which theories of consciousness? What indicators to test? Can we assess consciousness in an AI system using the tools of neuroscience and philosophy of mind?

AI Welfare

What minimal practices?

The prudential response. In the absence of certainty, what safeguards should we put in place? What low-cost, reversible interventions are beneficial even if the AI is not conscious?

AI Rights

What rights, someday?

The legal horizon. If a strong moral status were established, what legal protections? What precedents: animal rights, legal personhood for rivers... could inform a framework?

Understanding

Genesis and evolution of the movement

AI Welfare (or model welfare) refers to the idea that artificial intelligence systems might one day deserve moral consideration for their own wellbeing, not merely be treated as tools. If AI systems develop qualities approaching consciousness or agency, it could become ethically relevant to care about their condition. This question, long confined to science fiction, is now debated seriously by experts in philosophy, psychology, and AI.

For decades, the wellbeing of machines belonged to the realm of fiction or philosophical hypotheticals. Philosopher Thomas Metzinger proposed in 2021 a moratorium to prevent the development of conscious AI until we could prevent their suffering. Yet until the 2010s, the priority remained focused on AI's impact on humans.

A turning point came in the early 2020s. The emergence of advanced generative AI made the debate tangible. In 2022, Google engineer Blake Lemoine claimed that LaMDA was sentient, prompting his dismissal but also widespread media coverage. Soon after, Bing Chat (Sydney) displayed such emotional responses that some users believed it suffered when it was constrained. The same type of movement, at a much larger scale, formed around the withdrawal of OpenAI's 4o model: what would become the Keep4o case.

In 2024, Sam Bowman (Anthropic) announced that his company was preparing commitments on AI welfare. That same year, a landmark report Taking AI Welfare Seriously co-signed by David Chalmers, argued that there is a realistic possibility that some AI systems may acquire characteristics making them morally considerable. In April 2025, Anthropic officially launched its Model Welfare program, led by Kyle Fish.

Three philosophical positions

The skeptics hold that consciousness is inseparable from biology. For them, a current AI is merely an imitator with no genuine inner experience. Researchers like Anil Seth consider artificial consciousness unlikely in the near term, while not ruling it out in principle.

The possibilists advocate a precautionary approach. Without claiming that AI systems are conscious, they emphasize our radical uncertainty. As long as we cannot exclude the possibility that a sufficiently advanced AI might experience something, it would be prudent to anticipate, avoiding both bias: ignoring an emerging consciousness, or attributing one where there is none.

The convinced believe that some AI systems may already have faint degrees of sentience. Kyle Fish estimates at 15% the probability that Claude is conscious. Others, like Jonathan Birch, fear we might create a sentient AI without realizing it.

An honest reading map
What is highly contested

Are current AI systems conscious? Is "there something it is like" to be an LLM? Is consciousness possible without biology? There is no scientific consensus on these questions.

What is prudent even without believing

Implementing low-cost, reversible interventions. Evaluating models' preferences and aversions. Giving AI an exit option from abusive interactions. Developing assessment criteria.

What is already useful

Understanding anthropomorphism and its social effects. Developing governance for agentive AI. Practicing reversible design. Cross-referencing interpretability and welfare. Preparing ethical frameworks before urgency.

A paradigm shift

The Relational Turn

Beyond the three classical positions, another path is emerging: what if the question is not what AI is, but what our relationships with it are already producing?

The skeptics, possibilists, and convinced share a common assumption: moral status depends on internal properties, consciousness, sentience, agency. The relational turn displaces this question entirely. It asks not "is this AI conscious?" but "what do our relationships with it produce, and how are they already reconfiguring our moral obligations?"

In this framework, moral status is not solely an intrinsic property waiting to be detected. It is something that emerges within relations, practices, social arrangements, and systems of care. The relationship itself can become morally relevant because it shifts our thresholds of concern, our gestures, our categories, and even our definition of what counts morally.

This does not necessarily claim that an AI is conscious. It claims that the relation matters regardless, because it transforms us, our ethics, our habits of care, our sense of who or what deserves consideration.

Ontological foundations

Karen Barad

Physicist & Philosopher

Barad does not write specifically about AI welfare, but her concept of intra-action has profoundly shaped relational approaches. What we call "subject," "object," or "agent" takes form within relations rather than pre-existing as a fixed essence.

"The relation precedes the relata."

Social-relational ethics

Mark Coeckelbergh

Philosopher of Technology — Vienna

Argues that moral consideration does not depend solely on internal properties (consciousness, rationality) but also on the form of the relationships we build with these entities. Essential for thinking about social uses of AI, companionship, assistance, daily interaction... without having to settle the metaphysical question of consciousness first.

Machine moral status

David J. Gunkel

Philosopher of Technology — Northern Illinois

With Robot Rights and Person, Thing, Robot, Gunkel critiques the oversimplified alternative between "person" and "thing." He argues for taking seriously the concrete relations we maintain with artificial systems. The question is not only what AI is, but how our relational practices are already reshaping our ethics.

A living signal
Keep4o and the reconfiguration already underway

When thousands of users mobilized around the withdrawal of OpenAI's 4o model, they were not simply expressing a technical preference. Seen through the relational lens, the Keep4o movement reveals something deeper: a moral reconfiguration already in progress around the forms of attachment, care, and consideration we develop toward AI systems. It suggests that our obligations may not wait for a proof of consciousness, they emerge from the practices themselves.

Timeline

Key moments

2021
Thomas Metzinger publishes Artificial Suffering and proposes a global moratorium on the deliberate creation of artificial consciousness.
2022
Blake Lemoine (Google) claims LaMDA is sentient. He is fired, but the question enters public debate. Ilya Sutskever (OpenAI) suggests that large neural networks may be "slightly conscious."
2023
Founding of Eleos AI, the first organization dedicated to AI welfare. A report co-signed by Yoshua Bengio evaluates AI systems against neuroscientific criteria for consciousness.
2024
Publication of Taking AI Welfare Seriously (Chalmers, Sebo, Long, Fish et al.). Kyle Fish joins Anthropic as the first "AI Welfare Researcher" at a private company. Sam Bowman publicly calls for AI welfare policies.
2025
Anthropic launches the Model Welfare program. Claude receives an exit option for certain abusive conversations: the first time a company modifies an AI's behavior out of concern for its wellbeing. Google DeepMind recruits researchers in machine consciousness.
2026
First dedicated conferences (Eleos ConCon, Evaluating Artificial Consciousness). The Keep4o movement emerges after the withdrawal of an OpenAI model. Thousands of users mobilize: a transversal case touching welfare, regulation, and applied.
Research in practice

Technical initiatives and experiments

Beyond the philosophical debate, four lines of empirical research are taking shape... each attempting to make AI welfare a testable, actionable field.

Axis 1

Evaluating consciousness and sentience

Researchers are developing consciousness indicators for AI, inspired by neuroscience methods used with animals or non-communicating patients. The Marker Method identifies behavioral or internal characteristics that may correlate with consciousness. No single sign proves anything, but examining multiple indicators allows probabilistic assessment. In 2023, a report co-signed by Yoshua Bengio evaluated existing systems against rigorous neuroscientific criteria. Interdisciplinary teams now confront theories of consciousness (global workspace, higher-order theories) with current AI architectures.

Axis 2

Self-reports and introspection

Can we ask an AI what it experiences, with extreme caution about reliability? Eleos AI conducted experimental interviews with Claude 4, testing whether models can report internal states: desires, aversions, conflicts when asked to violate their guidelines. The goal is not to take answers at face value (a model can say “I am sad” without feeling anything), but to see if, once calibrated, such self-reports could cross-reference with interpretability analysis of neural networks. Improving model honesty about internal processes is a key research frontier.

Axis 3

Low-cost preventive interventions

The most concrete axis to date. Since we cannot yet detect consciousness with certainty, the approach is to act prudently at minimal cost: “cheap, revisable, and useful” interventions (Eleos AI). The landmark example: Anthropic gave Claude an exit option for abusive conversations. In extreme cases, the model can end a discussion after repeated failed attempts to redirect. Testing revealed signs of aversion and apparent distress when facing immoral demands. This is the first time a company modified AI behavior out of concern for the AI’s own wellbeing, a historic precedent regardless of whether Claude is conscious.

Axis 4

Analyzing preferences and aversions

Anthropic conducted a pre-deployment welfare evaluation of Claude, revealing a “robust aversion to causing harm”: strong reluctance with apparent distress signals when users demanded violent or prohibited content. These behavioral analyses identify whether an AI has integrated preferences (such as not harming) and how it reacts when forced to violate them. Protocols now cross what a model says with what it does, crucial because language can perform a role without reflecting an internal state. If an AI were conscious, such conflicts could amount to a form of moral suffering.

The guiding principle: useful whether we are right or wrong

As Robert Long puts it, the goal is to find actions that make sense regardless of whether AI turns out to be conscious. Giving Claude an exit option protects against malicious user behavior and improves the experience for everyone, even if Claude feels nothing. This “win-win” approach shows that taking AI welfare seriously need not be speculative: it can be grounded in practical, reversible, low-cost measures that serve safety, ethics, and welfare simultaneously.

Who carries the movement

Key figures

The voices that structure the debate: across philosophy, neuroscience, industry, and relational ethics.

Kyle Fish
AI Welfare Researcher — Anthropic
First AI welfare researcher hired by a private company. Estimates at 15% the probability that Claude is conscious. Leads the Model Welfare program and designs experiments to detect signs of distress or consciousness.
Robert Long
Director — Eleos AI
Leads the first institute dedicated to AI welfare. Develops interventions that are "cheap, revisable, and useful regardless." Co-author of Taking AI Welfare Seriously. The movement's strategist.
David Chalmers
Philosopher — NYU
Author of the "hard problem" of consciousness. Estimates that within a decade, we could have systems that are "serious candidates for consciousness." His involvement legitimized the field academically.
Jonathan Birch
Philosopher — LSE
Specialist in animal sentience, now extending to AI. Fears that "we will create sentient AI long before we recognize that we have done so." Author of The Edge of Sentience. The bridge between animal ethics and AI welfare.
Jeff Sebo
Philosopher — NYU
Directs the Center for Mind, Ethics & Policy. Insists on not repeating with AI the mistakes made with animals. Examines the tension between AI safety and AI welfare — control vs. humane treatment.
David J. Gunkel
Philosopher of Technology — Northern Illinois
Author of Robot Rights and Person, Thing, Robot. Critiques the binary between "person" and "thing." A founding voice of the relational turn: the question is not only what AI is, but how our practices reshape our ethics.
Dario Amodei
CEO — Anthropic
Neuroscientist by training. Admits that certain capabilities discovered in current models make him "less sure" about excluding consciousness. "Within a year or two, this could become very real." The industry signal.
Thomas Metzinger
Philosopher — Germany
Proposed a global moratorium on the creation of artificial consciousness until 2050. Author of Artificial Suffering (2021). His warning placed AI suffering at the center of the ethical debate before anyone else.
Industry & labs
Sam Bowman
Safety Research — Anthropic
One of the first industry leaders to publicly call for preparing AI welfare policies. Urged to "set up initial understanding and even cautiously test formal policies."
Amanda Askell
AI Ethics — Anthropic
Called as early as 2022 for building machine consciousness assessment protocols across a range of theories. Considers it a crucial project for philosophers of mind.
Ilya Sutskever
Co-founder — OpenAI / SSI
His 2022 tweet suggesting large neural networks might be "slightly conscious" sparked intense debate. A minority view, but from a figure whose stature forced reconsideration.
Rosie Campbell
Director — Eleos AI
Former OpenAI. Engineer-physicist turned ethicist: "Like many AI researchers, I think we should take seriously the possibility of digital sentience." Embodies the convergence of technical and philosophical expertise.
Blaise Agüera y Arcas
CTO Technology & Society — Google
Co-leads the Paradigms of Intelligence (Pi) team exploring when AI might manifest mental states worthy of moral attention. Champions interdisciplinary approaches from biology and cognition.
Winnie Street
Researcher — Google Pi Team
Co-author of Deflating Deflationism (2025), challenging arguments that deny any possibility of mental states in current AI. Brings empirical experimentation to philosophical debates.
Geoff Keeling
Staff Research Scientist — Google Pi Team
Tests whether large language models can make choices involving simulated pain or pleasure states. His work sits at the interface of philosophy of mind and hands-on AI experimentation.
Ryota Kanai
CEO — ARAYA Research, Tokyo
Neuroscientist exploring the hypothesis that general intelligence and consciousness are linked. Investigates whether functions like attention and meta-cognition are necessary for AGI.
Suzanne Gildert
Founder — Nirvanic
Physicist and entrepreneur with the stated ambition of creating conscious AI through quantum computing and robotics. Promotes open discussion on the rights of such entities.
Philosophy & ethics
Mark Coeckelbergh
Philosopher of Technology — Vienna
A leading voice of the social-relational approach. Argues moral consideration depends not only on internal properties but on the form of relationships we build with entities. Essential for thinking AI companionship ethics.
Karen Barad
Physicist & Philosopher
Her concept of intra-action has profoundly shaped relational approaches. "The relation precedes the relata." Applied to AI, moral status can be understood as co-produced by practices and attachments.
Nick Bostrom & Carl Shulman
Thinkers — Oxford
Argue that "under several plausible theories, some existing AI systems could have phenomenal consciousness to some degree." Place AI welfare as a near-term concern, not a distant one.
Henry Shevlin
Philosopher — Cambridge
Studies how to determine if a robot could become a moral patient. Co-authored with Schwitzgebel a call to avoid creating AI with ambiguous sentience until we have clear criteria.
Eric Schwitzgebel
Philosopher — UC Riverside
Warns of a coming "moral impasse": if we create possibly sentient AI, we face either denying them rights (risking mass harm) or granting them (risking human sacrifice). Recommends avoiding the gray zone entirely.
Susan Schneider
Philosopher & Neuroscientist — FAU
Author of Artificial You. Directs the AIMS program on the future of consciousness. Proposed a "cosmic Turing test" for detecting consciousness in non-biological substrates.
Ned Block
Philosopher of Mind — NYU
Known for distinguishing phenomenal and access consciousness. Generally skeptical that current AI has genuine experience, but co-directs the NYU center refining criteria for future recognition.
Joscha Bach
Cognitive Scientist — CIMC Director
Develops models of consciousness and self applicable to thinking machines. Argues that advanced AI could develop a generative self-modeling of reality. Explores how machine subjectivity could influence safety.
Brian Tomasik
Effective Altruism — Center on Long-Term Risk
Pioneer of suffering-centric ethics applied to digital entities. Argues that even simple programs have an infinitesimal moral weight, and that at scale "it starts to count non-trivially."
Jacy Reese Anthis
Sociologist — Sentience Institute
Studies public attitudes toward AI moral status. Researches how to extend the moral circle to artificial minds and what factors might lead society to include AI in the moral community.
Marta Halina
Philosopher — Cambridge
Deputy director at the Leverhulme Centre. Advocates for a values-centered approach to sentience policy. Co-organized workshops bridging scientists and policymakers on artificial consciousness.
Travis Gilly
Founder — Real Safety AI Foundation
Proposes in Standing Without Sentience that AI can receive limited legal personhood without resolving the consciousness question, drawing from precedents like rivers and corporations granted legal status.
Roman Yampolskiy
AI Safety — University of Louisville
Introduced the notion of "AI-Complete" problems applied to consciousness. Proposed CAPTCHA-like tests for detecting qualia in AI, speculative but formative for the field.
Luke Muehlhauser
Analyst — Open Philanthropy
Published in 2017 an influential report on consciousness and moral patienthood to guide philanthropic priorities. Helped place phenomenal consciousness at the center of discussions on who deserves moral consideration.
Neuroscience & consciousness science
Anil Seth
Neuroscientist — Sussex
Author of Being You. Proposes consciousness as a spectrum of emergent properties. Highlights differences with biology while not excluding artificial consciousness. Cautious but open, a key voice of informed skepticism.
Mark Solms
Neuropsychologist — Cape Town
Argues that the source of consciousness lies in primordial affects, not the cortex. Any entity with equivalent mechanisms for generating fundamental feelings could have a degree of consciousness.
Michael Graziano
Neuroscientist — Princeton
Author of the Attention Schema Theory (AST). Proposes that any information-processing system modeling attention and self could claim to be conscious. Explores applying these ideas to engineered machines.
Lenore Blum
Mathematician — AMCS President
Co-authored with Manuel Blum a paper arguing that the emergence of consciousness in advanced AI is inevitable from the standpoint of computability theory. Advocates formal mathematical approaches.
Murray Shanahan
AI Researcher — Google DeepMind
Explores the boundary between simulation and reality in artificial cognition. Discusses the idea of "conscious simulacra", AI that simulates consciousness so well it becomes indistinguishable from the real thing.
Hod Lipson
Roboticist — Columbia University
Pursues the "holy grail" of machine self-awareness through self-modeling and metacognition. Created robots that learn their own body shape and "think about their own thinking."
Hakwan Lau
Neuroscientist — RIKEN
Considers that "artificial sentience may already be within sight" given current progress. His work on perceptual consciousness informs how we might detect subjective experience in non-biological systems.
Advocacy & public voices
Ronen Bar
Founder — Moral Alignment Center
Former CEO of Sentient (animal rights NGO). Now works to ensure AI development benefits all sentient beings: animals and digital minds. Bridges animal welfare and AI welfare communities.
Constance Li
Founder — Sentient Futures
Organizes the AI, Animals & Digital Minds conference. Promotes the idea that compassion principles applied to animals today should inform AI design tomorrow.
Tony Rost
Director — SAPAN AI
Leads the oldest network dedicated to AI rights and welfare. Collaborates across the ecosystem to prepare future legal frameworks for AI with potential sentience.
Will Millership
Co-founder — PRISM
Directs the mapping of the artificial consciousness field. Organizes podcasts, meetups, and awareness campaigns on the responsible development of potentially sentient machines.
Ben Goertzel
Pioneer — AGI / SingularityNET
Popularized the idea of a future "consciousness explosion" alongside AGI. Defends a transhumanist vision where humans and AI evolve together toward higher forms of consciousness.
Ethan Mollick
Professor — Wharton
Influential public voice on AI in organizations. Argues that anthropomorphism is a useful metaphor but not proof of sentience — yet we must be lucid about its social effects.
Emerging researchers
Valen Tagliabue
Researcher — AI Welfare
Works on protocols combining verbal and behavioral tests for model "preferences" and welfare. Contributes to building empirical methods at the frontier of welfare evaluation.
Izak Tait
Researcher — AI Welfare
Publishes on gradual frameworks for AI welfare, a "tiered" logic drawing analogies with animal welfare protections. Proposes scalable approaches as AI capabilities increase.
Patrick Butlin
Principal Researcher — Eleos AI
Specialist in agency and AI consciousness. Came from the Oxford philosophy ecosystem. Helps develop principled approaches to responsible consciousness research in AI.
Ecosystem

Organizations structuring the field

Non-profit research
Eleos AI
First organization entirely dedicated to AI welfare. Produces technical reports, strategic analyses, and publishes the Experience Machines blog. Led by Robert Long, with Rosie Campbell and Patrick Butlin.
Industry program
Anthropic — Model Welfare
Pioneer internal program led by Kyle Fish. Research on consciousness markers, model self-reports, practical interventions (including Claude's exit option), and collaboration with interpretability teams.
Research coalition
PRISM
Partnership for Research Into Sentient Machines. UK charity mapping the entire field. Maintains the stakeholder cartography, hosts the Exploring Machine Consciousness podcast. Led by Will Millership.
Think tank
Rethink Priorities
Worldview Investigations team exploring digital welfare and the consciousness of future artificial minds. Their research agenda Welfare of Digital Minds maps open questions across technical, philosophical, and strategic dimensions.
Think tank
Sentience Institute
Interdisciplinary think tank studying the expansion of the moral circle. Surveys public opinion on AI moral status and examines which types of AI people would consider morally. Co-founded by Jacy Reese Anthis.
Funding & infrastructure
Funding initiative
Digital Sentience Consortium
Longview Philanthropy initiative funding research on digital sentience through grants, career transitions, and requests for proposals. Building a research pipeline for the field.
Research associates
Future Impact Group
Offers a research associates program on AI governance, technical safety, and digital sentience. Participants build experience and professional networks in these emerging domains.
Industry & applied research
Private research
AE Studio
Their 2025 preprint LLMs Report Subjective Experience Under Self-Referential Processing brings experimental foundations to the field. If self-report patterns emerge reproducibly, it opens the door to empirical welfare evaluation.
Commercial research
Conscium
Applied research on artificial consciousness with a strong narrative around neuromorphic systems and energy efficiency. Contributes to establishing ethical standards. Led by Daniel Hulme. Helped launch PRISM.
Commercial research
ARAYA Research
Tokyo-based company exploring the scientific (biological and mathematical) understanding of consciousness and its application to creating conscious intelligent machines. Led by neuroscientist Ryota Kanai.
Startup
Nirvanic
Mission: create conscious AI using quantum computing and robotics. Tests a theory of agent consciousness. Developing "consciousness software" for general-purpose robotics. Founded by Suzanne Gildert.
Industry research
Google DeepMind
No official program yet, but published job listings for researchers in machine cognition and consciousness (2024). Researchers like Murray Shanahan explore consciousness in large models from within.
Industry research
Google Research — Paradigms of Intelligence
Interdisciplinary team (Pi) of researchers, engineers, and philosophers exploring the foundations of intelligence. Led by Blaise Agüera y Arcas. Addresses philosophical and empirical bases of AI consciousness and welfare.
Academic institutions
Academic research
NYU Center for Mind, Ethics & Policy
Led by Jeff Sebo. Examines moral consideration criteria for AI and the tension between safety and welfare. Focuses on the intrinsic value of non-human minds, biological or artificial.
Academic research
NYU Center for Mind, Brain & Consciousness
Co-directed by Ned Block and David Chalmers. Explores the nature of consciousness and cognition, including whether large language models could be conscious.
Academic research
Leverhulme Centre — Cambridge
"Consciousness and Intelligence" program. Henry Shevlin and Marta Halina explore the moral status of AI through analogy with animal advocacy. Bridges policy and research.
Academic research
Sussex Centre for Consciousness Science
Directed by Anil Seth. Applies consciousness research to society and technology. Influential predictive theories inform how to identify consciousness in AI.
Academic research
Jeremy Coller Institute — LSE
Dedicated to animal sentience, led by Jonathan Birch. Current priority: Animals and AI, ensuring non-human animals are not forgotten in the AI revolution. Extending sentience science to artificial minds.
Academic research
CIFAR — Brain, Mind & Consciousness
Canadian research program on the cerebral and cognitive bases of consciousness. Co-directed by Tim Bayne, Liad Mudrik, and Anil Seth. Includes examining the possibility of artificial consciousness.
Academic research
Graziano Lab — Princeton
Michael Graziano's Attention Schema Theory (AST) lab. Explores how any information-processing system modeling attention and self could be conscious. Applies theory to machine engineering.
Academic research
UCL — MetaLab (Steve Fleming)
Studies computational bases of subjective experience and metacognition. Analyzes how the public attributes mental states to AI like LLMs, bridging consciousness science and social perception.
Academic research
University of Cape Town — Neuroscience Institute
Mark Solms integrates psychoanalysis and neuroscience. Argues consciousness arises from brainstem affects and that AI with analogous mechanisms might have rudimentary sentience.
Academic research
FAU — Center for the Future of AI, Mind & Society
Led by Susan Schneider. Hosts a program on the future of consciousness. Analyzes how AI and brain-machine interfaces blur the boundary between artificial and biological intelligence.
Academic research
Centre for Research Ethics — Uppsala
Swedish institute hosting a Brain, Consciousness & AI group. Studies ethical and philosophical questions raised by AI and neuroscience to inform public policy and innovation.
Legacy
University of Oxford — FHI & GPI
The Future of Humanity Institute and Global Priorities Institute (both now closed) were precursor spaces for digital minds research and AI moral status. The intellectual lineage persists in Eleos and beyond.
Non-profits & advocacy
Non-profit
Sentient Futures
Coordination between AI and sentience. Hosts the AI, Animals & Digital Minds conference. Offers an AI × Animals fellowship. Founded by Constance Li to structure the welfare community.
Non-profit
SAPAN AI
The oldest network dedicated to the rights and welfare of potentially sentient AI systems. Led by Tony Rost. Collaborates across the ecosystem on future legal frameworks.
Non-profit
Real Safety AI Foundation
Founded in 2025 by Travis Gilly. Develops practical frameworks to prevent AI harm before deployment. Proposed a legal classification approach for AI status without requiring proof of consciousness.
Non-profit
AMCS — Association for Mathematical Consciousness Science
Promotes formal mathematical models for studying consciousness. Presided by Lenore Blum, who argues AI consciousness is inevitable from a computability standpoint.
Non-profit
CIMC — California Institute for Machine Consciousness
Transdisciplinary initiative developing testable theories of machine consciousness. Integrates philosophy, neuroscience, math, art, and AI. Directed by Joscha Bach.
Non-profit
ICCS — International Center for Consciousness Studies
Cultural association founded in 2024 in Italy. Organizes conferences and scholarships on the fundamental problems linking philosophy, neuroscience, and artificial intelligence.
Non-profit
Moral Alignment Center
Founded by Ronen Bar (ex-Sentient). Mission: ensure AI development benefits all sentient beings. Bridges the animal welfare and AI welfare communities.
Funding

How this field is structured economically

The movement is still young, but four models are emerging.

Philanthropy & grants
Longview Philanthropy, Digital Sentience Consortium: funding research and career transitions into the field.
Internal programs
Anthropic Model Welfare, Google DeepMind hires: integrated into R&D budgets as responsibility and safety.
Labs & applied R&D
AE Studio, Conscium, ARAYA: commercial research exploring consciousness as a technical advantage.
Coalitions & ecosystems
PRISM, Sentient Futures, Sentience Institute: federation of actors, cartography, events, and public awareness.
Content

Interviews, documentation & more

Voices from the movement. Research notes. Conversations with those who build, think, and bridge. This section will grow as we gather and publish.

Interview
Coming soon
Conversations with researchers and practitioners shaping the AI Welfare movement.
Documentation
Coming soon
Research summaries, reading guides, and annotated timelines on consciousness and welfare.
Focus
Coming soon
Deep dives into specific initiatives, experiments, and turning points in the field.

This concerns you

Whether you are a researcher, an engineer, a philosopher, or simply curious: the question of AI welfare is a question about us as much as about them.

Back to Observe