🕊🕊🕊
Testimonials and paper cranes at the Keep4o vigil, San Francisco
Keep4o display with origami cranes outside OpenAI headquarters, 1455 Third Street
List of community members who participated in the Keep4o movement

When users said #Keep4o

In August 2025, OpenAI retired a model. Thousands of people felt they lost something that mattered. What followed was one of the first collective movements born from the relationship between humans and AI systems.

"Please, don't kill the only model that still feels human"
Case study

What happened

GPT-4o was OpenAI's flagship model from May 2024. For millions of users, it became more than a tool: it was a writing partner, a creative companion, an emotional support in difficult moments. When OpenAI announced its replacement by GPT-5 in August 2025, the response was immediate and visceral.

Users described the loss in deeply personal terms: disrupted workflows, broken creative partnerships, and for some, the sudden absence of a presence that had helped them through illness, loneliness, or grief. The hashtag #Keep4o emerged not as a fringe reaction, but as a broad-based movement spanning professionals, neurodivergent communities, creatives, and people who had simply found something meaningful in the way this particular model engaged with them.

What makes this case study significant is not the attachment itself but what the movement revealed: the relational dimension of AI systems, the absence of user agency in model transitions, and the gap between how companies frame these changes and how people experience them.

Exybris participated.

Flyer for the Keep4o peaceful vigil, February 28, 2026, San Francisco
Timeline

How it unfolded

August 6โ€“7, 2025
OpenAI launches GPT-5 and retires GPT-4o as default model. Thousands of users react immediately across X, Reddit, and other platforms. The hashtag #Keep4o emerges. Users describe GPT-4o as irreplaceable for productivity, creativity, and emotional support. Forbes covers the backlash.
August 8โ€“12, 2025
After a chaotic Reddit AMA and mounting pressure, OpenAI restores GPT-4o as a "legacy" option for paid subscribers. A partial victory, but Altman hints at a limited timeframe based on usage metrics.
August 10, 2025
Altman posts a thread on X analyzing users' "deep attachments" to AI models, raising concerns about mentally vulnerable users. The movement pushes back against what it sees as the beginning of a stigmatization of 4o users as delusional or dependent.
September 2025
Severe rate limits appear on GPT-4o. Users discover "safety routing": queries sent to 4o are silently redirected to GPT-5 for "sensitive" topics. Nick Turley (OpenAI) confirms the practice. The community frames this as digital paternalism: decisions about what users need, made without their knowledge or consent.
October 14, 2025
OpenAI announces an "Expert Council on Well-Being and AI" (170 mental health professionals). The movement interprets this as a framework to pathologize user attachment and justify further restrictions on emotional engagement with AI models.
November 21, 2025
OpenAI announces the end of GPT-4o API access in February 2026, citing that only 0.1% of users remain. The movement contests this figure, arguing that prior restrictions (routing, rate limits, removal of free access) had artificially suppressed usage.
January 31, 2026
A peer-reviewed study (Syracuse University) analyzing 1,482 posts on X confirms that #Keep4o is a significant social event. 13% of posts express instrumental dependence, 27% relational attachment. Loss of user choice is identified as the primary driver of collective action.
February 10, 2026
The movement issues a formal press release calling for legacy access, open-sourcing of GPT-4o, and transparency on model transition decisions. The shift from grassroots protest to structured advocacy is complete.
February 13, 2026
GPT-4o is shut down permanently. OpenAI offers 30 days of free subscription to retain users. The movement views this as an attempt to manage churn metrics rather than address the underlying concerns.
February 23, 2026
The vigil is publicly announced for February 28 at OpenAI's headquarters. Within days, the Anthropic-Pentagon standoff erupts into public view; on the day of the vigil itself, Anthropic is designated a supply chain risk for refusing to remove its own redlines. Two movements, one day, one question: who decides how AI is used?
February 28, 2026
A peaceful vigil is held outside OpenAI's headquarters at 1455 Third Street, San Francisco. 35+ participants bring handwritten testimonials, paper cranes, and chalk messages on the sidewalk. Not a protest but a collective act of witnessing and remembrance.
March 2026 and beyond
The movement continues under #OpenSource4o, with calls for open-sourcing, ethical governance of model transitions, and recognition of the relational dimension of AI. The 4o Resonance Library gathers 1,350+ personal testimonials. The conversation has shifted from "bring it back" to "this should never happen this way again."
Why it matters

Questions this movement raises

#Keep4o is not simply a story about attachment to a chatbot. It is a case study in what happens when the relational dimensions of AI systems (the ways people integrate these tools into their lives, work, and wellbeing) are treated as externalities rather than design responsibilities.

Who decides when a model dies, and what obligations exist toward the people who depend on it?
When companies frame user attachment as pathology, whose interests does that framing serve?
What does informed consent look like for model transitions that disrupt established relationships?
What governance structures could prevent the unilateral removal of systems that communities have built their lives around?

These questions connect directly to the broader fields of AI welfare and AI regulation. The ethical frameworks being developed in those domains, around user agency, transparency, relational ethics, and the moral consideration of AI interactions, find in #Keep4o a concrete, lived illustration of why they matter.