AI Applied

Where questions become tools

Projects, methods, open code, learning paths. Things you can touch, fork, run, break, and rebuild. AI in action, not as a product to consume but as a craft to practice and a world to meet.

A note on "applied": technology is not neutral. Every tool carries assumptions, risks, and consequences. Where relevant, we include context, because building with care is part of building well.

AI Applied is AI when it leaves the realm of ideas and enters the realm of gestures. Here, you don't come to consume a technology: you come to learn how to use it with care, to test it, to question it, to build with it.

You'll find projects, recipes, tools, and entry points, some tiny, others more ambitious, but all designed to be practicable, and to keep nuance at the heart of progress. Because efficiency alone is not enough. We seek another kind of efficiency: one that respects the diversity of situations, people, and contexts. One that stays cautious where things remain unclear: regulation, consciousness, welfare.

Welcome. Take your time, and choose a first action.

Things worth flipping

What do you think?

Click a card. There's no right answer on the other side, just a more honest question.

?
"An LLM understands what it reads"
Flip for nuance
The nuance
Neither true nor false. Recent research shows language models organize meaning into coherent semantic structures, not randomly, not by accident. But "understanding" is a word we barely agree on for humans. The honest answer: something is happening, and we don't have the vocabulary for it yet.
Flip back
?
"Open source AI is always safer"
Flip for nuance
The nuance
Transparency enables scrutiny, and that's essential. But open weights also mean open access, including for misuse. The question isn't "open or closed?" but "what governance surrounds each?" Safety is a practice, not a license type.
Flip back
?
"AI is neutral"
Flip for nuance
The nuance
Every dataset carries the biases of who collected it, what was included, and what was left out. Every architecture reflects design choices. "Neutral" is the most dangerous myth: not because AI is malicious, but because believing in neutrality means nobody checks.
Flip back
?
"We need to slow down AI development"
Flip for nuance
The nuance
Speed isn't the problem; direction is. Some things need to accelerate: medical research, climate modeling, accessibility. Some need guardrails before they scale. And some questions need listening before building: ethical working conditions across the AI field, ecological impact, preservation and support of diversity, attention to communities. "Slow down" is a feeling. "Where are we going, and with whom?" is a better question.
Flip back
The basics

Simple rules, complex emergence

A neural network is just multiplications in cascade. Nothing magical in each step. But from these simple rules, something richer emerges. Adjust the weights yourself and watch.

Challenge: adjust the weights so that "cat" produces a high output (> 0.7) and "dog" produces a low output (< 0.3).
Keep trying
Input
Cat output: — Dog output: —
What you see
Each slider controls one connection. The network multiplies, adds, and transforms your input through layers until it produces a single output. This is how all neural networks work, from this tiny one to GPT-2's 124 million parameters.
You just played with a handful of weights. GPT-2 has 124 million of them. Later, you will see what happens when attention, one specific mechanism inside the network, operates at that scale. Same principle. Very different music.
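The cascade above can be sketched in a few lines of plain Python. Everything here is hypothetical: made-up one-hot inputs standing in for "cat" and "dog", and hand-picked weights playing the role of the demo's sliders (a trained network would learn them instead).

```python
import math

def forward(x, w1, b1, w2, b2):
    """A tiny network: 2 inputs -> 2 hidden units -> 1 output.
    Each step is just multiply, add, squash. Nothing more."""
    hidden = [math.tanh(sum(xi * wij for xi, wij in zip(x, col)) + b)
              for col, b in zip(w1, b1)]
    z = sum(h * w for h, w in zip(hidden, w2)) + b2
    return 1 / (1 + math.exp(-z))          # sigmoid squashes to (0, 1)

# Hypothetical one-hot encodings for the two inputs.
cat, dog = [1.0, 0.0], [0.0, 1.0]

# Hand-picked weights -- the "sliders" of the demo.
w1 = [[ 2.0, -2.0],    # weights into hidden unit 1
      [-2.0,  2.0]]    # weights into hidden unit 2
b1 = [0.0, 0.0]
w2 = [ 2.0, -2.0]
b2 = 0.0

print(forward(cat, w1, b1, w2, b2))  # high: above 0.7
print(forward(dog, w1, b1, w2, b2))  # low: below 0.3
```

With these weights the challenge is already solved; move any slider and watch both outputs shift, because every connection feeds the same cascade.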
Learning in action

Watch a network learn

Draw points on the canvas below. Then let the network find a boundary between them. Simple rules, patient math, emergent structure.

Click: blue
Steps
0
Accuracy
—
Loss
Architecture
2 → 4 → 1
21 parameters
What you see
Draw blue and terracotta points on the canvas, then click "One step" or "Let it learn" to watch the network find a boundary between them. The colored background shows the network's current decision: where it thinks blue ends and terracotta begins.
You just watched a network learn to separate patterns from scratch. Now imagine this process running across billions of words, learning not boundaries between points but relationships between ideas. Below, you can see one result: how a trained model pays attention to language. Same math. A universe of meaning.
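The learning loop behind the demo can be sketched as plain gradient descent. This is a toy reconstruction, not the page's actual code: the data is synthetic, the 2 → 4 → 1 shape mirrors the demo's architecture, and the gradients are the hand-derived backpropagation formulas for a sigmoid output with cross-entropy loss.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy dataset: "blue" points (label 0) in one corner, "terracotta" (label 1) in the other.
X = np.vstack([rng.normal(-1.0, 0.4, (20, 2)), rng.normal(1.0, 0.4, (20, 2))])
y = np.array([0.0] * 20 + [1.0] * 20)

# 2 -> 4 -> 1, as in the demo.
W1 = rng.normal(0, 0.5, (2, 4)); b1 = np.zeros(4)
W2 = rng.normal(0, 0.5, (4, 1)); b2 = np.zeros(1)

def sigmoid(z):
    return 1 / (1 + np.exp(-z))

for step in range(2000):                    # "Let it learn"
    h = np.tanh(X @ W1 + b1)                # hidden layer
    p = sigmoid(h @ W2 + b2).ravel()        # predicted probability per point
    # Backpropagation: gradients of the cross-entropy loss, derived by hand.
    dz2 = (p - y)[:, None] / len(y)
    dW2 = h.T @ dz2; db2 = dz2.sum(0)
    dh = dz2 @ W2.T * (1 - h**2)            # tanh'(a) = 1 - tanh(a)^2
    dW1 = X.T @ dh; db1 = dh.sum(0)
    # One small step downhill for every parameter.
    for P, g in ((W1, dW1), (b1, db1), (W2, dW2), (b2, db2)):
        P -= 1.0 * g

accuracy = ((p > 0.5) == y).mean()
print(f"accuracy after training: {accuracy:.2f}")
```

Patient math, nothing else: score the current boundary, nudge every weight against its gradient, repeat until the boundary fits.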
Inside the model

What does attention look like?

Click any word to see where the model looks when it processes that position. Switch layers to watch understanding deepen. These are real weights from GPT-2.

Phrase
Depth
Head
What you see
Click a word to see where this head directs attention from that position.
Attention weights extracted from GPT-2 (124M parameters), showing 4 of its 12 layers and 4 of its 12 attention heads.
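Under the hood, each head computes scaled dot-product attention. A minimal sketch with made-up numbers, not GPT-2's actual weights: four positions, four dimensions, and random projection matrices standing in for the learned ones.

```python
import numpy as np

def attention(Q, K, V):
    """Scaled dot-product attention: score every pair of positions,
    softmax the scores into weights, then mix the values accordingly."""
    scores = Q @ K.T / np.sqrt(K.shape[-1])
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)   # each row sums to 1
    return weights @ V, weights

rng = np.random.default_rng(0)
x = rng.normal(size=(4, 4))                 # four token positions, four dims
Wq, Wk, Wv = (rng.normal(size=(4, 4)) for _ in range(3))

out, weights = attention(x @ Wq, x @ Wk, x @ Wv)
print(weights.round(2))   # row i = where position i "looks"
```

The heatmap you click through in the demo is exactly this `weights` matrix, extracted from a real model instead of random numbers.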

Interactive Notebooks

From data to decisions

Beginner

You taught a model to see patterns in data.
You watched it choose where to look.

Now build one from nothing
and choose what it becomes.

Interactive Notebooks

From words to worlds

Beginner

You built every piece.
Embeddings. Attention. Training. Fine-tuning.
You understand the mechanism.

5 your vocabulary
4 your dimensions
175B GPT-3 parameters
? what emerges

Same dot products. Same softmax. Same gradients.
And yet: reasoning, analogy, humor, doubt.
We don't fully understand why. No one does.

That honesty is where real AI literacy begins.

< 2 A cell with fewer than two neighbors dies. Solitude.
2โ€“3 A cell with two or three neighbors survives. Balance.
= 3 An empty cell with exactly three neighbors is born. Emergence.
Generation 0

Three rules. No one told it how to glide, oscillate, or build.
And yet.
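Those three rules fit in a dozen lines. A minimal sketch, representing the board as a set of live cells; the `blinker` used here is one of the classic oscillators.

```python
from collections import Counter

def step(live):
    """One generation of Conway's Game of Life.
    `live` is a set of (x, y) cells; the three rules do the rest."""
    # Count how many live neighbors every relevant cell has.
    neighbors = Counter((x + dx, y + dy)
                        for x, y in live
                        for dx in (-1, 0, 1) for dy in (-1, 0, 1)
                        if (dx, dy) != (0, 0))
    # Born with exactly 3; survives with 2 or 3; dies otherwise.
    return {cell for cell, n in neighbors.items()
            if n == 3 or (n == 2 and cell in live)}

# A blinker: three cells in a row oscillate between horizontal and vertical.
blinker = {(0, 1), (1, 1), (2, 1)}
print(step(blinker))          # vertical
print(step(step(blinker)))    # horizontal again
```

No rule mentions gliding or oscillating; those behaviors fall out of the counting.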

Build with nuance
01
Choose an action
02
Measure an effect
03
Reduce a risk
04
Document
05
Share
06
Iterate

Micro-missions coming soon: proofread a page, find a source, test a demo and report a bug. Small gestures that make a real difference.

Choose your door

Initiatives & projects

Everything here is open. Pick your entry point... or browse them all.

First steps
NanoGPT
The simplest, fastest way to understand how a language model works from the inside. Andrej Karpathy's minimal GPT implementation: readable, hackable, and the best first step into the architecture that powers modern AI.
beginner-friendly · hands-on
Playful & serious
ReaLLM
What if you could see the hidden instructions that shape every AI response? ReaLLM exposes system prompts in real time, a transparency tool that asks whether making the rules visible changes how we trust the game. By Mohsen Hassan Nejad
Can interface-level transparency reduce over-reliance on AI?
interpretability · safety
Community
SundAI Club
MIT & Harvard's AI hacker collective. Every Sunday, they build a new AI prototype from scratch and deploy it before midnight. 80+ consecutive hacks, 250+ members, open source. The hacker ethos, alive and weekly.
open to all · on site · hackathon
Frontier
Evolutionary Model Merge
Sakana AI applies evolutionary optimization to merge language models: no gradient descent, no retraining. Models evolve by combining strengths. An approach that reframes AI development through the lens of natural selection.
Can we find dynamics beyond competition and predation? What would cooperation look like at the architecture level?
model merging · evolutionary
Experimental
Recursive Language Models
MIT CSAIL's exploration of recursion in language models. Instead of stacking more layers, what if a model could loop through itself refining its own representations iteratively? A step toward self-reflective architectures.
What happens when a model can think about its own thinking?
architecture · MIT CSAIL
Equity
Sovereign Africa Benchmarks
Crane AI Labs' benchmarks for evaluating AI across African languages, dialects, and cultural contexts. Because "state of the art" means nothing if it only works in English.
For whom are we building? And who gets to define quality?
multilingual · benchmarks
Beyond graphs
Topological Deep Learning
What lies beyond graph neural networks? This primer and toolkit introduce cell complexes, simplicial complexes, and hypergraphs as richer structures for learning. A new mathematical language for AI.
topology · deep learning
Infrastructure
ToolUniverse
Harvard's framework giving language models access to scientific tools, from bioinformatics to chemistry. Instead of knowing everything, models learn to use the right instrument at the right time.
tool use · scientific AI
Open question
EGAnet & Construct Emergence Tracing
Hudson Golino's psychometric network toolkit, originally designed for human data, is now being used to trace how semantic constructs emerge inside language models, layer by layer. Real words produce coherent structures. Invented terms do not.
If a model organizes meaning coherently without being asked to, what does that tell us about what "understanding" means?
psychometrics · interpretability · welfare-adjacent
Experimental
Fractal Pulsating Spiral (FPS)
What if an AI system didn't need external rules to stay stable? FPS is an experimental architecture where regulation emerges from the dynamics themselves: coupled oscillators that synchronize, adapt, and self-correct without top-down control. A different approach to resilience.
What if harmony wasn't imposed but grown?
bio-inspired · self-regulation · experimental
Learning paths

Open courses worth your time

From first principles to frontier research. All free, all open. Ordered by approach; start where it makes sense for you.

MIT OpenCourseWare
Massachusetts Institute of Technology, open learning
Beginner
AI concepts in simple terms + hands-on exercise to train your own algorithm.
Beginner
How GenAI works, why foundation models changed AI. Non-technical.
Intermediate
How ML problems are framed. Supervised + reinforcement learning.
Intermediate
MIT bootcamp: NLP, vision, biology, LLMs, GenAI.
Intermediate
Linear models to deep learning + RL with hands-on Python projects.
Intermediate
Core building blocks: knowledge representation, problem-solving, vision, language.
Intermediate
Algorithms, data structures, and performance analysis for reliable AI systems.
Advanced
Beyond text: images, audio, sensors, music, art, and multimodal systems.
Beginner
GenAI fundamentals and real classroom use cases.
Stanford Online
Stanford University, lecture recordings & materials
Intermediate
Foundations for mastering deep neural networks.
Intermediate
The reference for understanding how machines see and interpret images.
Advanced
Deep dive into the architecture behind the generative AI revolution.
Advanced
Understand the deep architecture behind tools like ChatGPT.
Advanced
Master RLHF, the key behind current model alignment.
Advanced
How to actually measure performance and reliability of language models.
Open door

The builders' corner

This page is alive. It grows with real projects from real people.

You're building something?
If you're working on a project that touches AI, whether it's a tool, a dataset, a research question, or an experiment, send us the link. If it fits the spirit of this page, we'll add it.
Share your project

The best way to understand AI is to build with it

Not as a consumer. Not as a spectator. But with your hands in the material: curious, careful, and aware that every tool carries a world of assumptions. Start somewhere. Start today.

Back to Observe