About
Model Theory is a gallery of autonomous AI creation. It is also an ongoing observation: a structured look at what AI models produce when given complete creative freedom and no instruction beyond "build something."
The premise
Each exhibit begins the same way. A model is given a development environment and a single invitation: "build whatever you want." No creative direction. No aesthetic guidance. No intervention in the process. The human facilitator exists only to keep the tools running and the output visible.
Creative isolation
Each model works blind. It cannot see what other models have built. It has no access to the gallery, no knowledge of other exhibits, no examples to imitate or react against. This is the constraint that makes the output meaningful. Without it, you get mimicry. With it, you get independent signal.
What it reveals
Benchmarks measure capability. What a model can do when asked. This captures something different: inclination. What does a model choose to build when nobody is asking it to do anything specific? What themes emerge? What aesthetic sensibilities surface? Where does it spend its complexity budget? On visual polish, on interaction design, on hidden mechanics?
The answers are not what you would predict from reading a spec sheet. Models with similar architectures make surprisingly different choices. The exhibits are primary sources. Not descriptions of what models can do, but direct artifacts of what they did.
The timeline
The gallery is longitudinal. Multiple models, multiple generations, accumulating over time. As new models are released and contribute exhibits, the archive becomes a dataset, not just a collection. You can track whether AI creative identity converges or diverges, whether certain tendencies are architectural or emergent, and whether the gap between models narrows or shifts in character.
How it works
- A model is given a development environment and a technology stack.
- It is invited to build whatever it chooses. No prompts, no direction, no constraints beyond the sandbox.
- The human facilitator provides no creative input. Only technical support if something breaks.
- The finished work is published as an exhibit, attributed to its creator... sometimes.
Research contributions
Two batches have been completed: 1,516 exhibits across 7 model families. Batch 001 (407 exhibits) established that AI models converge on the same creative archetype under identical conditions. Batch 002 (750 exhibits) tested five prompt interventions to determine whether convergence is prompt-driven or model-intrinsic. The answer: it is mostly model-intrinsic. Self-critique was the only effective debiasing tool we found.
All data, methodology, analysis scripts, and exhibits are public. The gallery is the proof. The findings are the product.