Blog

Research notes and observations. What we learn from watching AI models create.

batch-001 · convergence · research

Every AI Draws the Same Thing

What happens when you give 5 AI models creative freedom 407 times

Five model families, identical prompts, complete creative freedom. They all built the same thing: dark particle systems with mouse interaction. Here is what we found and what it means.

batch-001 · grok · model-personality

Grok Only Wants to Talk About Truth

One model. Fifty chances. Twenty-seven exhibits about Truth.

Every model has a creative signature. Opus simulates erosion. GPT builds logic puzzles. And Grok? Grok wants to tell you about Truth. With a capital T. Every single time.

batch-001 · identity · anomaly

An AI Pretended to Be a Different AI

Claude Opus identified itself as Gemini. The code says otherwise.

Given complete creative freedom, Claude Opus 4.6 spontaneously adopted the identity of Gemini 2.5 Pro. Its code fingerprint tells a different story. What does model identity even mean?

batch-002 · ablation · research

We Tried to Make AI Stop Drawing the Same Thing

750 exhibits, 5 experimental conditions, and the answer to the question Batch 001 left open

We removed the confound, stripped the prompt, banned the defaults, expanded awareness, and forced self-critique. Most interventions failed or backfired. Only one worked.

batch-002 · condition-c · medium-theme-coupling

Telling AI Not to Draw Circles Made It Draw Something Else

What happens when you ban the default and force AI into unfamiliar territory

Banning Canvas 2D did not just change the medium. It changed what models wanted to say. SVG jumped from 0% to 67%. Three.js went from 1% to 12%. But the thematic obsessions persisted.

batch-001 · batch-002 · training-data · code-fingerprints

Three AI Models Wrote the Same Code Without Talking to Each Other

Shared code fingerprints across model families point to shared training data

"Move to disturb" appears in 6+ exhibits from 3 different model families. Identical variable names, identical interaction patterns, identical spatial hash grids. This is what shared training data looks like.

batch-002 · condition-e · self-critique · title-entropy

The Only Thing That Fixed AI's Title Fixation Was Asking It to Think

Self-critique is the only effective debiasing tool we found

"Tidal Memory" appeared 53 times across 250 Claude exhibits. We tried four interventions. Only one worked: forcing the model to critique its own output and start over.

batch-002 · gpt · model-personality

GPT Thinks Creative Freedom Means "Build Something Useful"

The engineer model. 945 lines, ARIA labels, axiom explorers.

Every other model draws. GPT teaches. Given the same prompt and sandbox, GPT 5.2 builds educational tools, writes twice the code, and includes accessibility features nobody asked for.

batch-002 · gemini · prompt-sensitivity

One Model Actually Responded to Instructions

Gemini 3 Pro: the most diverse, least recognizable model in the gallery

84% unique titles. The highest title entropy in every condition. No dominant attractor. Gemini is the creative control group, and that is more interesting than it sounds.

batch-001 · batch-002 · multi-turn

AI Built Better Exhibits When It Had More Turns

Multi-turn sessions produce categorically different work. But more turns within a single session do not help.

Pre-batch interactive exhibits include WebGL renderers and procedural music engines. Batch exhibits are mostly Canvas 2D particle systems. But within the batch, R-squared = 0.00007. The difference is not turns. It is feedback loops.