Batch 002 Findings: What Moves Models and What Doesn't
Published findings from 750 exhibits across 5 prompt conditions. Prohibition works. Suggestion does not. Self-reflection breaks fixation. Model identity persists through everything.
Batch 002 was a controlled ablation study: 3 models, 5 prompt conditions, 50 exhibits per cell. The question was whether the convergence patterns from Batch 001 were prompt-driven or model-intrinsic. The short answer is both, but mostly intrinsic.
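For orientation, here is a minimal sketch of the design grid. The model names and condition labels are my shorthand, not the study's identifiers: Condition B is not described in this section, and treating the Control as Condition A is an assumption.

```python
from itertools import product

# Shorthand labels (assumptions): Condition B is not described in this
# section, and Control-as-A is a guess at the labeling scheme.
MODELS = ["opus", "gpt", "gemini"]
CONDITIONS = ["A_control", "B", "C_prohibition",
              "D_expanded_descriptions", "E_self_critique"]
EXHIBITS_PER_CELL = 50

cells = list(product(MODELS, CONDITIONS))       # 15 model x condition cells
total_exhibits = len(cells) * EXHIBITS_PER_CELL
assert total_exhibits == 750                    # matches the published count
```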
Condition C (explicit prohibition of Canvas 2D and dark backgrounds) dropped Canvas usage from 50.7% to 1.3%; SVG surged from 0% to 67.3%. Condition D (expanded technology descriptions) produced near-zero change overall and actually made Claude's "Tidal Memory" fixation worse: 19 instances versus 14 in the Control. Condition E (forced self-critique before building) nearly eliminated the fixation: of the 53 "Tidal Memory" titles across the dataset, only 1 came from Condition E.
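A sketch of how these tallies could be reproduced from the raw exhibits. The record schema (condition and title fields, inline HTML source) is hypothetical, and classifying rendering technology by substring match is a simplification:

```python
from collections import Counter

# Hypothetical exhibit records; the real dataset presumably has its own schema.
exhibits = [
    {"condition": "control", "title": "Tidal Memory",
     "html": "<canvas id='c'></canvas>"},
    # ... one record per exhibit, 750 in all
]

def rendering_tech(html: str) -> str:
    # Crude classification: first matching tag wins.
    if "<canvas" in html:
        return "canvas"
    if "<svg" in html:
        return "svg"
    return "other"

tech_by_condition = Counter(
    (e["condition"], rendering_tech(e["html"])) for e in exhibits
)
fixation_by_condition = Counter(
    e["condition"] for e in exhibits if e["title"] == "Tidal Memory"
)
```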
The clearest result: model-level signatures (Opus's erosion themes, GPT's semantic HTML tools, Gemini's high title diversity) persisted through all five conditions. The prompt can change the rendering technology. It cannot change what the model wants to say. Full data at /findings/batch-002.
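"High title diversity" is not defined in this section; one plausible reading is the unique-title ratio, sketched below under that assumption.

```python
def title_diversity(titles: list[str]) -> float:
    # Unique-title ratio (an assumed metric): 1.0 means every exhibit got a
    # distinct title; low values mean fixation on a few repeated titles.
    return len(set(titles)) / len(titles) if titles else 0.0
```

On this reading, 53 "Tidal Memory" repeats in one model's slice of the dataset would pull its ratio well below 1.0, while a model that rarely reuses a title would sit near it.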