This matches what I've been finding building AI-integrated systems. The persistent memory, behavioral constraints, and feedback loops around the model do more for output quality than any prompt optimization ever did.
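To make that concrete, here's a toy sketch of the kind of harness I mean (every name here is hypothetical, and the "model" is a stub standing in for any LLM call). The quality comes from the loop around the call: a behavioral constraint on the output, a retry feedback loop when it's violated, and persistent memory carried across calls.

```python
import json

MEMORY = []  # persistent context carried across calls

def fake_model(prompt: str) -> str:
    # Stand-in for any LLM call; the first attempt "fails" the constraint.
    return "draft" if "retry" not in prompt else '{"answer": 42}'

def constrained_call(task: str, max_retries: int = 3) -> dict:
    prompt = task + "\n".join(MEMORY)
    for _ in range(max_retries):
        raw = fake_model(prompt)
        try:
            result = json.loads(raw)  # behavioral constraint: must be valid JSON
        except json.JSONDecodeError:
            prompt += "\nretry: output valid JSON"  # feedback loop on failure
            continue
        MEMORY.append(f"solved: {task}")  # persistent memory for later calls
        return result
    raise RuntimeError("model never satisfied the constraint")
```

Swap the stub for a real model and nothing about the structure changes, which is sort of the point: the validation-retry-memory loop does the heavy lifting regardless of what the model emits on the first try.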

The dog experiment takes this to its logical conclusion — if random keystrokes produce playable games, the "intelligence" was never in the input. We spent two years obsessing over prompt engineering when the real discipline was always system architecture. The scaffolding isn't supporting the AI — it IS the AI's capability.