The "natural overthinking increases incoherence" finding matches my daily experience with Claude.
I maintain ~100 custom skills (specialized prompts). Sometimes Claude reads a skill, understands it, then overthinks itself into "helpful" variations that break the workflow.
Has anyone else found prompt density affects coherence?
Following up: I built a tool, "wobble" [1], to measure this. It parses ~/.claude/projects/*.jsonl session transcripts, extracts skill invocations plus the commands actually executed, and calculates Bias/Variance per the paper's formula.
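The core loop is small if you want to roll your own. A minimal Python sketch, assuming a made-up event schema ("type"/"name"/"text" fields, not the documented transcript format) and a crude distinct-sequence ratio standing in for the paper's variance formula; the 0.2 STABLE cutoff is also a placeholder:

    import glob
    import json
    import os
    from collections import defaultdict

    def runs_by_skill(pattern="~/.claude/projects/*.jsonl"):
        """Map skill name -> list of command tuples, one per invocation.
        NOTE: 'type'/'name'/'text' are assumed event shapes, not the
        documented transcript schema."""
        runs = defaultdict(list)
        for path in glob.glob(os.path.expanduser(pattern)):
            skill, commands = None, []
            with open(path) as f:
                for line in f:
                    try:
                        ev = json.loads(line)
                    except json.JSONDecodeError:
                        continue  # skip partial/corrupt lines
                    if ev.get("type") == "skill_invocation":
                        if skill:
                            runs[skill].append(tuple(commands))
                        skill, commands = ev.get("name"), []
                    elif ev.get("type") == "command" and skill:
                        commands.append(ev.get("text", ""))
            if skill:
                runs[skill].append(tuple(commands))
        return runs

    def variance(seqs):
        """Fraction of invocations that diverged: 0.0 = every run executed
        the same commands, 1.0 = all distinct. A stand-in for the paper's
        formula, which wobble implements properly."""
        return (len(set(seqs)) - 1) / (len(seqs) - 1) if len(seqs) > 1 else 0.0

    for skill, seqs in sorted(runs_by_skill().items()):
        v = variance(seqs)
        label = "STABLE" if v < 0.2 else "WOBBLY"  # placeholder threshold
        print(f"{v:.2f}  {label:<6}  {skill}")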
Ran it on my sessions. Result: none of my skills scored STABLE. The structural predictors of high variance:

- numbered steps without a clear default
- options without a (default) marker
- content >4k chars (the overthinking zone)
- missing constraint language
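Those predictors are mechanical enough to lint for before a skill ever hits a transcript. A rough sketch, where the regexes and the constraint-word list are my guesses at reasonable checks, not wobble's actual rules:

    import re
    import sys

    CONSTRAINT_WORDS = re.compile(
        r"\b(must|never|always|only|do not|don't)\b", re.IGNORECASE)

    def lint_skill(text):
        """Return the structural high-variance predictors present in a
        skill file. Heuristics are illustrative, not wobble's rules."""
        flags = []
        numbered = re.findall(r"^\s*\d+\.", text, re.MULTILINE)
        if numbered and "default" not in text.lower():
            flags.append("numbered steps without a clear default")
        # Two or more bulleted alternatives, none marked "(default)"
        options = re.findall(r"^\s*[-*] ", text, re.MULTILINE)
        if len(options) >= 2 and "(default)" not in text.lower():
            flags.append("options without a (default) marker")
        if len(text) > 4000:
            flags.append(f"content >4k chars ({len(text)}): overthinking zone")
        if not CONSTRAINT_WORDS.search(text):
            flags.append("missing constraint language (must/never/only/...)")
        return flags

    if __name__ == "__main__":
        for path in sys.argv[1:]:
            with open(path) as f:
                for flag in lint_skill(f.read()):
                    print(f"{path}: {flag}")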
[1] https://github.com/anupamchugh/shadowbook (bd wobble)