> I've noticed LLMs are really bad at cleanly abstracting across multiple layers
Which makes sense: most developers are too (it's a particular, non-trivial skill and rarely modeled well), so LLMs are more likely to have been trained on muddled layering than on clean examples.
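To make the point concrete, here's a contrived sketch (my example, not from the original post) of what "muddled layers" tend to look like, next to a version where each layer has one job:

```python
# Muddled: selection rules, a business rule, and presentation
# are all tangled into one function.
def report_muddled(users):
    lines = []
    for u in users:
        if u["active"]:  # selection logic
            lines.append(f"{u['name'].upper()} <{u['email']}>")  # rule + formatting
    return "\n".join(lines)


# Cleaner: one concern per layer, each independently testable.
def fetch_active(users):
    """Selection layer: decide which records are in scope."""
    return [u for u in users if u["active"]]


def display_name(user):
    """Domain layer: one business rule, stated once."""
    return user["name"].upper()


def render(users):
    """Presentation layer: formatting only."""
    return "\n".join(f"{display_name(u)} <{u['email']}>" for u in users)


users = [
    {"name": "Ada", "email": "ada@example.com", "active": True},
    {"name": "Bob", "email": "bob@example.com", "active": False},
]
print(render(fetch_active(users)))  # prints: ADA <ada@example.com>
```

The two versions produce the same output; the difference only shows up when a rule changes and you have to find where it lives. Training data presumably contains far more of the first shape than the second.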