It's important that developers have an accurate mental model of how things work, how they are structured, and why.

LLMs promote a decoupling of mental models and the actual codebase.

As much as some may want to believe otherwise, merely reviewing what the LLM outputs is not equivalent to thinking through the implementation details and motivations, understanding exactly how and why things work the way they do, and then writing the code yourself. The process itself is what instills that knowledge in you.

Exactly. This is what many AI-sloppers ignore. Mental models are crucial. Nothing substitutes for having the program itself in your brain and being able to "mentally debug" it when something breaks.