My own observation is that "cognitive debt" feels closer to a tool for selling essays than to a precise engineering concept.
Lack of documentation, failed onboarding, poor architectural understanding, missing tests, review fatigue: if all of these are simply grouped together as "cognitive debt," isn't that just a failure to build a proper workflow?
The scope is too broad. It reminds me of Stepanov, the creator of the STL, saying that if everything is an object, then nothing is.
When an abstraction tries to cover too many things, it inevitably fails.
What AI specifically amplifies is the gap between direct and indirect work: "it works" can all too easily create the illusion that "I understand it."
Another thing I felt while reading this essay is that it almost runs against the direction of modern software engineering. Once software grows beyond a certain size, it becomes impossible for anyone, except perhaps the original designer, to understand the entire system. The goal is not for everyone to understand everything.
The real goal is to make local changes safely, and to ensure that the system keeps running without major disruption when one replaceable part — including a person — leaves.
At this point, many things being described in the industry as "cognitive debt" look to me like rhetorical tools for selling essays.
Reading this, I even wondered: if I write about trendy terms like cognitive debt or spec-driven development on my own blog, will people pay more attention?
To be honest, spec-driven development has a similar issue. When you go from a specification down into implementation, information loss is inevitable. LLMs cannot fully solve that. In the end, a human supervisor still has to iterate several times and tune the result precisely. The real question should be: how far down should the specification go? In other words, at what local scope does it become faster for a human programmer to modify the code directly than to keep steering the AI-generated code?
But that discussion is often missing.
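To make the information-loss point concrete, here is a minimal sketch in Python, built around a hypothetical one-line spec. Every comment marks a decision the spec never made, and each one is a place where AI-generated code can silently diverge from what the supervisor intended.

```python
# Hypothetical spec: "Return the top N customers by total spend."
from typing import Dict, List


def top_customers(orders: List[Dict], n: int) -> List[str]:
    """Aggregate spend per customer and return the n highest spenders."""
    totals: Dict[str, float] = {}
    for order in orders:
        # Decision 1 (not in the spec): do refunds (negative amounts) count?
        totals[order["customer"]] = totals.get(order["customer"], 0.0) + order["amount"]
    # Decision 2 (not in the spec): how are ties broken? Here: alphabetically.
    ranked = sorted(totals.items(), key=lambda kv: (-kv[1], kv[0]))
    # Decision 3 (not in the spec): fewer than n customers just returns them all.
    return [name for name, _ in ranked[:n]]
```

None of these decisions is wrong in isolation. The point is that the spec never contained the information to decide them, so someone still has to verify each one by reading the code.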
As people sometimes say, “when you start talking about Agile, it stops being agile.” In the same way, I think the “cognitive debt” frame may be a flawed abstraction of the current phenomenon.
The moment a living practice is nominalized, packaged, and turned into a consulting product, it loses its original dynamism and context-dependence, becoming a dead template.
It puts various discomforts that emerged after AI adoption — review burden, lack of understanding, fatigue — into a single box.
Then it attaches the economic metaphor of “debt” to emphasize the seriousness of the problem, and subtly injects the normative idea that “this must eventually be repaid.”
Think back to Parnas's 1972 work on information hiding: software engineering was built on the principle that local understanding should be sufficient, and that global understanding is not the goal.
The cognitive debt framing seems to implicitly reverse that principle by treating “shared understanding” as something that must be preserved as a global unit. I do not understand why the discussion keeps moving toward the idea that everything must be understood.
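Parnas's principle is easy to show in code. Here is a minimal sketch, loosely in the spirit of his 1972 KWIC example; the module name and methods are hypothetical. Callers depend only on a narrow interface, and the representation behind it stays invisible.

```python
class LineStorage:
    """Callers see only these methods; the representation stays hidden."""

    def __init__(self) -> None:
        # Hidden decision: lines live in memory here, but this could just
        # as well be a file, a database, or a compressed index.
        self._lines: list[list[str]] = []

    def add_line(self, words: list[str]) -> None:
        self._lines.append(list(words))

    def word(self, line: int, pos: int) -> str:
        return self._lines[line][pos]

    def line_count(self) -> int:
        return len(self._lines)
```

If the representation changes, no caller changes. Local understanding of the small interface is all anyone ever needed, which is exactly the opposite of demanding globally shared understanding.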
All of this reminds me of Bjarne Stroustrup's onion metaphor for abstraction: if an abstraction works, you do not need to peel back the layers without a reason.
My main issue with the current cognitive debt framing is that the layer it tries to cover is too broad.