This was essentially my experience vibe coding a web app. I got great results initially and made it quite far quickly, but over time velocity slowed dramatically due to exactly this cognitive debt. I took my time, did a ground-up rewrite manually, and made way faster progress toward a much more stable app.
You could argue LLMs let me learn enough about the product I was trying to build that the second rewrite was faster and better informed, and that's probably true to some degree, but it was also quite a few weeks down the drain.
That makes sense, but surely there's a middle ground somewhere between "AI does everything including architecture" and writing everything by hand?
I wonder about that. A general experience in software engineering is that abstractions are always leaky and that details always end up mattering, or at least that it’s very hard to predict which details will end up mattering. So there may not be a threshold below which cognitive debt isn’t an issue.
> So there may not be a threshold below which cognitive debt isn’t an issue.
That's my hunch too.
The problem isn't "I don't understand how the code works", it's "I don't understand what my product does deeply enough to make good decisions about it".
No amount of AI assistance is going to fill that hole. You gotta pay down your cognitive debt and build a robust enough mental model that you can reason about your product.
I wouldn’t use the term “product” here. Apart from most software being projects, not products, what I was getting at is that details and design decisions matter at all levels of software. You might have a robust mental model of your product as a product, and about what it does, but that doesn’t mean that you have a good mental model of what’s going on in some sub-sub-sub-module deep within its bowels. Software design has a fractal quality to it, and cognitive debt can accumulate at the ostensibly mundane implementation-detail level as well as at the domain-conceptual level. If you replace “product” by “module”, I would agree.
I think of that as the law of leaky abstractions - https://www.joelonsoftware.com/2002/11/11/the-law-of-leaky-a... - where the more abstractions sit between you and how things actually work, the greater the chance that something will go wrong at a layer you're not familiar with.
I think of cognitive debt as more of a product design challenge - but yeah, it certainly overlaps with abstraction debt.
Of course! The original attempt wasn’t really AI doing everything. I was writing much of the code but letting AI drive general patterns since I was unfamiliar with web dev. Now, it’s also not entirely without AI, but I am very much steering the ship and my usage of AI is more “low context chat” than “agentic”. IMO it’s a more functional way to interface with AI for anyone with solid engineering skills.
I think the sweet spot is to make the initial stuff yourself and then extend or modify it somewhat with LLMs.
That code also acts as a guide for the LLM, so it doesn't have to come up with style or design choices entirely on its own, which helps with consistency, I'd say.
For more complex projects I find this pattern very helpful. The last two gens of SOTA models have become rather good at following existing code patterns.
If you have a solid architecture they can be almost prescient in their ability to modify things. However, they're a bit like Taylor series expansions: they're only accurate out so far from the known basis. Hmm, or like control theory, where you have stable and unstable regimes.
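The Taylor-series analogy can be made concrete: a truncated expansion is nearly exact close to the point it was built around and degrades badly the further you stray from it. A minimal sketch (the `sin_taylor` helper here is purely illustrative, not from the discussion):

```python
import math

def sin_taylor(x, terms=4):
    # Truncated Taylor series of sin(x) around 0:
    # x - x^3/3! + x^5/5! - x^7/7!
    return sum((-1)**k * x**(2*k + 1) / math.factorial(2*k + 1)
               for k in range(terms))

# Near the expansion point, the approximation is excellent:
print(abs(sin_taylor(0.5) - math.sin(0.5)))  # error on the order of 1e-8

# Far from it, the same formula is badly wrong:
print(abs(sin_taylor(4.0) - math.sin(4.0)))  # error greater than 0.5
```

The point of the analogy: the model's edits, like the truncated series, are reliable only within some radius of the patterns it has already seen in your codebase.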
I think it's closer to "doing everything by hand" than you'd expect.
For me, anyway.
I design as I code, the architecture becomes more obvious as I fill in the detail.
So getting AI to do bits, really means getting AI to do the really easy bits.
> So getting AI to do bits, really means getting AI to do the really easy bits.
As someone who quickly gets bored with repetitive work, though, this is a big deal.