imo a question is, do you still need to understand the codebase? What if that process changes and the language you’re reading is a natural one instead of code?
> What if that process changes and the language you’re reading is a natural one instead of code?
Okay, when that happens, then sure, you don't need to understand the codebase.
I have not seen any evidence that that is currently the case, so my observation that "Continue letting the LLM write your code for you, and soon you won't be able to spot errors in its output" is still applicable today.
When the situation changes, then we can ask whether it is really that important to understand the code. Until that happens, you still need to understand the code.
The same logic applies to your statement:
> Do that enough and you won't know enough about your codebase to recognise errors in the LLM output.
Okay, when that happens, then sure, you'll have a problem.
I have not seen any evidence that that is currently the case, i.e. I have no problems correcting LLM output when needed.
When the situation changes, then we can talk about pulling back on LLM usage.
And the crucial point is: me.
I'm not saying that no one who uses an LLM to generate code will end up unable to correct LLM-generated code.
I now generate 90% of my code with an LLM and I see no issues so far. Just implementing features faster. Fixing bugs faster.
You do have a point but as the sibling comment pointed out, the negative eventuality you are describing also has not happened for many devs.
I quite enjoy being much more of an architect than I could be for 90% of my career so far (24 years in total). I have coded my fingers and eyes out, and I spot idiocies in LLM output, from the trivially easy to catch to those needing an hour of careful review.
So, I don't see the "soon" in your statement happening, ahem, anytime soon for me, and for many others.
What happens when your LLM of choice goes on an infinite loop failing to solve a problem?
What happens when your LLM provider goes down during an incident?
What happens when you have an incident on a distributed system so complex that no LLM can maintain a good enough understanding of the system as a whole in a single session to spot the problem?
What happens when the LLM providers stop offering loss leader subscriptions?
AFAIK everything I use has timeouts, retries, and some way of throwing up its hands and turning things back to me.
I use several providers interchangeably.
I stay away from overly complex distributed systems and use the simplest thing possible.
I plan to wait for some guys in China to train a model on traces that I can run locally, benefitting from their national “diffusion” strategy and lack of access to bleeding-edge chips.
I’m not worried.
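The mitigations above (timeouts, retries, interchangeable providers, and handing control back to the human) can be sketched roughly like this. This is a minimal illustration; `call_with_fallback` and `LLMUnavailable` are made-up names for the pattern, not any real tool's API:

```python
import time


class LLMUnavailable(Exception):
    """Raised when every attempt against every provider has failed."""


def call_with_fallback(providers, prompt, retries=3, backoff=1.0):
    """Try each provider in turn, retrying transient failures with
    exponential backoff, and give up cleanly instead of looping forever."""
    for provider in providers:
        for attempt in range(retries):
            try:
                return provider(prompt)
            except TimeoutError:
                # transient failure: wait, then retry this provider
                time.sleep(backoff * (2 ** attempt))
    # "throwing up its hands and turning things back to me"
    raise LLMUnavailable("all providers exhausted; falling back to the human")
```

The point of the sketch is just that "infinite loop" and "provider down" are ordinary failure modes with ordinary engineering answers.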
> What if that process changes and the language you’re reading is a natural one instead of code?
Natural language is not a good way to specify computer systems. This is a lesson we seem doomed to forget again and again. It's the curse of our profession: nobody wants to learn anything if it gets in the way of the latest fad. There's already a historical problem in software engineering: the people asking for stuff use plain language, and there's a need to convert it to a formal spec, and this takes time and is error-prone. But it seems we are introducing a whole new layer of lossy interpretation to the whole mess, and we're doing this happily and open-eyed because fuck the lessons of software engineering.
I could see LLMs being used to check/analyze natural language requirements and help turn them into formal requirements though.
> But it seems we are introducing a whole new layer of lossy interpretation to the whole mess (...)
I recommend you get acquainted with LLMs and code assistants, because a few of your assertions are outright wrong. Take, for example, any of the mainstream spec-driven development frameworks. All they do is walk you through the SRS process, using a set of system prompts to generate a set of documents featuring use cases, functional requirements, and refined tasks in the form of an actionable plan.
Then you feed that plan to an LLM assistant and your feature is implemented.
I seriously recommend you check it out. This process is far more structured and thought through than any feature work that your average SDE ever does.
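For readers who haven't seen these frameworks, the flow described above can be sketched in a few lines. Everything here (`SPEC_STEPS`, `ask_llm`, the file names) is illustrative only, not the API of any particular tool; the point is that each prompt produces a document that feeds the next step, ending in an actionable plan:

```python
# Hypothetical sketch of a spec-driven loop: system prompts turn a raw
# feature request into SRS-style documents (use cases, functional
# requirements, task plan) before any code is written.
SPEC_STEPS = [
    ("usecases.md",     "List the use cases for this feature request:\n{req}"),
    ("requirements.md", "Derive functional requirements from:\n{prior}"),
    ("plan.md",         "Break these requirements into ordered tasks:\n{prior}"),
]


def run_spec_pipeline(request, ask_llm):
    """`ask_llm` is a stand-in for whatever assistant or API is in use."""
    docs, prior = {}, request
    for filename, template in SPEC_STEPS:
        prior = ask_llm(template.format(req=request, prior=prior))
        docs[filename] = prior  # each document feeds the next step
    return docs  # plan.md is what gets handed to the coding agent
```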
> I recommend you get acquainted with LLMs and code assistants
I use them daily, thanks for your condescension.
> I could see LLMs being used to check/analyze natural language requirements and help turn them into formal requirements though.
Did you read this part of my comment?
> Take, for example, any of the mainstream spec-driven development frameworks. All they do is walk you through the SRS process, using a set of system prompts to generate a set of documents featuring use cases, functional requirements, and refined tasks in the form of an actionable plan.
I'm not criticizing spec-driven development frameworks, but how battle-tested are they? Does it remove the inherent ambiguity in natural language? And do you believe this is how most people are vibe-coding, anyway?
> Did you read this part of my comment?
Yes, and your comment contrasts heavily with the reality of using LLMs as code assistants, as conveyed in claims such as "a whole new layer of lossy interpretation". This is profoundly wrong, even if you use LLMs naively.
I repeat: LLM assistants have been used to walk users through software requirements specification processes that not only document exactly what use cases and functional requirements your project must adhere to, but also create tasks and implement them.
The deliverable is both a thorough documentation of all requirements considered up until that point and the actual features being delivered.
To drive the point home, even Microsoft of all companies provides this sort of framework. This isn't an arcane, obscure tool. This is as mainstream as it can be.
> I'm not criticizing spec-driven development frameworks, but how battle-tested are they?
I really recommend you get acquainted with this class of tools, because your question is in the "not even wrong" territory. Again, the purpose of these tools is to walk developers through a software requirements specification process. All these frameworks do is put together system prompts to help you write down exactly what you want to do, break it down into tasks, and then resume the regular plan+agent execution flow.
What do you think "battle tested" means in this topic? Check if writing requirements specifications is something worth pursuing?
I repeat: LLM assistants lower the cost of formal approaches to the software development lifecycle by orders of magnitude, to the point where you can drive each and every task with a formal SRS doc. This isn't theoretical, it's months-old stuff. The focus right now is to remove human intervention from the SRS process as well, with the help of agents.
> Yes, and your comment contrasts heavily with the reality of using LLMs as code assistants, as conveyed in claims such as "a whole new layer of lossy interpretation". This is profoundly wrong, even if you use LLMs naively.
Most people, when told they sound condescending, try to reframe their argument in order to remove this and become more convincing.
Sadly, you chose to double down instead. Not worth pursuing.
> This isn't theoretical, it's months-old stuff
Hahaha! "Months old stuff"!
Disengaging from this conversation. Over and out.