I think it's obvious that they're not referring to the author or to any specific person at all; they're talking about how the zeitgeist has changed. Look at the Hacker News archives from three or more years ago and you'd be hard-pressed to find anyone arguing that coding speed is not a bottleneck, or that engineers need to spend more time in collaboration. You would find plenty of arguments that leaving engineers alone to code is the best thing a business can do, along with constant lambasting of meetings, documents, approvals, and other collaborative activities.
I think there are small pieces of truth on both sides of the argument, but the sudden shift to claiming that coding speed doesn't matter feels half-baked. Coding speed is part of building a product, and speeding it up does provide benefit. There's a lot of denial about this, and right now I think that denial is rooted more in emotion than in logic.
I don't think developers who hold such views are being especially hypocritical. Typing code has never been the bottleneck; building the mental model has. You need the mental model to know how the domain and the actual implementation will interact, which is what lets you anticipate what tests you need, what QA you need to do, and where the system's limitations lie. You can try to front-load this work with a specification, but all specifications eventually meet the domain head-on, often with catastrophic consequences, and you still have to do this sort of work when writing the specification anyway.
Fundamentally, LLMs do not construct a consistent mental model of the codebase (you can see this if you, uh, read LLM-generated code), and this is bad for a lot of reasons. It's bad for long-term maintainability, bad for accurately modelling the code and its behaviour as a system, bad for testing and verifying it, and so on. Pretty much all of the tasks around program design require you to have that mental model.
You can absolutely get an LLM to show you a mental model of the code, but nothing can guarantee that that's the model it's actually using. The evidence is in how they summarise documents, how inaccurate much of the documentation they generate is, and how inaccurate many of their code summaries are; those would be accurate if the LLM were forming a mental model as it worked. An LLM is a program that statistically generates plausible text. The fact that we got such a program to do more than that in the first place is very interesting and may imply a lot of things, but at the end of the day, whatever you ask of it, it will generate text. There is no guarantee about the accuracy of that text, and there effectively never can be.
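To make the "statistically generate plausible text" point concrete, here's a toy, entirely hypothetical sketch of autoregressive sampling. The vocabulary and probabilities are made up for illustration; a real model's distribution comes from learned parameters, but the key property is the same: the sampler optimizes plausibility, and nothing checks the output against reality.

```python
import random

def next_token_distribution(context):
    # A real LLM computes this from billions of learned parameters;
    # this hard-coded stand-in is purely illustrative.
    if context and context[-1] == "the":
        return {"code": 0.5, "model": 0.3, "bug": 0.2}
    return {"the": 0.6, "a": 0.4}

def generate(prompt, n_tokens, seed=0):
    rng = random.Random(seed)
    tokens = list(prompt)
    for _ in range(n_tokens):
        dist = next_token_distribution(tokens)
        choices, weights = zip(*dist.items())
        # Sample proportionally to probability: plausible, not verified.
        tokens.append(rng.choices(choices, weights=weights)[0])
    return tokens

print(generate(["the"], 4))
```

Every run produces fluent-looking token sequences, because fluency is the only thing the distribution encodes; truthfulness never enters the loop.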
I was an LLM naysayer for a long time, and back then I would have agreed with you. Recent experience has changed my mind. The output I get from models doesn't suffer from the problems you describe, and many of the issues you raise are also true, in different ways, of human beings. There's never any guarantee that any of the text you or I produce will be accurate, or that our summary of it will be, but if you ask us to generate text, we will. It recalls that funny meme: "Your job application says you're fast at math. What's 513 * 487?" "39,414." "That's not even close." "But it is fast."
One of the core problems in software engineering is a longstanding philosophical one: building cohesive, consistent, objective mental models of inherently subjective concepts, like what identifies a person, a place, and so on. Look at the endless lists of falsehoods programmers (tend to) believe about any given topic.
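A classic entry from those lists, sketched as hypothetical code: the falsehood that every person has exactly one first name and one last name. The function below is an illustration I've invented, not from any real codebase.

```python
def split_name(full_name):
    # Encodes the falsehood "every name is exactly two space-separated
    # parts". Raises ValueError for mononyms or longer names.
    first, last = full_name.split(" ")
    return first, last

print(split_name("Ada Lovelace"))        # the case the programmer imagined
# split_name("Björk")                    -> ValueError (mononym)
# split_name("Gabriel García Márquez")   -> ValueError (three parts)
```

The code is a perfectly faithful manifestation of its author's mental model; the model itself is simply wrong about the domain, which is the gap no amount of typing speed closes.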
You’re right that LLMs specifically have no guarantees about the accuracy or veracity of the text they generate, but I posit that the same is true of people, especially once their output is filtered through the socialization process. The difference lies in the kinds of errors machines make compared to the ones humans make.
It’s frustrating that we use anthropomorphic concepts like "hallucination" to describe LLM behaviors when the fundamental units of computation, and thus of computational failure, are so different at every level.
> but I posit that that’s the same with people,
> The difference is in the kind of errors machines make compared to ones that humans make.
There's another difference: other humans can learn and study that mental model (which is why "readable code" is a goal; the code is a physical manifestation of the model that you, the programmer, have to learn), and the model can then be tweaked and taught back to the original programmer, who can apply that tweak in the future. Programming is, in most cases, inherently a collaborative art: you're working with people to collectively develop and refine a mental model, smoothing it down until (as Christopher Alexander put it) there are no misfits between the model and the domain.
Needing focus to think is not the same as needing focus to write code.
It can take a whole day to find 10 good lines to write.
> It can take a whole day to find 10 good lines to write.
So we've come full circle to code writing speed being a factor again? :)
In all seriousness, this just feels like a never-ending list of attempts to resist any notion that LLMs might accelerate software development, however small the increment. The original article argued that organization and collaboration were the bottleneck, and that taking a whole day to think about the code was not.
And sometimes an LLM can find those 10 lines in 10 minutes. Or it can find 100 and you cut them down to 10 in two hours total. Yes, I've seen this in practice. The amount of code an LLM can tirelessly ingest is superhuman.
Twenty minutes ago Claude digested about a dozen pipeline definitions, a sequence of build files and targets, read the scripts they use, found the variable that I could reuse for my purposes, and made the appropriate change in the right (looking) place, so that I'd be able to achieve my goals.
I could not have done this nearly as quickly. On the other hand, I gave it clear, precise instructions.
Most of the time it can't. It can write thousands of lines, but it's not good at finding the best ten.
Speeding it up provides benefit if speed was the bottleneck to begin with. As the author notes or hints at, faster code output leads to more features being delivered, more room for experimentation, etc. But that's not necessarily productivity if the features offer no value, if the experiments end up on a shelf, or if the maintenance burden and context become bigger than the organization can handle (even LLM-assisted).
I've done a lot of "rebuild" / "second system" projects and the recurring theme is that the new version does less than the original. I don't think that's entirely down to the reality of second systems, I think that's in part because software grows over time but developers / managers rarely remove functionality. A full rebuild allows product owners (usually different from the ones of the original software) to consider whether something is actually needed.
Maybe some have changed their views because circumstances radically changed?
You wouldn't find anyone saying typing speed was a problem; they wanted more time for thinking.