I was waiting for the "so I tried coding something with an LLM myself, and I found..." paragraph. But apparently the author never did try it, or at least if they did, they didn't write about it.
This is a very academic approach to the subject - read what other people have written about it without ever doing it yourself. Study what someone said about LLM coding 50 years ago, before LLMs were even invented, to see what you think about it.
I would strongly suggest the author just give it a go and see what they think, without the preconceptions of other people's opinions.
My experience has been remarkable, and, like others, I'm finding real joy in being able to move past the code to actually design and play with whole systems and architectures.
It gets to the essence of code, which is not the code itself but the system the code implements. Being able to write code in 3 minutes rather than 30 doesn't bog us down in review either (the LLM is perfectly capable of reviewing code too). It frees us to explore systems and architectures without worrying about the sunk cost of the existing code, or the effort of changing it.
> I was waiting for the "so I tried coding something with an LLM myself, and I found..."
Why? Most of the article was about the productivity of teams.
> This is a very academic approach to the subject - read what other people have written about it
Meta-studies have tremendous value. He's asking a simple question: if LLMs are changing the world, let's look at what studies are showing.
> My experience has been remarkable, and, like others, I'm finding real joy in being able to move past the code to actually design and play with whole systems and architectures
Great! What does that have to do with the age-old problem that software development doesn't scale to teams well? It is indeed a "50 year old problem", so please tell us how LLMs solve it.
I had to go re-read the article to make sure, but it doesn't address teams or scaling to teams at all, so I'm not sure why you're asking about that?
The article is talking about inherent vs accidental complexity, amongst other points, and if the author had actually tried developing with an LLM, they might have worked out how LLM coding addresses some of this.
- The DORA report is about organizations not individuals
- Mythical man-month is about organizations not individuals
- No Silver Bullet: "I believe the hard part of building software to be the specification, design, and testing of this conceptual construct, not the labor of representing it and testing the fidelity of the representation." Clearly he's NOT talking about the 10x dev building the whole thing themselves, which everybody knows is faster, better, probably doesn't even need a spec. Organizations are who need specs -- they have clients, business people etc. An organization with a single developer moves at light speed -- but this doesn't scale.
Nobody's disputing that LLMs give multiples for certain development tasks. The main thrust of the argument centers on how unimportant coding time is ... for organizations. Coding time is a HUGE lever if you're the one dev building everything, but that's not a repeatable pattern.
Meh, I'll concede that Fred Brooks was mostly writing about developing software within an organisation, and therefore writing about teams.
Coding time is important if it gates experiments and spikes. If you have to work out your architecture on paper because actually coding it up is a serious expense, then it becomes harder to experiment with different designs. In an LLM world where coding time is very cheap, it becomes easier to experiment and try things out. Developing an entire architecture and then abandoning it because it turned out that it didn't scale too well, or couldn't handle some edge cases, is not a major mistake or problem any more. There's no pressure to keep old code because it cost a lot of money to develop. You can spike an entire system, decide that it was a useful experiment, but didn't work, delete the repo, and go get lunch. This is new, and important.
I guess it's what you think the work of sw dev is.
> Developing an entire architecture and then abandoning it because it turned out that it didn't scale too well, or couldn't handle some edge cases, is not a major mistake or problem any more.
Cool, but I can count on one (two?) hands how many times in a 30yr career I had the opportunity to do this, except when I "made the opportunity" by coding the solution fast enough that the PHBs couldn't say no. LLMs should be even better for this of course.
But it's rare, and those times I forced the issue were good for my career but not always for the team. Most of the time, once an organization has a working product, you want to stay in the lanes, roughly, of that product, which is IMO where the coding time advantage vanishes.
The problem I have with it is the price (and I'm not talking about the money). I don't know if the price is worth it. For example, we are literally witnessing the death of the personal computer; it will soon become a rich person's hobby. I don't know how Free Software/Open Source will survive that.
At best we will end up owning nothing, not even our programming skills, as everyone will be at the mercy of AI companies for their coding.
We are still in the honeymoon phase of AI coding, and I have a very pessimistic view of the future.
I'm not sure what LLMs have to do with the death of personal computers? Can you explain, please?
Prices of RAM, GPUs, SSDs and even HDDs are now way out of reach for many people [0]. An SSD I bought 2 years ago for $300 CAD now costs $1K CAD, for example, and it's not gonna go down any time soon.
[0]: https://www.tomshardware.com/pc-components/storage/perfect-s...
Ah I see, yeah.
This feels like classic economics, though - if the price of something goes up because of demand, then more suppliers enter the market and supply increases.
Also, the AI thing is a bubble, and bubbles burst. Sooner or later all that demand is going to disappear and we'll be oversupplied.
But yes, interesting times indeed.
Are you asking, essentially, to move past the data and evidence and get anecdotes? That seems the opposite of useful, and tbh LLM coding has wayyyy too much anecdotal 'evidence' going on.
I'm not sure what "the evidence" is in this case?
I mean, we have lots of people using LLMs to write software in different ways, as we explore this space. I don't really see how "the evidence" can be different from "anecdotes" at this stage of the exploration?
There have been a couple of studies done on LLM-assisted dev vs non-LLM-assisted dev, but the author doesn't cite them.