I still see my instructions for the LLM as code - just in human language rather than a particular programming language. I still have to specify the algorithm, and I still have to be specific: the fuzzier my instructions, the more likely it is that I end up having to correct the LLM afterwards.

There is so much framework stuff now. When I started coding I could mostly concentrate on the algorithm; today so much of the work is framework plumbing. Telling the LLM only the actual algorithm, minus all the overhead, feels much more like "programming" than today's programming with its many layers of "stuff" layered around what I actually want to do.
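To make that concrete, here is a minimal, hypothetical sketch (my own example; Flask stands in only for "framework stuff", and the endpoint and function names are made up): the algorithm I actually care about is a handful of lines, while the rest of the file exists only so the framework can call it.

```python
# A minimal, hypothetical sketch: the algorithm vs. the framework plumbing
# around it. Flask is used purely as a stand-in for "framework stuff".

from flask import Flask, jsonify, request


# --- the actual algorithm: the part I would want to describe to an LLM ---
def moving_average(values: list[float], window: int) -> list[float]:
    """Trailing moving average over a list of numbers."""
    result = []
    for i in range(len(values)):
        start = max(0, i - window + 1)
        chunk = values[start:i + 1]
        result.append(sum(chunk) / len(chunk))
    return result


# --- the "layers of stuff": app setup, routing, parsing, serialization ---
app = Flask(__name__)


@app.route("/average", methods=["POST"])
def average_endpoint():
    payload = request.get_json(force=True)
    values = [float(v) for v in payload.get("values", [])]
    window = int(payload.get("window", 3))
    return jsonify({"result": moving_average(values, window)})


if __name__ == "__main__":
    app.run(debug=True)
```

The part I would actually want to "program" - or tell the LLM about - is moving_average; everything below it is the overhead.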

I find it a bit ironic that our way out of excessive complexity is an even more complex tool. Then again, programming in large, longer-running projects already felt like it had plenty of elements that reminded me of how evolution works in biology, already leading to systems that are hard or even impossible to comprehend (like https://news.ycombinator.com/item?id=18442637), so the new direction is not that big of a surprise. We'll end up more like biology and medicine some day: probabilistic methods, less direct knowledge and understanding of ever more complex systems, and evolution of those systems based on "survival" (it does what it is supposed to most of the time, we can work around the bugs, there is no way to debug in detail, survival of the fittest - what doesn't work is thrown away, what passes the tests is released).

Small systems that are truly "engineered" and thought through will remain valuable, but increasingly complex systems will go the route shown by these new tools.

I see this development as part of a path towards being able to create and deal with ever more complex systems, not, or only partially, as a replacement for how we create current ones. That AI (and what will develop out of it) can also be used to create today's kind of systems is a (for some, or many, nice) side effect, but I see the main benefit in the start of a new method for dealing with ever more complexity.

I only ever see single-person or single-team, short-term experiences of LLM use for development - obviously, since it is so new. But helping that one person, or even team, produce something that can be released is only part of the tooling's job. Much more important is the long term, like that decades-long software dev process they ended up with in my link above, where a lot of developers passing through over time still need to be able to extend it and fix issues years later. Right now that is solved in ways that are far from fun, with many developers staying on those teams only as long as they have to, or H1Bs who have little choice. If this could be done in a higher-level way, with whatever "AI for software dev" turns into over the next few decades, it could help immensely.

> There is so much framework stuff now. When I started coding I could mostly concentrate on the algorithm; today so much of the work is framework plumbing. Telling the LLM only the actual algorithm, minus all the overhead, feels much more like "programming" than today's programming with its many layers of "stuff" layered around what I actually want to do.

I've been wondering about this a lot. While it's a truism that generalities stay useful whereas specifics get deprecated with time, I was trying to dig deeper into why certain specifics age quickly whereas others seem to last.

I came up with the following:

* A good design that allows extending or building on top of it (UNIX, Kubernetes, HTML)

* Not being owned by a single company, no matter how big (negative examples: Silverlight, Flash, Delphi)

* Doing one thing, and being excellent at it (HAProxy)

* Just being good at what needs to be done in a given epoch, gaining universality, building an ecosystem, and flowing with it (Apache, Python)

Most things in the JS ecosystem are quite short-lived dead ends, so if I were a frontend engineer I might consider some shortcuts with LLMs - what's the point of learning something that might not even exist a year from now? OTOH, it would be a bad architectural decision to use stuff that you can't be sure will still be supported 5 years from now, so...

I predict the activity of writing LLM boilerplate will have a far shorter shelf-life than the activity of writing code has had.

I don't expect the current specific products, or the way you use them, to endure. This is the very first iteration of something truly better, and there is still a very long way to go. Let's see what we have twenty years from now; in the meantime the current products are clearly finding their customers.

No, I'm talking about core principles.

You just can't go on being incredibly specific. We already tried other approaches - "4th gen" languages were a thing in the 90s already, for example. I think the current, more statistical and NN-based approach is more promising. Completely deterministic computing is harder to scale: either you accumulate problems like those in my example link over time, or the system becomes non-deterministic and horrible to debug anyway, because the bigger it gets, the more other factors dominate.

Again, this won't replace smaller software like we write today; this is for larger, ever longer-lasting and more complex systems, approaching bio-complexity. There is just no way to debug something huge line by line, and the benefits of modularization (and of separating the parts into components that are easier to handle) will be undermined by long-term development following changing goals.

Just look at the difference in complexity between software from a mere forty, or even twenty, years ago and now. Back then most software was very young, and code size was measured in low megabytes. Systems are exploding in size, scale and complexity, and new stuff added over time is less likely to be added cleanly. Stuff will be "hacked on" somehow and released when it passes the tests well enough, just like in my example link, which was about a 1990s DB system, and it will only get worse.

We need very different tools; trying to do this with our current source code and debugging methods is already a nightmare (again, see that link and the work description). We might be better off embracing fuzzier statistical and NN methods, while still writing smaller components in today's more deterministic ways.