I agree with the sentiments here. But, I’m less hopeful about the presented solutions.
I think my argument against humans still needing to know how to manage complexity is that the models will become increasingly able to manage that complexity themselves.
The only thing that backs up that argument is the rate of progress the models have made in the last three years (ChatGPT turned 3 just 3 months ago).
I think software people as a whole need to see that the capabilities won't stop here; they're going to keep growing. If you can describe it, an LLM will eventually be able to do it.
Disagree, because when the "super fast" new CPUs of 20 years ago became common, it was easy to write code that executed slower than the code that came before, thanks to language constructs and wasteful work patterns. So I predict that LLM code can explode in complexity (14 KLOC for a binary file parser with a handful of features) while, in extreme cases, compute bogs down and the effort to understand the code explodes.
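To make the "wasteful work patterns" point concrete, here's a small sketch of my own (not from the thread): two functionally identical membership checks, one backed by a list and one by a set. Both are easy to write, both are "correct", and the slow one is exactly the kind of code that fast hardware hides until the data grows.

```python
# Hypothetical illustration of a wasteful pattern: membership tests
# against a list are O(n) per lookup, while a set is O(1) on average.
# Functionally the two functions below are identical.

needles = list(range(1_000))
haystack_list = list(range(100_000))
haystack_set = set(haystack_list)

def count_in_list():
    # scans up to 100,000 elements for every needle
    return sum(1 for n in needles if n in haystack_list)

def count_in_set():
    # hash lookup per needle, roughly constant time each
    return sum(1 for n in needles if n in haystack_set)

assert count_in_list() == count_in_set() == 1_000
```

On my framing, the danger isn't that an LLM can't produce the set version; it's that the list version also works, passes tests, and only reveals its cost once the haystack is large.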