Hard disagree. We'll be able to use more expressive languages, with LLMs helping us figure out how to express ourselves in them and how to make sense of compiler output. LLMs are only good at the stuff that better languages don't require you to do in the first place. After that they fall off a cliff quickly.

LLMs are a communication technology, trained on a huge corpus of conversation. They have a long way to go before becoming anything intelligent.

LLMs lack intentionality, and they lack the ability to hold a series of precepts "in mind" and stick to those precepts. That is, if I say "I want code that satisfies properties A, B, C, D..." at some point the LLM just can't keep track of all the properties, which ones are satisfied, which ones aren't, what needs to be done or can be done to make them all satisfied.

But LLMs aren't "only good at stuff that better languages don't require you to do." In fact they are very good at taking a bad function definition and turning it into an idiomatic one that does what I wanted to do. That's very intelligent: there is no language that can take a bad spec and make it precise and fit for the specified task. LLMs can (not perfectly, mind you, but faster and often better than I can). The problem is they just can't always figure out when what they've written is off-spec. But "always" isn't "never", and I've yet to meet an intelligence that is perfect.

> LLMs ... lack the ability to hold a series of precepts "in mind" and stick to those precepts.

That is perhaps the biggest weakness I've noticed lately, too. When I let Claude Code carry out long, complex tasks in YOLO mode, it often fails because it has stopped paying attention to some key requirement or condition. And this happens long before it has reached its context limit.

It seems that it should be possible to avoid that through better agent design. I don't know how to do it, though.
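The only half-idea I have is to move the checklist out of the model's memory and into the harness: keep the requirements as explicit state, re-verify each one after every step in a fresh prompt, and feed only the violations back in. A rough sketch of that idea in Python below; the `llm` callable and the prompts are stand-ins for whatever client you actually use, and I haven't tried this at any real scale.

```python
from typing import Callable

# Sketch only: `llm` is any function from prompt string to response string
# (your actual chat-completion client). Prompts and requirements are illustrative.

def verify(llm: Callable[[str], str], requirement: str, diff: str) -> bool:
    """Check one requirement at a time, in its own prompt, so nothing competes for attention."""
    answer = llm(
        f"Here is a proposed code change:\n{diff}\n\n"
        f"Requirement: {requirement}\n"
        "Does the change satisfy this requirement? Answer YES or NO."
    )
    return answer.strip().upper().startswith("YES")

def run_task(llm: Callable[[str], str], task: str,
             requirements: list[str], max_rounds: int = 5) -> str:
    """Keep requirements as explicit harness state and re-check them every round."""
    diff = llm(f"Task: {task}\nRequirements:\n" + "\n".join(f"- {r}" for r in requirements))
    for _ in range(max_rounds):
        failed = [r for r in requirements if not verify(llm, r, diff)]
        if not failed:
            return diff
        # Re-inject only the violated requirements instead of hoping the model still remembers them.
        diff = llm(
            f"Task: {task}\nYour previous attempt:\n{diff}\n\n"
            "It violates these requirements:\n"
            + "\n".join(f"- {r}" for r in failed)
            + "\nRevise the change so that every requirement holds."
        )
    return diff  # out of rounds; a real agent would escalate to a human here
```

Whether checking requirements one at a time actually catches the drift, or just moves the same attention problem into the verifier, I honestly don't know.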