I've been wondering about this for some time. My initial assumption was that LLMs will ultimately be the death of typed languages: type systems exist to help programmers avoid obvious mistakes, and near-perfect LLMs would almost never make obvious mistakes. So in a world of near-perfect LLMs, a type system just adds pointless overhead.

In our current world of quite imperfect LLMs, I agree with the OP, though. I also wonder whether, even if LLMs improve, we could use type systems not for their original purpose but as a way of establishing that generated code really does what we want, something akin to formal verification.
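A minimal sketch of that idea, assuming a workflow where a human writes the signature and an LLM fills in the body (the function name and example here are hypothetical, not from any real tool):

```python
# Hypothetical illustration: a type signature acting as a lightweight,
# machine-checkable spec that LLM-generated code must satisfy.
from typing import List

def merge_sorted(a: List[int], b: List[int]) -> List[int]:
    """Merge two sorted lists into one sorted list."""
    out: List[int] = []
    i = j = 0
    while i < len(a) and j < len(b):
        if a[i] <= b[j]:
            out.append(a[i])
            i += 1
        else:
            out.append(b[j])
            j += 1
    out.extend(a[i:])
    out.extend(b[j:])
    return out

# A static checker such as mypy would reject a generated body that
# returned, say, a string -- catching the mismatch without running tests.
print(merge_sorted([1, 3], [2, 4]))  # [1, 2, 3, 4]
```

The signature is far weaker than full formal verification, but it is a contract the machine can check against the generated implementation.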

Even near-perfect LLMs would benefit from the compiler optimizations that types allow.

However, perfect LLMs would just replace compilers and programming languages above assembly entirely.

It's interesting to think about what is 'optimal' when discussing LLMs, considering that cost is per-token. Assembly would be far from optimal, since it is not exactly a succinct language: common operations expand into many instructions, so a more abstract, higher-level language might actually be inherently more token-efficient.

It's not just that humans aren't good at thinking in assembly language or binary; the operations are also much more granular, so it takes many instructions to express something as simple as a for-loop or a function call.
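The granularity gap is easy to see using Python's own bytecode as a stand-in for assembly (an illustrative sketch; real machine code would be even more granular, and the exact instruction count varies by interpreter version):

```python
import dis

def count_to_ten():
    # Three lines of high-level source...
    total = 0
    for i in range(10):
        total += i
    return total

# ...expand into many lower-level operations when compiled.
instructions = list(dis.get_instructions(count_to_ten))
print(len(instructions))   # well over a dozen bytecode ops

print(count_to_ten())      # 45
```

Each token of the high-level loop stands in for several lower-level operations, which is exactly the per-token economy the comment above describes.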

I think the perfect AI might actually come up with a language closer to Python or JavaScript.