I think we're close to the plateau of what LLMs can do, but they will keep improving. IMHO the results are already showing diminishing returns.

The (leading) LLMs work by consensus, like Wikipedia, OpenStreetMap, web search engines, or the open-source movement.

What I mean is: if I ask an LLM to "create a linked list", its understanding (of what I want) is already close to the expected ideal. Just like the Wikipedia article on linked lists, for example.
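To make the "expected ideal" concrete: the canonical answer to "create a linked list" is well settled, which is exactly why models converge on it. A minimal Python sketch of that consensus answer (names are illustrative, not from any particular model's output):

```python
# A minimal singly linked list -- the kind of canonical answer
# an LLM converges on, much like a textbook or Wikipedia example.

class Node:
    def __init__(self, value, next=None):
        self.value = value
        self.next = next

class LinkedList:
    def __init__(self):
        self.head = None

    def push_front(self, value):
        # Prepend in O(1) by pointing the new node at the old head.
        self.head = Node(value, self.head)

    def to_list(self):
        # Walk the chain and collect values for easy inspection.
        out, node = [], self.head
        while node:
            out.append(node.value)
            node = node.next
        return out

ll = LinkedList()
for v in (3, 2, 1):
    ll.push_front(v)
print(ll.to_list())  # [1, 2, 3]
```

There is little room left for a model to "understand this better": the target answer has effectively stopped moving.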

But the LLMs will continue to improve in breadth and depth of understanding the world, although technically (what they CAN do) they have probably already peaked. Similarly, the OSS movement technically peaked in the 90s with the creation of the compiler, the operating system, and the database; that doesn't mean new open-source software isn't being created.

There is so much money at stake, and so much money pouring into AI development, that I think we are going to continue to see gains for a while. People keep coming up with new agent harness techniques like chain of thought, tool calling, and memories, and then the big LLM companies figure out how to train their models to actually optimize the use of those techniques.

To claim that we are reaching the top of the plateau is to claim that we are out of effective ideas for improvement. I think that's a ridiculous claim; the technology is too new. And because of the strong incentives to keep making these things better, it's pretty much a given that people will continue to explore ideas until we really are out of effective ones. I don't think anyone apart from professional AI researchers has any idea where this is all going to settle.
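For what it's worth, one of those harness techniques, tool calling, is conceptually just a loop: the model emits a structured request, the harness executes the tool, and the result is fed back until the model produces a final answer. A toy sketch (every name here is hypothetical, not any vendor's API, and the "model" is a stub):

```python
# Toy tool-calling harness loop. All names are hypothetical;
# real LLM APIs differ in shape but follow the same cycle.

TOOLS = {
    "add": lambda args: args["a"] + args["b"],
}

def fake_model(messages):
    # Stand-in for an LLM: requests the "add" tool once,
    # then answers using the tool result it was given.
    tool_msgs = [m for m in messages if m["role"] == "tool"]
    if not tool_msgs:
        return {"tool": "add", "args": {"a": 2, "b": 3}}
    return {"answer": f"The sum is {tool_msgs[-1]['content']}"}

def run(messages):
    # The harness loop: execute requested tools, feed results back,
    # stop when the model returns a plain answer.
    while True:
        reply = fake_model(messages)
        if "tool" in reply:
            result = TOOLS[reply["tool"]](reply["args"])
            messages.append({"role": "tool", "content": result})
        else:
            return reply["answer"]

print(run([{"role": "user", "content": "What is 2 + 3?"}]))
# The sum is 5
```

The interesting part, per the argument above, is not the loop itself but that labs can then train models end-to-end to use such loops well, which is where further gains plausibly come from.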