> if you're seeing that the LLM feature curve is actually sigmoid

It takes a few months to train an advanced model - let's say 4 months. So in the 3 years since these models became a thing, there have been at most 9 sequential training runs. In a technology as complex as LLMs, there is no way one can be confident at depth 9 that performance has plateaued. Surely there are many more ideas left to discover and test.

But we can be fairly sure about the categories of error the technology is prone to, however advanced it gets. Because of that, there is a plateau in the range of useful applications, one that would take a paradigm shift to overcome. Diminishing returns are on the horizon.