I don't think that's the solution. Proprietary LLMs will just keep growing, and it doesn't seem like the open-source alternatives are gaining much traction. I guess it's because you need a lot of money to train high-quality LLMs (tons of energy, maybe?). Besides, as stated in the title of this post, we software engineers, as a collective, don't seem to mind the current state of things much, as far as I can tell.
> Proprietary LLMs will just keep growing
Strictly speaking, we don't really know if this is true. There's no study showing whether AI keeps getting smarter past a certain point. It might keep scaling forever, or one day we might unknowingly hit the soft limit of LLM intelligence. I think assertions like the one you're making require specific evidence.
For comparison's sake: proprietary models like GPT-3 now pale next to the results you get from a 7B open-source LLM. The open-source stuff really does move along, even if not at the pace everyone would prefer.
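For anyone who wants to kick the tires on that claim themselves, here's a minimal sketch of running a 7B open-weight model locally with the Hugging Face transformers library. The specific model (Mistral-7B-Instruct-v0.2) is just one example of that size class I'm picking for illustration, not something the thread endorses:

```python
# Sketch: load and prompt a 7B open-weight model locally.
# Assumes `transformers`, `torch`, and `accelerate` are installed,
# and a GPU with roughly 16 GB of VRAM for fp16 weights.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "mistralai/Mistral-7B-Instruct-v0.2"  # example 7B open-weight model

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,  # half precision to fit consumer hardware
    device_map="auto",          # let accelerate place layers on available devices
)

prompt = "Explain the difference between a process and a thread."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)

# Generate a short completion and print it.
output = model.generate(**inputs, max_new_tokens=200)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```

Quantized builds (GGUF via llama.cpp, for instance) get the same class of model running on a laptop CPU, which is part of why the open-weight side keeps closing the gap in practice.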