Whenever you solve any hard problem, you start off by finding a complicated solution, which you then scale down to a simpler solution.

LLMs are a "complicated solution" in the sense that they're expensive. Once you know what they're capable of, you can scale them down to something less expensive. There's usually a way.

Also, an important advantage of LLMs over other approaches is that it's easy to improve them by finding better ways of prompting them. Those prompting strategies can then get hard-coded into the models to make them more efficient. Rinse and repeat. Similarly, you can produce curated data to make them better in certain areas like programming or mathematics.
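The loop described here — find a prompting strategy that helps, then bake it into the model via distillation — can be sketched roughly as follows. This is a minimal illustration, not any particular lab's pipeline: `expensive_model` is a hypothetical stand-in for a costly LLM call, and the chain-of-thought prefix is just one example of a prompting strategy.

```python
def expensive_model(prompt: str) -> str:
    # Hypothetical stand-in for a real (expensive) LLM API call.
    return f"[answer to: {prompt}]"

# A prompting strategy discovered to improve answers.
COT_PREFIX = "Let's think step by step.\n"

def improved_call(question: str) -> str:
    # Wrap the base model with the better prompt.
    return expensive_model(COT_PREFIX + question)

def build_distillation_set(questions: list[str]) -> list[tuple[str, str]]:
    # Harvest (question, improved answer) pairs. Fine-tuning a smaller
    # or cheaper model on these pairs "hard-codes" the prompting
    # strategy, so the prefix is no longer needed at inference time.
    return [(q, improved_call(q)) for q in questions]

pairs = build_distillation_set(["What is 2+2?", "Name a prime greater than 10."])
```

The same harvesting step works for the curated-data point: replace the generic questions with programming or math problems and you get a domain-specific fine-tuning set.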

They're not _complicated_, they're complex. And "solution" implies they're not hallucinating the goal and how to fix it.