As LLMs consume our compute resources and drive up prices for the hardware we run applications on, the silver lining is that LLMs are helpful for implementing tooling without a heavy stack, so it runs quickly on a lower-spec computer.
I've achieved three to four orders of magnitude in CPU performance and 50% RAM reductions by using C in places I normally wouldn't and by selecting or designing efficient data structures. TUIs are a good example of this trend: for internal engineering, presenting the information we need while bypassing the millions of SLoC in the web stack is more efficient in almost every regard.
I suspect that a native GUI, or even something like GPUI or Flutter, would still be more performant than TUIs, which are bound by the limitations of terminal emulation.
A very important thing about constraints is that they also liberate. TUIs automatically work over ssh, can be suspended with Ctrl-Z and the like, and their keyboard focus has produced helpful conventions like Ctrl-R that tend not to be as prominent in GUIs.
The question is how many decades each user of your software would have to run it before the optimisation it provides offsets the energy you burned through with LLMs.
When global supply chains are disrupted again, energy and/or compute costs will skyrocket, meaning your org may be forced to defer hardware upgrades and LLMs may no longer be cost-effective (as over-leveraged AI companies attempt to recover their investment with less hardware than they'd planned). Once that happens, it may be too late to draw on LLMs to quickly refactor your code.
If your business requirements are stable and you have a good test suite, you're living in a golden age for leveraging your current access to LLMs to reduce your future operational costs.
In the past week, with just a few prompts, I took 4 different tasks that were going to make my M4 run at full tilt for a week and optimized them down to 20 minutes. So it's more like an hour to pay off, not decades. The average Claude invocation is about 0.3 Wh; an M4 uses 40-60 W, so 24 × 7 × 40 ≫ 0.3 × 10.
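A back-of-envelope sketch of that comparison, taking the commenter's figures (40 W sustained draw, roughly 0.3 Wh per Claude invocation, ten invocations) as assumptions rather than measurements:

    // Energy payback, using the estimates quoted above (assumptions, not measurements).
    const m4Watts = 40;           // low end of the claimed 40-60 W draw
    const hoursPerWeek = 24 * 7;  // a week of running at full tilt
    const whPerPrompt = 0.3;      // claimed average energy per Claude invocation
    const prompts = 10;           // "a few prompts", rounded up

    const bruteForceWh = m4Watts * hoursPerWeek; // 6720 Wh for the unoptimized run
    const llmWh = whPerPrompt * prompts;         // 3 Wh spent prompting

    console.log(`${bruteForceWh} Wh >> ${llmWh} Wh`);           // 6720 Wh >> 3 Wh
    console.log(`ratio: ${Math.round(bruteForceWh / llmWh)}x`); // ~2240x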
Especially considering that suddenly everyone and their mother is creating their own software with LLMs instead of using the almost-perfect-but-slightly-non-ideal software others wrote before.
I’m not really worried about energy consumption. We have more energy falling out of the sky than we could ever need. I’m much more interested in saving human time so we can focus on bigger problems, like using that free energy instead of killing ourselves extracting and burning limited resources.
They’re also great for reducing dependencies. What used to be a new dependency and 100 sub-dependencies from npm can now be 200 lines of 0-import JS.
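As an illustration (a hypothetical sketch, not a specific package: a debounce helper of the kind commonly installed from npm, written with zero imports):

    // Hypothetical example: a debounce utility, often pulled in as an npm
    // micro-dependency, written in a few lines of dependency-free JS.
    function debounce(fn, waitMs) {
      let timer = null;
      return function (...args) {
        clearTimeout(timer); // cancel any pending call
        timer = setTimeout(() => fn.apply(this, args), waitMs);
      };
    }

    // Usage: collapse rapid keystrokes into a single search request.
    const search = debounce((q) => console.log(`searching for ${q}`), 250);
    search("t"); search("tu"); search("tui"); // only "tui" fires, 250 ms later

The point isn't that a helper like this is hard to write, but that the in-house version carries no transitive dependency tree to audit or update.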