If it helps, I would consider learning about and deploying local LLMs (llama.cpp, etc.), if you have the hardware (and it doesn't take all that much hardware at all). They're nowhere near as fast as LLM providers, but they come fairly close in overall quality (including in agentic workflows).
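Getting started is genuinely lightweight. A minimal sketch, assuming llama.cpp is installed and you have a GGUF model file on disk (the model filename here is a placeholder; any quantization you've downloaded works):

```shell
# Serve a local model with llama.cpp's built-in server.
# -m points at the GGUF file; -c sets the context length.
llama-server -m ./models/my-model-q4_k_m.gguf --port 8080 -c 4096

# The server exposes an OpenAI-compatible API, so existing tooling
# can be pointed at localhost instead of a hosted provider:
curl http://localhost:8080/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{"messages": [{"role": "user", "content": "Hello"}]}'
```

The OpenAI-compatible endpoint is the key part: most agentic tools only need a base URL swapped to talk to your own machine.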

The reason I suggest this is that running locally not only democratizes AI (while AI democratizes software), it also resolves much of the discomfort around the reliance, dependence, data exposure, and trust we now extend to LLM providers. Basically, it removes a lot of the "ick" surrounding LLMs, while also hedging against the possibility that LLM providers' investment schemes prove unsustainable. It's also a hedge toward "owning" the maintenance of the machines that replace you.

I don't know, it's a small thing that makes me feel a little better during this tumultuous time.