I am also not seeing a moat on LLMs.

It seems like the equilibrium point a few years out will be that most people can run good-enough LLMs on local hardware: the models don't seem to be getting much better due to input-data exhaustion, while various forms of optimization are increasingly allowing them to run on lesser hardware.

But I still have lurking, amorphous concerns about where this all ends up, because a number of actors in the space are certainly spending as if they believe a moat will magically materialize or can be constructed.