I somewhat agree with you, especially with the idea that one could identify a common abstraction that an LLM could later piggyback on top of.
Genuine question though - have you implemented an AI assistant/chat interface recently using LLMs on top of a UI?
I agree it can be a rabbit hole, but I just got through doing it on an app, and there were definitely some things it made way simpler, along with some complex scenarios that I'm not sure could have been handled any more simply.
I built a chat interface in 2017 (ChatScript dialog trees with hole-filling, plus semantic search) that was ostensibly meant to keep our data scientists from doing redundant work, i.e., before spending all day writing a SQL script, describe the job of the script and see if one already exists. The chatbot would then ask for the parameters the script required, run the job, and present a CSV of the returned data.
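For the curious, the core of that bot was basically slot-filling over a script registry. Here's a rough Python sketch of the shape of it; every name below is made up for illustration, and the keyword-overlap "search" stands in for what was really ChatScript rules plus an actual semantic index:

    import csv
    import io

    # Hypothetical registry: script name -> (description, required params, runner).
    # The real system pointed at data scientists' SQL/Python jobs, not lambdas.
    SCRIPT_REGISTRY = {
        "daily_revenue": (
            "aggregate revenue by region for a date range",
            ["start_date", "end_date", "region"],
            lambda params: [["region", "revenue"], [params["region"], "12345"]],
        ),
    }

    def find_script(description):
        """Naive keyword overlap standing in for the semantic-search step."""
        words = set(description.lower().split())
        best, best_score = None, 0
        for name, (desc, _, _) in SCRIPT_REGISTRY.items():
            score = len(words & set(desc.lower().split()))
            if score > best_score:
                best, best_score = name, score
        return best

    def run_dialog(description):
        """Hole-filling loop: match a script, ask for each missing parameter, run it."""
        name = find_script(description)
        if name is None:
            return "No existing script matches; sounds like you need a new one."
        _, required, runner = SCRIPT_REGISTRY[name]
        params = {slot: input(f"Value for {slot}? ") for slot in required}
        rows = runner(params)
        buf = io.StringIO()
        csv.writer(buf).writerows(rows)
        return buf.getvalue()  # handed back to the user as a CSV

    if __name__ == "__main__":
        print(run_dialog("revenue by region for a date range"))

The whole trick was that once each script's parameters are an explicit list, the "dialog" is just iterating over the empty slots, which is also exactly why a form with a drop-down ends up doing the same job.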
As we collected user feedback and refined the UX, we got closer and closer to an option tree that could have been better represented by a drop-down menu. It was kind of depressing, but I learned that the actual job of that R&D wasn't to come up with a superintelligent chatbot that replaced data scientists; it was to build the infrastructure that let data scientists put their Python scripts in a common repository for re-use, without reinstalling locally and screwing around with pyenvs.
Anyway, I'm also traumatized by my involvement with a YC startup that actually had a very good (if Enron-ish) product around peer-to-peer energy futures trading, which completely fell apart when investors demanded they make it "AI".