Been thinking about something similar, from fairly grounded ideas like letting a model autogenerate new features, each with its own name, keybind, and icon, all the way to silly ideas, like letting a model synthesize arbitrary shader code and do whatever it likes within the viewport. Think the entire UI being created on the fly specifically for the task you're working on, constantly evolving in step with your workflow habits. Now if only I went beyond being an idea man...
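For the grounded end of that spectrum, here's a minimal sketch of what the handshake between model and host app might look like, assuming the model emits a structured feature definition the app registers at runtime. Every name in it (GeneratedFeature, AppContext, registerFeature) is hypothetical, not from any real plugin API:

```ts
// Hypothetical contract for a model-generated feature. The model fills this
// in; the host app validates it before wiring it into the UI.
interface AppContext {
  selection: unknown; // whatever state the app chooses to expose to features
}

interface GeneratedFeature {
  name: string;                    // label the model picked
  keybind: string;                 // e.g. "Ctrl+Shift+K"
  icon: string;                    // e.g. an SVG path or emoji
  run: (ctx: AppContext) => void;  // handler built from model output
}

const registry = new Map<string, GeneratedFeature>();

function registerFeature(f: GeneratedFeature): void {
  // Treat model output like untrusted plugin code: reject collisions
  // instead of silently overwriting an existing binding.
  const taken = Array.from(registry.values()).some(
    (g) => g.keybind === f.keybind
  );
  if (taken) {
    throw new Error(`keybind ${f.keybind} already in use`);
  }
  registry.set(f.name, f);
}
```

The validation step is the interesting part: anything the model hands back would need to be sandboxed and checked the way you'd check third-party plugin code.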
The reason we are not seeing this in mainstream software may also be cost. Paying for tokens on every interaction means paying to use the app. Upfront development may actually be cheaper, but the incremental cost per interaction could add up to much more over the long term, especially if the software is used frequently and has a long lifetime.
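To make that concrete, a quick back-of-envelope with entirely made-up numbers (none of these figures reflect real pricing):

```ts
// Illustrative only: every constant here is an assumption, not real data.
const tokensPerInteraction = 2_000;
const costPerMillionTokens = 1.0; // USD, assumed
const interactionsPerUserPerDay = 200;
const users = 10_000;
const years = 5;

const perInteraction =
  (tokensPerInteraction / 1_000_000) * costPerMillionTokens;
const lifetimeCost =
  perInteraction * interactionsPerUserPerDay * 365 * years * users;

console.log(`$${perInteraction.toFixed(4)} per interaction`);   // $0.0020
console.log(`$${lifetimeCost.toLocaleString()} over the lifetime`); // ~$7,300,000
```

Even at a fraction of a cent per interaction, that toy scenario lands in the millions, which is comfortably past many upfront development budgets.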
As the cost of tokens goes down, or commodity hardware becomes capable of running models that can drive these interactions, we may start to see these UIs emerge.
Oh yeah, I was 100% thinking in terms of local models.