Do they not? Many phone functions have been available through voice assistants for a very long time, and yet the vast majority of people still prefer the UI. Tapping the weather icon is much easier than asking a chatbot "what's the weather like?"
My elderly mother has an essential tremor (though only in one hand now due to successful ultrasound treatment!) and she would still rather suffer through all her errors with a touch interface than use voice commands.
Some people seem to think that Deckard's speech-controlled CSI software in Blade Runner is actually something to strive for, UX-wise. As if it makes any sense to use strictly non-visual, non-spatial affordances to work with visual data.
The sad part is that while everyone is chasing new interface modalities, the traditional 2D UI is slowly getting worse thanks to questionable design trends and a lack of interest.
No, it'll be some idea we haven't developed or named yet.
The current ‘agent’ ecosystem is just hacks on top of hacks.