> This results in the somewhat unintuitive combination of a technology that can be very useful and impressive, while simultaneously being fundamentally unsatisfying and disappointing
Useful = great. We've made incredible progress in the past 3-5 years.
The people who are disappointed have their standards and expectations set at "science fiction".
I think many people are now learning that their definition of intelligence was actually not very precise.
From what I've seen, in response to that, goalposts are then often moved in whatever way requires the least updating of someone's political, societal, metaphysical, etc. worldview. (This also includes updates in favor of "this will definitely achieve AGI soon", fwiw.)
I remember when the goalposts were set at the "Turing test."
That's certainly not coming back.
If you know the tricks, won't you be able to figure out whether a given chat is being carried out by an LLM?
Or the people who are disappointed were listening to AI hype men like Sam Altman, who have, in fact, been promising AGI or something very like it for years now.
I don't think it's fair to deride people who are disappointed in LLMs for not being AGI when many very prominent proponents have been claiming they are or soon will be exactly that.