I can actually see this being one task that current language models would excel at, honestly. Given the limited list of items on a typical fast-food menu, and the accuracy of even the lowliest modern language models and speech recognition, I see no reason why fast-food order-taking needs to be handled by humans at all anymore, especially if the system confirms the final order with the customer before proceeding; I could honestly see that being much more accurate than a human doing the same job. (I can't count how many times over the years a human order-taker has completely screwed up my order despite repeating it back exactly as given. A well-designed LLM-based system shouldn't have that problem: whatever it reads back should be exactly the order the system pushes through to completion.)
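To make that concrete, here's a rough sketch of the confirm-then-submit flow I mean. The menu, parse_order, read_back, and submit_order names are all hypothetical stand-ins (the "LLM" here is just keyword matching), purely to illustrate the point that the thing read back to the customer and the thing submitted are the same structured object:

```python
from dataclasses import dataclass

# Hypothetical fixed menu; a real deployment would pull this from the POS.
MENU = {"cheeseburger": 3.99, "fries": 1.99, "cola": 1.49}

@dataclass
class OrderItem:
    name: str
    quantity: int

def parse_order(transcript: str) -> list[OrderItem]:
    """Stand-in for the LLM step: turn the speech-to-text transcript into
    structured items, ignoring anything not on the menu."""
    return [OrderItem(name, 1) for name in MENU if name in transcript.lower()]

def read_back(items: list[OrderItem]) -> str:
    """The confirmation the customer hears is rendered from the same
    structured order that will be submitted, nothing else."""
    return ", ".join(f"{i.quantity} x {i.name}" for i in items)

def submit_order(items: list[OrderItem]) -> None:
    """Stand-in for pushing the confirmed order through to the kitchen/POS."""
    total = sum(MENU[i.name] * i.quantity for i in items)
    print(f"Submitted: {read_back(items)} (total ${total:.2f})")

# Flow: parse -> read back -> only submit the exact object the customer confirmed.
order = parse_order("I'll take a cheeseburger and fries, please")
print("So that's:", read_back(order))
if input("Is that right? (y/n) ").strip().lower() == "y":
    submit_order(order)
```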

"would excel at, honestly."

You would think so, but well-financed tests in the real world suggest otherwise.

Typical AI-fanatic behavior: presented with evidence that it doesn't work, and the response is "hmm, this should work perfectly!"

If that doesn't sum up AI hype and apologia, then I don't know what does.

Yeah, no. I'm not an "AI fanatic" by a long shot, but whatever... I use AI sometimes, and other times I don't. When I do use it, it's for what it's good at. When I don't, it's because it's simply not capable of the task at hand. Simple as that. :shrug: