Mind boggling that they would roll it out at scale without testing.

Using open-ended natural language to make a multiple choice selection (choose a taco) seems like a way to massively complicate a simple problem.

What next - have a humanoid robot bring the food out to the car?

Looking forward to more "AI Darwin Award" stories!

The follow-up question (not quite multiple choice) was actually what put it in a loop for me:

It kept asking 'what kind of drink?' after apparently interpreting engine noise as a request for one.

Wouldn't respond to 'none' or any other response I gave, except to repeat the question.

As is often the case, reality imitates satire. This reminds me of the "and then" scene from Dude, Where's my Car. https://youtu.be/iuDML4ADIvk

I remember the McDonalds in-store touch screen ordering systems when they were first introduced, which were also astonishingly badly designed.

Using unreliable voice as an input, then not allowing you to cancel/correct, or not supporting it in a robust fashion, is a massive fail. If there is no person there, then I guess you just have to give up and drive away.

As far as I can tell, _most_ recent examples of 'AI' inflicted on the public have been rolled out at scale without testing, or at least the results of testing have been ignored. It's generally incredibly shoddy stuff.

500 stores isn't really "scale," that's only about 6% of their locations.

To be honest, if LLMs are good at anything, this is the exact kind of thing they are good at. It really isn't dumb that Taco Bell tried this.

I could also imagine how great it could potentially be for people to be able to view the menu and/or order in any language.

I think long ago I actually read an article posted to HN that essentially argued that most businesses don't take enough risks and that frequent risk-taking is statistically advantageous.

If they want to do automated voice ordering, then using multiple choice A/B/C/cancel (with feedback on screen) would seem less error prone than an LLM's open-ended natural language with some kind of intent interpretation.
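A constrained interface like that is easy to sketch. Everything here (the menu, the `step` function) is hypothetical, just to show why it's less error prone: the recognizer only has to distinguish a handful of tokens per prompt, and anything unrecognized simply repeats the prompt instead of hallucinating an item.

```python
# Hypothetical sketch of constrained multiple-choice voice ordering.
# Only a few tokens are valid at each step; "cancel" always backs out.

MENU = {
    "start": {"prompt": "A) Taco  B) Burrito  C) Drink",
              "a": "taco", "b": "burrito", "c": "drink"},
}

def step(state, spoken, order):
    """Advance the order given one utterance; return the next state."""
    token = spoken.strip().lower()
    if token == "cancel":
        order.clear()         # always available, always unambiguous
        return "start"
    choice = MENU[state].get(token)
    if choice is None:
        return state          # unrecognized (e.g. engine noise): re-prompt
    order.append(choice)
    return "start"

order = []
step("start", "B", order)     # order is now ["burrito"]
step("start", "vroom", order) # ignored: same prompt again, order unchanged
```

The point is that misrecognition degrades to a repeated prompt rather than a loop of phantom drink orders.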

Of course most customers would prefer to interact with a person, but I don't think "vibe ordering" tacos is going to be the same!

Deep down the PMs know that’s just a touch screen but with voice. So no one’s getting a promotion for that.

I agree, that would be just under 6% of all Taco Bells. They should have done a few in each region.

I can't imagine that people having fun messing with it by putting in ridiculous orders would be region-specific.

Has anyone here tried it and know how it works? If I order 6 large pizzas with a topping of rocks, will that come up on the screen?

I have worked on NLP systems for decades. The usual pattern is converting a request into structured data, then sanity-checking the extracted structured data. If you ordered a pizza with rocks on it, the 'toppings' field would be checked against a datastore of available toppings.
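A minimal sketch of that validation step (the data shapes and names here are hypothetical, not Taco Bell's actual system): whatever the language model extracts gets checked field-by-field against known menu data before it goes anywhere near the kitchen.

```python
# Hypothetical sanity check on LLM-extracted order data:
# every topping must exist in a datastore of available toppings.

AVAILABLE_TOPPINGS = {"cheese", "pepperoni", "mushrooms", "onions"}

def validate_order(extracted):
    """Check a structured order dict like
    {"item": "pizza", "size": "large", "toppings": ["rocks"]}.
    Returns (ok, rejected_toppings)."""
    rejected = [t for t in extracted.get("toppings", [])
                if t.lower() not in AVAILABLE_TOPPINGS]
    return (len(rejected) == 0, rejected)

ok, bad = validate_order({"item": "pizza", "size": "large",
                          "toppings": ["rocks", "cheese"]})
# ok is False and bad is ["rocks"], so the system can re-prompt
# ("sorry, we don't have rocks") instead of passing nonsense along.
```

The same pattern applies to the item name and size fields; the extraction can be as open-ended as you like so long as nothing unvalidated reaches fulfillment.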

Makes sense, but then where would the fun be in putting in ridiculous orders? And I wonder how the Taco Bell system responds when people order completely off-menu items like pizzas.

I would wager a lot of money that they did test it at an even smaller scale before expanding the testing program to these 500 locations.

Having a cat robot deliver food to your car in the parking lot would definitely be more efficient than drive through.