The most important consideration for any company worth its salt is liability. If the LLM provider isn't assuming liability (and so far none have), then hallucinations are a complete deal breaker. You don't want to be on the receiving end of a precedent-setting lawsuit just to save some pennies on labor.

There can be uses, but as a B2B vendor you're falling on deaf ears if you don't solve this problem. Consumers accept inaccuracies; businesses don't. Sadly, the consumer side is also where the tech works best, and it's why consumers have soured on it: it gets deployed as chatbots that give worse service and make customers work harder for something an employee could resolve in seconds.

As it has worked for millennia, humans carry accountability: after any disaster, the PR spin can start by reprimanding or firing the human who messed up. We don't have that for AI yet. And obviously, no company wants to bear that burden itself.