I've been saying this for years: LLMs will never be able to replace good support staff. The only support LLMs can be relied on for is the kind you get from companies like Google and Netflix: off-shored, glorified clickfarm workers in India who only have access to some restrictive API and a very rigid playbook. They can handle stuff like "grandma forgot her password again" -> help her reset her password.
For a support agent to actually be useful beyond that, they need some leeway to make decisions unilaterally, sometimes in breach of "protocol", when it makes sense. No company with any significant complexity in its customer interactions can have a truly complete set of protocols covering every scenario that can arise. That's why you need someone with actual access inside the company, the ability to talk to the right people there should the need arise, the latitude to make decisions based on common sense, and an overall understanding of the state of the company and what compromises can be made somewhat regularly without bankrupting it. Good support is effectively defined by flexibility, and is diametrically opposed to following a strict set of rules. It's about solving issues that hadn't been thought of until they happened. That's the kind of support that earns customer loyalty.
No company wants to give an LLM the power given to a real support agent, because LLMs can't really be trusted. If the LLM can make unilateral decisions, what happens when it hallucinates and gives a customer free service for life? Now the company has to either eat the cost or try to withdraw the offer, which will likely lose them that customer. And at the end of all that, there's no one to hold liable for the fuckup (except, I guess, the programmers who made the chatbot). And no one wants the LLM support agent sending them emails all day the way a human support agent might. So what you end up with is just a slightly nicer natural-language interface to a set of predefined account actions and FAQ items. In other words, exactly what you get from the clickfarms in South Asia, or even a phone tree, except cheaper. And sure, that can be useful for filtering out the usual noise and buying your real support staff more time for the cases where they're really needed, but that's it.
Some companies, like Netflix and Google, seem to have no support staff beyond the clickfarms, and as a result their support is atrocious. (Google probably has better support for business customers; I've never used it, so I can't speak to it. I've only Bangalored (zing) my head against a wall with Google support as a lowly consumer who bought a product.) When they replace those clickfarms with LLMs, support will continue to be atrocious, maybe with somewhat better English. And it'll save them money, and because of that they'll report it as a rousing success. But for customers, nothing will have changed.
This is pretty much what I predicted would happen a few years ago, before every company and its brother got its own LLM-based support chatbot. And anecdotally, it's pretty much what has happened. Of all the support requests I've made in the last year, I can remember zero that were sorted out by the LLM, and a handful that were sorted out by humans after the LLM told me my issue was impossible to solve.