I don't agree with that at all. Hallucination is a very well known issue. Sure, leverage AI to improve their productivity, but not even having a human look over the responses shows they don't care about their customers.
If you had a human support person feeding the support question into the AI to get a hint, do you think that support person is going to know that the AI response is made up and not actually a correct answer? If they knew the correct answer, they wouldn't have needed to ask the AI.
Exactly, that's why my startup recommends that all LLM outputs come with trustworthiness scores:
https://cleanlab.ai/tlm/
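To make the idea concrete, here's a minimal sketch of how a trustworthiness score might gate human review in a support pipeline. It assumes a client along the lines of the cleanlab_tlm package's TLM.prompt() interface; the exact names and the 0.8 threshold are assumptions, not a definitive integration.

```python
# Minimal sketch: route low-trust LLM answers to a human before they reach the customer.
# Assumes a TLM-style client (e.g. the cleanlab_tlm package); exact names are assumptions.
from cleanlab_tlm import TLM

REVIEW_THRESHOLD = 0.8  # assumed cutoff; tune per support workflow

def draft_support_reply(question: str) -> dict:
    tlm = TLM()
    out = tlm.prompt(question)  # returns the model's answer plus a trustworthiness score
    needs_human = out["trustworthiness_score"] < REVIEW_THRESHOLD
    return {
        "reply": out["response"],
        "trustworthiness": out["trustworthiness_score"],
        "needs_human_review": needs_human,
    }

if __name__ == "__main__":
    result = draft_support_reply("How do I get a refund on an annual plan?")
    if result["needs_human_review"]:
        print("Low trust score - send to a human agent before replying.")
    else:
        print(result["reply"])
```

The point isn't the specific threshold; it's that a score gives the support team a cheap way to decide which drafts a human must check instead of reviewing everything or nothing.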
The number of times real human-powered support has caused me massive headaches and sometimes financial damage, and the number of times my lawyer had to fix those because my own attempts to explain why they were wrong went nowhere… I'm not surprised AI will do the same; the creation is in the image of the creator and all that.