If you had a human support person feeding the support question into the AI to get a hint, would that person know when the AI's response is made up rather than an actual correct answer? If they already knew the correct answer, they wouldn't have needed to ask the AI in the first place.
Exactly, and that's why my startup recommends that all LLM outputs come with trustworthiness scores:
https://cleanlab.ai/tlm/
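To make the idea concrete, here's a minimal sketch of how a support tool could gate AI hints on such a score so a low-trust answer never reaches the agent as if it were fact. Everything here is hypothetical (the `get_answer_with_score` helper, the 0.8 threshold, the dummy values); it is not the actual TLM API:

```python
# Sketch: gate AI hints on a trustworthiness score before a human support
# agent ever sees them. All names and values here are hypothetical; this is
# not the Cleanlab TLM client API.

def get_answer_with_score(question: str) -> tuple[str, float]:
    """Placeholder: in practice this would call an LLM plus a scoring
    service and return (answer, trustworthiness_score in [0, 1])."""
    return "Restart the router, then re-pair the device.", 0.42  # dummy values

def hint_for_agent(question: str, threshold: float = 0.8) -> str:
    """Only surface the AI's answer when its score clears the threshold."""
    answer, score = get_answer_with_score(question)
    if score >= threshold:
        return f"AI suggestion (trust {score:.2f}): {answer}"
    # Low-trust answers are flagged rather than presented as fact, so the
    # agent isn't misled by a confident-sounding hallucination.
    return f"AI answer flagged as low-trust ({score:.2f}); verify manually."

if __name__ == "__main__":
    print(hint_for_agent("Why won't my device connect to Wi-Fi?"))
```

The point of the gate is exactly the scenario above: the agent who doesn't know the correct answer gets a signal about how much to trust the hint, instead of having to judge that themselves.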