I’ve found LLMs (or at least every one I’ve tried this on) will always assume the customer is correct, and so even when they’re flat-out wrong, the LLM will make up some bullshit to confirm the customer is still correct.

It’s great when you’re looking to do creative stuff. But it’s terrible when you’re trying to confirm the correctness of an approach, or asking for help with something you didn’t even realize doesn’t exist.

that's because its "answers" are actually "completions". can't escape that fact: LLMs will always "hallucinate".
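
to make "completion" concrete, here's a rough sketch (the `transformers` library and the Qwen checkpoint are just stand-ins for illustration, not anything the parent comment used): the chat turns get flattened into one prompt string, and the model's "answer" is simply whatever continuation of that string looks most plausible.

```python
# Minimal sketch, assuming Hugging Face `transformers` is installed and
# using the Qwen/Qwen2.5-0.5B-Instruct tokenizer purely as an example.
from transformers import AutoTokenizer

tok = AutoTokenizer.from_pretrained("Qwen/Qwen2.5-0.5B-Instruct")

messages = [
    {"role": "user", "content": "Is my approach correct?"},
]

# The chat turns become a single text prompt...
prompt = tok.apply_chat_template(
    messages, tokenize=False, add_generation_prompt=True
)
print(prompt)

# ...and generation just appends the most likely next tokens after it.
# Nothing in that objective checks whether the continuation is *true*,
# only whether it is a plausible-looking completion of the prompt.
```

so a confident-sounding "yes, your approach is correct" is just the statistically likely way that text tends to continue, whether or not the approach actually works.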