I use LLMs to write the majority of my code. I haven't encountered a hallucination for the better part of a year. It might be theoretically unsolvable, but it certainly doesn't seem like a real problem to me.

I use LLMs whenever I'm coding, and they make mistakes ~80% of the time. If you haven't seen one make a huge mistake, you may not be experienced enough to catch them.

Hallucinations, no. Mistakes, yes, of course. That's a matter of prompting.

> That's a matter of prompting.

So when I introduce a bug, it's the PM's fault.