I really don't understand the hallucination problem now in 2025. If you know what you're doing, know what you need from the LLM, and can describe it well enough that it would be hard to screw up, LLMs are incredibly useful. They can nearly one-shot an entire (edited here) skeleton architecture that I only need to nudge into the right place before adding what I want on top of it. Yes, I run into code from LLMs that I have to tweak, but they have been incredibly helpful for me. I haven't had hallucination problems in a couple of years now...
> I really don't understand the hallucination problem now in 2025
Perhaps this OpenAI paper would be interesting then (published September 4th):
https://arxiv.org/pdf/2509.04664
Hallucination is still absolutely an issue, and it doesn't go away by reframing it as user error: saying the user didn't know what they were doing, didn't know what they needed from the LLM, or couldn't describe it well enough.