That's why you tell Claude Code to write tests and run them, use linting tools, etc. Then you test the code yourself. If you're still concerned, /clear and tell Claude Code that some other idiot wrote the code and it needs to tear it apart and critique it.
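To make that concrete, here's a toy sketch (the helper and test names are made up for illustration; the only real detail is that Python's statistics module has no midrange function):

    # Hypothetical example: say the model wrote this helper and hallucinated
    # statistics.midrange(), which does not exist in Python's stdlib.
    import statistics

    def spread_summary(xs):
        # The first call raises AttributeError: the hallucinated API isn't there.
        return statistics.midrange(xs) - statistics.median(xs)

    # Even a bare smoke test flags it on the first run:
    def test_spread_summary_runs():
        assert spread_summary([1.0, 2.0, 3.0]) == 0.0

Run pytest once and the hallucination falls out as a hard failure; it can't hide the way a subtle logic bug can.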

Hallucination is not an intractable problem. Because hallucinations are stochastic rather than systematic, the same tools catch them: a hallucinated API doesn't exist, so the very first test or lint run flags it. I feel like hallucinations have become a cop-out, an excuse for people who don't want to learn how to use these new tools anyway.

> you now have to not only review and double-check shitty AI code, but also hallucinated AI tests too

Gee, thanks for all that extra productivity, AI overlords.

Maybe they should replace AI programmers with AI instead?

I said to make the chatbot do it, not to do all the reviewing yourself. You do the manual review once it has produced something that works. In the meantime, you can be working on something else entirely.
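Mechanically, "make the chatbot do it" is just a loop. Rough sketch below, where ask_model() is a placeholder for whatever model API or agent CLI you actually call:

    # Hypothetical sketch of an unattended fix loop: run the tests, hand any
    # failures back to the model, stop when green or when patience runs out.
    import subprocess

    def ask_model(prompt):
        # Placeholder: wire this up to your actual model API / agent CLI.
        raise NotImplementedError

    def fix_until_green(max_rounds=5):
        for _ in range(max_rounds):
            result = subprocess.run(["pytest", "-q"],
                                    capture_output=True, text=True)
            if result.returncode == 0:
                return True   # green: now it's worth your manual review
            ask_model("These tests fail; fix the code:\n" + result.stdout)
        return False  # still red after max_rounds: a human should step in

The point is that your attention is only requested at the boundaries, green or given-up, not on every iteration.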

> In the meantime, you can be working on something else entirely.

Like fixing useless and/or broken tests written by an LLM?

(Thank you, AI overlords, for freeing me from the pesky tedium of algorithms and coding so I can instead focus on fixing the mountains of technical debt you added!)