Yeah, this is exactly my view. We've had several years of work on the tech, and LLMs are just as prone to randomly spitting out garbage as they were on day one. They are not a tool fit for any serious work, because you need to be able to rely on your tools. A tool which is sometimes good and sometimes bad is worse than having no tool at all.

Did Google not rely on Gemini to do their ISA changeover?

https://arxiv.org/abs/2510.14928

Was Gemini worse than no tool at all there?

Probably. According to the paper, 83.82% of automated commits were already made by algorithmic tools (non-LLM). For the remainder, a three-phase LLM approach was tried, and achieved a success rate of 30%. Based on these numbers, it probably would have been faster, cheaper, and more efficient to just enhance their current strategy rather than screwing around with text generators.
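To put the quoted figures together (and assuming the 30% success rate applies to the 16.18% remainder, which is how I read it), the LLM phase would have landed only about 5% of the automated commits:

```python
# Back-of-envelope check using only the numbers quoted above from the
# paper (arXiv:2510.14928). The interpretation that the 30% success
# rate applies to the non-algorithmic remainder is my assumption.
algorithmic_share = 0.8382                 # share handled by non-LLM tools
llm_attempt_share = 1 - algorithmic_share  # ~16.18% remainder tried with LLMs
llm_success_rate = 0.30                    # quoted three-phase success rate

llm_share = llm_attempt_share * llm_success_rate
print(f"LLM-completed share of automated commits: {llm_share:.2%}")
# → LLM-completed share of automated commits: 4.85%
```

So even granting the paper's framing, the LLM pipeline accounts for a small slice of the automated work.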

Do you really think that Opus 4.6 hallucinates to exactly the same degree as GPT-3.5? I am mystified how you can hold this perspective.

If you're not seeing the hallucinations, I'd assert you're either not using it enough, or (more likely) you don't have enough knowledge in the subject matter to notice when it's hallucinating.

I'm not interested in getting into some argument about who has "more knowledge in the subject matter". I'm genuinely curious: do you think Opus 4.6 hallucinates just as much as GPT-3.5?

LMFAO does it hallucinate to the same degree as GPT-3?

Which is what was questioned.