Another one of these sickening pieces, framing opposition to an expensive tech that doesn't work as being "anti". I tried letting the absolutely newest models write C++ again today: GPT-5.1 and Opus 4.5. A single function with two or fewer input parameters, a nice return value, doing simple geometry with the glm library. Yes, the code worked. But I spent as long fixing the weird parts as it would have taken me to write it myself. And I still don't trust the result, because reviewing is so much harder than writing.
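For context, a sketch of the kind of task I mean (this is my own toy example, not the actual function from that experiment, and I've swapped glm::vec2 for a tiny stand-in struct so it compiles without the library):

```cpp
#include <cassert>
#include <cmath>

// Minimal stand-in for glm::vec2 so the sketch compiles standalone;
// with glm you'd use glm::vec2 and glm::dot directly.
struct Vec2 { float x, y; };

static float dot(Vec2 a, Vec2 b) { return a.x * b.x + a.y * b.y; }

// The shape of task described above: two inputs, a nice return value,
// simple geometry. Returns the angle between two vectors in radians.
float angleBetween(Vec2 a, Vec2 b) {
    float c = dot(a, b) / (std::sqrt(dot(a, a)) * std::sqrt(dot(b, b)));
    // Clamp against floating-point drift outside acos's domain --
    // exactly the kind of edge the model's version got subtly wrong.
    c = std::fmax(-1.0f, std::fmin(1.0f, c));
    return std::acos(c);
}
```

Twenty lines like this are trivial to write but still take real effort to review, which is the whole point.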
There's still no point. ReSharper and clang-tidy still have more value than all the LLMs. It's not just hype, it's a bloody cult, right beside the NFT and church-of-COVID people.
Did you try telling the model to write the unit tests first, watch them fail, and then write a function that passes them?
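A minimal sketch of that workflow, with a trivially small function (clampf is my own toy example, not anything from the thread): the tests exist before the implementation, are run against a stub to confirm they fail, and only then is the body filled in.

```cpp
#include <cassert>

// Step 2 comes later: only the prototype exists while the tests are written.
float clampf(float v, float lo, float hi);

// Step 1, the tests. Run these against a stub (e.g. `return v;`),
// watch them fail, then implement until they pass.
void runTests() {
    assert(clampf(5.0f, 0.0f, 1.0f) == 1.0f);   // above range -> hi
    assert(clampf(-2.0f, 0.0f, 1.0f) == 0.0f);  // below range -> lo
    assert(clampf(0.5f, 0.0f, 1.0f) == 0.5f);   // in range -> unchanged
}

// Step 3, the implementation that makes the tests pass.
float clampf(float v, float lo, float hi) {
    return v < lo ? lo : (v > hi ? hi : v);
}
```

The idea is that the failing tests pin down the spec before the model (or you) writes the body, so reviewing shifts from "is this code right?" to "are these tests the right spec?".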
Your comment reminds me of John Glenn's line about Katherine Johnson, "Get the girl to check the numbers… If she says they're good, I'm ready to go", asking her to double-check the calculations done by the first computers NASA used. At that point in history it was probably accurate and the safest thing to do, but we all know how computers evolved from there: now we no longer have human calculators, but rather humans checking the correctness of the written code that will do the actual calculations.
IMO the only rebuttal to this is that LLMs are already near their peak and there will be no significant breakthrough or steady improvement in the coming years, in which case they will never become "the new computers".
But LLMs aren't advertised as some future thing. They're advertised as almighty and as replacing devs in great numbers. And that's simply not true. It's a fad, like 3D movies.
I know they are pumped and overhyped to death, indeed they are. But that doesn't mean they don't already have some use today, or that they can't (or won't) improve in the future.
I'm skeptical about LLMs as well, but I also wanted to see what they are actually capable of, so I vibe-coded an Android app in Kotlin (from scratch) with Claude Code and Opus 4.5, and it basically worked. I'm pretty sure the code is horrible in the eyes of a Kotlin developer, because I added so many features by asking CC to do it over the last 2-3 weeks that it already desperately needs a refactor.
But still, this is not something an autocomplete would be able to do for you.
> reviewing is so much harder than writing
This is what so many of the AI proponents fail to understand. "Amazing, I don't have to write code, 'only' review AI slop" is sitting backwards on the horse. Who the heck wants to do that?