The article is great, but it feels like it was rewritten by an LLM: "a crucial insight", "occurring like clockwork".

The author disclosed:

> I used an LLM for wording. The research, traces, and AML decomp are mine. Every claim is verified and reproducible if you follow the steps in the article; logs and commands are in the repo. If you think something's wrong, cite the exact timestamp/method/line. "AI wrote it" is not an argument.

Why is that a problem?

Maybe the author is ESL or just not very good at writing...

If it's clearer and the information is still all correct, then isn't that great? More people can engage in clear communication with each other.

> and the information is still all correct

That's a pretty big "if" in technical writeups like this. All you do by rewriting them with an LLM is obfuscate the actual inputs you had. Was it generated from scattered notes? Entirely vibe-written? How many details were actually verified as correct by a human? Seeing how even the structure looks generated, it's clear there was little human input, and I'm not sure about any of the above.

I can deal with poor writing, and in the case of ESL it's enough to tell the LLM to proofread/rephrase the piece (and check it yourself afterwards). But lazy generations just make you trust the article less.

This is literally the same discussion as when some people insisted that nothing on the internet can be trusted and physical books are the only reliable source of information.

No, it literally is not.

> Was it generated from scattered notes? Entirely vibe-written?

Why does that matter? Maybe the person hates writing. Do you need people to suffer and put in effort for the end result to be worth your time?

> How many details are actually verified to be correct by a human?

I mean... assume the best? The author could also have written it by hand and just lied. Or he's a paid troll from an Asus competitor and it's all made up, a work of fiction. You implicitly have to assume the author tried their best and accept that there might be some errors.

If the writing is clearer thanks to an LLM, then you're likely to catch errors more easily.

If you think the piece has errors, then engage with the material and point them out.

You're not judging the end result on its merits.

The decompilation might be interesting, but the prose is full of sheen and puffery.

It's like someone took a technical report from a bug tracker and ran a linguistic obfuscator on it.

I don't even bother anymore; it feels like word vomit to me. This could probably be half its length.