I learned quite a few new things from this; I don't really care if OP filtered it through an LLM before publishing it.

Same, but I'm struggling with this: even if I learn things I hadn't before, at the limit it'd be annoying if we gave writing like this a free pass indefinitely. I'd argue "filtered" might not be the right word - I'd be fine with a net reduction. There's something bad about adding fluff (how many game changers were there?)

An alternative framing I've been thinking about: there's clearly something bad about leaving in the bits that obviously lower the signal-to-noise ratio for all readers.

Then throw in the account being new, and, well, I hope it's not a harbinger.*

* It is and it's too late.

You can critique the writing without calling into question how it was written. Speculation about the tools used to write it serves no purpose beyond making a possibly unfounded value judgement against the writer.

https://hbr.org/2025/08/research-the-hidden-penalty-of-using...

I think this is valuable, and yet it is also the key to why the forest will become dark.

I'm not speculating - I have to work with these things so darn much that the tells are blindingly obvious - and the tells are well-known, e.g. there's a gent who benchmarks "it's not just x - it's y" shibboleths across different models.

However, in a rigorous sense I am speculating: I cannot possibly know an LLM was used.

Thus, when an LLM is used, I'm seeing an increasing fraction of the conversation litigating whether it is appropriate, whether it matters, and whether LLMs are good. And since anyone pointing it out could be speculating, the reaction now hinges on how you initially frame the observation.

E.g. here, I went out of my way to make a neutral-ish comment, given an experience I had last week (see my other comment somewhere downstream).

Let's say I never say "LLM" and instead frame it as "Doesn't that just mean it's a convention?" and "How are there so many game-changers?" - which the audience will recognize as a consequence of using an LLM, and yet it also looks like you're picking on someone. (Are either of those bad writing? I only had one teacher who would ever take umbrage at somewhat subtle fluff like this.)

Anyway, this is all a bunch of belly-aching to an extent; you're right, and that's the way to respond. There's a framing where the only real difficulty here is critiquing the writing without looking like you're picking on someone.

EDIT: Well, except for one more thing. What worries me most, when I see someone using an LLM and incapable of noticing the tells - or at least of noticing that the tells are weakening the writing - is... well, what else did they miss? What else did the LLM write that I have to evaluate for myself? So it's not the somewhat-bad writing (still 90%+ fine) that bothers me: it's that I don't know what's real, and it feels like a waste of time even being offered it to read if I have to check everything.

Critique of the output is fine in my eyes. If you don't enjoy the style, format, choice of words, etc., I think that's fair game even if it's superficial or subjective. It often is with art.

Placing a value judgement on someone for how the art was produced is gatekeeping. What if the person is disabled and uses an LLM for accessibility reasons, as one does with so many other tools? I dunno, that seems problematic to me, but I understand the aversion to the output.

For example, maybe it's like criticising Hawking for not changing his monotone voice versus criticising him for using the talker at all. Perhaps not the best analogy.

The author can still use LLMs to adjust the style according to criticism of the output if they so choose.

No, I think if someone is passing off an LLM's writing as their own, they deserve to be shamed mercilessly. Normally I don't comment on a thread after so long (> 24 hours), but your take is just so bad I couldn't help myself.