> Is it wrong?
No, but it's a summary of the original article without anything added. I agree with GP that it's very likely LLM generated from the article.
I see a few other posts by electric_muse that are LLM-suspicious, but it bugs me that the style is so much like my own writing. Only a matter of time until I'm the witch on trial, I suppose.
I don't really look at the writing style, since that's a good way to fall for false positives. And I honestly wouldn't blame a non-native speaker for editing their thoughts with an AI far more proficient in English than they are.
Instead, I look at how much actual information (anecdotes, personal opinion, a different perspective, etc.) a comment contains.
Here it's just a very straightforward TL;DR of the original article. It contains no substance beyond that, so I don't think anyone would bother to write it themselves (or at least they would likely advertise it as a TL;DR).
A correct summary without anything added is a very valuable thing. I personally don't have the time to read every article, and highly value the scientific paper format, with an abstract given upfront.
A correct summary like this is what any LLM will give you for free, so it's not that valuable anymore. That would be fine if it were advertised as a TL;DR, rather than presented as OP's personal take, as it is right now.