The problem with saying something is LLM-generated is that it cannot be proven, and it is a less helpful way of saying the piece has errors.
Pointing out the errors is a more helpful way of stating problems with the article, which you have also done.
In that particular picture, you're probably correct to interpret it as C vs N as stated.
> The problem with saying something is LLM generated is it cannot be proven and is a less-helpful way of saying it has errors.
It's a very helpful way of saying the piece isn't worth the bother of reading. After all, if they couldn't be bothered to write it, I can't be bothered to read it.
You have no idea how much personal work went into it. You just suspect it was worded with an LLM.
I have been using embeddings for almost a decade and am well versed in their intricacies. I think this article has merit. The direction of the investigation and the conclusion are interesting; it's good to have people thinking about how many distinct concepts can be packed into our usual embedding dimension. I wonder how small you can make the embeddings before a model becomes noticeably worse, given a constant parameter count.
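For what it's worth, here is a minimal, illustrative sketch (mine, not from the article) of that packing intuition: sample far more random unit vectors than dimensions and look at the worst-case pairwise cosine similarity. The dimensions and vector counts are arbitrary choices, and this only probes near-orthogonality, not actual model quality.

```python
# Sketch: how "crowded" does a d-dimensional space get when you pack in
# many more random unit vectors than dimensions? (Assumed parameters.)
import numpy as np

rng = np.random.default_rng(0)

def max_abs_cosine(n_vectors: int, dim: int) -> float:
    """Max |cosine similarity| over all pairs of n random unit vectors in R^dim."""
    v = rng.normal(size=(n_vectors, dim))
    v /= np.linalg.norm(v, axis=1, keepdims=True)  # normalize to unit length
    sims = v @ v.T
    np.fill_diagonal(sims, 0.0)                    # ignore self-similarity
    return float(np.abs(sims).max())

for dim in (64, 128, 256):
    # 10x more vectors than dimensions; the worst-case pairwise similarity
    # stays well below 1 and shrinks as dim grows, which is the usual
    # intuition for fitting many near-orthogonal "concepts" into one space.
    print(dim, round(max_abs_cosine(10 * dim, dim), 3))
```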
The complaint was that the post has a lot of basic inconsistencies, which is a problem regardless.
If your content is as bad as AI slop, it doesn't really matter whether it actually is, but I think it's safe to assume that when a verbose and grandiose post is internally inconsistent and was written after 2022, it's slop. [0]
[0] https://pyxis.nymag.com/v1/imgs/f0e/0bb/d9346e02d8d7173a6a9d...
What is "AI slop?" I could have a 2 hour long discussion with Claude and in the end have it formalize it as an article. Is that AI slop?
Yes
OK, so it just means "worded with AI", with no relation to how much thinking went into it.
Anyone interested in a subject can, if they wish, ask an AI about it and get an answer. Your deep conversation with an AI is something fun and insightful only to yourself. You're welcome to do it, but don't pretend it has meaning to anyone else, because if they want to get the same insight on a topic they can do it themselves for the same amount of effort (none).
> It's a very helpful way of saying the piece isn't worth the bother of reading.
Not really.
There's a big difference between the type of errors humans make when they misunderstand a subject and the type of errors an LLM makes. I'm not well enough versed in the field to know which type of error this paper contains, but it seems that people who are versed in the field feel these errors are of the latter type.
It wouldn't kill the OP to say an LLM was used, btw, and thus not put readers in a difficult position. Nobody likes being played.