Tip for those who want to skip shit like this: excessive headings glued together by bullet points are quick to spot, especially since the headings almost always start with "The".
I now scroll any AI-adjacent article I see and just read the headings; if I see this pattern, I know what I'm getting into:
The Dexterity Deadlock
The Problem
The Geometric Curse
The Sim-to-Real Gap
The Structural Gap f(⋅)
Seeing It in Motion
The N^2 Impedance Mismatch
The Chaos Term ϵ_chaos
The Information Wall
The Weakest Link
Why Manipulation Needs Better
What We Built
From 288 to 15
Does It Work?
Hardware Validation
Robot Hand Landscape
The Take-Home
Oh my god, it's absolutely chock full of AI-isms. Almost every sentence is a list of 3 items, often nested lists of 3 items.
The authors presumably don't have English as their first language.
Deciding whether A is an X or a Y is a basic part of how we communicate. Suspicion of em dashes is one thing, but once you start getting nervous at seeing "It’s not X. It’s Y." you're just going to end up paranoid.
The fundamental job of an LLM is to statistically match its output to its training corpus. The tics LLMs have are really common in natural human usage too.
This line of argument would fall down if it turned out that a human with statistically normal output is a bizarre-sounding human.
Did you read the article? It's all AI tells. The tone may as well be a fax machine.
I didn't reply to the comments talking about the AI tells. I replied to the comment that is making a bad argument. It doesn't matter to me whether the article is or isn't LLM assisted.
I think they did a search-and-replace to turn em dashes into semicolons in an attempt to hide the AI. Weird usage of semicolons.
Even the title follows a common LLM pattern (The X Problem/Issue/etc).
Plus a bunch of other elements that are dead giveaways.
In this day and age, I wish people would ask any model OTHER than ChatGPT to rewrite their shit. At least we'd get a different flavor of slop.
Since I became aware of this AI slop pattern, I can't stop seeing it everywhere.
While it’s got some clear LLM patterns, the content seems novel enough to be worth the squeeze. That, or I’m far enough outside of my Gell-Mann amnesia bubble that I can’t see the slop.