Surely there's no difficulty, but can you provide an example of what you mean? I just don't see it here. Or at least, if I read a blog from some SaaS company in the pre-LLM era, I'd expect it to sound like this.
I get the call for "effort", but recently this feels like it's being used to critique the thing without engaging with it.
HN has a policy about not complaining about the website itself when someone posts content within it. These kinds of complaints are starting to feel applicable to the spirit of that rule, if only in their sheer number, noise, and potential to derail from something substantive. But maybe that's just me.
If you feel like the content is low effort, you can respond by not engaging with it?
Just some thoughts!
It's incredibly bad in this article. It stands out more because it's so wrong and the content itself could actually be interesting. Normally, anything with this level of slop wouldn't be worth reading even if it weren't slop. But let me help you see the light. I'm on mobile, so forgive my lack of proper formatting.
--
> Because it’s not just that agents can be dangerous once they’re installed. The ecosystem that distributes their capabilities and skill registries has already become an attack surface.
^ Okay, once can happen. At least he clearly rewrote the LLM output a little.
> That means a malicious “skill” is not just an OpenClaw problem. It is a distribution mechanism that can travel across any agent ecosystem that supports the same standard.
^ Uh oh...
> Markdown isn’t “content” in an agent ecosystem. Markdown is an installer.
^ Oh no.
> The key point is that this was not “a suspicious link.” This was a complete execution chain disguised as setup instructions.
^ At this point my eyes start bleeding.
> This is the type of malware that doesn’t just “infect your computer.” It raids everything valuable on that device
^ Please make it stop.
> Skills need provenance. Execution needs mediation. Permissions need to be specific, revocable, and continuously enforced, not granted once and forgotten.
^ Here's what it taught me about B2B sales.
> This wasn’t an isolated case. It was a campaign.
^ This isn't just any slop. It's ultraslop.
> Not a one-off malicious upload.
> A deliberate strategy: use “skills” as the distribution channel, and “prerequisites” as the social engineering wrapper.
^ Not your run-of-the-mill slop, but some of the worst slop.
--
I feel kind of sorry for making you see it, as it might deprive you of enjoying future slop. But you asked for it, and I'm happy to provide.
I'm not the person you replied to, but I imagine he'd give the same examples.
Personally, I couldn't care less if you use AI to help you write. I care about it not being the type of slurry that, pre-AI, was easily avoided by staying off LinkedIn.
> being the type of slurry that, pre-AI, was easily avoided by staying off LinkedIn
This is why I'm rarely fully confident when judging whether or not something was written by AI. The "It's not this. It's that" pattern is not an emergent property of LLM writing; it's straight from the training data.
I don't agree. I have two theories about these overused patterns, because they're way overrepresented.
One: they're rhetorical devices popular in oral speech, and are being picked up from transcripts and commercial sources, e.g. television ads or political talking-head shows.
Two: they're popular with reviewers while models are going through post-training, either because they help paper over logical gaps or because they provide a stylistic gloss that feels professional in small doses.
There is no way these patterns are in normal written English in the training corpus in the same proportion as they're being output.
Thank you. I am in the confusing situation of being extremely good at interpreting the nuance in human writing, yet extremely bad at detecting AI slop. Perhaps the problem is that I'm still assuming everything is human-written, so I do my usual thing of figuring out their motivations and limitations as a writer and filing it away as information. For example, when I read this article I mostly got "someone trying really hard to drive home the point that this is a dangerous problem, seems to be over-infatuated with a couple of cheap rhetorical devices and overuses them. They'll probably integrate them into their core writing ability eventually." Not that different from my assessment of a lot of human writing, including my own. (I have a fondness for em-dashes and semicolons as well, so there's that.)
I haven't used AI for anything I've ever written, and I don't use AI much in general. Perhaps I just need more exposure. But your breakdown makes this particular example very clear, so thank you for that. I could see myself reaching for those literary devices, but not that many times, nor as unevenly, nor quite as clumsily.
It is very possible that my own writing is too AI-like, which makes it a blind spot for me? I definitely relate to https://marcusolang.substack.com/p/im-kenyan-i-dont-write-li...
I guess I just don't get the mode everyone is in where they've got their editor hats on all the time. You can go back 10+ years on that blog and it's all the same kind of dry, style-guided, corporate speak to me, with maybe different characteristics. But still all active voice, lots of redundancy and emphasis. They're just dumb but okay blogs! I never thought it was "good," but I never paid attention to it like I was reading Nabokov or something. I get that we can all be hermeneuts now and decipher the true AI-ness of a given text, but isn't there a time and place and all that?
I guess I'd be exhausted too if I hung on the sentence construction of every corporate blog post I come across like that. But also, I guess I'm a barely literate slop enjoyer, so grain of salt and all that.
Also: as someone who doesn't use AI like this, how does it get beyond run-of-the-mill slop? What happened to make this one particularly bad? For something so flattening otherwise, that's kind of interesting, right?