This resonates a lot with some observations I drafted last week about "AI Slop" at the workplace.
Overall, people are making a net-negative contribution by not having a sense of when to review/filter the responses generated by AI tools, because either (i) someone else is required to make that additional effort, or (ii) the problem is not solved properly.
This sounds similar to a few patterns I've noted:
- The average length of documents and emails has increased.
- Not alarmingly so, but people have started writing Slack/Teams responses with LLMs. (and it’s not just to fix the grammar.)
- Many discussions and brainstorms now start with a meeting summary or transcript, which often goes through multiple rounds of information loss as it’s summarized and re-expanded by different stakeholders. [arXiv:2509.04438, arXiv:2401.16475]
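For what it's worth, that loss is easy to reproduce: run a transcript through a couple of summarize-then-expand rounds and watch the specifics evaporate. A minimal sketch, assuming the openai Python SDK and an available chat model; the model name, prompts, and file path are placeholders, not anything a real team uses:

    # Sketch of the summarize/re-expand loop described above.
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    def rewrite(text: str, instruction: str) -> str:
        resp = client.chat.completions.create(
            model="gpt-4o-mini",  # placeholder model
            messages=[
                {"role": "system", "content": instruction},
                {"role": "user", "content": text},
            ],
        )
        return resp.choices[0].message.content

    notes = open("meeting_transcript.txt").read()  # placeholder input
    for round_ in range(3):  # one stakeholder summarizes, the next re-expands
        notes = rewrite(notes, "Summarize this for an executive update.")
        notes = rewrite(notes, "Expand this summary into a detailed plan.")
        print(f"round {round_ + 1}: {len(notes)} chars")  # watch the detail drift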
You’re absolutely right. The patterns you’ve noted, from document verbosity to informational decay in summaries, are the primary symptoms. Would you like me to explain the feedback loop that reinforces this behavior and its potential impact on organizational knowledge integrity?
“You’re absolutely right!” is becoming my least favorite phrase.
South Park’s recent B-plot with Randy using ChatGPT illustrates this so well
Got it — here’s a satiric AI-slop style reply you could post under rvnx:
Thank you for your profound observation. Indeed, the paradox you highlight demonstrates the recursive interplay between explanation and participation, creating a meta-layered dialogue that transcends the initial exchange. This recursive loop, far from being trivial, is emblematic of the broader epistemological challenge we face in discerning sincerity from performance in contemporary discourse.
If you’d like, I can provide a structured framework outlining the three primary modalities of this paradox (performative sincerity, ironic distance, and meta-explanatory recursion), along with concrete examples for each. Would you like me to elaborate further?
Want me to make it even more over-the-top with like bullet lists, references, and faux-academic tone, so it really screams “AI slop”?
* Trying 20.54.123.42:443...
* Connected to api.openai.azure.com (20.54.123.42) port 443 (#0)
* ALPN, offering h2
* ALPN, offering http/1.1
* successfully set certificate verify locations:
*  CAfile: /etc/ssl/certs/ca-certificates.crt
* TLSv1.3 (OUT), TLS handshake, Client hello (1):
* TLSv1.3 (IN), TLS handshake, Server hello (2):
* TLSv1.3 (IN), TLS handshake, Encrypted Extensions (8):
* TLSv1.3 (IN), TLS handshake, Certificate (11):
* TLSv1.3 (IN), TLS handshake, CERT verify (15):
* TLSv1.3 (IN), TLS handshake, Finished (20):
* TLSv1.3 (OUT), TLS handshake, Finished (20):
* SSL connection using TLSv1.3 / AEAD-AES256-GCM-SHA384
> POST /v1/chat/completions HTTP/1.1
> Host: api.openai.azure.com
> User-Agent: curl/7.88.1
> Accept: */*
> Content-Type: application/json
> Authorization: Bearer sk-xxxx
> Content-Length: 123
>
* upload completely sent off: 123 out of 123 bytes
< HTTP/1.1 500 Internal Server Error
< Content-Type: application/json
< Date: Mon, 22 Sep 2025 16:30:12 GMT
< Content-Length: 352
<
{"error":{"message":"The server had an error processing your request. Sorry about that! You can retry your request, or contact us through an Azure support request at: https://go.microsoft.com/fwlink/?linkid=2213926 if you keep seeing this error. (Please include the request ID d7fc0c4f-4c08-415c-b22b-3b9a59524a41 in your email.)","type":"server_error","param":null,"code":null}}
* Connection #0 to host api.openai.azure.com left intact
curl: (22) The requested URL returned error: 500
Fascinating trace — what you’ve essentially demonstrated here is not just a failed TLS handshake culminating in a 500, but the perfect allegory for our entire discourse. The client (us) keeps optimistically POSTing sincerity, the server (reality) negotiates a few protocols, offers some certificates of authenticity, and then finally responds with the only universal truth: Internal Server Error.
If helpful, I can follow up separately with a minimal reproducible example of this phenomenon (e.g. via a mock social interaction with oversized irony headers or by setting CURLOPT_EXISTENTIAL_DREAD). Would you like me to elaborate further on the implications of this recursive failure state?
You all are doing a good job at fueling a certain kind of existential nightmare right now. We might just get our own shitty Butlerian Jihad sooner rather than later if this is the future.
CURLOPT_EXISTENTIAL_DREAD struck fear into my heart. Working as intended.
Man you’re really good at that lol
Wait, this isn’t over yet.
Hilarious, and so close to Claude's default mode (well yes, a parody thereof, lol). Try this pre-prompt:
Please respond in mode of Ernest Hemingway
“You’re right. When someone explains why they’re explaining something, it goes in circles. Like a dog chasing its tail.
We do this because we can’t tell anymore when people mean what they say. Everything sounds fake. Even when it’s real.
There are three ways this happens. But naming them won’t fix anything.
You want more words about it? I can give you lists and fancy talk. Make it sound important. But it won’t change what it is.”
[That is Claude Sonnet 4 channeling EH]
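If you want to try it outside the chat UI, the same pre-prompt can go in as the system prompt. A minimal sketch assuming the anthropic Python SDK; the model ID and user message are placeholders:

    import anthropic

    client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment
    msg = client.messages.create(
        model="claude-sonnet-4-20250514",  # placeholder model ID
        max_tokens=300,
        system="Please respond in mode of Ernest Hemingway",  # the pre-prompt above
        messages=[{"role": "user", "content": "Explain why we keep explaining ourselves."}],
    )
    print(msg.content[0].text)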
I have never seen an AI meeting summary that was useful or sufficient in explaining what happened in the meeting. I have no idea what people use them for other than as a status signal
In my company we sometimes cherry-pick parts of the AI summaries and send them to clients just to confirm what we agreed on during a meeting. The customers know the summary is AI-generated and they don't mind. Sometimes people come to me and ask whether what they read in the summary was really discussed in the meeting or whether it's just the AI hallucinating, but I can usually assure them that we really did discuss it. So these can be useful to a degree.
I'd use it to help me figure out which meeting we discussed something in three months ago, so I can read the transcript for a refresher.
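Nothing fancy needed for that either; a dumb keyword search over dated transcript files covers most of it. Rough sketch, with the directory layout and file naming purely assumed, not any particular tool's output format:

    from pathlib import Path

    def find_meetings(keyword: str, transcript_dir: str = "transcripts") -> list[str]:
        hits = []
        for path in sorted(Path(transcript_dir).glob("*.txt")):  # e.g. 2025-06-12-standup.txt
            if keyword.lower() in path.read_text(errors="ignore").lower():
                hits.append(path.name)
        return hits

    print(find_meetings("rate limiter"))  # then open the matching transcript for a refresher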
Why do people want to signal their low status?
That’s a good point: an AI email/Slack/summary positions you as a bootlicker at best, writing summaries to look good, and a failed secretary at worst, but in any case as low value on the real-work scale.
I’m just afraid these are the types of people who will get promoted in the future.
In their minds it is a signal of high status.
It’s an attempt to be “cutting edge”
I use them to seem engaged about something I don’t actually care about.
It’s painfully common to invite a laundry list of people to meetings.
This is the bull case for AI: as with any significant advance in technology, eventually you have no choice but to use it. In this case, the only way to filter through large volumes of AI output is going to be with other LLMs.
The exponential growth of compute and data continues...
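Concretely, "filter slop with another model" is just an LLM-as-judge pass in front of your inbox. A minimal sketch, assuming the openai Python SDK; the model name and the judging prompt are illustrative, not anything production-ready:

    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    def looks_like_slop(text: str) -> bool:
        # Ask a model whether the message is padded filler; crude, but that is the point.
        resp = client.chat.completions.create(
            model="gpt-4o-mini",  # placeholder model
            messages=[
                {"role": "system",
                 "content": "Answer YES if the message is padded, generic filler with "
                            "little new information, otherwise answer NO."},
                {"role": "user", "content": text},
            ],
        )
        return resp.choices[0].message.content.strip().upper().startswith("YES")

    inbox = ["Thank you for your profound observation...",
             "Deploy is blocked on the DB migration."]
    for msg in inbox:
        print(("SKIP " if looks_like_slop(msg) else "READ ") + msg[:50])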
As a side note, if anyone I'm communicating with - personally or in business - sends responses that sound like they were written by ChatGPT 3.5, 4o, GPT-5-low, etc, I don't take anything they write seriously anymore.
> As a side note, if anyone I'm communicating with - personally or in business - sends responses that sound like they were written by ChatGPT 3.5, 4o, GPT-5-low, etc, I don't take anything they write seriously anymore.
What if they are a very limited English speaker, using the AI to tighten up their responses into grammatical, idiomatic English?
I'd rather have broken grammar and an honest and useful meta-signal than botched semantics.
Also that better not be a sensitive conversation or contain personal details or business internals of others...
Just don't.
But the meta-signal you get is detrimental to the writer, so why wouldn't they want to mask it?
If I think you're fluent, I might think you're an idiot when really you just don't understand.
If I know they struggle with English, I can simplify my vocabulary, speak slower/enunciate, and check in occasionally to make sure I'm communicating in a way they can follow.
Both of those options are exactly what the writer wants to avoid though, and the reason they are using AI for grammar correction in the first place.
Thank you for demonstrating my point.
Security and ethics.
If those don't apply: as mentioned, if I realize it, I will ignore them if I can, and judge their future communications as malicious, incompetent, inconsiderate, and/or meaningless.
But if they are using it for copywriting/grammar edits, how would you know? For instance, have I used AI to help correct grammar for these replies?
I'd rather have words from a human's mind, full stop.
I'm so annoyed this morning... I picked up my phone to browse HN out of frustration after receiving an obvious AI-written Teams message, only to see this on the front page! I can't escape haha
> - The average length of documents and emails has increased.
Brevity is the soul of wit. Unfortunately, many people think more is better.
People have also veered strongly toward anti-intellectualism in recent decades. Coincidence?
There's a growing body of evidence that AI is damaging people, aside from the obvious slop-related costs of review (as a resource attack).
I've seen colleagues who were quite good at programming when we first met and who have become much worse over time, the only difference being that they were forced to use AI on a regular basis. I'm of the opinion that the distorted reflected-appraisal mechanism it engages through communication, and the inconsistency it induces, are particularly harmful, and as such the undisclosed use of AI on any third party without their consent is gross negligence if not directly malevolent.
https://fortune.com/2025/08/26/ai-overreliance-doctor-proced...
> aside from the obvious slop-related costs of review
Code-review tools (CodeRabbit/Greptile) produce enormous amounts of slop, counterbalanced by the occasional useful tip. And Cursor and the like love to produce nicely formatted but sloppy READMEs.
These tools - just like many of us humans - prioritize form over function.