I've been using ChatGPT fairly regularly for about a year. Mostly as an editor/brainstorming-partner/copy-reviewer.
Lots of things have changed in that year, but the things that haven't are:
* So, so many em-dashes. All over the place. (I've tried various ways to get it to stop. None of them have worked long term).
* Random emojis.
* Affirmations at the start of messages. ("That's a great idea!") With a brief pause when 5 launched. But it's back and worse than ever now.
* Weird adjectives it gets stuck on like "deep experience".
* Randomly bolded words.
Honestly, it's kind of helpful because it makes it really easy to recognize content that people have copied and pasted out of ChatGPT. But apart from that, it's wild to me that a $500bn company hasn't managed to fix those persistent challenges over the course of a year.
Ah, you've hit a classic problem with <SUBJECT> :smile_with_sweat_drop:. Your intuition is right-- but let me clarify some subtleties...
Yeah, that’s a really insightful point, and you’ve kind of hit the nail on the head…
You didn't just give a compliment, you forged a symbolic bridge between islands of meaning!
yesterday it told me the "juice wasn't worth the squeeze."
I got a rock.
You can customize it to get rid of all that. I set the personality to "Robot" and added a custom instruction: "No fluff and politeness. Be short and get straight to the point. Don't overuse bold font for emphasis."
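If you're hitting it through the API instead of the app, the rough equivalent is a system message. A minimal sketch with the OpenAI Python client (the model name, instruction wording, and example question are just my choices, not anything official):

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

resp = client.chat.completions.create(
    model="gpt-4o",  # any chat model works here
    messages=[
        # System message standing in for the "Robot" personality
        # plus the custom instruction set in the ChatGPT UI.
        {
            "role": "system",
            "content": "No fluff and politeness. Be short and get straight "
                       "to the point. Don't overuse bold font for emphasis.",
        },
        {"role": "user", "content": "How do I rotate a PDF from the CLI?"},
    ],
)
print(resp.choices[0].message.content)
```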
If I tell it no fluff, the only thing that changes is that it starts out with responses like “Sure, here’s what you asked for with no fluff…”.
For the longest time I didn't know you could change its personality. This helps a lot!
> Affirmations at the start of messages. ("That's a great idea!") With a brief pause when 5 launched. But it's back and worse than ever now.
What a great point! I can’t stand it either. I get that it’s basically a meme to point out - even South Park has mocked it - but it still grates every single time.
In all seriousness, it’s so annoying. It is a tool, not my friend, and since we already approach many of its responses with skepticism, buttering me up does nothing but make me more skeptical and trust it less. I don’t want to be told how smart I am or how much a machine “empathizes” with my problem. I want a solution I can easily verify, that’s it.
Stop wasting my tokens and time with fake friendship!
Drives me nuts too. All the stuff like "OK, let me do..." or "I agree...". Stop talking like a person.
I want the Star Trek experience. The computer just says "working" and then gives you the answer without any chit-chat. And it doesn't refer to itself as if it's a person.
What we have now is HAL 9000 before it went insane.
Guys. It's basically because, among all the well-researched data, the amount of garbage is infinitely greater.
If AI is going to be useful (it isn't at the moment), real people need to cull all the banalities that Facebook, Reddit & forums have generated.
Because what you're noticing are things we typically elide in discussions with actual humans.
It is far more polite than any social media platform or forum I’ve ever seen lol
Yeah, and earlier incarnations too... I remember AI Dungeon used to cuss people out and even "leave the chat" when people acted annoying.
HAL was completely competent, until it wasn't... This is like HAL 0.9 beta mode.
Setting ChatGPT personality to “Robot” pretty much does that for me.
Meanwhile, 90% of the population is asking it to write love letters for their bfs/gfs
Man it is truly difficult to overstate all the behavioral health issues that have been emerging.
These are just symptoms and not the cause.
This comes across as an unnecessary oversimplification in service of handwaving away a valid concern about AI and its already-observed, expanding impact on our society. At the very least you should explain what you mean exactly.
Alcoholism can also be a symptom of a larger issue. Should we not at least discuss alcohol’s effects and what access looks like when deciding on a solution?
A modern Cyrano de Bergerac.
> Stop wasting my tokens and time with fake friendship!
They could hide it so that it doesn't annoy you, but I think it's not a waste of tokens. It's there so the tokens that follow are more likely to align with what you asked for. It's harder for it to then say "This is a lot of work, we'll just do a placeholder for now" or give otherwise "lazy" responses, or to continue saying a wrong thing that you've corrected it about.
I bet it also probably makes it more likely to gaslight you when you're asking something it's just not capable of, though.
The emoji thing is so bad. You can see it all over GitHub docs and other long-form docs. All section headers will have emojis and so on. Strange.
Obviously nothing solid to back this up, but I kind of feel like I was seeing emojis all over GitHub READMEs on JS projects for quite a while before AI picked it up. I feel like it may have been something that bled over from Twitch streaming communities.
Agree, this stuff was trending up very fast before AI.
Could be my own changing perspective, but what I think is interesting is how the signal it sends keeps changing. At first, emoji-heavy was actually kind of positive: maybe the project doesn't need a webpage, but you took some time and interest in your README.md. Then it was negative: having emojis became a strong indicator that the whole README was going to be very low information density, more emotive than referential[1] (which is fine for bloggery but not for technical writing).
Now there's no signal, but you also can't say it's exactly neutral. Emojis in docs will alienate some readers, maybe due to association with commercial stuff and marketing where it's pretty normalized. But skipping emojis alienates other readers, who might be smart and serious, but nevertheless are the type that would prefer WATCHME.youtube instead of README.md. There's probably something about all this that's related to "costly signaling"[2].
[1] https://en.wikipedia.org/wiki/Jakobson%27s_functions_of_lang...
[2] https://en.wikipedia.org/wiki/Costly_signaling_theory_in_evo...
There’s a pattern to emoji use in docs, especially when combined with one or more other common LLM-generated documentation patterns, that makes it plainly obvious that you’re about to read slop.
Even when I create the first draft of a project’s README with an LLM, part of the final pass is removing those slop-associated patterns to clarify to the reader that they’re not reading unfiltered LLM output.
Yeah and this explains why you see it in LLMs in the first place. They had to learn it from somewhere.
The name of HuggingFace is a reminder that it was a thing long before the current crop of LLMs.
It drives me crazy. It happens with Claude models too. I even created an instruction in CLAUDE.md to avoid them, and the miserable thing still does it from time to time.
Why?!
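For reference, the instruction is just a plain-prose section in CLAUDE.md; mine looks roughly like this (paraphrasing from memory, the exact wording is not magic):

```
## Style

- Never use emojis: not in code, comments, commit messages, docs,
  or chat responses.
- Keep section headers plain; no decorative characters.
```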
You can take my em-dashes from my cold, dead hands—I use them all the time.
On iOS in particular, the longer dash variants are easy to access: long-press the hyphen key.
Anecdotally, I use them less often these days, because of the association with AI.
On a macOS or iPadOS keyboard, Option+hyphen and Option+Shift+hyphen give en and em dashes respectively.
Don't forget the classic: "It's not just X—it's Y."
This is the main thing that immediately tells me something is AI. This form of reasoning was much less common before ChatGPT.
I don't think this is true. The LLMs use this construction noticeably more frequently than normal people, and I too feel the annoyance when they do, but if you look around I think you'll find it's pretty common in many registers of natural human English.
And each of us has patterns. I bet if you read a million of my posts, you would be annoyed with my writing idiosyncrasies too.
Yes, this is absolutely part of it, and I think an underappreciated harm of LLMs is the homogeneity. Even to the extent that their writing style is adequate, it is homogeneous in a way that quickly becomes grating when you encounter LLM-generated text several times a day. That said, I think it's fair to judge LLM writing style not to be adequate for most purposes, partly because a decent human writer does a better job of consciously keeping their prose interesting by varying their wording and so forth.
Not sure what the downvotes are for -- it's trivial to find examples of this construction from before 2023, or even decades ago. I'm not disagreeing that LLMs overuse this construction (tbh it was already something of a "writing smell" for me before LLMs started doing it, because it's often a sign of a weakly motivated argument).
Absolutely this. I feel like I'm having an immune response to my own language. These patterns irk me in a weird way. Lack of variance is jarring perhaps? Everyone sounding more robotic than usual? Mode-collapse of normal language.
It sounds like LinkedIn speak which most people have a natural immune reaction to.
Or... how can you detect the usage of Claude models in a write-up? Look for the word "comprehensive", especially if it's used multiple times throughout the article.
"Enhanced"
I notice this less with GPT-5 and GPT-5-Codex, but it has a new problem: it'll write a sentence that mostly makes sense but has one or two strange word choices that nobody would use in that situation. It tends to use a lot of very dense jargon that makes it hard to read, spitting out references to various algorithms and concepts in places where they don't actually make sense. Also, yesterday Codex refused a task from me because it would be too much work, which I thought was pretty ridiculous - it wasn't actually that much work, a couple hundred lines max.
> refused a task from me because it would be too much work
Was this after many iterations? Try letting it get some "sleep". Hear me out...
I haven't used Codex, so maybe not relevant, but with Claude I always notice a slow degradation in quality, refusals, and "<implementation here>" placeholders as iterations pile up within the same context window. One time, after making a mistake, it apologized and said something like "that's what I get for writing code at 2am". Statistically, this makes sense: long conversations between developers would run late into the night, and as they got tired, their code got sparser and crappier.
So I told it "Ok, let's get some sleep and do this tomorrow.", then, in the very next message (since the LLM has no concept of time), "Good morning! Let's do this!" And bam, it output a completely functional, giant block of code.
Human behavior is deeeeep in the statistics.
That's hilarious.
I think it's the default behavior, because it's cheaper and faster to produce than the real answer.
I assume the beginning of the answer is given to a cheaper, faster model, so that the slower, more expensive one can have time to think.
It keeps the conversation lively and natural for most people.
Would be interesting to test whether that's true: disable it with a system prompt and measure whether the time to the first word of the answer gets slower.
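A minimal sketch of that test with the OpenAI Python client, timing the first streamed content token with and without a no-preamble system prompt (model choice and prompts are just placeholders, and a single run each is noisy; you'd want to average over many trials):

```python
import time
from openai import OpenAI

client = OpenAI()

def time_to_first_token(system_prompt=None):
    """Seconds until the first streamed content chunk arrives."""
    messages = []
    if system_prompt:
        messages.append({"role": "system", "content": system_prompt})
    messages.append(
        {"role": "user", "content": "Explain how a Bloom filter works."}
    )
    start = time.monotonic()
    stream = client.chat.completions.create(
        model="gpt-4o", messages=messages, stream=True
    )
    for chunk in stream:
        # Skip role-only / empty chunks; stop at the first real content.
        if chunk.choices and chunk.choices[0].delta.content:
            return time.monotonic() - start
    return float("nan")

baseline = time_to_first_token()
no_fluff = time_to_first_token(
    "Never open with affirmations or preamble. Answer directly."
)
print(f"baseline: {baseline:.3f}s  no-fluff: {no_fluff:.3f}s")
```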
I was able to get it to briefly change that initial “You’re right!” by telling it to say something else, like “Yarr Mayte”. It stuck for a while.
I don't use ChatGPT very often, though Perplexity has it, but I find that going all caps and sounding really angry helps them to fix things.
It’s a pity that em-dashes are being shunned for their LLM association so much more than emojis are.
> Honestly, it's kind of helpful because it makes it really easy to recognize content that people have copied and pasted out of ChatGPT
Maybe it's intentional, like the "shiny" tone applied to "photorealistic" images of real people.
I am reasonably sure affirmations are a feature, not a bug. No matter how much I might disagree.
Also pretty sure it is a feature because the general population wants to have pleasant interactions with their ChatGPT, and OpenAI's user-feedback research will have told them this helps. I know some non-developer type people who mostly talk to ChatGPT about stuff like
- how to cope with the sadness of losing their cat
- ranting about the annoying habits of their friends
- finding all the nice places to eat in a city
etc.
They do not want that "robot" personality and they are the majority.
Agreed on all points.
I also recall reading a while back that it's a dopamine trigger. If you make people feel better using your app, they keep coming back for another fix. At least until they realize the hollow nature of the affirmations and start getting negative feelings about it. Such a fine line.
ChatGPT is made for normies—they love sweatdrop emojis. I recommend https://ai.dev
A TPU dies every time you say 'normie'.
"normies" such a weird way to divide the world into them and "us".