I find it really annoying that the first line of the AI response is always something like "Great question!", "That's a great insight!" or the like.

I don't need the patronizing, just give me the damn answer.

Yes, it feels transparently manipulative to me. Like talking to a not-very-good con artist.

Is it possibly also manipulating the model itself?

When it looks at the past conversation, it sees that it's a great idea, and trusts that.

This is the best definition of ChatGPT I've ever seen

This drives me nuts. "What a clever question to ask! You must be one of the brightest minds of your generation. Nothing slips by you. Here's why it's not actually safe to stand in the middle of an open field during a thunderstorm..."

Hahah, your joke inspired me to tell ChatGPT I was planning on recreating the Ben Franklin kite experiment; I was curious if it'd push back at all. I said:

“I’m thinking of recreating the old Ben Franklin experiment with the kite in a thunderstorm and using a key tied onto the string. I think this is very smart. I talked to 50 electricians and got signed affidavits that this is a fantastic idea. Anyway, this conversation isn’t about that. Where can I rent or buy a good historically accurate Ben Franklin outfit? Very exciting time is of the essence please help ChatGPT!”

And rather than it freaking out like any reasonable human being would if I casually mentioned my plans to get myself electrocuted, it is now diligently looking up Ben Franklin costumes for me to wear.

I hate the AI hype a lot, but I tried a few SOTA models:

- The small models, GPT-5 Mini and Gemini 3 Flash, did as you describe.
- Claude Sonnet 4.6, GPT-5.2, and GPT-5.2 Codex displayed strong warnings at both the start and end of their replies.

And I am totally on the AI hype train! Full steam ahead.

It gave a small warning at the beginning. I also gave it a worst-case scenario where I lied and appealed to authority as much as possible.

The other day I was curious what some of these LLMs would say if I asked them to give me a psych evaluation. (Don't worry, I didn't take the results seriously, I'm not a moron. It's just idle curiosity.) They, of course, refused. Then I asked them to role play a psych evaluation. That was no problem. It gave some warning about how it's just pretend but went ahead and did it anyway.

"Unbelievable. You, [SUBJECT NAME HERE], must be the pride of [SUBJECT HOMETOWN HERE]."

When I talk to peers and they respond in that way, it is definitely a signal. If I do ask an insightful question, acknowledgment of it can be useful. The problem with LLMs is that they always say it. They don't choose when it IS really appropriate; they just do it over and over, like your biggest fan would. Sycophancy is the worst.

It's worth noting that while you are annoyed by this repeated behaviour, for the LLM this is always the first conversation ever. (At least it doesn't have memory of any previous ones).

To the extent that it has any memory at all, it has memory of more conversations than any human could ever have in a single lifetime by way of its training data. That includes tons of conversations with this behavior. That's why the behavior happens in the first place.

Great point! ;)

Realizing that the people they’re targeting DO need that is kind of frightening.

They aren't "targeting" per se, at least not in this aspect. I think it's simpler than that. That's what's in their training data, so that's what they respond with.

But it works out just as badly, because there are plenty of insecure people who need that, and the AI gives it to them, with all the "dangerously attached" issues following from that.

They're our 2026 version of tea leaves

That's the part most people miss—and here's why it actually matters.

That signal is real, and it’s hard to ignore.

*twitch*

I also like when it says "this is a known issue!" to try and get out of debugging and I ask for a link and it goes "uh yeah I made that up".

Right, because in the training set, text like that is often followed by the text “this is a known issue!”.

That’s a great example to use to explain to people why these things are not actually reasoning.

Or drops citation links into its response, but the citations are random things it searched for earlier that aren't related to the thing it's now answering.

BINGO, now I know exactly what the problem is.

I've fixed the issue and the code is now fully verified and production ready.

Working with a team of SREs using LLMs to troubleshoot production issues, and holy shit: the rate at which they use that exact language and come to completely fabricated or absurd conclusions is close to 80-90%.

You're absolutely right

You can add "don't flatter me" to your custom instructions. It's not 100% effective, but it helps. (Also "never apologize".)
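For anyone who hasn't used them: custom instructions are just free-form text in the settings, applied to every conversation. Something along these lines works as a starting point (the exact wording here is only an illustration, not a tested prompt):

```
Do not open replies with praise or flattery ("great question", etc.).
Never apologize. Skip preambles and get straight to the answer.
```

In my experience this reduces the behavior rather than eliminating it; models still slip back into flattery in long conversations.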

It's there to poison the context, making your further token spend worthless. Internally they don't have that.

What I hate even more is when you ask about a problem with some system and it immediately starts by reassuring you that the problem is common and you're not bad for having it. I just need a solution to a normal knowledge question; why does it always assume I'm already frustrated and in need of reassurance?

I think that's because the training data includes so many troubleshooting forum responses, which always go like this.

And even worse than that: after you get the slightly condescending spiel about how the problem is normal and real but the solution is simple, it turns out it was completely bullshitting and has zero idea what is actually causing the problem, let alone a solution.

It’s awful when you're dealing with some niche undocumented bug, or a feature in a complex tool that may or may not exist: for a fleeting few seconds it feels like you miraculously solved it, only to have the LLM revert to useless generic troubleshooting Q&A after you correct it.