Yeah, it bugs me. We've got enough examples in this article to make a Cards Against Humanity: ChatGPT edition.
> One panelist shared a personal story that crystallized the challenge: his wife refuses to let him use Tesla’s autopilot. Why? Not because it doesn’t work, but because she doesn’t trust it.
> Trust isn’t about raw capability, it’s about consistent, explainable, auditable behavior.
> One panelist described asking ChatGPT for family movie recommendations, only to have it respond with suggestions tailored to his children by name, Claire and Brandon. His reaction? “I don’t like this answer. Why do you know my son and my girl so much? Don’t touch my privacy.”
Yeah, AI isn’t creative. You have to ask it to describe these kinds of patterns (the "it's not X, it's Y" construction shows up twice in the quotes above), then tell it in your original prompt to avoid them, just to make the output read as somewhat natural.
What I wonder is whether the author of the article recognized these patterns and didn't care, didn't recognize them at all, or simply never proofread the piece.
I gather he's operating Beyond the Prompt and isn't here to rehash prompt-engineering tips.
This made me chuckle.
Are there any good lists of these GPTisms or research on the common patterns?
Beyond the em dashes and the overuse of "delve," etc., there is a distinctive style of composition I want to understand and recognize better.
It's not written by AI.
You've said plainly elsewhere in these comments that you did use AI to write it:
> thanks, I used AI but aren't we all? I thought the point of AI is to get us to be more productive.
You've also repeatedly dismissed any criticism of the writing as "hate."
If you want readers to do you the favor of reading your work, please do them the favor of writing it.