> As a side note, if anyone I'm communicating with - personally or in business - sends responses that sound like they were written by ChatGPT 3.5, 4o, GPT-5-low, etc., I don't take anything they write seriously anymore.

What if they are a very limited English speaker, using the AI to tighten up their responses into grammatical, idiomatic English?

I'd rather have broken grammar and an honest and useful meta-signal than botched semantics.

Also, that had better not be a sensitive conversation or contain personal details or business internals of others...

Just don't.

But the meta-signal you get is detrimental to the writer, so why wouldn't they want to mask it?

If I think you're fluent, I might think you're an idiot when really you just don't understand me.

If I know they struggle with English, I can simplify my vocabulary, speak slower/enunciate, and check in occasionally to make sure I'm communicating in a way they can follow.

Both of those options are exactly what the writer wants to avoid though, and the reason they are using AI for grammar correction in the first place.

Thank you for demonstrating my point.

Security and ethics.

If those don't apply: as mentioned, if I realize what's happening, I will also ignore them where I can and judge their future communications as malicious, incompetent, inconsiderate, and/or meaningless.

But if they are using it for copywriting/grammar edits, how would you know? For instance, have I used AI to help correct the grammar in these replies?

I'd rather have words from a human's mind, full stop.