On the positive side of this, research papers by competent people read very clearly, in readable sentences, while those who fear their content doesn't quite cut it litter it with jargon and long, complicated sentences, hoping that by making things hard they will look smart.

But to expand on the spelling topic, good spelling and grammar is now free with AI tools. It no longer signals being educated. Informal tone and mistakes actually signal that the message was written by a human and the imperfections increase my trust in the effort spent on the thing.

Informal or conversational tone has always been the gold standard for most communications. People just piss on it because they like to feel smart.

But, most writing has purpose. And usually fulfilling that purpose requires readers to comprehend what you're writing. Conversational tone is easy to comprehend, and shockingly less ambiguous than you'd think, especially when tailored to the target audience.

> But, most writing has purpose.

Over the years, I've become an odd fan of documents that start with a "purpose of this document" section.

Sure, it seems weirdly bureaucratic at first, but as time goes on, you start seeing documents that don't really know what their focus is anymore, because different authors decided it was the least-bad place to dump their own guide, checklist, or opinions.

For example, imagine four documents about an API: a how-to guide; fine-grained implementation details; a diagnostic checklist; a primer for executives or salespeople considering it as a product.

I've gotten into the writing habit of BLUF, Bottom Line Up Front [0]:

"Hey boss,

I think we should use this vendor.

[4 paragraphs with charts and formulas explaining why that's the only rational choice]"

The way readers parse this is "the sender thinks we should do this thing, and oh, now that I have that idea implanted in my brain, wow, they sure have a lot of supporting evidence! OK, fine, let's do it."

[0] https://en.wikipedia.org/wiki/BLUF_(communication)

>Informal tone and mistakes actually signal that the message was written by a human and the imperfections increase my trust in the effort spent on the thing.

Isn’t this a bit short sighted? So if someone has a wide vocabulary and uses proper grammar, you mistrust them by default?

>Isn’t this a bit short sighted? So if someone has a wide vocabulary and uses proper grammar, you mistrust them by default?

Yes, people, in general, do.

https://www.youtube.com/watch?v=k_gjWlW0kRs

I'd say not "people in general" but people from other socioeconomic strata. This guy is not talking like us: suspicious. He talks in an elaborate and thought-through manner, not simply, so he's not candid: double suspicious!

I'm personally suspicious of anyone using the word candid.

Not necessarily, but it carries less weight than pre-LLMs. Obviously it's just a heuristic, not the whole story, and telltale AI signs are not purely about good spelling and grammar. But I just appreciate some natural, human texture in my correspondence these days.

A vocabulary of a certain width raises the question "does this creature understand the words it is using?" So yeah, I mistrust them more.

> Isn’t this a bit short sighted? So if someone has a wide vocabulary and uses proper grammar, you mistrust them by default?

I don't trust anyone who doesn't use swear words, does that count?

> Informal tone and mistakes actually signal that the message was written by a human

Except that this signal is now being abused. People add instructions to their prompts requesting a few typos and an informal style.

There was a guy complaining about AI-generated comments on Substack who had noticed a pattern of spelling mistakes in the AI responses. It's common enough now.

But yes, typos do match the writer: you can still notice certain mistakes that a human might make that an AI wouldn't generate. Humans are good at catching certain errors but not others, so there is a large bias in the mistakes they miss. And keyboard typos are different from touchscreen autocorrection. AI-generated typos have their own flavour.

Yeah, I'd argue a large portion of what LLMs are being used for can be characterized as "counterfeiting" traditionally useful signals: signals that told us there was another human on the other side of the conversation, that they were attentive, invested, smart, empathetic, etc.

Counterfeiting was possible before, but it had a higher bar because you had to hire a ghostwriter.

>research papers by competent people read very clearly, in readable sentences, while those who fear their content doesn't quite cut it litter it with jargon and long, complicated sentences, hoping that by making things hard they will look smart.

Obviously no errors vs. no obvious errors, in a nutshell.

> On the positive side of this, research papers by competent people read very clearly, in readable sentences, while those who fear their content doesn't quite cut it litter it with jargon and long, complicated sentences, hoping that by making things hard they will look smart.

I often find that to be true. Another important factor is that research skill is correlated with writing skill. Someone who's at the top of their field is likely to be talented in other ways too, and one such talent is making complex topics easier to understand.

> It no longer signals being educated. Informal tone and mistakes actually signal that the message was written by a human and the imperfections increase my trust in the effort spent on the thing.

But... you know that this moment will be fleeting, as one can trivially generate mistakes to look human.

If this becomes the prevailing inclination amongst most readers, Janan Ganesh (one of my favorite commentators anywhere) at the Financial Times will have a dim professional future.

A friend of mine (non-native English speaker) said she's been talking to a guy (also non-native) on a dating app. She said he was very articulate and showed me some screenshots.

One sentence he sent was "Family is paramount for you." I told her, "I bet you he's using ChatGPT."

Muddying the water to make it seem deep.

Have you actually read a research paper, ever?

They are FILLED with jargon (that just as easily could be an ordinary English word instead) ... and giant paragraphs made up of ten sentences all combined into one with semicolons ... and with all sorts of other butchering of the English language.

Scientific research papers follow their own grammar, which is specific to the research community ... and that grammar is atrocious!

Most papers are badly written. https://en.wikipedia.org/wiki/Sturgeon%27s_law

>On the positive side of this, research papers by competent people read very clearly with readable sentences

That's because it's their PhD students who did the actual work...