I'd be curious to do a general study to see what percentage of humans can spot AI written content vs human written content on the same subject.

Specifically, is there any correlation with reading habits: do people who have always read a lot, as I have, spot it more reliably than people who don't read much?

My observation (anecdotal, of course) is that the people I know who read heavily are much better at spotting AI slop, and much more averse to it, than people who don't read at all.

Even when I've played with the latest LLMs and asked them questions, I simply don't like the way they answer; it feels off somehow.

I both read a fair amount (and long books, 800-1,000 page classic Russian novels, that kind of thing) and use LLMs.

I quite like using LLMs to learn new things. But I agree: I can't stand reading blog posts written by LLMs. Perhaps it's about expectations: from a blog post I expect to gain a view into an individual's thinking; from an AI, I'm looking into an abyss of whirring matrix-shaped gears.

There's nothing wrong with the abyss of matrices, but if I'm at a party and start talking with someone, and get the whirring of gears instead of the expected human banter, I'm a little disturbed. It feels the same with blog content: these are personal communications. Machines have their place and their uses, but if I get a machine when I'm expecting something personal, it violates my expectations.

I agree, and I'm not sure why it feels off, but I have a theory.

AI is good at local coherence, but it loses the plot over longer spans of thought (paragraphs, pages). I don't think I could identify AI sentences, but I'm totally confident I could identify an AI book.

This shows up in two ways: opening a long text with a way of thinking that isn't reflected several paragraphs later, and maintaining a repetitive "beat" in the rhythm of the writing that reads fine locally but becomes obnoxious over longer stretches. Maybe that's just regression to the mean of "voice"?