This assumes the person using LLMs to put out a blog post gives a single shit about their readers, pride, or “being human”. They don’t. They care about the view, so you load the ad that makes them a fraction of a cent, or about the share that makes them popular, so they can eventually extract money or reputation from it.

I agree with you that AI slop blog posts are a bad thing, but there are about zero people who use LLMs to spit out blog posts which will change their mind after reading your arguments. You’re not speaking their language, they don’t care about anything you do. They are selfish. The point is themselves, not the reader.

> Everyone wants to help each other.

No, they very much do not. There are a lot of scammers and shitty entitled people out there, and LLMs make it easier than ever to become one of them or increase the reach of those who already are.

If someone puts an LLM-generated post on their personal blog, then their goal isn't to improve their writing or learn about a new topic. Rather, they're hoping to "build a following" because some conman on Twitter told them it was easy. What's especially hilarious is how difficult it is to make money with a blog. There's little incentive to chase monetization in this medium, and yet people do it anyway.

> They are selfish. The point is themselves, not the reader.

True!

But when I encounter a web site/article/video that has obviously been touched by genAI, I add that source to a blacklist and will never see anything from it again. If more people did that, then the selfish people would start avoiding the use of genAI because using it will cause their audience to decline.

> I add that source to a blacklist

Please do tell more. Do you make it like a rule in your adblocker or something else?

> If more people did that, then the selfish people would start avoiding the use of genAI because using it will cause their audience to decline.

I’m not convinced. The effort on their part is so low that even after accounting for the lost audience (which will be far from everyone), it's still probably worth it to them.

I was using "blacklist" in a much more general sense, but here's how it actually plays out. Most of my general purpose website reading is done through an RSS aggregator. If one of those feeds starts using genAI, then I just drop it out of the aggregator. If it's a website that I found through web search, then I use Kagi's search refinement settings to ensure that site won't come up again in my search results. If it's a YouTube channel I subscribe to, I unsubscribe. If it's one that YouTube recommended to me, I tell YouTube to no longer recommend anything from that channel.

Otherwise, I just remember that particular source as being untrustworthy.
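For the RSS part, the idea is simple enough to sketch in a few lines. This isn't the commenter's actual setup (they just remove feeds by hand in their aggregator); it's a hypothetical illustration of pruning a feed list against a personal blocklist, with made-up example URLs:

```python
from urllib.parse import urlparse

def domain_of(url: str) -> str:
    """Extract the host from a feed URL (naive: drops only a leading 'www.')."""
    return urlparse(url).netloc.lower().removeprefix("www.")

def prune_feeds(feed_urls, blocked_domains):
    """Return only the feeds whose domain is not on the blocklist."""
    blocked = {d.lower() for d in blocked_domains}
    return [u for u in feed_urls if domain_of(u) not in blocked]

# Illustrative URLs, not real sites from the thread.
feeds = [
    "https://example.com/feed.xml",
    "https://www.slopfarm.test/rss",
    "https://goodblog.test/atom.xml",
]
print(prune_feeds(feeds, ["slopfarm.test"]))
# The slopfarm.test feed is dropped; the other two remain.
```

The same shape works for any per-source blocklist: normalize the source identifier (domain, channel ID, etc.), keep a set of blocked ones, filter on read.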

I use Kagi for this: you can block domains from appearing in your search results. https://kagi.com/settings/user_ranked

Don’t most ad platforms and search engines track bounce rate? If too many users see that generic opening paragraph, bullet list and scattering of emoji, and immediately hit back or close, they lose revenue.

That assumes most people can detect LLM writing quickly. I don’t think that’s true. In this very submission we see people referencing cases where colleagues couldn’t tell something was written by an LLM even after reading the whole thing.