The story is credited to Benj Edwards and Kyle Orland. I filtered Edwards out of my RSS reader a long time ago; his writing is terrible and extremely AI-enthusiastic. No surprise he's behind an AI-generated story.

Is he even a real person, I wonder.

He was murdered on a Condé Nast corporate retreat and they have been using an AI in his likeness to write articles ever since!

Would make for a good book: a company hires a famous writer, trains an AI on them, tortures them into signing over their likeness rights, and then murders them. It keeps up the appearance of life via video gen, voice gen, and writing gen.

Almost the plot of The Congress

https://en.wikipedia.org/wiki/The_Congress_(2013_film)

Going on my to-watch list.

> his writing is terrible and extremely AI-enthusiastic

I disagree; his writing is generally quite good. For example, in a recent article [1] on a hostile Gemini distillation attempt, he gives a significant amount of background, including the relevant historical precedent of Alpaca, which most other journalists wouldn't even know about.

1: https://arstechnica.com/ai/2026/02/attackers-prompted-gemini...

For what it's worth, both the article you're linking to and the one this story is about are immediately flagged by AI text checkers as LLM-generated. These tools are not perfect, but they're right more often than they're wrong.

>These tools are not perfect, but they're right more often than they're wrong.

Based on what in particular? The only times I've used them have been for a laugh.

Based on experience, including a good number of experiments I've done with known-LLM output and contemporary, known-human text. Try them for real and you might be surprised. Some of the good, state-of-the-art tools include originality.ai and Pangram.

A lot of people on HN have preconceived notions here based on stories they read about someone being unfairly accused of plagiarism or people deliberately triggering failure modes in these programs, and that's basically like dismissing the potential of LLMs because you read they suggested putting glue on a pizza once.

I had fun with AI detectors, particularly for images: even the best one (Hive, in my opinion) failed miserably on my tests. Maybe the ones trained on text are better, but I find them hard to trust, particularly if someone knows how to fiddle with them.

> immediately flagged by AI text checkers as LLM-generated

Proof? Which one? I would like to run a few other articles through your checker to test its accuracy.

Also filtered out the following slop generators from my RSS feed, which significantly enhanced my reading experience:

Jonathan M. Gitlin

Ashley Belanger

Jon Brodkin

I wonder how soon I will be forced to whitelist only a handful of seasoned authors.
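If your reader doesn't support blocking by author natively, it's easy enough to script. A minimal sketch, assuming Python with the feedparser library; the feed URL and author names are placeholders, not anything from this thread:

    # Sketch: drop entries by certain authors from an RSS feed (feedparser).
    import feedparser

    BLOCKED_AUTHORS = {"Author One", "Author Two"}  # hypothetical names

    def filtered_entries(feed_url):
        feed = feedparser.parse(feed_url)
        for entry in feed.entries:
            # feedparser exposes <author>/<dc:creator> as entry["author"] when
            # present; entries with no author field are kept.
            if entry.get("author", "") not in BLOCKED_AUTHORS:
                yield entry

    if __name__ == "__main__":
        # placeholder feed URL
        for entry in filtered_entries("https://example.com/feed.rss"):
            print(entry.get("title", "(untitled)"))

The same idea works as a whitelist by inverting the check, if it ever comes to that.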

> I wonder how soon I will be forced to whitelist only a handful of seasoned authors.

Twenty years ago?