Suppose you spend months deeply researching a niche topic. You make your own discoveries, structure your own insights, and feed all of this tightly curated, highly specific context into an LLM. You essentially build a custom knowledge base and train the model on your exact mental framework.
Is this fundamentally different from using a ghostwriter, an editor, or a highly advanced compiler? If I am doing the heavy lifting of context engineering and knowledge discovery, it feels restrictive to say I shouldn't utilize an LLM to structure the final output. Yet, the internet still largely views any AI-generated text as inherently "un-human" or low-effort.
I would ignore any HN content written by a ghostwriter or editor. I guess I would flag compiler output, but I'm not sure we're talking about the same thing?
I’m on the internet for human beings. I already read a newspaper for editors and books for ghostwriters.
Not for long though, HN is dying. Just hanging around here waiting for the next thing, I guess…
Sorry man, the internet has died and is not being replaced by anything but an authoritarian nightmare.
My only guess is that if you want actual humans, you'll have to do this IRL. Of course, we as humans have gotten used to the 24/7 availability and scale of the internet, so this is going to be a problem, as these meetings won't provide the hyperactive environment we want.
Any other digital system will be gamed in one way or another.
The problem is: the structure of LLM output generally makes everything sound profound. It's very hard to tell quickly whether a comment has actual signal or is just well-written bullshit.
And because the cost of generating the comments is so low, there's no longer an implicit stamp of approval from the author. It used to be the case that you could engage with a comment in good faith, because you knew somebody had spent effort creating it, so they must have believed it was worth your time. Even on a semi-anonymous forum like HN, that used to be a reliable signal.
So a lot of the old heuristics just don’t work on LLM-generated comments, and in my experience 99% of them turn out to be worthless. So the new heuristic is to avoid them and point them out to help others avoid them.
I would much rather just read the prompt.
I hadn't seen this articulated so eloquently about LLM text output, but you're right: "LLMs make everything sound profound" and "well-written bullshit".
This has severe ramifications for internet communication in general, on forums like HN and elsewhere, where LLM-written comments seem to be sneaking in pretty much everywhere.
It's also very, very dangerous :/ The structure of the writing falsely implies authority and trustworthiness where none is warranted.