> It would be more human to handwrite your blog post instead.
“Blog” stands for “web log”. If it’s on the web, it’s digital; there was never a period when blogs were handwritten.
> The use of tools to help with writing and communication should make it easier to convey your thoughts
If you’re using an LLM to spit out text for you, they’re not your thoughts, you’re not the one writing, and you’re not doing a good job at communicating. Might as well just give people your prompt.
> If it’s on the web, it’s digital; there was never a period when blogs were handwritten.
This is just pedantic nonsense.
> there was never a period when blogs were handwritten.
I’ve seen exactly that. In one case, it was JPEG scans of handwriting, but most of the time it’s a cursive font (which arguably doesn’t count as “handwritten”).
I can’t remember which famous author it was who always submitted their manuscripts in cursive on yellow legal pads.
Must have been thrilling to edit.
Isolated instances do not a period define. We can always find some example of someone who did something, but the point is it didn’t start like that.
For example, there was never a period when movies were made by creating frames as oil paintings and photographing them. A couple of movies were made like that, but that was never the norm or a necessity or the intended process.
The fact that this one example stands out so clearly to you lends credence to the point that this is rare and was never a common aspect of blogging.
> If you’re using an LLM to spit out text for you, they’re not your thoughts
The thoughts I put into a text are mostly independent of the sentences or _language_ they're written in. Not completely independent, but to claim thoughts are completely dependent on text (thus also the language) is nonsense.
> Might as well just give people your prompt.
What would be the value of seeing a dozen diffs? By the same logic, should we also include every draft?
>The thoughts I put into a text are mostly independent of the sentences or _language_ they're written in.
Not even true! Turning your thoughts into words is a very important and very human part of writing. That's where you choose which ambiguities to leave in and which to remove, what implicit shared context is assumed, important things like tone, and all sorts of other unconscious choices that matter in writing.
If you can't even make those choices, why would I read you? If you think making those choices is unimportant, why would I think you have something important to say?
Uneducated or unsophisticated people seem to vastly underestimate what expertise even is, or just how much they don't know. That's why, for example, LLMs can write better than most fanfic writers; but that bar is on the damn floor, and most people don't want to consume fanfic-level writing about things they aren't fanatical about.
There's this weird and fundamental misconception in pro-AI circles that context-free "information" is somehow possible, as if you can extract "knowledge" from text, "distill" a document, and reduce its meaning to a few simple sentences. Like, there's this insane belief that you can meaningfully shrink text and retain the information.
If you reduce "Lord of the Flies" to something like "children shouldn't run a community", you've lost immense amounts of info. That is not a good thing. You are missing so much nuance, context, and meaning, as well as more superficial (but no less important!) things like the very experience of reading that text.
Like, consider that SOTA text compression algorithms can reduce text to about 1/10th of its original size. If you are reducing a text by more than that to "summarize" it or "reduce it to its main points", do you really think you are not losing massive amounts of information, context, or meaning?
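For scale, here's a rough sketch of that ratio using an off-the-shelf lossless compressor (the ~10x figure refers to specialized SOTA compressors on large corpora; plain zlib on a short paragraph will manage far less):

```python
import zlib

# A paragraph of ordinary prose (deliberately not repetitive text,
# which would compress unrealistically well).
text = (
    "If you reduce a novel to a one-line moral, you lose the nuance, "
    "the context, and the experience of reading it. Compression makes "
    "the same point quantitatively: even a lossless compressor can only "
    "squeeze so much redundancy out of ordinary prose before real "
    "information would have to start disappearing."
).encode("utf-8")

compressed = zlib.compress(text, level=9)
ratio = len(text) / len(compressed)
print(f"{len(text)} bytes -> {len(compressed)} bytes ({ratio:.2f}x)")

# Lossless means every byte comes back; a 20x "summary" cannot do this.
assert zlib.decompress(compressed) == text
```

The point of the exercise: anything squeezing prose well past what a lossless compressor achieves is necessarily throwing information away, not merely restating it more compactly.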
You can rewrite a sentence on every page of Lord of the Flies, and the same important ideas would still be there.
You can have the thoughts in a different language and the same ideas are still there.
You can tell an LLM to tweak a paragraph to better communicate a nuance until you're happy with it.
---
Language isn't thought. It's extremely useful in that it lets us iterate on our thoughts, and you can add LLMs into that iteration loop.
I get that you wanted to vent, because the volume of slop is annoying and a lot of people are degrading their ability to think by using it poorly, but "If you’re using an LLM to spit out text for you, they’re not your thoughts" is just motivated reasoning.
> If you reduce "Lord of the Flies" to something like "children shouldn't run a community"
To be honest, and I hate to say this because it's condescending, it's a matter of literacy.
Some people don't see the value in literature. They are the same kind of people who will say "what's the point of book X or movie Y? All that happens is <sequence of events>", or the dreaded "it's boring, nothing happens!". To these people there's no journey, no pleasure in words; the "plot" is all that matters, and the plot can be reduced to a sequence of A->B->C. I suspect they treat their fiction like junk food: a quick fix before moving on. At that point, it makes logical sense to have an LLM write it.
It's very hard to explain the joy of words to people with that mentality.
The language we use actually very much dictates the way we think...
For instance, there's a tribe that describes directions using only the cardinal directions, and as such they have no words for, nor mental concept of, "left" and "right".
And, not coincidentally, they're all much more proficient at navigation and have a better general sense of direction than the average human, because of the way they have to think about directions just to talk to each other.
---
This is also why the best translators don't just do word-for-word replacement but have to think through cultural context and ideology on both sides of the conversation in order to produce a more coherent translation.
What language you use absolutely dictates how and what you think, as well as what particular message is conveyed.
> “Blog” stands for “web log”. If it’s on the web, it’s digital; there was never a period when blogs were handwritten.
Did you use AI to write this...? Because it does not follow from the post you're replying to.
Read it again. I explicitly quoted the relevant bit. It’s the first sentence in their last paragraph.
> If you’re using an LLM to spit out text for you, they’re not your thoughts, you’re not the one writing, and you’re not doing a good job at communicating. Might as well just give people your prompt.
It's like listening to Bach's Prelude in C from WTC I, where he just came up with a humdrum chord progression and uses the exact same melodic pattern for each chord for the entire piece. Thanks, but I can write a trivial for loop in C if I ever want that. What a loser!
Edit: Lest HN think I'm cherry-picking: look at how many times Bach repeats the exact same harmony/melody, just shifted up or down by a step. A significant chunk of his output is copypasta. So if you like burritos filled with lettuce and LLM-generated blogs, by all means downvote me to oblivion while you jam out to "Robo-Bach".
"My LLM generated code is structurally the same as Bach' Preludes and therefore anyone who criticises my work but not Bach's is a hypocrite' is a wild take.
And unless I'm misunderstanding, it's literally the exact point you made, with no exaggeration or added comparisons.
Sometimes repetition serves a purpose, and sometimes it doesn’t.
Except the prompt is a lot harder and less pleasant to read?
Like, I’m totally on board with rejecting slop, but not all content that AI was involved in is slop, and it’s kind of frustrating so many people see things so black and white.
> Except the prompt is a lot harder and less pleasant to read?
It’s not a literal suggestion. “Might as well” is a well-known idiom in English.
The point is that if you’re not going to give the reader the result of your research and opinions and instead will just post whatever the LLM spits out, you’re not providing any value. If you gave the reader the prompt, they could pass it through an LLM themselves and get the same result (or probably not, because LLMs have no issue with making up different crap for the same prompt, but that just underscores the pointlessness of posting what the LLM regurgitated in the first place).