I personally don’t think I care if a blog post is AI generated or not. The only thing that matters to me is the content. I use ChatGPT to learn about a variety of different things, so if someone came up with an interesting set of prompts and follow ups and shared a summary of the research ChatGPT did, it could be meaningful content to me.
> No, don't use it to fix your grammar, or for translations, or for whatever else you think you are incapable of doing. Make the mistake. Feel embarrassed. Learn from it. Why? Because that's what makes us human!
It would be more human to handwrite your blog post instead. I don’t see how this is a good argument. The use of tools to help with writing and communication should make it easier to convey your thoughts, and that itself is valuable.
Even letting the LLM “clean it up” puts its voice on your text. In general, you don’t want its voice. The associations are LinkedIn, warnings from HR, and affiliate marketing hustles. It’s the modern equivalent of “talking like a used car salesman”. Not everyone will catch it, but do think twice.
I don't like ChatGPT's voice any more than you do, but it is definitely not HR-voice. LLM writing tends to be in active voice with clear topic sentences, which is already 10x better writing than corporate-speak.
Yep, it's like Coke Zero vs Diet Coke: 10x the flavor and 10x the calories.
Coke Zero and Diet Coke are both noncaloric.
If you’re playing the same games they play on the label, sure. There is less than one calorie per serving.
(Edit: in Diet Coke. Not too sure about Coke Zero).
What game is played? To me it seems pretty straightforward that for both the actual caloric content is ~0.
I believe it’s 0.4 calories per serving, which is less than one and rounds down to zero, but it’s not approximately zero by a long shot.
How is 0.4 kcal "not approximately zero by a long shot"?
Especially when compared to a standard Coke with around 150 kcal.
Well, it’s almost half a calorie, to begin with.
By the time I finish the can I'll have burned through more than 0.4 calories.
0 × 10 = 0
...that's the joke.
It's really not hard to say "make it in my voice", especially if it's an LLM with extensive memory of your writing.
You can say anything to an LLM, but it’s not going to actually write in your voice. When I was writing a very long blog post about “creative writing” from AIs, I briefly researched Sudowrite, which purports to be able to do exactly this. Not only could it not write convincingly in my voice (and the novel I gave it has a pretty strong narrative voice); following Sudowrite’s own tutorial, in which they have you get their app to write a few paragraphs in Dan Brown’s voice, demonstrated that it could not convincingly do that either.
I don’t think having a ML-backed proofreading system is an intrinsically bad idea; the oft-maligned “Apple Intelligence” suite has a proofreading function which is actually pretty good (although it has a UI so abysmal it’s virtually useless in most circumstances). But unless you truly, deeply believe your own writing isn’t as good as a precocious eighth-grader trying to impress their teacher with a book report, don’t ask an LLM to rewrite your stuff.
No man. This is the whole problem. Don't sell yourself short like that.
What is a writing "voice"? It's more than just patterns and methods of phrasing. ChatGPT would say "rhythm and diction and tone" and word choice. But that's just the paint. A voice is the expression of your conscious experience trying to convey an idea in a way that reflects that experience. If it were just those semi-concrete elements, we would have unlimited Dickens; the concept could translate to music, and we could have unlimited Mozart. Instead, and I hope you agree, we have crude approximations of all these things.
Writing, even technical writing, is an art. Art comes from experience. Silicon cannot experience. And experiencers (i.e., people with consciousness) can detect soullessness. To think otherwise is to be tricked; listen to anything on Suno, for example. It's amazing at first, and then you see through the trick. You start to hear it the way most people now perceive generated images as too "shiny". Have you ever generated an image and felt a feeling other than "neat"?
Only if you have a very low bar for what constitutes "in your voice".
Just ask it to write "in the style of" a few famous writers with a recognizable style. It just can't do it. It'll make an awfully cringe attempt at it.
And that's just how bad LLMs are at it. There's a more general problem. If you've ever read a posthumous continuation of a literary series by a different but skilled author, you know what I mean.
For example, "And another thing..." by Eoin Colfer is written to be the final sequel to the Hitchhiker's Guide, after Douglas Adams died. And to their absolute credit, the author Eoin Colfer, in my opinion, pretty much nails Douglas Adams's tone to the extent it is humanly possible to do so. But no matter how close he got, there's a paradox here. Colfer can only replicate Adams's style. But only Adams could add a new element, and it would still be his style. While if Colfer had done exactly the same, he'd have been considered "off".
Anyway, if a human writer can't pull it off, I doubt an LLM can do it.
I have tried this. It doesn't work. Why? A human's unique style, when executed, has a pattern, but in each work there are "experiments" that deviate from the pattern. These deviations are how we evolve stylistically. AI cannot emulate this; it only picks up on a tiny bit of the pattern, so while it may repeat a few beats of the song, it falls far short of the whole.
This is why heavily assisted AI writing is still slop. That fundamental learning that is baked in is gone. It is the same reason why corporate speak is so hated: it is basically intentional slop.
Best case scenario, this means writing new blog posts in your old voice, as reconstructed by AI; some might argue this gives your voice less opportunity to grow or evolve.
I think no, categorically. The computer can detect your typos and accidents. But if you made a decision to word something a certain way, that _is_ your voice. If a second party overrides this decision, it's now deviating from your voice. The LLM therefore can either deviate from your voice, or do nothing.
That's no crime, so far. It's very normal to have writers and editors.
But it's highly abnormal for everyone to have the _same_ editor, famous for writing exactly the text that everybody hates.
It's like inviting Uwe Boll to edit your film.
If there's a good reason to send outgoing slop, OK. But if your audience is more verbally adept, and more familiar with its style, you do risk making yourself look bad.
> especially if it's an LLM with extensive memory of your writing.
Personally I'm not submitting enough stuff to an LLM to give it enough to go on.
Exactly. It's so wild to me when people hate on generated text because it sounds like something they don't like, when they could easily tell it to set the tone to any tone that has ever appeared in text.
Respectfully, read more.
Only if you ask it to or let it lead you. Just say no.
> It would be more human to handwrite your blog post instead. I don’t see how this is a good argument. The use of tools to help with writing and communication should make it easier to convey your thoughts, and that itself is valuable.
Whether I hand write a blog post or type it into a computer, I'm the one producing the string of characters I intend for you to read. If I use AI to write it, I am not. This is a far, far, far more important distinction than whatever differences we might imagine arise from hand writing vs. typing.
> your thoughts
No, they aren't! Not if you had AI write the post for you. That's the problem!
The idea that an AI can keep the author's voice just means the voice is so unoriginal that it doesn't make a difference.
>I'm the one producing the string of characters I intend for you to read. If I use AI to write it, I am not. This is a far, far, far more important distinction than whatever differences we might imagine
That apparently is not the case for a lot of people.
Yeah, I don’t agree with the quoted bit. If you experiment with replying to emails by hand, you’ll practically avoid long threads. If you experiment with avoiding as much typing as possible by allowing an AI substitute, you’ll probably end up erasing large portions. AI pad-out followed by human pare-down might be closer to handwritten.
s/important/significant/, then, if that helps make the point clearer.
I cannot tell you that it objectively matters whether or not an article was written by a human or an LLM, but it should be clear to anybody that it is at least a significant difference in kind vs. the analogy case of handwriting vs. typing. I think somebody who won't acknowledge that is either being intellectually dishonest, or has already had their higher cognitive functions rotted away by excessive reliance on LLMs to do their thinking for them. The difference in kind is that of using power tools instead of hand tools to build a chair, vs. going out to a store and buying one.
I wasn't even arguing with you, nor saying that it doesn't matter to me; I was just pointing out an observation.
> I think somebody who won't acknowledge that is either being intellectually dishonest, or has already had their higher cognitive functions rotted away by excessive reliance on LLMs to do their thinking for them.
This feels too aggressive for a good faith discussion on this site. Even if you do think that, there's no point in insulting the humans who could engage with you in that conversation.
> I wasn't even arguing with you, nor saying that it doesn't matter to me; I was just pointing out an observation.
My interpretation of your comment was that it related to my use of the word "important", which has a more subjective connotation than "significant" and arguably allows my comment to be interpreted in two ways. The second way (that I feel people should care more about the distinction I highlighted) was not my intended meaning, since obviously people can care about whatever they want. It was a relevant observation of imprecise wording on my part.
> there's no point in insulting the humans who could engage with you in that conversation.
There would be no point in engaging them in that conversation, either.
Disagreeing with me that the difference in kind I highlighted is important is fine, and maybe even an interesting conversation for both sides. Disagreeing with me that there is a significant difference in kind is just nonsensical, like arguing that there's no meaningful difference, at any level, between painting a painting yourself and buying one from a store. How can you approach a conversation like that? Yet positions like that appear in internet arguments all the time, which are generally arguments between anonymous strangers who often have no qualms about embracing total intellectual dishonesty, because their goal is just to make their opponent mad enough that they forget the original point they were trying to make and go chasing the goalposts all over the room.
The only winning move is not to play, which requires being honest with yourself about who you're talking to and what they're trying to get out of the conversation. I am willing to share that honesty.
I am, to be clear, not saying you are one of these people.
I think of technology as offering a sliding scale for how much assistance it can provide. Your words could be literally the keys you press, or you could use some tool that fixes punctuation and spelling, or something that fixes the grammar in your sentence, or rewrites sentences to be more concise and flow more smoothly, etc. If I used AI to rewrite a paragraph to better express my idea, I still consider it fundamentally my thoughts. I agree that it can get to the point where using AI doesn’t constitute my thoughts, but it’s very much a gray area.
> It would be more human to handwrite your blog post instead.
“Blog” stands for “web log”. If it’s on the web, it’s digital; there was never a period when blogs were handwritten.
> The use of tools to help with writing and communication should make it easier to convey your thoughts
If you’re using an LLM to spit out text for you, they’re not your thoughts, you’re not the one writing, and you’re not doing a good job at communicating. Might as well just give people your prompt.
> If it’s on the web, it’s digital; there was never a period when blogs were handwritten.
This is just pedantic nonsense.
> there was never a period when blogs were handwritten.
I’ve seen exactly that. In one case, it was JPEG scans of handwriting, but most of the time, it’s a cursive font (which arguably disqualifies it as “handwritten”).
I can’t remember which famous author it was that always submitted their manuscripts as cursive writing on yellow legal pads.
Must have been thrilling to edit.
Isolated instances do not a period define. We can always find some example of someone who did something, but the point is it didn’t start like that.
For example, there was never a period when movies were made by creating frames as oil paintings and photographing them. A couple of movies were made like that, but that was never the norm or a necessity or the intended process.
The fact that this one example stands out so clearly to you gives more credence to the point that this is rare, not a common aspect of blogging.
> If you’re using an LLM to spit out text for you, they’re not your thoughts
The thoughts I put into a text are mostly independent of the sentences or _language_ they're written in. Not completely independent, but to claim thoughts are completely dependent on text (thus also the language) is nonsense.
> Might as well just give people your prompt.
What would be the value of seeing a dozen diffs? By the same logic, should we also include every draft?
>The thoughts I put into a text are mostly independent of the sentences or _language_ they're written in.
Not even true! Turning your thoughts into words is a very important and human part of writing. That's where you choose which ambiguities to leave and which to remove, what implicit shared context is assumed, important things like tone, and all sorts of other unconscious choices that matter in writing.
If you can't even make those choices, why would I read you? If you think making those choices is unimportant, why would I think you have something important to say?
Uneducated or unsophisticated people seem to vastly underestimate what expertise even is, or just how much they don't know. That's why, for example, LLMs can write better than most fanfic writers; but that bar is on the damn floor, and most people don't want to consume fanfic-level writing about things they are not fanatical about.
There's this weird and fundamental misconception in pro-AI realms that context-free "information" is somehow possible, as if you can extract "knowledge" from text, like you can "distill" a document and reduce its meaning to a few simple sentences. Like, there's this insane belief that you can meaningfully reduce a text and maintain the info.
If you reduce "Lord of the flies" to something like "children shouldn't run a community", you've lost immense amounts of info. That is not a good thing. You are missing so much nuance and context and meaning, as well as more superficial (but not less important!) things like the very experience of reading that text.
Like, consider that SOTA text compression algorithms can reduce text to around 1/10th of its original size. If you are reducing a text by more than that to "summarize" it or "boil it down to its main points", do you really think you are not losing massive amounts of information, context, or meaning?
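If you want to sanity-check that kind of ratio, here's a minimal sketch in Python using zlib on a few KB of stand-in text (zlib is nowhere near SOTA, so treat it as a floor):

    import inspect
    import json
    import zlib

    # Stand-in for a real document: the source of the stdlib json module,
    # a few KB of genuine, non-repetitive English-ish prose and code.
    text = inspect.getsource(json).encode("utf-8")

    compressed = zlib.compress(text, 9)
    print(f"original:   {len(text)} bytes")
    print(f"compressed: {len(compressed)} bytes")
    print(f"ratio:      {len(text) / len(compressed):.1f}:1")

zlib lands somewhere around 3:1 on text like this, and even the best Hutter Prize entries only manage roughly 9:1 on Wikipedia text, losslessly. A one-sentence "summary" throws away vastly more than that.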
You can rewrite a sentence on every page of Lord of the Flies, and the same important ideas would still be there.
You can have the thoughts in a different language and the same ideas are still there.
You can tell an LLM to tweak a paragraph to better communicate a nuance until you're happy with it.
---
Language isn't thought. It's extremely useful in that it lets us iterate on our thoughts. You can add LLMs into that iteration loop.
I get you wanted to vent because the volume of slop is annoying and a lot of people are degrading their ability to think by using it poorly, but "If you’re using an LLM to spit out text for you, they’re not your thoughts" is just motivated reasoning.
> If you reduce "Lord of the Flies" to something like "children shouldn't run a community"
To be honest, and I hate to say this because it's condescending, it's a matter of literacy.
Some people don't see the value in literature. They are the same kind of people who will say "what's the point of book X or movie Y? All that happens is <sequence of events>", or the dreaded "it's boring, nothing happens!". To these people, there's no journey, no pleasure with words, the "plot" is all that matters and the plot can be reduced to a sequence of A->B->C. I suspect they treat their fiction like junk food, a quick fix and then move on. At that point, it makes logical sense to have an LLM write it.
It's very hard to explain the joy of words to people with that mentality.
The language we use actually very much dictates the way we think...
For instance, there's a tribe that describes directions using only the cardinal directions, and as such they have no words for, nor mental concept of, "left" and "right".
And not coincidentally, they're all much more proficient at navigation and (obviously) have a better general sense of direction than the average human, because of the way they have to think about directions just to talk to each other.
===
This is also why the best translators don't just do a word-for-word replacement but have to think through cultural context and ideology on both sides of the conversation in order to produce a more coherent translation.
What language you use absolutely dictates how and what you think, as well as what particular message is conveyed.
> “Blog” stands for “web log”. If it’s on the web, it’s digital; there was never a period when blogs were handwritten.
Did you use AI to write this...? Because it does not follow from the post you're replying to.
Read it again. I explicitly quoted the relevant bit. It’s the first sentence in their last paragraph.
> If you’re using an LLM to spit out text for you, they’re not your thoughts, you’re not the one writing, and you’re not doing a good job at communicating. Might as well just give people your prompt.
It's like listening to Bach's Prelude in C from WTC I, where he just came up with a humdrum chord progression and uses the exact same melodic pattern for each chord, for the entire piece. Thanks, but I can write a trivial for loop in C if I ever want that. What a loser!
Edit: Lest HN think I'm cherry-picking: look at how many times Bach repeats the exact same harmony/melody, just shifting up or down by a step. A significant chunk of his output is copypasta. So if you like burritos filled with lettuce and LLM-generated blogs, by all means downvote me to oblivion while you jam out to "Robo-Bach".
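Here's roughly that trivial loop, in fact (a sketch in Python rather than C; chords transcribed from memory and simplified):

    # The opening bars of BWV 846: one fixed figuration applied
    # to a short list of chords.
    chords = [
        ["C4", "E4", "G4", "C5", "E5"],  # bar 1: C major
        ["C4", "D4", "A4", "D5", "F5"],  # bar 2: D minor seventh over C
        ["B3", "D4", "G4", "D5", "F5"],  # bar 3: G dominant seventh over B
        ["C4", "E4", "G4", "C5", "E5"],  # bar 4: back to C major
    ]
    figuration = [0, 1, 2, 3, 4, 2, 3, 4]  # the same pattern for every chord

    for chord in chords:
        half_bar = " ".join(chord[i] for i in figuration)
        print(half_bar, half_bar)  # each half-bar is played twice per bar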
"My LLM generated code is structurally the same as Bach' Preludes and therefore anyone who criticises my work but not Bach's is a hypocrite' is a wild take.
And unless I'm misunderstanding, it's literally the exact point you made, with no exaggeration or added comparisons.
Sometimes repetition serves a purpose, and sometimes it doesn’t.
Except the prompt is a lot harder and less pleasant to read?
Like, I’m totally on board with rejecting slop, but not all content that AI was involved in is slop, and it’s kind of frustrating that so many people see things in such black-and-white terms.
> Except the prompt is a lot harder and less pleasant to read?
It’s not a literal suggestion. “Might as well” is a well known idiom in the English language.
The point is that if you’re not going to give the reader the result of your research and opinions and instead will just post whatever the LLM spits out, you’re not providing any value. If you gave the reader the prompt, they could pass it through an LLM themselves and get the same result (or probably not, because LLMs have no issue with making up different crap for the same prompt, but that just underscores the pointlessness of posting what the LLM regurgitated in the first place).
It is sort of fun to bounce little ideas off ChatGPT, but I can’t imagine wanting to read somebody else’s ChatGPT responses.
IMO a lot of the dumb and bad behavior around LLMs could be solved by a “just share the prompts” strategy. If somebody wants to generate an email from bullet points and send it to me: just send the bullet points, and I can pass them into an LLM if I want.
Blog post based on interesting prompts? Share the prompt. It’s just text completion anyway, so if a reader knows more about the topic than the prompt-author, they can even tweak the prompt (throw in some lingo to get the LLM to a better spot in the latent space or whatever).
The only good reason not to do that is to save some energy in generation, but inference is pretty cheap compared to training, right? And the planet is probably doomed anyway at this point, so we may as well enjoy the ride.
AI assisted blog posts could have an interleaved mix of AI and human written words where a person could edit the LLM’s output. If the whole blog post were simply a few prompts on ChatGPT with no human directly touching the output, then sure it makes sense to share the prompt.
I tend to agree, though not in all cases. If I’m reading because I want to learn something, I don’t care how the material was generated. As long as it’s correct and intuitive, and LLMs have gotten pretty good at that, it’s valuable to me. It’s always fun when a human takes the time to make something educational and creative, or has a pleasant style, or a sense of humor; but I’m not reading the blog post for that.
What does bother me is when clearly AI-generated blog posts (perhaps unintentionally) attempt to mask their artificial nature through superfluous jokes or unnaturally lighthearted tone. It often obscures content and makes the reading experience inefficient, without the grace of a human writer that could make it worth it.
However, if I’m reading a non-technical blog, I am reading because I want something human. I want to enjoy a work a real person sank their time and labor into. The less touched by machines, the better.
> It would be more human to handwrite your blog post instead.
And I would totally read handwritten blog posts!
AI-assisted or AI-generated content tends to have an annoying wordiness or bloat to it, but only astute readers will pick up on it.
But it can make for tiresome reading. Like, a 2000-word post could have been compressed to 700 or so had a human editor pruned it.
Edit: not anymore, kek.
Somehow this is currently the top comment. Why?
Most non-quantitative content has value due to a foundation of distinct lived experience. Averages of the lived experience of billions just don't hit the same, and are less likely to be meaningful to me (a distinct human). Thus, I want to hear your personal thoughts, sans direct algorithmic intermediary.
HN favors very fresh comments, to give them all some time in the limelight.
I don't mind either. I have way too little time to write blog posts, but I have some things that I want to share. So I focus extensively on the content, and use the LLM to help with the style, phrasing, and grammar.
But I often correct the result and change some wording.
Maybe at the beginning, when I was less experienced with LLMs, I used more LLM style, but now I find it a good compromise to convey what I think without hiding the message behind my awful writing :)
Even if someone COULD write a great post with AI, I think the author is right in assuming that it's less likely than a handwritten one. People seem to use AI to avoid thinking hard about a topic. Otherwise, the actual writing part wouldn't be so difficult.
This is similar to the common objection to AI coding: the hard part is done before the actual writing. Code generation was never a significant bottleneck in most cases.
The best yarn is spun from mouth to ear over an open flame. What is this handwriting?
It's what is used to feed the flames.
People are putting out blog posts and readmes constantly that they obviously couldn't even be bothered to read themselves, and they're making it to the top of HN routinely. Often the author had something interesting to share and the LLM has erased it and inserted so much garbage you can't tell what's real and what's not, and even among what's real, you can't tell what parts the author cares about and which parts they don't.
All I care about is content, too, but people using LLMs to blog and make readmes are routinely getting garbage content past the filters and into my eyeballs. It's especially egregious when the author put good content into the LLM and pasted the garbage output at us.
Are there people out there using an LLM as a starting point but taking ownership of the words they post, taking care that what they're posting still says what they're trying to say, etc? Maybe? But we're increasingly drowning in slop.
Quality, human-made content is seldom rewarded anymore. Difficulty has gone up. The bar for quality is too high, so an alternative strategy is to use LLMs for a more lottery-like approach to content: produce as much LLM-assisted content as possible in the hope something goes viral. Given that it's effectively free to produce LLM writing, eventually something will work if enough content is produced.
I cannot blame people for using software as a crutch when human writing has become too hard and is seldom rewarded unless you are super-talented, which statistically the vast majority of people are not.
To be fair, you are assuming that the input wasn't garbage to begin with. Maybe you only notice it because it is obvious. Just like someone would only notice machine translation if it is obvious.
> To be fair, you are assuming that the input wasn't garbage to begin with.
It's not an assumption. Look at this example: https://news.ycombinator.com/item?id=45591707
The author posted their input to the LLM in the comments after receiving criticism, and that input was much better than their actual post.
In this thread I'm less sure: https://news.ycombinator.com/item?id=45713835 - it DOES look like there was something interesting thrown into the LLM that then put garbage out. It's more of an informed guess than an assumption; you can tell the author did have an experience to share, but you can't really figure out what's what because of all the slop. In this case the author redid their post in response to criticism and it's still pretty bad to me, and then they kept using an LLM to post comments in the thread, so I can't really tell how much non-garbage was going in.
What's really sad here is that it is all form over function. The original got the point across, didn't waste words, and managed to be mostly coherent. The result, after spending a lot of time coaxing the AI through the various rewrites (11!), was utter garbage. You'd hope that we somehow reach a stage where people realize that what you think is what matters, not how pretty the packaging is. But with middle management usually clueless, we've conditioned people to an audience that doesn't care either; they go by word count rather than by signal-to-noise ratio, clarity, and correctness.
This whole AI thing is rapidly becoming very tiresome. But the trend seems to be to push it everywhere, regardless of merit.
The problem is the “they’re making it to the top of HN routinely” part.
That’s true, I just wanted to offer a counter perspective to the anti-AI sentiment in the blog post. I agree that the slop issue is probably more common and egregious, but it’s unhelpful to discount all AI assisted writing because of slop. The only way I see to counteract slop is to care about the reputation of the author.
And how does an author build up said reputation?
Agreed. This short target piece is an amusing Luddite rant. No true content other than to bemoan our first stumbling steps toward using AI to write and think.
I am a reasonably good (but sloppy) writer and use Claude to help improve my text, my ideas, and the flow of sentences and paragraphs. A huge help once I have a good first draft. I treat Claude like a junior editor who is useful but requires a tight leash and sharp advice.
This thoughtless piece is like complaining about getting help from professional human editors: a profession nearly killed off over the last three decades.
Who can afford $50/hr human editorial services? Not me. Claude is a great “second best” and way faster and cheaper.
> I personally don’t think I care if a blog post is AI generated or not. The only thing that matters to me is the content.
An LLM generated blog post is by definition derivative and bland.
> I use ChatGPT to learn about a variety of different things, so if someone came up with an interesting set of prompts and follow ups and shared a summary of the research ChatGPT did, it could be meaningful content to me.
Then say so, up front.
But that's not what people do. They're lazy or lack ideas but want "content" (usually for some kind of self-promotional reason). So you get to read that.
People say "by definition" when they have no idea what the phrase actually means, and their use of it is intellectually dishonest.
Couldn’t agree more with this. AI is a tool like everything else. I mean, if you are not a native speaker, it can be handy just for polishing the style and smoothing the language quirks to some degree. Why is it that when you use autocorrect you are the boss, but when you use AI you turn into a half-brain with ChatGPT?
> I personally don’t think I care if a blog post is AI generated or not.
0% of your HN comments include URLs for sources that support the positions and arguments you've expressed at HN.[1] Do you generally not care about the sources of ideas? For example, when you study public policy issues, do you not differentiate between research papers published in the most prestigious journals and 500-word news articles written at the 8th-grade level by nonspecialist nobodies?
[1] https://hn.algolia.com/?type=comment&query=author:alyxya+htt...
I think the author’s point is that by exposing oneself to feedback, you are on the receiving end of the growth in the case of error. If you hand off all of your tasks to ChatGPT to solve, your brain will not grow and you will not learn.
Content can be useful. The AI tone/prose is almost always annoying. You learn to identify it after a while, especially if you use AI yourself.
> I use ChatGPT to learn about a variety of different things
Why do you trust the output? Chatbots are so inaccurate you surely must be going out of your way to misinform yourself.
I try to make my best judgment regarding what to trust. It isn’t guaranteed that content written by humans is necessarily correct either. The nice thing about ChatGPT is that I can ask for sources, and sometimes I can rely on that source to fact check.
> The nice thing about ChatGPT is that I can ask for sources
And it will make them up just like it does everything else. You can’t trust those either.
In fact, one of the simplest ways to find out a post is AI slop is by checking the sources posted at the end and seeing they don’t exist.
Asking for sources isn’t a magical incantation that suddenly makes things true.
> It isn’t guaranteed that content written by humans is necessarily correct either.
This is a poor argument. The overwhelming difference with humans is that you learn who you can trust about what. With LLMs, you can never reach that level.
> And it will make them up just like it does everything else. You can’t trust those either.
In tech-related matters such as coding, I've come to expect that every link ChatGPT provides as reference/documentation is simply wrong or nonexistent. I can count on the fingers of one hand the times I clicked on a link to a doc from ChatGPT that didn't result in a 404.
I've had better luck with links to products from Amazon or eBay (or my local equivalent e-shop). But for tech documentation which is freely available online? ChatGPT just makes shit up.
Sure, but a chatbot will compound the inaccuracy.
Chatbots are more reliable than 95% of people you can ask, on a wide variety of researched topics.
Yeah... you're supposed to ask the 5%.
If you have a habit of asking random lay persons for technical advice, I can see why an idiot chatbot would seem like an upgrade.
Surely if you have access to a technical expert with the time to answer your question, you aren't asking an AI instead.
Books exist
ChatGPT exists
(I'm not saying not to read books, but seriously: there are shortcuts)
...and is unreliable, hence the origin of this thread.
If I want to know about the law, I'll ask a lawyer (ok, not any lawyer, but it's a useful first pass filter). If I want to know about plumbing I'll ask a plumber. If I want to ask questions or learn about writing I will ask one or more writers. And so on. Experts in the field are way better at their field than 95% of the population, which you can ask but probably shouldn't.
There are many hundreds of professions, and most of them take a significant fraction of a lifetime to master; even then there is usually a daily stream of new insights. You can't just toss all of that information into a bucket and expect it to outperform the < 1% of people who have studied the subject extensively.
When Idiocracy came out I thought it was a hilarious movie. I'm no longer laughing, we're really putting the idiots in charge now and somehow we think that quantity of output trumps quality of output. I wonder how many scientific papers published this year will contain AI generated slop complete with mistakes. I'll bet that number is >> 0.
Surely you don't always call up and pay for a lawyer any time you have an interest or question about the law; you google it. In what world do you have the time, money, and interest to ask people about every single thing you want more information about?
I've done small plumbing jobs after asking AI if it was safe, and I've written the legal-formality nonsense that the government wanted with the help of AI. It was faster, cheaper, and I didn't bother anyone with the most basic of questions.
Indeed. The level of intellectual dishonesty on this page is staggering.
In some evaluations, it is already outperforming doctors on text medical questions and lawyers on legal questions. I'd rather trust ChatGPT than a doctor who is barely listening, and the data seems to back this up.
The problem is that you don't know on which evaluations, and you are not qualified to judge yourself. By the time you are that qualified, you no longer need the AI.
Try asking ChatGPT, or whatever your favorite AI supplier is, something difficult about a subject you are an expert in, on par with the kind of evaluations you'd expect a qualified doctor or legal professional to make. Then check the answer given, and extrapolate to fields that you are clueless about.
Sure, so long as the question is rather shallow. But how is this any better than search?
That's the funny thing to me about these criticisms. Obviously it is an important caveat that many clueless people need to be made aware of, but still funny.
AI will just make stuff up instead of saying it doesn't know, huh? Have you talked to real people recently? They do the same thing.
I would personally find it insulting if I asked someone something and they gave me ChatGPT output; I would rather they say "I don't know" and I look for answers elsewhere. If I wanted to ask ChatGPT, I would have done so myself.
Generative AI tends to be very sure of itself. It doesn't say it doesn't know when it doesn't know. Sometimes it won't even engage with the premise of the question and will instead give an answer to an easier question.
If you want this, why would you want the LLM output and not just the prompts? The prompts are faster to read and as models evolve you can get "better" blog posts out of them.
It's like being okay with reading the entirety of generated ASM after someone compiles C++.
> The only thing that matters to me is the content.
The content itself does have value, yes.
But some people also read to connect with other humans and find that connection meaningful and important too.
I believe the best writing has both useful content and meaningful connection.
Human as in a unique kind of experiential learning. We are the sum of our mistakes. So offloading your mistakes becomes less human, less leaning into the human experience.
Maybe humans aren't so unique after all, but that's its own topic.
I have human-written blog posts, and I can rest assured no one reads those either.
Yeah, same here. I’ve got to the stage where what I write is mostly just for myself as a reminder, or to share one-to-one with people I work with. It’s usually easier to put it in a blog post than spend an hour explaining it in a meeting anyway. Given the state of the internet these days, that’s probably all you can really expect from blogging.
I have those too and I don't actually care who reads them. When I write it is mostly to organize my thoughts or to vent my frustration about something. Afterwards I feel better ;)
As long as you’re not using an autopen, because that is definitely not you!
https://archive.ph/20250317072117/https://www.bloomberg.com/...
Trump uses it more than anyone.
Do you care if a scifi book was written by an AI or human, out of curiosity?
I'm not the OP, but I've been thinking about this for a little bit since I read your question. Part of me says no: what could be more sci-fi than a complete and comprehensive story written by a computer? Who wouldn't want Data to have been able to write, and to have succeeded at writing, a story that connects with his human compatriots? On the other hand, I also understand the concern and the feeling of "something lost" when I consider a story written by a human vs. a machine.
But if I'm truly honest with myself, I think in the long run I wouldn't care. I grew up on Science Fiction, and the stories I've always found most interesting were ones that explored human nature instead of just being techno fetishism. But the reality is I don't feel a human connection to Asimov, or Cherryh, or any of the innumerable short form authors who wrote for the SF&F magazines I devoured every chance I got. I remember the stories, but very rarely the names. So they might as well have been written by an AI since the human was never really part of the equation (for me as a reader).
And even when I do remember the names, maybe the human isn't one I want a lot of "human connection" with anyway. Ender's Game, the short story and later the novel, were stories I greatly enjoyed. But I feel like my enjoyment is hampered by knowing that the author of a phenomenal book, one with some interesting things to say about the pains caused by dehumanizing the other, has himself become someone who often dehumanizes others. The human connection might be ironic now, but that doesn't make the story better for me. Here too, the story might as well have been written by an AI, for all that the current person the author is represents who they were (either in reality or just in my head) when I read those stories for the first time.
Some authors I have been exposed to later in life, I have had a degree of human connection with. I felt sadness and pain when Steve Miller died and left his spouse and long time writing partner Sharon Lee to carry on the Liaden series. But that connection isn't what drew me to the stories in the first place and that connection is largely the same superficial parasocial one that the easy access into the private lives of famous people gives us. Sure I'm saddened, but honesty requires me to note I'm more sad that it reminds me eventually this decades spanning series will draw to a close, and likely with many loose ends. And so even here, if an AI were capable of producing such a phenomenal series of books, in a twisted way as a reader it would be better because they would never end. The world created by the author would live on forever, just like a "real" world should.
Emotionally I feel like I should care that a book was or wasn't written by an AI. But if I'm truly honest with myself, the author being a human hasn't so far added much to the experience, except in some ways to make it worse, or to cut short something that I wish could have continued forever.
All of that as a longwinded way of answering, "no, I don't think I would care".
Very interesting!
In contrast, I think for me a tremendous part of the joy I get from reading science fiction is knowing there's another inventive human on the other side of the page. When I know what I'm reading is the result of a mechanical computation, it loses that.
But the real noodle-bender for me is would I still enjoy the book if I didn't know?
I agree with you to a point. AI will often suggest edits which destroy the authentic voice of a person. If you as a writer do not see these suggestions for what they are, you will take them and destroy the best part of your work.
I write pretty long blog posts that some enjoy, and I dump them into various LLMs for review. I am pretty opinionated on taste, so I usually only update grammar, but it can be dangerous for some.
To be more concrete, the AI often tells me to be more "professional" and less "irreverent", which I think is bullshit. The suggestions it gives are pure slop. But if English isn't your first language or you don't have confidence, you may just accept the slop.
This.
It's all about finding the sweet spot.
Vibe coding is crap, but I love the smarter autocomplete I get from AI.
Generating whole blog posts from thin air is crap, but I love smart grammar, spelling, and diction fixes I get from AI.
I just despise the trend of commenting "I asked ChatGPT about this and this is what it said:".
It's like getting an unsolicited text with a "Let Me Google That For You" link. Yes, we can all ask ChatGPT about the thing. We don't need you to do it for us.
What is remarkable is the frequency with which I’ve heard so-called subject matter experts do this on podcasts. It seems to me a very effective way to communicate your lack of any such expertise.
So this is the danger: if you are an expert in the content, you'll recognize the AI slop.
If you are not an expert, you'll think the AI is amazing, without noticing the slop.
I'd rather do without the AI slop, thanks.