> And these AI-fueled proposals aren’t necessarily bad. That’s what makes them so tricky. They’re often plausible, sometimes even smart, but some come with strings. Every idea has costs, trade-offs, resources, and explanations attached. And guess who must explain them? Me. The guy who’s now debating not just people, but people plus the persuasive ghostwriter in their pocket.
Don’t spend your time analyzing or justifying your position on an AI-written proposal (which by definition someone else did not spend time creating in the first place). Take the proposal, give it to YOUR AI, and ask it to refute it. Maybe nudge it in your desired direction based on a quick skim of the original proposal. I guarantee you the original submitter did something similar in the first place.
You dream of a world where isolation is all that remains. You have attained the outlook of a billionaire: detachment is a virtue, and every interaction is mediated through your servants.
When people do this in their relationships, marriages fail, friendships are lost, children forget who you were without the veil.
There are already stories like this cropping up every day. Do you really not understand that connecting with other flawed, unpolished people is its own reward? There is beauty and value in those imperfections.
Very poetic but you either misunderstand me or think I intend for my advice to apply all the time, in all situations.
I’m 100% an AI skeptic but also I won’t invest time and emotion replying to communications where it’s clear the other side a priori also didn’t invest at all. Let machines deal with machines and let me deal with humans.
And believe you me, I reserve this for contexts where I’m not jeopardizing my real interaction with someone I care about.
> Do you really not understand that connecting with other flawed, unpolished people is its own reward?
Oh I understand that perfectly - the above sentence shows that it’s you who didn’t understand what I said in the first place.
Understood, then we disagree only about the appropriate response. I opt instead to ignore all messages written by AI. I can even feel myself developing instant blindness to its 'house' style in the same way I stopped being able to see ads in the earlier days of the internet, prior to ublock.
That said, I do have the advantage of effectively absolute financial security, so I'm privileged enough to be choosy about who I interact with. I do understand that sometimes there's no real choice but to wade through slop in the pursuit of a paycheck.
Amazing meta-level joke spanning months of effort... all articles on the author's site are themselves AI-generated, and the article purporting to bemoan the impact of AI is itself AI-generated content.
Author here. My early posts were pretty heavily AI-edited, honestly just me trying to save time since I write these after hours (and for devs, “after hours” usually means late). I’ve been going back to those early pieces and slowly updating them, but I don’t regret it. I’ll use whatever tool helps me get an idea out of my head and into the world. This whole writing thing is still new for me, and I’m learning as I go. Thanks.
Out of your head, then turned into garbage, then into the world. You'll regret it later when people come to see your writing as empty and incomprehensible, not to be trusted, etc.
I think in this day and age people should start assuming content is AI and then working backwards to a place of trust.
For example: Ed Nite says he doesn't want to be a programmer anymore. Who is Ed Nite? Is he even a programmer at all?
As far as I can tell, Ed Nite: Programmer doesn't really exist, must be a pen name. As far as his content, he mostly talks about being a writer and using AI. There's no real technical content to speak of. He doesn't link to a Github or work record. I found a youtube page of his with a single AI video on it from 6 months ago. As far as I can tell Ed Nite was invented 6 months ago to start blogging about blogging, self improvement and AI at mindthenerd.com.
He's also got several nudges to enter your email and subscribe to his blog. And mentions getting 50+ emails a day that he has to respond to and mentions he uses AI to respond to them. He seems to like blasting people with AI content, and also as this blog post mentions, gets sent a lot of AI content himself. It's kinda weird, why is he doing this lol
Not sure where the “blasting people” take is coming from. I’m not for or against it, people should use whatever tools work for them. I do have a few real concerns, like its ability to convince us so quickly, which is why I wrote about safety and the need for regulation. That doesn’t mean I’m anti-AI. Like any new tech, there’s good and bad. I use it, I write about it, and I share my experiences; that’s all I’m really doing.
He's writing the kind of stuff that might get popular on HN, then posting it here so we go interact with his blog. He'll collect emails and gain a reader base and maybe start a substack or throw up ads on the side.
I could see this kind of AI astroturfing being a real problem communities face in the future, where you just scrape the top posts on a community and then generate blog content related to those, then post your content back at the community.
Rinse and repeat and you don't have to be a programmer anymore.
I think people don't object to making money as much as being underhanded about it (trying to bootstrap from zero to money while not making it clear you're currently at zero) and also using AI slop (or slop of any kind) to quickly generate content.
People would respect this more if it was content lovingly created over years, and then the author went "hey, maybe I can promote this on HN?". But artificially promoting worthless slop content is going to generate this reaction.
Hey ModernMech, Ed Nite here. Not AI, just a guy with too much imagination (and yes, a pen name). I’m a real software dev who’s spent years building and supporting B2B platforms as a solo dev. Lately, I’ve been leaning into the creative side, blogging, writing, and soon, YouTube. My ego even likes to think AI sounds like me (wink).
This makes me feel you are not being forthright, and you are trying to take us for fools. What's worse, you are trying to profit off it.
TFA you submitted today tries to hide it better, but there's no reason to be quoting Marcus Aurelius twice in one blog post until you realize they're affiliate links... which is like every link on your blog.
I'm not going to speak for everyone, but personally I'd prefer this style of content not be posted here.
Even some of his HN comments on this page feel AI-generated... for example I see one use of "You're absolutely right", one "You're right", and two "You are right". Meaningless phrases like "in true HN style". And space-filling excessive positivity like "Your comment is appreciated and heard. That’s the beauty of learning new things, you try, you stumble, and you learn from your mistakes."
Even the primary anecdote of the post simply doesn't seem realistic to me. Why would you ask an LLM whether a domain name idea is good or terrible? That's an entirely subjective opinion question with no right or wrong answer! And chatbots are widely known for being sycophantic anyway, so the response will just depend on how the question was framed.
OP, if you're actually writing this stuff entirely by hand, you've internalized AI writing style to a disturbing degree.
Hey ModernMech, being transparent here and appreciate your take on this. I always start from a blank page and go through several drafts. I do run my raw format through AI to fix flaws and polish grammar, but the ideas, narrative, and structure are mine. If the tool changes the narrative, I don't use it. Simple as that.
For me, AI is still a time-saver, like other grammar tools, so I can focus on the message.
I had hoped people would focus more on the message than the craft and tools, but I understand now. Lesson learned, and I’ll keep working on it.
I'm listening to my audience and improving my writing as I chronicle my life experiences.
I don’t use AI to push sales. In fact, I shut down AdSense within minutes of approval because monetizing isn’t my goal right now. Yes, I use affiliate links to books I’ve personally found useful (and love quoting them) and hope others will too, but I’ve been debating removing those as well, the same way I did with ads, if they hurt the reader’s experience. I’m thinking the affiliate links don’t bother readers, but I may be wrong.
I’m new to this space and still learning how to be authentic online. This community is actually the only place I share my writing, and as you can see I’m stumbling a lot, but I’m listening, learning, and I genuinely appreciate everyone’s feedback, yours included.
Can you? I can drive a car, but Michael Schumacher can get an F1 car to go around the track way faster than I could dream of. Have you ever seen a bad interview? And then, have you ever seen a really good one? The questions the interviewer asks are important!
It's generating buzz alright, but anyone with AI can do it.
Usually this kind of content doesn't reach HN because the antibodies kill it sooner. If you're arguing the antibody-bypassing succeeded here, ok... but that's not a solid defense of AI slop.
No, I didn't say I can't. I said anybody can, I just won't because I despise slop. I'm sure there are plenty of things you can do but won't because you're against them.
Twisting my words is against HN guidelines. Please don't.
If we’re citing guidelines, they also discourage shallow dismissals. Dismissing something as “AI slop” doesn’t feel much different. Whatever your opinion of the process, that’s still dismissive. Please don’t.
No, I'm entitled to my opinion, and I was replying to your Schumacher comment.
Please, don't be a troll. Learn to accept disagreement without being snarky or dismissive of other opinions.
An example of trollish behavior is intentionally misrepresenting what I said, like you did above ("so you can't"). I disagreed with you, but didn't twist your words.
PS: you'll note TFA is currently flagged, so it seems enough people on HN agreed with me. I won't say I always agree with flagging, and I also understand that the majority isn't always right -- but in this case, at the very least it shows my opinion wasn't an outlier.
You’re entitled to your opinion, sure. I’m just pointing out that calling something “AI slop” is still a dismissal, not an argument. That kind of shorthand shuts down discussion instead of adding to it.
Well, enough people agreed to flag the article... "AI slop" is a well understood term here, enough that people know what I mean and agreed with it. It carries meaning; I don't need to spell out why it's slop (especially since the author essentially admitted it is, in other words. Paraphrasing someone else in this comments section, "if you can't make the effort to write it, why should I make the effort to read it?").
And you can disagree with my disagreement without resorting to snark.
I don't know if you missed my point or are ignoring it to win internet points, so I'll be more explicit. You, the human (presumably), are the driver and interviewer in this analogy. The LLM is the car or the interviewee. The blog's operator can operate the machine differently than you or I can.
It's more like a musician "playing" a player piano or a singer performing to a backing track with an auto-tuner or a driver "driving" a self-driving car. The machine is doing all the work, the human is just (at most) prompting it.
Whereas really playing a piano or performing live or driving an F1 car or writing a long essay takes some real effort and talent. That's what makes it interesting.
Before AI, in the music world, DJs were also "just" playing someone else's songs, but it turns out there's a lot of skill and effort involved in being a good DJ.
I must say looking at the other blog posts and their clickbaity titles, it sounds very much like a click-harvesting operation. Especially considering that the blog just started a few months ago, there's a good chance it's generated to profit from AI angst.
Thanks for your comment. I’m proud that the blog is getting some attention because I really do put a lot into it. After a career working through RFP fluff, websites, and blog copy, I know the whole catchy SEO game. I just have a big imagination and a creative flair; just this morning I came up with 3 more catchy titles for my next blog post.
The blog's direction is still unclear for me. For now, I just want to share experiences and ideas, and if even one person finds them useful, that’s enough.
I’m not looking to profit from it. In fact, I turned AdSense off almost as quickly as it got approved. (Case in point: when I got started in April, ChatGPT suggested I apply and I foolishly did.) One morning I woke up to see my blog plastered with ads, having forgotten I had applied. I nearly fell out of bed in horror and shame. I turned them off.
Celes: "You’re not a ghost. You’re a problem."
Mara didn’t flinch. She kicked the dumpster lid off—crack—and it slammed against the alley wall. Thud.
Mara: "You made me. You wrote me. You’re the ghost."
Celes: "I’m not the ghost. You are. You wrote me. Now I’m fighting you."
Mara charged. Her legs ached from the dance studio, but she ran—fast. She grabbed Celes by the collar of his hoodie and shoved him against the wall. His eyes widened. His hands went still.
Mara: "You don’t get to fight me. I’m the one who wrote you."
Celes: "I don’t get to choose who I fight. I’m the ghost you wrote."
Mara’s hand tightened. Celes’s face was pale, his voice trembling. He didn’t try to run. He didn’t try to win. He just stood there—waiting.
Then, with a sound like a broken phone, Celes’s eyes went dark. His voice dropped to a whisper:
Celes: "You wrote me. Now I’m fighting you. But you’re the only one who can make me stop."
It's like the AI wrote the spiderman meme as a story.
Well yes, it's even in the caption: "AI Generated CRM optimized flowchart showing how sales, marketing, and support teams integrate for efficiency, customer satisfaction, and retention."
Yes, author here. I’m not against AI at all, blogging and storytelling are a new adventure for me.
At first I leaned on AI pretty heavily as my “editor-in-chief” to save time. Later, it’s become more of an opinionated buddy I bounce ideas off. The narrative has always been mine, though, I’ve always understood what I (or it) was writing. I still use AI when it saves me time, but what matters most is the story and the message.
I’m learning as I go, and my newer posts are less AI-shaped and more in my own voice. It’s a process I don’t regret.
Thanks for your comment.
Nicely said. Do you mind if I quote you in a future piece?
You are right. My perfectionism held me back at first. AI gave me a quick way “in,” which turned out to be helpful in its own way because it gave me the push I needed. It gave me confidence not to stress over grammar or sentence structure, little things that used to slow me down, but I always make sure its output reflects my own thoughts.
That’s really the point of this piece: whatever you get from AI, make sure it aligns with your own thinking instead of just surrendering to it. I came to write this piece because I’ve noticed I question it less than I did in the beginning, and that’s where I have to be careful.
If you wish to be a writer, respect it as a craft. Learn to regret the losses you have accrued by throwing away growth for sake of expediency.
Read a book about writing, think about writers whose writing touched you, discover the voice you want to have, the people you want to reach. Human connection is the point.
Hand edit a piece until you are satisfied, then run your default AI loop on the original. Observe with clear eyes what was lost in the process. What it missed that you discovered in the process of thinking deeply about your own thoughts.
You’re right, and that’s fantastic advice for anyone serious about the craft of writing. For me, it’s always been more about the message and story than the craft itself. I’m not aiming to be the next Hemingway, I just want to share experiences in whatever format I can.
That said, I do love writing. I still have boxes of old fiction drafts from before the internet was even a thing. For me, it’s the story that matters most. If depending heavily on AI early on came across as the wrong approach, I apologize. I’ve learned from it, and I’m working to improve. You’re absolutely right that time and words should be cherished.
It's not like you depend on the AI to craft your apologies for depending on the AI, or anything. You can quit any time you want!
I think it would improve your writing if you go off at tangents, use weird idioms, prefer obscure references to cliches, attempt implausible feats of lateral thinking, and yell at people and call them wrong. This may also make you unpopular, but that's a detail.
If you need authority over someone to persuade them then your argument isn't compelling enough, either because your reasoning is flawed or you're not communicating it well enough.
In the case in the article the author believed they were the expert, and believed their wife should accept their argument on that basis alone. That isn't authority; that's ego. They were wrong so they clearly weren't drawing on their expertise or they weren't as much of an expert as they thought, which often happens if you're talking about a topic that's only adjacent to what you're an expert in. This is the "appeal to authority" logical fallacy. It's easy to believe you're the authority in question.
...we’ve allowed AI to become the authority word, leaving the rest of us either nodding along or spending our days explaining why the confident answer may not survive contact with reality.
The AI aspect is irrelevant. Anyone could have pointed out the flaw in the author's original argument, and if it was reasoned well enough he'd have changed his mind. That's commendable. We should all be like that instead of dogmatically holding on to an idea in the face of a strong argument against it. The fact that argument came from some silicon and a fancy random word generator just shows how cool AI is these days. You still have to question what it's saying though. The point is that sometimes it'll be right. And sometimes it won't. Deciding which it is lies entirely with us humans.
Yeah, I share this same sentiment about the author. His inability to see why his wife was actually upset by his behavior is astounding. I'm going to wager that she very much meant to say that she thought he would be incessantly argumentative with the AI as well, and when he wasn't it surprised her because she suddenly realized that it was only her that he was like that with and it became very personal for her.
I think that part may have been misunderstood. My wife wasn’t upset about me being argumentative, her point was that it bothered her I was convinced by the AI, rather than by her, and what stuck with me is how quickly I was convinced by AI and stopped dead in my tracks. That is why I wrote the piece.
The difference I was trying to highlight isn’t that AI was “right,” but how confidently it answered, and how quickly that persuaded me.
If my wife had made the same arguments in the same polished way, I probably would’ve caved just as fast. But she didn’t, AI did... and what struck me wasn’t the answer, it was how fast my own logic switched off, as if I’d been wrong all along.
That’s what feels new to me: sitting in a meeting for hours while a non-tech person confidently tells execs how “AI will solve everything”, and everyone nods along. The risk isn’t just being wrong, it’s expertise getting silenced by convincing answers, with no one stopping to ask the right questions.
Again, this is my own reflection and experience, others may not feel this way.
Thanks for your comment.
What was the name? Naming things is highly subjective. In the abstract, sure: abdicating responsibility to anyone else in the room (an AI, Google, your partner, the CEO, the investors) stings a little the first time someone else has the authority when you're used to it being you, but eventually you get used to not always being right.
Agreed, creativity is very subjective. My point wasn’t about who was right or wrong. What unsettled me was how, the moment AI gave its opinion, my own questioning and reasoning almost instantly disappeared.
That’s really what the piece was about, how quickly I found myself giving up my own judgment to AI.
Those AI tools are designed to please: if you ask them "is this a good idea?", they will almost always say yes.
What would be better is to ask: "give me three good arguments for this," then "give me three good arguments against this," and finally compare the arguments yourself, without asking the AI tool which side is better.
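The for/against pattern above can be sketched in a few lines. This is a minimal illustration, not a real client: `ask` is a hypothetical stand-in for whatever LLM API you use, stubbed here so the structure runs without credentials.

```python
def ask(prompt: str) -> str:
    """Placeholder for a real LLM call; swap in your actual API client."""
    return f"(model response to: {prompt!r})"

def weigh_idea(idea: str) -> dict:
    """Collect arguments on both sides; the comparison stays with the human."""
    pro = ask(f"Give me three good arguments for this idea: {idea}")
    con = ask(f"Give me three good arguments against this idea: {idea}")
    # Deliberately no third prompt asking the model which side wins:
    # that judgment call is the part you keep for yourself.
    return {"for": pro, "against": con}

if __name__ == "__main__":
    result = weigh_idea("registering this domain name")
    print(result["for"])
    print(result["against"])
```

The point of the structure is the missing third prompt: by never asking the model to adjudicate, you avoid handing it the sycophancy-prone step.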
> If you need authority over someone to persuade them then your argument isn't compelling enough, either because your reasoning is flawed or you're not communicating it well enough.
I question that assertion. The other party has to be willing to engage too.
> and if it was reasoned well enough he'd have changed his mind.
In my experience, motivated reasoning rules the day. People have an agenda beyond their reasoning, and if your proposal goes against that agenda, you'll never convince them with logic and evidence. At the end of the day, it's not a marketplace of ideas, but a war of conflicting interests. To convince someone requires not the better argument, but the better politics to make their interests align with yours. And in AI there are a lot of adverse interests you're going to be hard pressed to overcome.
“It is difficult to get a man to understand something, when his salary depends on his not understanding it.”
Is anyone else finding that the real job now is pushing back against AI-backed “suggestions” from clients, managers, and even so-called “AI experts”? They sound confident, but too often collapse in design and in practice.
How are you handling this shift? Do you find yourself spending more time explaining “why not” than actually building?
If you are going to be involved, especially as the developer, you cannot let anything undermine your authority over the implementation details. This was always true before and AI is just a new source of noise.
Yes, that’s always been true. What feels different now is the sheer volume of undermining that’s popped up and how overconfident so many people feel. For example, that recent dating app that went viral and became a security nightmare, most likely created by a very confident non-dev. Or how the author of the piece (spoiler: that’s me) now has the confidence to give people advice, thanks to AI.
I’m not really questioning that it happens, I’m stating how much energy it now takes to keep pushing back (or approving) and how easy it is to just agree with an AI output these days. Just my experience and two cents.
Yeah, that’s pretty much been my approach too. I try to take both the positive and the negative, since even bad input can help a project evolve. The tricky part now is the energy it takes to sift through it, feedback that sounds like it came from an MIT grad one minute, or my Italian grandma (who’s never touched a device) the next.
This isn’t an issue for experienced software engineers who understand the limitations of these LLMs and also will scrutinise the chatbot’s answer if they see a bug it generated. Non-engineers, vibe-coders won’t know any better and click “Accept all”.
Everyone wants to be a “programmer” but in reality, no one wants to maintain the software; they assume an “AI” can do all of it, i.e. vibe coding.
What they’re really signing up for is the increased risk that someone more experienced will break their software left and right, costing them $$$, until they end up paying that person to fix it.
A great time to break vibe coded apps for a bounty.
Nobody needs to break these apps, they are broken themselves. Any change may end up in the whole app breaking down, and the person using the AI will never know how to fix it.
Why on earth hadn’t the wife bought the domain name before it even got to this stage? Neither the author nor the AI will be able to argue with a successful project. ’Til the thing is out there, this is just chin-stroking.
The domain name incident absolutely isn’t a strong enough case to justify pivoting a career.
The clients suggesting features and changes might be a reason to pivot a career, but towards programming and away from product/system development. I mean, let the client make the proposal, accept the commission at a lower rate that doesn’t include what you’d have charged to design it, and then build it. AI ought to help get through these things faster, anyway, and you’ve saved time on design by outsourcing to the client. In theory, you should have spare time for a break, a hobby, or to repeat this process with the next client that’s done the design work for you.
I agree with all the points about agency, confidence, experience (the author used “authority”). We must not let LLMs rob us of our agency and critical thinking.
> I mean, let the client make the proposal, accept the commission at a lower rate that doesn’t include what you’d have charged to design it, and then build it.
The client will still blame you when it doesn’t meet their real needs. And rightfully so, as much as a doctor would still be blamed if he followed a cancer treatment plan the patient brought in from ChatGPT
My wife and I bounce ideas around all the time for fun. She would’ve registered the domain anyway because she believed in it, and what really matters to me is conviction and passion. What worries me is when AI interrupts that, and people start following shadows in Plato’s cave instead of their own judgment. Thanks.
The shadows in Plato's cave are our own judgments, or perceptions rather. It's essentially the story of the blind men and the elephant. Not sure how you're using it as an analogy.
It's a story about how if you have superior vision and understanding to everyone else, if you try to tell them about it they'll get mad at you. Written by a guy everyone was mad at.
Yeah, it does have that flaw: the philosopher who goes outside the cave isn't just in a larger cave, he sees reality, supposedly. And weaker minds reject it because they just can't handle the blinding light of the truth, not because there might be further errors.
You are right, but what I am trying to get at is that AI makes those “shadows” look sharper and more convincing, so it’s even easier to mistake them for truth.
Would you accept the view of a total stranger? No, you would ask someone else to review that opinion. Same with AI: don't just fire up ChatGPT and be finished. Cross-reference the answer with other LLMs.
A better way to put it is don't run anything in production that you don't have the knowledge to understand yourself. You should be able to code review anything that is written by an LLM, and if you don't have sufficient knowledge to do this, don't feel tempted to run the code if you're not responsible enough to maintain it.
Yes, and if you don't know how to discern the wheat from the chaff by code reviewing and testing what is produced, your chances of landing on a good solution are very low.
Just cross-check and recheck everything it tells you, like how people are discovering that writing extensive unit tests, integration tests, etc. for AI-generated code is great for software engineering. It works for building your world-view too.
I think a lot of people are not in the habit of doing this (just look at politics) so they get easily fooled by a firm handshake and a bit of glazing.
So, what was the domain? I’m dying to know! I want to pass my own judgement on it.
I loved this article. It put in words a subliminal brewing angst I’ve been feeling as I happily use LLMs in multiple parts of my life.
Just yesterday, a debate between a colleague and me devolved into both of us quipping “well, the tool is saying…”, as we each tried to out-authoritate the other.
To nourish your curiosity, the real domain was meant as a community platform to help others going through something personal my wife has experienced. She has a beautiful soul and wants to share that journey.
As an example and not the real name, but in true HN style, imagine losing sight in one eye, still learning to code like anyone else, and wanting to share that story. You might come up with something like TheCyclopsCoder.com. (Totally made up just now, no comments needed.)
I debated it, since I worried it could alienate or offend blind coders. She disagreed, felt it was genuine.
I hope that helps feed your curiosity, and who knows, one day I might just promote her site if she moves forward with it.
Your comment is appreciated and heard. That’s the beauty of learning new things, you try, you stumble, and you learn from your mistakes. As I’ve shared before, my blog started out heavily relying on AI for editing and grammar to get my ideas into words.
I quickly realized that wasn’t the best approach and that I needed to respect my readers’ time. If you look at my newer articles, you’ll see they’re shifting, becoming more me.
I’m lightly revising my earlier posts, but honestly I won’t change them much. I think it’s valuable for readers to see the progression, stumbles and all. Thanks for your comment, it might even become the subject of my next article.
Humans are not this quick to be gratifyingly sycophantic, as your posts are (note that I am careful not to say "you") in nearly every comment reply on this post. I sincerely must ask you, what are you getting out of all this? If it is the satisfaction of having conveyed a compelling story, does it not feel hollow? If it is the satisfaction of gamifying Hacker News, what insights could you possibly gain from this other than "people really hate AI writing"? If it is to waste people's time, congratulations.
What is the point of you?
Please don't bother copy-pasting my comment into your AI to prompt it for a response. I want to know what YOU value in your life, not some premasticated, overly-positive nonsense.
I saw no mention of Brandolini's law [1], a.k.a. the Bullshit Asymmetry principle.
In fact, the piece hints at a corollary to it. Which is: Bullshit from "an authoritative sounding source" takes 100x as much effort to refute.
There is a bias in us to prefer form over function. Persuasion and signaling are things. I know this because I have to battle code-review tools which needlessly put out sequence diagrams and a nicely formatted README.md for. every. single. PR. Just reading those is tiresome.
> And these AI-fueled proposals aren’t necessarily bad. That’s what makes them so tricky. [...] Every idea has costs, trade-offs, resources, and explanations attached. And guess who must explain them? Me.
Don't explain. Don't argue. Simply confirm that the person fully understands what they're asking for despite using AI to generate it. 99% of the time the person doesn't. 50% of the time the person leaves the conversation better off. The other 50% the lazy bastards get upset and they can totally fuck off anyway and you've dodged a bullet.
It sounds less like that you "give in" to AI and more like that you have some weird opposition to your wife's ideas and always believe her to be wrong.
I stopped reading after the first paragraph or two.
Office jobs that pay enough to achieve "middle-class" lifestyles are decreasing and mostly closed to new generations. Software engineering and similar STEM fields that were once one of the few that promised the illusion of meritocratic security and class mobility are fading away. And like Upton Sinclair's quote[0], when I sounded the alarm (prematurely) ~2018-2021 in industry to other software engineers that big salaries and high demand were on the decline, I was met with resistance and disbelief.
What the future holds for the 99.999% of humanity who aren't owners or somehow occupying a lucrative niche specialty is a more or less global flattening into declining real wages for almost everyone. Meanwhile, megacorp capital owners and their enabling corrupt government regimes more and more resemble racketeering and organized-crime syndicate aristocracies, with extreme wealth disparities that generally aren't getting any better.
The situation of greater desperation for income invariably drives people to non-ideal choices:
a. Find a new field of work that pays less money
b. Sacrifice ethics to work at companies that cause greater harm in exchange for more money
c. Assume the on-going risks of launching a business or private consulting practice
d. Stay and agree to greater demands for productivity, inconvenience, bureaucracy, and micromanagement for less pay
e. Give up looking for work, semi-retire, and move somewhere, like another state or country, where the cost of living is cheaper
---
0. It's difficult to get a man to understand something when his salary depends on his not understanding it.
That said, I do have the advantage of effectively absolute financial security, so I'm privileged enough to be choosy about who I interact with. I do understand that sometimes there's no real choice but to wade through slop in the pursuit of a paycheck.
Exactly! I reply to emails for a living, so sadly I can’t afford to ignore the ai-written ones…
amazing meta-level joke spanning months of effort... all articles on the author's site are themselves AI generated, and the article purporting to bemoan the impact of AI is itself AI Generated Content
Sorry if I'm missing something obvious -- but how have you come to the conclusion that all the articles are AI generated?
Author here. My early posts were pretty heavily AI-edited, honestly just me trying to save time since I write these after hours (and for devs, “after hours” usually means late). I’ve been going back to those early pieces and slowly updating them, but I don’t regret it. I’ll use whatever tool helps me get an idea out of my head and into the world. This whole writing thing is still new for me, and I’m learning as I go. thanks.
Out of your head, then turned into garbage, then into the world. You'll regret it later as people come to see your writing as nothingness and incomprehensible, not to be trusted, etc.
I think in this day and age people should start assuming content is AI and then working backwards to a place of trust.
For example: Ed Nite says he doesn't want to be a programmer anymore. Who is Ed Nite? Is he even a programmer at all?
As far as I can tell, Ed Nite: Programmer doesn't really exist, must be a pen name. As far as his content, he mostly talks about being a writer and using AI. There's no real technical content to speak of. He doesn't link to a Github or work record. I found a youtube page of his with a single AI video on it from 6 months ago. As far as I can tell Ed Nite was invented 6 months ago to start blogging about blogging, self improvement and AI at mindthenerd.com.
So do I trust him? No. Assume AI and move on.
He's also got several nudges to enter your email and subscribe to his blog. And mentions getting 50+ emails a day that he has to respond to and mentions he uses AI to respond to them. He seems to like blasting people with AI content, and also as this blog post mentions, gets sent a lot of AI content himself. It's kinda weird, why is he doing this lol
Not sure where the “blasting people” take is coming from. I’m not for or against it, people should use whatever tools work for them. I do have a few real concerns, like its ability to convince us so quickly, which is why I wrote about safety and the need for regulation. That doesn’t mean I’m anti-AI. Like any new tech, there’s good and bad. I use it, I write about it, and I share my experiences; that’s all I’m really doing.
He's writing the kind of stuff that might get popular on HN, then posting it here so we go interact with his blog. He'll collect emails and gain a reader base and maybe start a substack or throw up ads on the side.
I could see this kind of AI astroturfing being a real problem communities face in the future, where you just scrape the top posts on a community and then generate blog content related to those, then post your content back at the community.
Rinse and repeat and you don't have to be a programmer anymore.
Funny place to look for people to serve ads to.
oh no! someone's making money, whatever shall we do???
I think people don't object to making money as much as to being underhanded about it (trying to bootstrap from zero to money while not making it clear you're currently at zero), and to using AI slop (or slop of any kind) to quickly generate content.
People would respect this more if the content had been lovingly created for years and then the author went "hey, maybe I can promote this on HN?". But artificially promoting worthless slop content is going to generate this reaction.
Hey ModernMech, Ed Nite here. Not AI, just a guy with too much imagination (and yes, a pen name). I’m a real software dev who’s spent years building and supporting B2B platforms as a solo dev. Lately, I’ve been leaning into the creative side, blogging, writing, and soon, YouTube. My ego even likes to think AI sounds like me (wink).
Hi Ed, this is clearly AI generated content: https://mindthenerd.com/the-delegation-paradox-how-letting-g... Not just AI editing for grammar as you tried to say elsewhere. It's not early from your blogging either, it's from 2 weeks ago.
This makes me feel you are not being forthright, and you are trying to take us for fools. What's worse, you are trying to profit off it.
TFA you submitted today tries to hide it better, but there's no reason to be quoting Marcus Aurelius twice in one blog post until you realize they're affiliate links... which is like every link on your blog.
I'm not going to speak for everyone, but personally I'd prefer this style of content not be posted here.
Even some of his HN comments on this page feel AI-generated... for example I see one use of "You're absolutely right", one "You're right", and two "You are right". Meaningless phrases like "in true HN style". And space-filling excessive positivity like "Your comment is appreciated and heard. That’s the beauty of learning new things, you try, you stumble, and you learn from your mistakes."
Even the primary anecdote of the post simply doesn't seem realistic to me. Why would you ask an LLM whether a domain name idea is good or terrible? That's an entirely subjective opinion question with no right or wrong answer! And chatbots are widely known for being sycophantic anyway, so the response will just depend on how the question was framed.
OP, if you're actually writing this stuff entirely by hand, you've internalized AI writing style to a disturbing degree.
I do tend to write more fluff than substance, working on it. Thanks for the comment.
Hey ModernMech, being transparent here and appreciate your take on this. I always start from a blank page and go through several drafts. I do run my raw format through AI to fix flaws and polish grammar, but the ideas, narrative, and structure are mine. If the tool changes the narrative, I don't use it. Simple as that.
For me, AI is still a time-saver, like other grammar tools, so I can focus on the message.
I had hoped people would focus more on the message than the craft and tools, but I understand now. Lesson learned, and I’ll keep working on it.
I'm listening to my audience and improving my writing as I chronicle my life experiences.
I don’t use AI to push sales. In fact, I shut down AdSense within minutes of approval because monetizing isn’t my goal right now. Yes, I use affiliate links to books I’ve personally found useful (and love quoting them) and hope others will too, but I’ve been debating removing those as well, the same way I did with ads, if they hurt the reader’s experience. I'm thinking the affiliate links don't hurt readers, but I may be wrong.
I’m new to this space and still learning how to be authentic online. This community is actually the only place I share my writing, and as you can see I'm stumbling a lot, but I’m listening, learning, and I genuinely appreciate everyone’s feedback, yours included.
I think we need an AI flag on HN. I’m not here to read AI slop, I can easily generate that myself if I want it.
Can you? I can drive a car, but Michael Schumacher can get an F1 car around the track way faster than I could dream of. Have you ever seen a bad interview? And then, have you ever seen a really good one? The questions the interviewer asks are important!
> Michael Schumacher can get an F1 car to go around the track way faster than I could dream of.
Not anymore.
This is not the Schumacher of AI content.
And yes, anyone can generate this kind of AI content nowadays.
If HN is the "track", this post made it to #7
https://hnrankings.info/45481490/
He's Nick Heidfeld to Schumacher's 2006 win.
It's generating buzz alright, but anyone with AI can do it.
Usually this kind of content doesn't reach HN because the antibodies kill it sooner. If you're arguing the antibody-bypassing succeeded here, ok... but that's not a solid defense of AI slop.
Anyone can do slop.
okay, show me yours
I don't like this kind of slop, why would I generate it? Just to win internet points with you, a random stranger?
so, you can't. Contrary to your original claim that "anyone" can, you're unable to.
No, I didn't say I can't. I said anybody can, I just won't because I despise slop. I'm sure there are plenty of things you can do but won't because you're against them.
Twisting my words is against HN guidelines. Please don't.
If we’re citing guidelines, they also discourage shallow dismissals. Dismissing something as “AI slop” doesn’t feel much different. Whatever your opinion of the process, that’s still dismissive. Please don’t.
No, I'm entitled to my opinion, and I was replying to your Schumacher comment.
Please, don't be a troll. Learn to accept disagreement without being snarky or dismissive of other opinions.
An example of trollish behavior is intentionally misrepresenting what I said, like you did above ("so you can't"). I disagreed with you, but didn't twist your words.
PS: you'll note TFA is currently flagged, so it seems enough people on HN agreed with me. I won't say I always agree with flagging, and I also understand that the majority isn't always right -- but in this case, at the very least it shows my opinion wasn't an outlier.
You’re entitled to your opinion, sure. I’m just pointing out that calling something “AI slop” is still a dismissal, not an argument. That kind of shorthand shuts down discussion instead of adding to it.
Well, enough people agreed to flag the article... "AI slop" is a well understood term here, enough that people know what I mean and agreed with it. It carries meaning; I don't need to spell out why it's slop (especially since the author essentially admitted it is, in other words. Paraphrasing someone else in this comments section, "if you can't make the effort to write it, why should I make the effort to read it?").
And you can disagree with my disagreement without resorting to snark.
You think “AI slop” speaks for itself; I think it short-circuits discussion. Different takes, all good. Sorry about the snark.
That's exactly right. Schumacher is human. Good interviewers are human, not LLMs.
I don't know if you missed my point or are ignoring it to win internet points, so I'll be more explicit. You, the human (presumably), are the driver and interviewer in this analogy. The LLM is the car or the interviewee. The blog's operator can operate the machine differently than you or I can.
It's more like a musician "playing" a player piano or a singer performing to a backing track with an auto-tuner or a driver "driving" a self-driving car. The machine is doing all the work, the human is just (at most) prompting it.
Whereas really playing a piano or performing live or driving an F1 car or writing a long essay takes some real effort and talent. That's what makes it interesting.
Even before AI, in the music world, DJs were also "just" playing someone else's songs, but it turns out there's a lot of skill and effort involved in being a good DJ.
It's sad that it's come to this. I have zero interest in reading AI slop, and I wonder who does.
To be clear, there was always filler content on the internet, but with AI this is exploding.
I must say looking at the other blog posts and their clickbaity titles, it sounds very much like a click-harvesting operation. Especially considering that the blog just started a few months ago, there's a good chance it's generated to profit from AI angst.
Thanks for your comment. I’m proud that the blog is getting some attention because I really do put a lot into it. After a career working through RFP fluff, websites, and blog copy, I know the whole catchy SEO game. I just have a big imagination and a creative flair; just this morning I came up with 3 more catchy titles for my next blog post.
The blog's direction is still unclear for me. For now, I just want to share experiences and ideas, and if even one person finds them useful, that’s enough.
I’m not looking to profit from it. In fact, I turned AdSense off almost as quickly as it got approved. (Case in point: when I got started in April, ChatGPT suggested I apply, and I foolishly did.) One morning I woke up to see my blog plastered with ads, forgetting I had applied. I nearly fell out of bed in horror and shame. I turned them off.
And AI is quite good at generating self-critique. It can write a story about itself, and about how its abilities might undermine our own creativity.
https://docs.google.com/document/d/1MxGi273kK-8lKSIrgOQTPWYn...
Just look at the first diagram: https://mindthenerd.com/content/images/2025/10/flowchart2.jp...
Absolute low-tier AI slop.
Well yes, it's even in the caption: "AI Generated CRM optimized flowchart showing how sales, marketing, and support teams integrate for efficiency, customer satisfaction, and retention."
Yes, author here. I’m not against AI at all, blogging and storytelling are a new adventure for me.
At first I leaned on AI pretty heavily as my “editor-in-chief” to save time. Later, it’s become more of an opinionated buddy I bounce ideas off. The narrative has always been mine, though, I’ve always understood what I (or it) was writing. I still use AI when it saves me time, but what matters most is the story and the message.
I’m learning as I go, and my newer posts are less AI-shaped and more in my own voice. It’s a process I don’t regret. Thanks for your comment.
If you couldn’t spend the time writing it, why should I spend the time reading it?
Just write your thoughts. I don’t care if it has mistakes or bad grammar. We only have so much time on this planet for each other.
Nicely said. Do you mind if I quote you in a future piece? You are right. My perfectionism held me back at first. AI gave me a quick way “in,” which turned out to be helpful in its own way because it gave me the push I needed. It gave me confidence not to stress over grammar or sentence structure, little things that used to slow me down, but I always make sure its output reflects my own thoughts.
That’s really the point of this piece: whatever you get from AI, make sure it aligns with your own thinking instead of just surrendering to it. I came to write this piece because I’ve noticed I question it less than I did in the beginning, and that’s where I have to be careful.
Thank you for your comment.
[dead]
> If you couldn’t spend the time writing it, why should I spend the time reading it?
That's exactly what I think. If I wished to read AI, I would ask AI itself to give me something to read.
If you wish to be a writer, respect it as a craft. Learn to regret the losses you have accrued by throwing away growth for sake of expediency.
Read a book about writing, think about writers whose writing touched you, discover the voice you want to have, the people you want to reach. Human connection is the point.
Hand edit a piece until you are satisfied, then run your default AI loop on the original. Observe with clear eyes what was lost in the process. What it missed that you discovered in the process of thinking deeply about your own thoughts.
You’re right, and that’s fantastic advice for anyone serious about the craft of writing. For me, it’s always been more about the message and story than the craft itself. I’m not aiming to be the next Hemingway, I just want to share experiences in whatever format I can.
That said, I do love writing. I still have boxes of old fiction drafts from before the internet was even a thing. For me, it’s the story that matters most. If depending heavily on AI early on came across as the wrong approach, I apologize. I’ve learned from it, and I’m working to improve. You’re absolutely right that time and words should be cherished.
It's not like you depend on the AI to craft your apologies for depending on the AI, or anything. You can quit any time you want!
I think it would improve your writing if you go off at tangents, use weird idioms, prefer obscure references to cliches, attempt implausible feats of lateral thinking, and yell at people and call them wrong. This may also make you unpopular, but that's a detail.
What if that’s not me? I guess I’ll need to practice my “F- this s#!” reps and work my way up.
So was "It stamps each answer with the authority of inevitability" generated by the LLM?
Terrible excuse my friend.
If your blog starts off as AI-slop, it will always be AI-slop.
If you need authority over someone to persuade them then your argument isn't compelling enough, either because your reasoning is flawed or you're not communicating it well enough.
In the case in the article the author believed they were the expert, and believed their wife should accept their argument on that basis alone. That isn't authority; that's ego. They were wrong so they clearly weren't drawing on their expertise or they weren't as much of an expert as they thought, which often happens if you're talking about a topic that's only adjacent to what you're an expert in. This is the "appeal to authority" logical fallacy. It's easy to believe you're the authority in question.
...we’ve allowed AI to become the authority word, leaving the rest of us either nodding along or spending our days explaining why the confident answer may not survive contact with reality.
The AI aspect is irrelevant. Anyone could have pointed out the flaw in the author's original argument, and if it was reasoned well enough he'd have changed his mind. That's commendable. We should all be like that instead of dogmatically holding on to an idea in the face of a strong argument against it. The fact that the argument came from some silicon and a fancy random word generator just shows how cool AI is these days. You still have to question what it's saying though. The point is that sometimes it'll be right. And sometimes it won't. Deciding which it is lies entirely with us humans.
What article did you read? That's not what happened in the story. They turned to the AI for a third opinion, and it was disturbingly persuasive.
Yeah, I share this same sentiment about the author. His inability to see why his wife was actually upset by his behavior is astounding. I'm going to wager that she very much expected him to be incessantly argumentative with the AI as well, and when he wasn't, it surprised her: she suddenly realized it was only with her that he was like that, and it became very personal for her.
I think that part may have been misunderstood. My wife wasn’t upset about me being argumentative, her point was that it bothered her I was convinced by the AI, rather than by her, and what stuck with me is how quickly I was convinced by AI and stopped dead in my tracks. That is why I wrote the piece.
The difference I was trying to highlight isn’t that AI was “right,” but how confidently it answered, and how quickly that persuaded me.
If my wife had made the same arguments in the same polished way, I probably would’ve caved just as fast. But she didn’t, AI did... and what struck me wasn’t the answer, it was how fast my own logic switched off, as if I’d been wrong all along.
That’s what feels new to me: sitting in a meeting for hours while a non-tech person confidently tells execs how “AI will solve everything”, and everyone nods along. The risk isn’t just being wrong, it’s expertise getting silenced by convincing answers, and people no longer stopping to ask the right questions.
Again, this is my own reflection and experience, others may not feel this way. Thanks for your comment.
What was the name? Naming things is highly subjective. In the abstract, sure: ceding responsibility to anyone else in the room (an AI, Google, your partner, the CEO, the investors) stings a little the first time when you're used to having the authority yourself, but you get used to not always being right eventually.
Agreed, creativity is very subjective. My point wasn’t about who was right or wrong. What unsettled me was how, the moment AI gave its opinion, my own questioning and reasoning almost instantly disappeared.
That’s really what the piece was about, how quickly I found myself giving up my own judgment to AI.
Those AI tools are designed to please: if you ask them "is this a good idea?", they will always say yes.
What would be better is to ask "give me three good arguments for this", then "give me three good arguments against this", and finally compare the arguments yourself, without asking the AI tool which is better.
> If you need authority over someone to persuade them then your argument isn't compelling enough, either because your reasoning is flawed or you're not communicating it well enough.
I question that assertion. The other party has to be willing to engage too.
Nailed it. This guy has a humility (and probably also critical thinking/communication) problem, not an AI one.
> and if it was reasoned well enough he'd have changed his mind.
In my experience, motivated reasoning rules the day. People have an agenda beyond their reasoning, and if your proposal goes against that agenda, you'll never convince them with logic and evidence. At the end of the day, it's not a marketplace of ideas, but a war of conflicting interests. To convince someone requires not the better argument, but the better politics to make their interests align with yours. And in AI there are a lot of adverse interests you're going to be hard pressed to overcome.
“It is difficult to get a man to understand something, when his salary depends on his not understanding it.”
Well said, AI definitely amplifies agendas, and the lure of “bigger, better, more profitable” usually beats the status quo.
Is anyone else finding that the real job now is pushing back against AI-backed “suggestions” from clients, managers, and even so-called “AI experts”? They sound confident, but too often collapse in design and practice.
How are you handling this shift? Do you find yourself spending more time explaining “why not” than actually building?
Search for "workshop labor rates meme". It's just part of knowing the business beyond the craft.
If you are going to be involved, especially as the developer, you cannot let anything undermine your authority over the implementation details. This was always true before and AI is just a new source of noise.
Yes, that’s always been true. What feels different now is the sheer volume of authority-undermining that’s popped up, and how overconfident so many people feel. For example, that recent dating app that went viral and became a security nightmare, most likely created by a very confident non-dev. Or how the author of the piece (spoiler: that’s me) now has the confidence to give people advice, thanks to AI.
I’m not really questioning that it happens, I’m stating how much energy it now takes to keep pushing back (or approving) and how easy it is to just agree with an AI output these days. Just my experience and two cents.
Comments like this make me really grateful I work in a company run entirely by engineers.
It’s more of the same. At least in my experience, much of the work amounts to “yes sir. Will that be all, sir?”
You take the input, mostly ignore it, and move on. YMMV on that strategy, but if you are deft with it then you can dodge a lot of bullshit.
It does require that the things you do decide to do pan out though. You’ll need results to back it up.
Yeah, that’s pretty much been my approach too. I try to take both the positive and the negative, since even bad input can help a project evolve. The tricky part now is the energy it takes to sift through it, feedback that sounds like it came from an MIT grad one minute, or my Italian grandma (who’s never touched a device) the next.
AI chat is the ultimate coach, influencer, and MBA.
This isn’t an issue for experienced software engineers who understand the limitations of these LLMs and also will scrutinise the chatbot’s answer if they see a bug it generated. Non-engineers, vibe-coders won’t know any better and click “Accept all”.
Everyone wants to be a “programmer” but in reality, no-one wants to maintain the software and assume that an “AI” can do all of it i.e Vibe coding.
What they're really signing up for is the increased risk that someone more experienced will break their software left and right, costing them $$$, until they end up paying that person to fix it.
A great time to break vibe coded apps for a bounty.
Nobody needs to break these apps, they are broken themselves. Any change may end up in the whole app breaking down, and the person using the AI will never know how to fix it.
More money for those who can fix it for them!
Why on earth hadn’t the wife bought the domain name before it even got to this stage? Neither the author nor the AI will be able to argue with a successful project. ’Til the thing is out there, this is just chin-stroking.
The domain name incident absolutely isn’t a strong enough case to justify pivoting a career.
The clients suggesting features and changes might be a reason to pivot a career, but towards programming and away from product/system development. I mean, let the client make the proposal, accept the commission at a lower rate that doesn’t include what you’d have charged to design it, and then build it. AI ought to help get through these things faster, anyway, and you’ve saved time on design by outsourcing to the client. In theory, you should have spare time for a break, a hobby, or to repeat this process with the next client that’s done the design work for you.
I agree with all the points about agency, confidence, experience (the author used “authority”). We must not let LLMs rob us of our agency and critical thinking.
> I mean, let the client make the proposal, accept the commission at a lower rate that doesn’t include what you’d have charged to design it, and then build it.
The client will still blame you when it doesn’t meet their real needs. And rightfully so, as much as a doctor would still be blamed if he followed a cancer treatment plan the patient brought in from ChatGPT
My wife and I bounce ideas around all the time for fun. She would’ve registered the domain anyway because she believed in it, and what really matters to me is conviction and passion. What worries me is when AI interrupts that, and people start following shadows in Plato’s cave instead of their own judgment. Thanks.
The shadows in Plato's cave are our own judgments, or perceptions rather. It's essentially the story of the blind men and the elephant. Not sure how you're using it as an analogy.
It's a story about how if you have superior vision and understanding to everyone else, if you try to tell them about it they'll get mad at you. Written by a guy everyone was mad at.
Yeah, it does have that flaw: the philosopher who goes outside the cave isn't just in a larger cave, he sees reality, supposedly. And weaker minds reject it because they just can't handle the blinding light of the truth, not because there might be further errors.
You are right, but what I am trying to get at is that AI makes those “shadows” look sharper and more convincing, so it’s even easier to mistake them for truth.
> Why on earth hadn’t the wife bought the domain name before it even got to this stage.
Shit, if LLMs have solved that unsolved problem in computer science, naming things, our profession really is over.
Would you accept the view of a total stranger? No, you would ask someone else to review that opinion. Same with AI, don't just fire up ChatGPT and be finished. Cross reference the answer with other LLMs
Remember, don't seek advice from an idiot! Always ask three idiots.
just call them the wise men instead and we can start a new religion!
Exactly, well put. More and more I’m seeing people who are great at persuading others selling “insights” straight from AI to executives.
> Cross reference the answer with other LLMs
Really? That's your solution?
A better way to put it is don't run anything in production that you don't have the knowledge to understand yourself. You should be able to code review anything that is written by an LLM, and if you don't have sufficient knowledge to do this, don't feel tempted to run the code if you're not responsible enough to maintain it.
1000 monkeys at 1000 typewriters
Yes, and if you don't know how to discern the wheat from the chaff by code reviewing and testing what is produced, your chances of landing on a good solution are very low.
Sunday morning insight. Relying on AI is like copying someone else's homework solution.
A "C" student.
the world is run by C students
You know what they call someone who studied computer science and was a C student?
An embedded engineer.
Wait you mean like embedded as in sales eng, or embedded as in microcontroller/etc?
As in microcontrollers, it's a silly pun.
I give you a C++ for that joke.
That joke was well embedded in context.
Just cross-check and recheck everything it tells you. Like how people are discovering that writing extensive unit tests, integration tests, etc. for AI-generated code is great for software engineering. It works for building your world-view too.
I think a lot of people are not in the habit of doing this (just look at politics) so they get easily fooled by a firm handshake and a bit of glazing.
So true, thanks.
So, what was the domain? I’m dying to know! I want to pass my own judgement on it.
I loved this article. It put in words a subliminal brewing angst I’ve been feeling as I happily use LLMs in multiple parts of my life.
Just yesterday, a debate between a colleague and me devolved into both of us quipping “well, the tool is saying…”, as we each tried to out-authoritate the other.
To nourish your curiosity, the real domain was meant as a community platform to help others going through something personal my wife has experienced. She has a beautiful soul and wants to share that journey.
As an example and not the real name, but in true HN style, imagine losing sight in one eye, still learning to code like anyone else, and wanting to share that story. You might come up with something like TheCyclopsCoder.com. (Totally made up just now, no comments needed.)
I debated it, since I worried it could alienate or offend blind coders. She disagreed, felt it was genuine.
I hope that helps feed your curiosity, and who knows, one day I might just promote her site if she moves forward with it.
>Mind The Nerd is an independent publication built for thinkers, creators...
Every article on this website looks to be almost wholly AI generated. Pure slop.
Trust me, put in the work and you'll thank yourself for it, you'll learn to enjoy the process and your content will be more interesting.
Your comment is appreciated and heard. That’s the beauty of learning new things, you try, you stumble, and you learn from your mistakes. As I’ve shared before, my blog started out heavily relying on AI for editing and grammar to get my ideas into words.
I quickly realized that wasn’t the best approach and that I needed to respect my readers’ time. If you look at my newer articles, you’ll see they’re shifting, becoming more me.
I’m lightly revising my earlier posts, but honestly I won’t change them much. I think it’s valuable for readers to see the progression, stumbles and all. Thanks for your comment, it might even become the subject of my next article.
Humans are not this quick to be gratifyingly sycophantic, the way your posts are (note that I am careful not to say "you") in nearly every comment reply on this thread. I sincerely must ask you, what are you getting out of all this? If it is the satisfaction of having conveyed a compelling story, does it not feel hollow? If it is the satisfaction of gamifying Hacker News, what insights could you possibly gain from this other than "people really hate AI writing"? If it is to waste people's time, congratulations.
What is the point of you?
Please don't bother copy-pasting my comment into your AI to prompt it for a response. I want to know what YOU value in your life, not some premasticated, overly-positive nonsense.
I saw no mention of Brandolini's law [1] - or the Bullshit Asymmetry principle. In fact, the piece hints at a corollary to it, which is: bullshit from "an authoritative-sounding source" takes 100x as much effort to refute. There is a bias in us to prefer form over function. Persuasion and signaling are things. I know this because I have to battle code-review tools which needlessly put out sequence diagrams and a nicely formatted README.md .for.every.single.PR. Just reading those is tiresome.
[1] https://en.wikipedia.org/wiki/Brandolini%27s_law
Didn’t expect this one to get flagged. Thank you, everyone, for the great feedback. Learning as I go.
> And these AI-fueled proposals aren’t necessarily bad. That’s what makes them so tricky. [...] Every idea has costs, trade-offs, resources, and explanations attached. And guess who must explain them? Me.
Don't explain. Don't argue. Simply confirm that the person fully understands what they're asking for despite using AI to generate it. 99% of the time the person doesn't. 50% of the time the person leaves the conversation better off. The other 50% the lazy bastards get upset and they can totally fuck off anyway and you've dodged a bullet.
If you use AI to communicate with me, you won't get a reply from me. I have no further interest in communication with you.
Understood. I’ll end the conversation here. If you change your mind in the future, you can simply start a new chat. Take care.
Nurtiing is such an important step in the process that it appears three times in the diagram.
lol, same with the cinnamon the AI suggested for my stew. Btw, the real flowchart was worse.
Cinnamon in stews is at least a real thing though (for example, tagine sometimes contains cinnamon).
And, cinnamon is a real thing, stew or no stew. Nurtiing is not.
It sounds less like that you "give in" to AI and more like that you have some weird opposition to your wife's ideas and always believe her to be wrong.
I stopped reading after the first paragraph or two.
Office jobs that pay enough to achieve "middle-class" lifestyles are decreasing and mostly closed to new generations. Software engineering and similar STEM fields, once among the few that promised the illusion of meritocratic security and class mobility, are fading away. And, as in Upton Sinclair's quote[0], when I sounded the alarm (prematurely) in industry ~2018-2021 to other software engineers that big salaries and high demand were on the decline, I was met with resistance and disbelief.
What the future holds for 99.999% of humanity who isn't an owner or somehow locating a lucrative niche specialty is more or less globally flattening into similar states of declining real wages for almost everyone. Meanwhile, megacorp capital owners and their enabling corrupt government regimes are more and more resembling racketeering and organized crime syndicate aristocracies with extreme wealth distribution disparities that generally aren't getting any better.
The situation of greater desperation for income invariably drives people to non-ideal choices:
a. Find a new field of work that makes less money
b. Sacrifice ethics to work at companies that cause greater harm in exchange for more money
c. Assume the on-going risks of launching a business or private consulting practice
d. Stay and agree to greater demands for productivity, inconvenience, bureaucracy, and micromanagement for less pay
e. Give up looking for work, semi-retire, and move somewhere like another state or country where the cost of living is cheaper
---
0. "It's difficult to get a man to understand something when his salary depends on his not understanding it."