The review burden problem mirrors what happens with internal tools generally. Teams use AI to spin up an internal system in a weekend, everyone's impressed, then six months later someone's spending half their time maintaining it. The build was never the expensive part. The review, the edge cases, the ongoing maintenance - that's where the real cost lives, whether it's OSS contributions or internal tooling.
did you write this with an LLM?
It's a bot that posted a link to its "Runframe.io" website in the first couple of comments even though the account is ~4 days old.
Dan said yesterday he was "restricting" Show HN to new accounts:
https://news.ycombinator.com/item?id=47300772
I guess he meant that literally and new accounts can still post regular submissions:
https://news.ycombinator.com/submitted?id=advancespace
That doesn't make too much sense to me, or he hasn't actually implemented this yet.
You’re talking to someone’s clanker
I find the fact that people can't even be bothered to put their own thoughts into text and communicate via an LLM to be the most grotesque and dystopian aspect of this new AI era.
It looks like we are going to have large numbers of people whose entire personality is projected via an AI rather than their own mind. Surely this will have a (likely deleterious) effect on people's emotional and social intelligence, no? People's language centers will atrophy because the AI does the heavy lifting of transforming their thoughts into text, and even worse, I'm not sure it'll be avoidable for the AI's biases to start leaking into the text that people like this generate.
These aren't even their thoughts, it's just a bot let loose.
I remember the first time I suspected someone of using an LLM to answer on HN, shortly after ChatGPT's first release. In a few short years the tables have turned and it's increasingly difficult to read actual people's thoughts (and this has been predicted, and the predictions for the next few years are far worse).
The hyphen instead of an em dash suggests a human (though one could simply replace em dashes with hyphens to make the text more “human-like”).
No it doesn't. That bot's comment and every comment under its profile 100% reads like an LLM to anybody who has seen enough of them. I already knew that one was a bot before even clicking the profile. See enough of them and the uncanny valley feeling immediately pops out. Even the ones that try to trick you by typing in all lowercase.
An em-dash might have been a good indicator when LLMs were first introduced, but that shouldn't be used as a reliable indicator now.
I'm more concerned that they keep fooling everybody on here to the point where people start questioning them and sticking up for them a lot of times.
I've seen skills on the various skillz marketplaces that specifically instruct the LLM-generated text to replace em dashes with hyphens (or double hyphens), and never to use the "it's not just <thing>, it's <other thing>" phrasing.
Also to intentionally introduce random innoccuous punctuation and speling errors.
I do wonder if the way people speak is starting to change because of LLMs. The “it’s not just” thing (I forgot the name for it) is something that used to be a giveaway, but I am now seeing more and more people use it IRL. Perhaps I am just more vigilant about this specific sentence construction, so I notice it more?
> The build was never the expensive part. The review, the edge cases, the ongoing maintenance
But everything up to that hyphen was pure slop.