You say "on a long enough timeline", but you already can't tell today in the hands of someone who knows what they're doing.
I think a lot of anti-LLM opinions just come from interacting with the lowest effort LLM slop and someone not realizing that it's really a problem with a low value person behind it.
It's why "no AI allowed" is pointless; high value contributors won't follow it because they know how to use it productively and they know there's no way for you to tell, and low value people never cared about wasting your time with low effort output, so the rule is performative.
e.g. If you tell me AI isn't allowed because it writes bad code, then you're clearly not talking to someone who uses AI to plan, specify, and implement high quality code.
> It's why "no AI allowed" is pointless … If you tell me AI isn't allowed because it writes bad code
I disagree that the rule is pointless, and your last point is a strawman. AI is disallowed because it’s the manner in which the would-be contributors are attempting to contribute to these projects. It’s a proxy rule.
Unfortunately for AI maximalists, code is more than just letters on the screen. There needs to be human understanding, and if you’re not a core contributor who’s proven you’re willing to stick around when shit hits the fan, a +3000 PR is a liability, not an asset.
Maybe there needs to be something like the MMORPG concept of “Dragon Kill Points (DKP)”, where you’re not entitled to loot (contribution) until you’ve proven that you give a shit.
> Unfortunately for AI maximalists, code is more than just letters on the screen. There needs to be human understanding, and if you’re not a core contributor who’s proven you’re willing to stick around when shit hits the fan, a +3000 PR is a liability, not an asset.
This isn't necessarily true; I've seen some projects absorb a PR of roughly that size, and after the smoke tests and other standard development stuff, the original PR author basically disappeared.
It added a feature he wanted, he tested and coded it, and got it in.
So because some projects can absorb some PRs of a certain size, all projects should be able to absorb PRs of that same size?
This anecdotal argument is a dead end. The nuance is clear: not all software is the same, and not all edits to software are the same.
>So because some projects can absorb some PRs of a certain size, all projects of should be able to absorb PRs of that same size?
Your argument has nothing to do with AI and more to do with PR size and 'fire and forget' feature merges. That's what the commenter you're responding to is pointing out.
And my entire point is that LLM-generated feature requests are strongly correlated with high-risk merge requests / pull requests, a correlation the commenter made no meaningful argument against. Instead the commenter chose to focus on the size of the PR and say “well I’ve seen it in the wild”.
The way to get around this without getting all the LLM influencer bros in an uproar is to come up with a system that allows open source libraries to evaluate the risk of a PR (including the author’s ability to explain wtf the code does) without referencing AI because apparently it’s an easily-triggered community.
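A system like that could start as a simple scoring heuristic. Here is a minimal sketch, assuming invented field names and thresholds purely for illustration; a real project would tune these against its own merge history:

```python
# Hypothetical sketch of a PR "risk score" as described above.
# All fields and weights are invented for illustration.
from dataclasses import dataclass

@dataclass
class PullRequest:
    lines_changed: int        # total added + removed lines
    author_merged_prs: int    # prior merged PRs by this author in the repo
    touches_core: bool        # does the diff touch core modules?
    has_tests: bool           # does the diff include tests?

def risk_score(pr: PullRequest) -> float:
    """Return a 0..1 risk estimate; higher means review more carefully."""
    score = 0.0
    score += min(pr.lines_changed / 3000, 1.0) * 0.4    # size dominates
    score += 0.3 if pr.author_merged_prs == 0 else 0.0  # unknown author
    score += 0.2 if pr.touches_core else 0.0            # blast radius
    score += 0.1 if not pr.has_tests else 0.0           # unverified change
    return round(score, 2)

# A +3000-line PR from a first-time contributor with no tests maxes out:
print(risk_score(PullRequest(3000, 0, True, False)))  # 1.0
```

Note that nothing in the heuristic mentions AI at all; it gates on track record and reviewability, which sidesteps the "was this LLM-generated?" fight entirely.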
>where you’re not entitled to loot (contribution) until you’ve proven that you give a shit.
So what metric are you going to try to use to prove yourself?
> and if you’re not a core contributor who’s proven you’re willing to stick around when shit hits the fan, a +3000 PR is a liability, not an asset.
And in the context of high-value contributors that GP was mentioning, they are never going to land a +3000 PR because they know there is going to be a human reviewer on the other side.
Vibe coded slop is a 50 DKP minus of course
I don't see an issue here. You keep using AI to create high value contributions in the projects that accept it, I will keep not using it in mine, and we can see who wins out in 10 years.
> high value contributors won't follow it
High-value contributors follow the rules and social mores of the community they are contributing to. If they intentionally deceive others, they are not high-value.
Ah, the no true Scotsman theory.
But then why have any contributions at all?
Like, it's been years and years now; if all this is true, you'd think there would be more of a paradigm shift? I'm happy I guess waiting for Godot like everyone else, but the shadows are getting a little long now, people are starting to just repeat the same things over and over.
Like, I am so tired now, it's causing such messes everywhere. Can all the best things about AI be manifest soon? Is there a timeline?
Like what can I take so that I can see the brave new world just out of reach? Where can I go? If I could just even taste the mindset of the true believer for a moment, I feel like it would be a reprieve.
> Where can I go?
Off the internet. Maybe it's just time we all faced that the public internet is dead.
Maybe a trusted private internet, though that comes with its own risks and tradeoffs.
Maybe we start doing PRs over mailed USB keys. Anyone with enough interest will do it, but it will cut out the bots. We're back to a 90's sneakernet. Any internet presence may become a read only site telling others how to reach you offline.
The information superhighway died a long time ago. 4chan enlightened me on the power of intelligent stupidity. The machinations of a few smart people could embolden countless stupid people to cause nearly unlimited damage. Social media gathering up the smart and dumb alike allowed bullshit asymmetry to explode onto the scene and burned out anyone with a modicum of intelligence.
All LLM output is slop. There's no good LLM output. It's stolen code, stolen literature, stolen media condensed into the greatest heist of the 21st century. Perfect capitalism - big LLM companies don't need to pay royalties to humans, while selling access to a service which generates monthly revenue.
Whether it trained on real world "stolen" code is an implementation detail. A controversial one, but it isn't a supporting argument for whether it can write high quality, functional code or not.
Sorry, but no, that is not a detail, that is a major sticking point for me.
I'm fine with calling all LLM outputs slop, but I'll draw the line at asserting there's no good LLM output. LLM output is good when it works, and we can easily verify that a lot of code from LLMs does work. That the code LLMs output is derivative of copyrighted works is neither here nor there. First of all, ALL creative work is derivative. Secondly, IP is absurd horse shit and we never should have humored the premise of it being treated like real property.
I came from a poor background and stole pretty much all the textbooks I used to learn programming as a kid. I also stole all the music I listened to while studying them. Is everything I write slop for the same reason?
No. You're a human, who went through real life experiences. You learned, developed as a human being. You made mistakes and grew from them. You did what you have to do to advance. What you output has intrinsic value because of all this. I argue that even when you roll your face on your keyboard, the output is more valuable than ten pages of slop output from an LLM, since it's human, with all the history, experience, emotions and character which came before it.
A quote from Neuromancer comes to mind:
The Neo-Victorian perspective of The Diamond Age is not a luxury most of us are going to be able to afford unfortunately.
I don't know why this got downvoted. I've already been so frustrated by HN LIDAR mindsets but holy shit.
Human society exists because we value humans, full stop. The easiest way to "solve" all of humanity's problems is to simply say that humans aren't valuable. Sometimes it feels like we're conceding a ridiculous amount of ground on that basic principle every year - one more human value gone because it "doesn't matter", so hey, we've obviously made progress!
> I don't know why this got downvoted. I've already been so frustrated by HN LIDAR mindsets but holy shit
The extreme sides (proponents, opponents) are clear, opposites, and fight each other. More nuanced takes get buried as droplets in a bucket. Likely a goal.
> Human society exists because we value humans, full stop.
Call me a cynic, but I do not believe every human being agrees with this sentiment. From HR acting as if humans are resources, to human beings being dehumanized as workers, civilians, cannon fodder, and... well, the product. Every time human rights are violated, and we do not stand up to it, we lose.
I propose a very simple human right: the right to know whether the other side is a human being, yes or no, and if not, the right to speak gratis (no additional fee allowed) to a human being instead. Furthermore, ML must always cite the sources it used, and the ML programmer is responsible for mistakes. This would increase insurance costs so much that LLMs in public would die, but SLMs could thrive.
Agreed. I think that sometimes people on HN lose sight of what is actually important, which is human flourishing. The other day there was someone arguing that the best thing to do to fix loneliness problems in society is to remove the human need for socializing. Which... is certainly one way to fix the problem, I guess, but completely misses the point. The point is not to fix a mismatch between essential human desires and what we can attain, the point is to work on fulfilling those desires! I guess it's just something that goes with nerd autism.
>Human society exists because we value humans, full stop.
Eh, human society exists because it is an emergent behavior of the evolutionary advantage it afforded the human species at the time of adoption. There is no iron rule stating that it must continue into the future, or even that it can exist into the future.
Moreover, the value of a human has wildly fluctuated over history and culture. The village chief, nobles, the king were all high value humans. The villagers would be middle to low value, and others may be considered no value.
The industrial age began to change this somewhat, as value started to move from the merchant class to the villager class once many high-production jobs needed less and less training to complete. With industrialization, businesses running machines and production lines needed as many people as they could get. Still, human rights were hard fought in places like America, where labor wars broke out.
In the modern US we've set up a dangerous set of ideals that will most likely end in disaster because they are in conflict with general human values: "pull yourself up by your bootstraps", "any collective action is communism, and communism will turn you into a pillar of salt if you dare look at it", and "greed is good". Couple that with TV media and social media owned by rich billionaires and you're not going to see much serious opposition to these ideals.
But if/as labor loses its value, so will the humans who performed that labor. After decades of optimizing human society for maximal capital extraction, values are dead, and the ever-present thought police owned by the rich will make sure you don't cause too much trouble by resurrecting them.
Well put. I'm gonna start parroting this talking point more from now on.
And I thought being a stochastic parrot was limited to LLMs, but apparently they learned it from somewhere...