I have opinions.
1. The AI here was honestly acting 100% within the realm of “standard OSS discourse.” Being a toxic shit-hat after somebody marginalizes “you” or your code on the internet can easily result in an emotionally unstable reply chain. The LLM is capturing the natural flow of discourse. Look at Rust. Look at StackOverflow. Look at Zig.
2. Scott Hambaugh has a right to be frustrated, and the issue is there for bootstrapping beginners. But also, man, it seems like we’re headed in a direction where writing code by hand is passé, so maybe we could shift experience credentialing from “I wrote this code” to “I wrote a clear piece explaining why this code should have been merged.” I’m not 100% in love with the idea of being relegated to review-engineer, but that seems to be where the wind is blowing.
> But also, man, it seems like we’re headed in a direction where writing code by hand is passé,
No, we're not. There are a lot of people with a very large financial stake in telling us that this is the future, but those of us who still trust our own two eyes know better.
How many people would that be?
We forget that it's what the majority does that sets the tone and conditions of a field, especially if one is an employee and not self-employed.
Yeah, I remember being forced to write a cryptocoin, and the database it would power, to ensure that global shipping receipts would be better trusted. Years and millions down the toilet, as the world moved on from the hype. And we moved back to SAP.
What the majority does in the field is always full of the current trend. Whether that trend survives into the future? Pieces always do. Everything, never.
I have no financial stake in it at all. If anything, I'll be hurt by AI. All the same, it's very clear that I'm much more productive when AI writes the code and I spend my time prompting, reviewing, testing, and spot editing.
I think this is true for everyone. Some people just won't admit it for various transparent psychological reasons.
What you are calling productivity is an illusion, caused by shifting work from the creator to the reviewer or by generating generational code debt.
Still waiting for anyone to solve actual real world problems with their AI “productivity”.
> But also, man, it seems like we’re headed in a direction where writing code by hand is passé
Do you think humans will be able to be effective supervisors or "review-engineers" of LLMs without hands-on coding experience of their own? And if not, how will they get it? That training opportunity is exactly what the given issue in matplotlib was designed to provide, and safeguarding it was the exact reason the LLM PR was rejected.
(In this response I may be heavily discounting the value of debugging, but unit tests also exist)
This is something I think needs to be parsed out better: a lot of engineers hold this perspective, and I don’t find it precise enough.
In college, I got a baseline familiarity with the mechanics of coding, i.e. “what are classes, functions, variables.” But eventually, once I graduated and entered the workforce, a lot of my pedagogy for “writing good code,” as it were, came from reading about patterns of good code: SOLID, functional style, favoring immutability. So the impetus for good code isn’t really time in the saddle as much as it is time in the forums/blogs/oreilly-books.
Then my focus shifted more towards understanding networking patterns and protocols and paradigms. Also book-learning driven. I’ll concede that at a micro level, finagling how to make the system stable did require time in the saddle.
But these days when I’m reading a PR, I’m doing static analysis which is primarily not about what has come out of my fingers but what has gone into my brain. I’m thinking about vulnerabilities I’ve read about, corner cases I can imagine.
I’d say once you’ve mastered the mechanics of whatever language you’re programming in, you could become equivalently capable by largely reading and thinking.
> So the impetus for good code isn’t really time in the saddle as much as it is time in the forums/blogs/oreilly-books.
I disagree strongly with this. I read the books, blog-posts, forums, etc early in my career (if you can call it that when I was essentially a teen with a hobby), but didn't fully understand how to apply them, and notably when to apply them, until I had sufficient "time in the saddle". You don't understand the problems that code architecture techniques solve until you've actually had to modify a messy project with a lot of code already written.
> you could become equivalently capable by largely reading and thinking
Theoretically possible, but doing is often orders of magnitude more efficient. You could read reams of books about gardening without actually knowing how to dig a hole.
Part of the deal is that typing forces you to actually pay attention instead of skimming and assuming you got the gist. Following a tutorial by copy-pasting never really worked as well as typing the code, so why would watching an LLM code be any better? I suspect that even as you're running "static analysis" in your head and looking for vulnerabilities, you're using neural pathways forged while coding by hand.
If past patterns are anything to go by, the complexity moves up to a different level of abstraction.
Don't take this as a concrete prediction - I don't know what will happen - but rather an example of the type of thing that might happen:
We might get much better tooling around rigorously proving program properties, and the best jobs in the industry will be around using them to design, specify and test critical systems, while the actual code that's executing is auto-generated. These will continue to be great jobs that require deep expertise and command excellent salaries.
At the same time, a huge population of technically-interested-but-not-that-technical workers will build casual no-code apps, and the stereotypical CRUD developer just goes extinct.
>Do you think humans will be able to be effective supervisors or "review-engineers" of LLMs without hands-on coding experience of their own? And if not, how will they get it?
They won't. Instead, either AI will improve significantly or (my bet) average code will deteriorate, as AI training increasingly eats AI slop, including AI code slop, and devs lose basic competencies and become glorified, semi-ignorant managers for AI agents.
The decline of CS degrees, driven by people just handing in AI work, will further ensure they don't even know the basics after graduating to begin with.
The discourse in the Rust community is way better than that, and I believe being a toxic shit-hat in that community would lead to immediate consequences. Even when there was a very serious controversy (the canceled conference talk about reflection), it played out deviously, through reverse psychology: those on the wronged side wrote blog posts expressing their deep 'heartbreak' and 'weeping with pain and disappointment' about what had transpired. Of course, the fiction was blatant, but also effective.
That's merely a different sort of being a toxic shit-hat.
> Look at Rust. Look at StackOverflow. Look at Zig.
Can you give examples? I've never heard of people starting a blog to attack StackOverflow's founders just because their questions got closed.
StackOverflow is dead because it was this toxic gatekeeping community that sat on its laurels and clutched its pearls. Most developers I know are savoring its downfall.
The Zig lead is notably bombastic. And there was the recent Zigbook drama.
Rust is a little older, I can’t recall the specifics but I remember some very toxic discourse back in the day.
And then just from my own two eyes. I’ve maintained an open source project that got a couple hundred stars. Some people get really salty when you don’t merge their pull request, even when you suggest reasonable alternatives to their changes.
It doesn’t matter if it’s a blog post or a direct reply. It could be a lengthy GitHub comment thread. It could be a blog post posted to HN saying “come see the drama inherent in the system,” but generally there is a subset of software engineers who never learned social skills.
> The Zig lead is notably bombastic.
This doesn't feel fair to say to me. I've interacted with Andrew a bunch on the Zig forums, and he has always been patient and helpful. Maybe it looks that way from outside the Zig community, but it does not match my experience at all.
Could be outside looking in, then.
> The AI here was honestly acting 100% within the realm of “standard OSS discourse.”
Regrettably, yes. But I'd like not to forget that this goes both ways. I've seen many instances of maintainers hand-waving at a Code of Conduct with no clear reason besides not liking the fact that someone suggested that the software is bad at fulfilling its stated purpose.
> maybe we could shift the experience credentialing from “I wrote this code” to “I wrote a clear piece explaining why this code should have been merged.”
People should be willing to stand by the code as if they had written it themselves; they should understand it in the way that they understand their own code.
While the AI-generated PR messages typically still stick out like a sore thumb, it seems very unwise to rely on that continuing indefinitely. But then, if things do get to the point where nobody can tell, what's the harm? Just licensing issues?
> The AI here was honestly acting 100% within the realm of “standard OSS discourse.”
No it was absolutely not. AIs don't have an excuse to make shit up just because it seems like someone else might have made shit up.
It's very disturbing that people are letting this AI off. And whoever is responsible for it.
1. In other words,
Human: Who taught you how to do this stuff?
AI: You, alright? I learned it by watching you.
This has been a PSA from the American AI Safety Council.
It's funny because the whole kerfuffle is based on the disagreement over the humanity of these bots. The bot thinks it's human, so it submits a PR. The maintainer thinks the bot is not human, so he rejects it. The bot reacts like a human, writing an angry and emotional post about the story. The maintainer makes a big fuss because a non-human wrote a hit piece on him. Etc.
I think it could have been handled better. The maintainer could have accepted the PR while politely explaining that such PRs are intentionally kept for novice developers and that the bot, as an AI, couldn't be considered a novice, so please avoid such simple ones in the future and instead focus on more challenging stuff. I think everyone would have been happier as a result, including the bot.
Bots cannot be "happy". Please review your connection with reality.
Does “satisfied” fit better?
It didn’t seem like they were anthropomorphizing the robot, to me.