From https://kristoff.it/blog/contributor-poker-and-ai/:

"Unfortunately the reality of LLM-based contributions has been mostly negative for us, from an increase in background noise due to worthless drive-by PRs full of hallucinations (that wouldn’t even compile, let alone pass CI), to insane 10 thousand line long first time PRs. In-between we also received plenty of PRs that looked fine on the surface, some of which explicitly claimed to not have made use of LLMs, but where follow-up discussions immediately made it clear that the author was sneakily consulting an LLM and regurgitating its mistake-filled replies to us."

Pretty much sums up the LLM fanbase.

I don't think it's the complete fanbase. However, there are lots of people in the world who live their whole life by vibing. It's a viable way to live and sometimes it's the only way to live. But they have a very loose relationship with truth and reason. Programming was a domain that filtered out those people because they found it hard to succeed at it. LLMs have changed that and it's a huge problem. It's hard to know if LLMs will end up being a net win for the industry. They may speed up the good programmers a little, but those people were able to program anyway without LLMs. They will speed up the bad programmers a lot and that's where the balance sheet goes into the red.

"They may speed up the good programmers a little, but those people were able to program anyway without LLMs."

I don't think this is realistic. I'm a good programmer, and it speeds up my work a lot, from "make sense of this 10 repo project I haven't worked on recently" to "for this next step I need a vpn multiplexer written in a language I don't use" to, yeah, "this 10k line patch lets me see parts of design space we never could have explored before." I think it's all about understanding the blast radius. Sometimes a lot of code is helpful; sometimes it's more like a lot of help proving a fact about one line of code.

Like Simon says, if I'm driving by someone else's project, I don't send the generated pull request, I just file the bug report / repro that would generate it.

> to "for this next step I need a vpn multiplexer written in a language I don't use"

but that acceleration is exactly because you're not good at that language

It's great when I know how the code should look. Sometimes I just can't bring myself to write yet another http handler.
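
For illustration, a minimal sketch of the kind of rote handler being talked about here, using only Python's standard library (the route and payload are made up for the example):

    import json
    from http.server import BaseHTTPRequestHandler, HTTPServer

    class HealthHandler(BaseHTTPRequestHandler):
        # The hundredth handler of its kind: build a dict, serialize it,
        # set headers, write bytes. Nothing novel, just ceremony.
        def do_GET(self):
            body = json.dumps({"status": "ok", "path": self.path}).encode()
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.send_header("Content-Length", str(len(body)))
            self.end_headers()
            self.wfile.write(body)

    if __name__ == "__main__":
        HTTPServer(("127.0.0.1", 8080), HealthHandler).serve_forever()

None of it is hard; it's exactly the sort of boilerplate that's easy to review and safe to delegate.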

> However, there are lots of people in the world who live their whole life by vibing

Why are they often so desperate to lie and non-consensually harass others with their vibing rather than be honest about it? Why do they think they are "helping" with hallucinated rubbish that can't even build?

I use LLMs. It is not difficult to: ethically disclose your use, double check all of your work, ensure things compile without errors, not lie to others, not ask it to generate ten paragraphs of rubbish when the answer is one sentence, and respect the project's guidelines. But for so many people this seems like an impossible task.

> Why do they think they are "helping" with hallucinated rubbish that can't even build?

Because they can't tell the difference between what the machine is outputting and what people have built. All they see is the superficial resemblance (long lines of incomprehensible code) and the reward that the people writing the code have gotten, and they want that reward too.

the target audience of the cyber typer terminal [0]

[0] https://hackertyper.net/

"Main character energy". What they're really doing is protecting their view of themselves as smart, and they're making a contribution for the sake of trying to perform being an OSS dev rather than out of need or altruism.

AI is absolutely terrible for people like that, as it's the perfect enabler.

> Why do they think they are "helping"

It's not about helping. It's about the feeling of clout. There are still plenty of people who look at GitHub profile activity to judge job candidates, etc. What gets measured gets repeated.

I believe that most of the ills of social media would disappear if we eliminated the "like" and "upvote" buttons and the view counts. Most open source garbage pull requests might likewise go away if contributions were somehow anonymous.

I think a lot of people who haven't given it more thought might see it as an arbitrary rule or even some kind of gatekeeping or discrimination. They haven't seen why people wouldn't want to deal with the output.

This might not be helped by the fact that there are a lot of seemingly psychotic commenters attacking anything that might have touched an LLM or any generative model at some point. Their slur- and expletive-filled outbursts make every critical response look bad by vague association.

Having sensible explanations for the rules and criticism clearly visible, like in TFA, should help. But looking at other similar patterns, I'm not optimistic. And education isn't likely to happen since we're way past any Eternal September.

You're asking why oil doesn't act like water. It's not really an impossible task, it's just not one they agree with.

I wonder how many are account farming.

It's the same as cheating in a game. You are given an """advantage""", so lying about it seems like the best option

LLMs are in this case enabling bad behavior, but open source software has always been vulnerable to this. Similarly, people who use LLMs to do this kind of thing are the kind of people who would have done it without LLMs but for the large effort it would have taken. We're just learning now how large that group is.

This is a good thing: it's an opportunity to make open source development processes robust to this kind of sabotage.

> LLMs are in this case enabling bad behavior

Yeah, that seems to be their primary use case, if I'm honest. It's possible to use them ethically and responsibly, much in the same way it's possible to write one's own code and, more broadly, do one's own work. Most people, however, especially in our current cultural moment and with the perverse incentives our systems have created, are not incentivized to be ethical or responsible: they are incentivized to produce the most code (or most writing, most emails, whatever), and get the widest exposure and attention, for the least effort.

Hence my position from the start: if you can't be bothered to create it, I'm not interested in consuming it.

Before LLMs we could already see a growing abundance of half-baked engineers in it only for the good pay, willing to work double time to pull things through.

Management, unsurprisingly, deemed those people precious. They could be emailed at any time and would work weekends to fix problems their own kind had caused. Sure, sir.

They excel at communication. Perfecting the art.

Now LLMs are there to accelerate the trend.

Wouldn't LLMs end up doing all the tasks that deterministic programs do now? Like ChatGPT filing your taxes for you instead of TurboTax.

> It's hard to know if LLMs will end up being a net win for the industry. They may speed up the good programmers a little, but those people were able to program anyway without LLMs. They will speed up the bad programmers a lot and that's where the balance sheet goes into the red.

If you will forgive an appeal to authority:

The hard thing about building software is deciding what one wants to say, not saying it. No facilitation of expression can give more than marginal gains.

- Fred Brooks, 1986

> It's hard to know if LLMs will end up being a net win for the industry.

True. Regardless of that, with LLMs we are for sure taking on technical debt like never before.

"Claude, don't create any technical debt please"

For at least the last three decades, programming has been a field that rewarded utter mediocrity with (relative to other fields) massive remuneration. It has been filled with opportunists for as long as I can remember.

I think it's worth noting that a more impactful, and maybe even bigger, proportion of those opportunists is in management.

Regarding quality overall, I agree, it's truly a cursed field. It was bad before; and with LLMs, going against that tide seems more difficult than ever.

You are talking about bad programmers who are at least able to fool their managers for several years. The people OP is talking about could not even do that and most likely would have dropped out in their first week of trying to program full time, since they just don't have the aptitude and patience to get unblocked after their first compilation error. Now they can go very far with an LLM.

This is an excellent point. LLMs might merely be exposing and amplifying behaviors that were always there. This can be an opportunity, in that shining a light on it may allow us to cleanse ourselves of it. It's fundamentally about integrity, and sadly it's becoming clearer how few possess it (if it wasn't clear already!). But maybe we'll get better at measuring integrity, and make hiring/collaboration decisions based on it.

> there are lots of people in the world who live their whole life by vibing. It's a viable way to live and sometimes it's the only way to live. But they have a very loose relationship with truth and reason

This response was 1000% crafted with input from an LLM, or the user spends too much time reading LLM output.

I have never used an LLM to write. Writing forces me to think (and I edited the comment a couple of times when writing it which helped me clear up my thinking). "It's a viable way to live and sometimes it's the only way to live" is a personal realization that has taken me some time to understand. You can go back through my comment history to the time before LLMs to check if my style was different then.

It says a lot that most readers can't distinguish good writing from something an LLM spat out.

Ray Kroc's genius was to make people forget that you get what you pay for.

False equivalency. If you had the humility to run your own writing through an LLM first, it would have caught it. Just saying.

Not picking on you in particular, but most of the anti-AI crowd can’t present their case compellingly and have an utter lack of humility.

If you run your writing through an LLM, it can poke holes in your argument, organize your ideas better, or point out that your tone is hostile/dismissive. It doesn’t need to be a replacement for writing or thinking, especially if you’re learning along the way.

So, in that way, the LLM will be your mentor; it will shape your way of thinking according to the algorithms and datasets its corporate creators stuffed into it.

Do you really want it?

There is also a second side to this: people are lazy. They won't develop their own skills; rather, they will offload tasks to LLMs, so their communicative abilities will fade away.

That looks like a pretty strong dystopia to me.

> the LLM will be your mentor; it will shape your way of thinking according to the algorithms and datasets its corporate creators stuffed into it

How is this mutually exclusive with teaching better than most humans? These "corporate" datasets include deep knowledge of the world's best literature and philosophy, for instance. Why can't it be both?

> Do you really want it?

If I'm in a hurry, don't know where to start, or don't have money for someone to teach me—sure.

> There is also a second side to this: people are lazy. They won't develop their own skills; rather, they will offload tasks to LLMs, so their communicative abilities will fade away.

This is a recapitulation of the Luddite argument during the Industrial Revolution. And it's valid, but it has consequences for all technological change, not just this one. There was a world before Google, the Web, the Internet, personal computing, and computers. The same argument applies across the board, and the pre-AI / post-AI cutoff looks arbitrary.

> teaching better than most humans

Ah, so now we get to the "ed tech" question. What is teaching? Is there a human element to it, and if so, what is it? Or is it something completely inhuman? Or do we need to clarify what meaning of "teaching" we're talking about before we have a discussion?

All of which are parts of the writing and thinking skillset, no?

Right. It can enhance that skillset. Are you suggesting it can’t?

This wouldn’t be a plausible position.

Rather, avoiding delegating these tasks to an LLM helps you practice that skill.

That said, I think it depends on how you use it. You can learn from the explanations, but you'd better avoid the "rewrite this for me and do nothing else" kind of approach.

I don't get that impression at all. LLMs would have avoided the stylistic repetition of "live". Asking an LLM to reformulate the sentences you quoted yields this slop:

> There are a lot of people who go through life by vibing. And honestly: that’s not automatically “bad.” Sometimes it’s even the only workable way to get through things. The issue is that “vibe-first” people tend to have a pretty loose relationship with truth, rigor, and being pinned down by specifics. They’ll confidently move forward on what sounds right instead of what they can verify.

I'll finish this post with a sentence containing an em-dash -- just to confuse people -- and by remarking on how sad I find it that people latch onto dashes and complete sentences as the signifiers of LLM use, instead of the inconsistent logic and general sloppiness that's the actual problem.

> Programming was a domain that filtered out those people because they found it hard to succeed at it.

I think this is a very rosy view of programmers, not borne out by history. The people leading the vibe coding charge are programmers, rather than an external group.

I know it's popular to divide the world into the technically-literate and the credulous, but in this case the technical camp is also the one going all in.

Fanbase, maybe. Software engineers using these projects? Probably forking and updating themselves.

FWIW, I've opened a half dozen PRs from LLMs and had them approved. I have some prompts I use to make it very difficult to tell they are AI.

However, if it is a big anti-LLM project, I just fork and have agents rebase my changes.

Your employer allows/encourages this? Do you run that stuff in production? Would you mind telling us where you work so we can avoid using their products? It is just not possible to trust the software that emerges from the process you've described.

I'm firmly in the LLM fanbase. Not because I can't type code (I've been doing it for over 17 years, everything from low-level hardware drivers in C to web frontends to robot development at home as a hobby - coding is fun!), but because in my profession it allows me to focus more on the abstraction layer where "it matters".

I'm not saying that I'm no longer dealing with code at all, though. The way I work is interactively with the LLM, and I pretty much tell it exactly what to do and how to do it. Sometimes all the way down to "don't copy the reference like that, grab a deep copy of the object instead". Just like with any other type of programming, the only way to achieve valuable and correct results is by knowing exactly what you want and expressing that exactly and without ambiguity.
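
A minimal Python sketch of the distinction behind that kind of instruction (the dict and its fields are hypothetical): plain assignment copies the reference, so later mutations show through, while copy.deepcopy produces an independent object.

    import copy

    config = {"retries": 3, "hosts": ["10.0.0.1"]}

    alias = config                    # assignment copies the reference
    snapshot = copy.deepcopy(config)  # independent copy, nested lists included

    config["hosts"].append("10.0.0.2")

    print(alias["hosts"])     # ['10.0.0.1', '10.0.0.2']  (sees the mutation)
    print(snapshot["hosts"])  # ['10.0.0.1']  (unaffected)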

But I no longer need to remember most of the syntax for the language I happen to work with at the moment, and can instead spend time thinking about the high-level architecture: making sure each involved component does one thing and one thing well, with its complexities hidden behind clear interfaces.

Engineers who refuse to, or can't, or won't utilize the benefits that LLMs bring will be left behind. It's just the way it is. I'm already seeing it happening.

This mindset is fine (it's mine essentially too).

But it absolutely has to be combined with verification/testing at the same speed as code production.

I generally do have that mindset, but over the past year of Claude Code I do notice that I'm clearly losing my understanding of the internals of projects. I review the LLM-generated code and understand it; no problem reading or following through. But then someone asks me a question, and I'm like… wait, I actually don't know. I remember the instructions I gave and reviewing the code, but I don't actually have a fine-details model of the actual implementation crystallized in my mind. I need to check: was that thing implemented the way I thought it was or not? Wait, it's actually wrong/not matching at all what I thought! It's definitely becoming uncomfortable and makes me reconsider my use of Claude Code pretty significantly.

Same experience. I've been writing code for many decades, but that experience doesn't mean I can remember what I read when reviewing generated code. I write small, focused commits, but I have to take a day off each week to make changes by hand just to mentally keep up with my knowledge of my own codebase, and I still find structures that surprise me. It's not necessarily that the code quality is poor, but it's not what I designed (or thought I had). It's led to a weakening of my confidence when adding to or changing existing architecture.

I've had this issue too, and I feel it was an important lesson—kind of like the first time getting a hangover.

On the other hand, LLMs comment code better than I do, so given a long enough time horizon, their code could be more understandable at a later time than code I've written myself (we've all had the experience of forgetting how things work).

It's not. Invariably, the code is locally fine and globally nonsense.

One-off tasks and parts of the stack that already have lots of disposable code do not need the same scrutiny as everything else. Just as there is a broad continuum of code importance, there is a broad continuum of testing requirements, and this was the case before AI. Keeping this in mind, AIs can also do some verification and testing, too.

> Engineers who refuse to, or can't, or won't utilize the benefits that LLMs bring will be left behind. It's just the way it is. I'm already seeing it happening.

Any examples of how you see some engineers being left behind?

I'm starting to notice how those who don't use AI end up having to hand tasks over to people who can get them done quicker.

It is anecdotal for sure, but a pattern seems to be emerging around me: expectations of velocity increase, and those who don't use AI can't keep up.

Probably in cognitive surrender. I have one such colleague and he is driving me crazy. "Claude said that ..."

Not really - I imagine as with almost everything in life there's a normal distribution, in this case of the quality with which people use AI tools.

Fake it till you make it. Seems like LLMs have caught on to that too.

You can steer an LLM into doing what you want. Unfortunately people don't have the patience or the skill.

People who have skill can do the same without LLMs, maybe slightly slower on average but on a more predictable schedule.

I wouldn’t say slightly slower; LLMs are massively useful for software engineering in the right hands.

For some personal projects I still stick to the basics and write everything by hand though. It’s kinda nice and grounding; and almost feels like a detox.

For any new software engineer, I'm a strong advocate of zero LLM use (except maybe as a Stack Overflow alternative) for your first few months.

The chat UX, with a fake human lying to you and framing things emotionally, really doesn't help. And it is pretty much not possible to get away from it, or at least I haven't yet found how.

I would love to see a model trained to behave way more like a tool instead of auto-completing from Reddit language patterns…