I don't think it's the complete fanbase. However, there are lots of people in the world who live their whole life by vibing. It's a viable way to live and sometimes it's the only way to live. But they have a very loose relationship with truth and reason. Programming was a domain that filtered out those people because they found it hard to succeed at it. LLMs have changed that and it's a huge problem. It's hard to know if LLMs will end up being a net win for the industry. They may speed up the good programmers a little, but those people were able to program anyway without LLMs. They will speed up the bad programmers a lot and that's where the balance sheet goes into the red.

"They may speed up the good programmers a little, but those people were able to program anyway without LLMs."

I don't think this is realistic. I'm a good programmer, and it speeds up my work a lot, from "make sense of this 10-repo project I haven't worked on recently" to "for this next step I need a vpn multiplexer written in a language I don't use" to, yeah, "this 10k line patch lets me see parts of the design space we never could have explored before." I think it's all about understanding the blast radius. Sometimes a lot of code is helpful; sometimes it's more like a lot of help proving a fact about one line of code.

Like Simon says, if I'm driving by someone else's project, I don't send the generated pull request, I just file the bug report / repro that would generate it.

> to "for this next step I need a vpn multiplexer written in a language I don't use"

but that acceleration is exactly because you're not good at that language

It's great when I know how the code should look. Sometimes I just can't bring myself to write yet another http handler.

> However, there are lots of people in the world who live their whole life by vibing

Why are they often so desperate to lie and non-consensually harass others with their vibing rather than be honest about it? Why do they think they are "helping" with hallucinated rubbish that can't even build?

I use LLMs. It is not difficult to: ethically disclose your use, double check all of your work, ensure things compile without errors, not lie to others, not ask it to generate ten paragraphs of rubbish when the answer is one sentence, and respect the project's guidelines. But for so many people this seems like an impossible task.

> Why do they think they are "helping" with hallucinated rubbish that can't even build?

Because they can't tell the difference between what the machine is outputting and what people have built. All they see is the superficial resemblance (long lines of incomprehensible code) and the reward that the people writing the code have earned, and they want that reward too.

the target audience of the cyber typer terminal [0]

[0] https://hackertyper.net/

> Why do they think they are "helping"

It's not about helping. It's about the feeling of clout. There are still plenty of people who look at GitHub profile activity to judge job candidates, etc. What gets measured gets repeated.

I believe that most of the ills of social media would disappear, if we eliminated the "like" and "upvotes" buttons and the view counts. Most open source garbage pull requests may likewise go away if contributions were somehow anonymous.

"Main character energy". What they're really doing is protecting their view of themselves as smart, and they're making a contribution for the sake of trying to perform being an OSS dev rather than out of need or altruism.

AI is absolutely terrible for people like that, as it's the perfect enabler.

I think a lot of people who haven't given it more thought might see it as an arbitrary rule or even some kind of gatekeeping or discrimination. They haven't seen why people would want to not deal with the output.

This might not be helped by the fact that there are a lot of seemingly psychotic commenters attacking anything which might have touched an LLM or any generative model at some point. Their slur and expletive filled outbursts make every critical response look bad by vague association.

Having sensible explanations like those in TFA for the rules and criticism clearly visible should help. But looking at other similar patterns, I'm not optimistic. And education isn't likely to happen since we're way past any Eternal September.

You're asking why oil doesn't act like water. It's not really an impossible task, it's just not one they agree with.

I wonder how many are account farming.

LLMs are in this case enabling bad behavior, but open source software has always been vulnerable to this. Similarly, people who use LLMs to do this kind of thing are the kind of people who would have done it without LLMs but for the large effort it would have taken. We're just learning now how large that group is.

This is a good thing, it's an opportunity to make open source development processes robust to this kind of sabotage.

> LLMs are in this case enabling bad behavior

Yeah that seems to be their primary use case, if I'm honest. It's possible to use them ethically and responsibly, much in the same way it's possible to write one's own code, and more broadly, do one's own work. Most people however, especially in our current cultural moment and with the perverse incentives our systems have created, are not incentivized to be ethical or responsible: they are incentivized to produce the most code (or most writing, most emails, whatever), and get the widest exposure and attention, for the least effort.

Hence my position from the start: if you can't be bothered to create it, I'm not interested in consuming it.

It's the same as cheating in a game. You are given an """advantage""", so lying about it seems like the best option

Before LLMs we could already see a growing abundance of half-baked engineers in it only for the good pay, willing to work double time to pull things off.

Management, unsurprisingly, deemed those precious: they could be emailed at any time and would work weekends to fix the very problems their kind had caused. Yes sir.

They excel at communication, having perfected the art.

Now LLMs are there to accelerate the trend.

> It's hard to know if LLMs will end up being a net win for the industry. They may speed up the good programmers a little, but those people were able to program anyway without LLMs. They will speed up the bad programmers a lot and that's where the balance sheet goes into the red.

If you will forgive an appeal to authority:

The hard thing about building software is deciding what one wants to say, not saying it. No facilitation of expression can give more than marginal gains.

- Fred Brooks, 1986

> It's hard to know if LLMs will end up being a net win for the industry.

True. Regardless of that, one thing is for sure: with LLMs we are taking on technical debt like never before.

"Claude, don't create any technical debt please"

For at least the last three decades, programming has been a field that rewarded utter mediocrity with (relative to other fields) massive remuneration. It has been filled with opportunists for as long as I can remember.

I think it's worth noting that a more impactful, and maybe even bigger, proportion of those opportunists is in management.

Regarding quality overall, I agree, it's truly a cursed field. It was bad before; and with LLMs, going against that tide seems more difficult than ever.

You are talking about bad programmers who were at least able to fool their managers for several years. The people the OP is talking about couldn't even do that, and most likely would have dropped out in the first week of trying to program full time, since they just don't have the aptitude and patience to get unblocked after their first compilation error. Now they can go very far with an LLM.

This is an excellent point. LLMs might merely be exposing and amplifying behaviors that were always there. This can be an opportunity, in that shining a light on it may allow us to cleanse ourselves of it. It's fundamentally about integrity, and sadly it's becoming clearer how few possess it (if it wasn't clear already!). But maybe we'll get better at measuring integrity, and make hiring and collaboration decisions based on it.

> there are lots of people in the world who live their whole life by vibing. It's a viable way to live and sometimes it's the only way to live. But they have a very loose relationship with truth and reason

This response was 1000% crafted with input from an LLM, or the user spends too much time reading output from LLMs.

I have never used an LLM to write. Writing forces me to think (and I edited the comment a couple of times when writing it which helped me clear up my thinking). "It's a viable way to live and sometimes it's the only way to live" is a personal realization that has taken me some time to understand. You can go back through my comment history to the time before LLMs to check if my style was different then.

It says a lot that most readers can't distinguish good writing from something an LLM spat out.

Ray Kroc's genius was to make people forget that you get what you pay for.

False equivalence. If you had the humility to run your own writing through an LLM first, it would have caught that. Just saying.

Not picking on you in particular, but most of the anti-AI crowd can’t present their case compellingly and have an utter lack of humility.

If you run your writing through an LLM, it can poke holes in your argument, organize your ideas better, or point out that your tone is hostile/dismissive. It doesn’t need to be a replacement for writing or thinking, especially if you’re learning along the way.

So, in that way, the LLM will be your mentor; it will shape your way of thinking according to the algorithms and datasets its corporate creators stuffed into it.

Do you really want that?

There is also a second face to this: people are lazy. They won't develop their own skills, but rather will off-load tasks to LLMs, so their communicative abilities will fade away.

That looks like a full-blown dystopia to me.

> the LLM will be your mentor; it will shape your way of thinking according to the algorithms and datasets its corporate creators stuffed into it

How is this mutually exclusive with teaching better than most humans? Part of these "corporate" datasets include deep knowledge of the world's best literature and philosophy, for instance. Why can't it be both?

> Do You really want it?

If I'm in a hurry, don't know where to start, or don't have money for someone to teach me—sure.

> There is also a second face to this: people are lazy. They won't develop their own skills, but rather will off-load tasks to LLMs, so their communicative abilities will fade away.

This is a recapitulation of the Luddite argument during the Industrial Revolution. And it's valid, but it has consequences for all technological change, not just this one. There was a world before Google, the Web, the Internet, personal computing, and computers. The same argument applies across the board, and the pre-AI / post-AI cutoff looks arbitrary.

> teaching better than most humans

Ah, so now we get to the "ed tech" question. What is teaching? Is there a human element to it, and if so, what is it? Or is it something completely inhuman? Or do we need to clarify what meaning of "teaching" we're talking about before we have a discussion?

All of which are parts of the writing and thinking skillset, no?

Right. It can enhance that skillset. Are you suggesting it can’t?

This wouldn’t be a plausible position.

Rather, I'm saying that avoiding delegating these tasks to an LLM helps you practice that skill.

That said, I think it depends on how you use it. You can learn from the explanations, and you'd better avoid the "rewrite this for me and do nothing else" kind of approach.

I don't get that impression at all. LLMs would have avoided the stylistic repetition of "live". Asking an LLM to reformulate the sentences you quoted yields this slop:

> There are a lot of people who go through life by vibing. And honestly: that’s not automatically “bad.” Sometimes it’s even the only workable way to get through things. The issue is that “vibe-first” people tend to have a pretty loose relationship with truth, rigor, and being pinned down by specifics. They’ll confidently move forward on what sounds right instead of what they can verify.

I'll finish this post with a sentence containing an em-dash -- just to confuse people -- and by remarking on how sad I find it that people latch onto dashes and complete sentences as the signifiers of LLM use, instead of the inconsistent logic and general sloppiness that's the actual problem.


> Programming was a domain that filtered out those people because they found it hard to succeed at it.

I think this is a very rosy view of programmers, not borne out by history. The people leading the vibe coding charge are programmers, rather than an external group.

I know it's popular to divide the world into the technically-literate and the credulous, but in this case the technical camp is also the one going all in.