My two cents: I've been coding practically my entire life, but a few years back I sustained a pretty significant and lasting injury to my wrists. As such, I have very little tolerance for typing. It's been quite a problem and made full-time work impossible.

With the advent of LLMs, AI-autocomplete, and agent-based development workflows, my ability to deliver reliable, high-quality code is restored and (arguably) better. Personally, I love the "hallucinations" as they help me fine-tune my prompts, base instructions, and reinforce intentionality; e.g. is that >really< the right solution/suggestion to accept? It's like pair programming without a battle of ego.

When analyzing problems, I think you have to look at both upsides and downsides. Folks have done well to debate the many, many downsides of AI and this tends to dominate the conversation. Probably that's a good thing.

But, on the flip side, I personally advocate hard for AI from the point of view of accessibility. I know (more-or-less) exactly what output I'm aiming for and control that obsessively, but it's AI and my voice at the helm instead of my fingertips.

I also think it's incorrect to look at it from a perspective of "does the good outweigh the bad?". Relevant, yes, but utilitarian arguments often lead to counter-intuitive results and end up amplifying the problems they seek to solve.

I'd MUCH rather see a holistic embrace and integration of these tools into our ecosystems. Telling people "no AI!" (even if very well defined on what that means) is toothless against people with little regard for making the world (or just one specific repo) a better place.

> I'd MUCH rather see a holistic embrace and integration of these tools into our ecosystems. Telling people "no AI!" (even if very well defined on what that means) is toothless against people with little regard for making the world (or just one specific repo) a better place.

That doesn't address the controversy because you are a reasonable person assuming that other people using AI are reasonable like you, and know how to use AI correctly.

The rumors we hear have to do with projects inundated with more pull requests than they can review, the pull requests are obviously low quality, and the contributors' motives are selfish. I.e., the PRs are to get credit for their Github profile. In this case, the pull requests aren't opened with the same good faith that you're putting into your work.

In general, a good policy towards AI submissions really has to primarily address the "good faith" issue, and then explain how much tolerance the project has for vibecoding.

>other people are reasonable like you

No AI needed. Spam on the internet is a great example of the number of unreasonable people on the internet. And for this I'll define unreasonable as "committing an action they would not want committed back at them".

AI here is the final nail in the coffin of something many sysadmins have been dealing with for decades: unreasonable actors are a form of asymmetric warfare on the internet, specifically the global internet, because with some of these actors you have zero recourse. AI moved this from moderately drowning in crap to being crushed under an ocean of it.

Going to be interesting to see how human systems deal with this.

Every order of magnitude of difference constitutes a categorical difference.

The ability to create spam instantly, fitted perfectly to any situation, and doing that 24/7, everywhere, is very different from before. Before, spam was annoying but generally different enough to tell apart. It was also (in general) never too much as to make an entire platform useless.

With AI, the entire internet IS spam. No matter what you google or look at, there's a very high chance it's AI spam. The internet is super duper extra dead.

> AI here is the final nail in the coffin

so far*

> Going to be interesting to see how human systems deal with this.

At least a bunch of lawyers already got hit when their court filings cited hallucinated cases. If this trend continues, I'll not be surprised when some end up disbarred.

> Spam on the internet is a great example of the number of unreasonable people on the internet.

AI also generates spam though, so this is a much bigger problem than merely "unreasonable" people alone.

I mean, AI generates spam at the behest of unreasonable people currently, and we can just think of it as a powerful automated extension of other technologies. We could say it's a new problem in quantity but the same old problem in kind.

Now, with that said I don't think we're very far from automated agents causing problems all on their own.

> But, on the flip side, I personally advocate hard for AI from the point of view of accessibility. I know (more-or-less) exactly what output I'm aiming for and control that obsessively, but it's AI and my voice at the helm instead of my fingertips.

This is the technique I've picked up and got the most from over the past few months. I don't give it hard, high-level problems and then review a giant set of changes to figure it out. I give it the technical solution I was already going to implement anyway, and then have it generate the code I otherwise would have written.

It cuts back dramatically on the review fatigue because I already know exactly what I'm expecting to see, so my reviews are primarily focused on the deviations from that.

The only issue to bear in mind is that visual inspection is only about 85% accurate at its limit. I was responsible for incoming inspection at a medical device factory, and visual inspection was the least reliable test for components that couldn't be inspected any other way. We always preferred to use machines (like a big CMM) where possible.

I also use LLM assistance, and I love it because it helps my ADHD brain get stuff done, but I definitely miss stuff that I wouldn’t miss by myself. It’s usually fairly simple mistakes to fix later but I still miss them initially.

I’ve been having luck with LLM reviewers though.

This, and I curate a tree of MD docs per topic to define the expected structure. It is supposed to output code that looks exactly like my code. If not, I manually edit it and perhaps update the docs.

This is how I've found myself to be productive with the tools, or since productivity is hard to measure, at least it's still a fun way to work. I do not need to type everything but I want a very exact outcome nonetheless.

Similar story, albeit not so extreme. I have similar ergonomic issues that crop up from time to time. My programming is not so impacted (spend more time thinking than typing, etc), but things like email, documentation, etc can be brutal (a lot more computer usage vs programming).

My simple solution: I use Whisper to transcribe my speech, and feed the output to an LLM for cleanup (custom prompt). It's fantastic. Way better than stuff like Dragon. Now I get frustrated with transcribing using Google's default mechanism on Android - so inaccurate!

But the ability to take notes, dictate emails, etc using Whisper + LLM is invaluable. I likely would refuse to work for a company that won't let me put IP into an LLM.
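The setup described above could be sketched roughly like this. To be clear, the prompt wording, the function names, and the `base` model size are my own illustrative assumptions, not the commenter's actual configuration:

```python
# Rough sketch of a dictate -> Whisper -> LLM-cleanup pipeline.
# The prompt text and helper names below are illustrative assumptions.

CLEANUP_PROMPT = (
    "The following text was dictated and machine-transcribed. Fix "
    "punctuation, casing, and obvious mis-hearings, but do not change "
    "the meaning:\n\n{transcript}"
)

def build_cleanup_prompt(transcript: str) -> str:
    """Wrap a raw transcript in the custom cleanup instruction."""
    return CLEANUP_PROMPT.format(transcript=transcript)

def transcribe(audio_path: str) -> str:
    """Run local Whisper on an audio file and return the raw text."""
    import whisper  # pip install openai-whisper; imported lazily (heavy)
    model = whisper.load_model("base")  # bigger models trade speed for accuracy
    return model.transcribe(audio_path)["text"]

# The cleaned-up text would come from sending
# build_cleanup_prompt(transcribe("notes.wav")) to whichever LLM you use.
```

The split matters: the transcription step is local and cheap to rerun, while the cleanup prompt is where you encode your personal conventions (punctuation style, identifier spellings, etc.).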

Similarly, I take a lot of notes on paper, and would have to type them up. Tedious and painful. I switched to reading my notes aloud and using the above system to transcribe. Still painful. I recently realized Gemini will do a great job just reading my notes. So now I simply take a photo of my notes and send it to Gemini.

I categorize all my expenses. I have receipts from grocery stores where I highlight items into categories. You can imagine it's painful to enter that into financial software. I'm going to play with getting Gemini to look at the photo of the receipt and categorize and add up the categories for me.
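If the model returns each receipt line tagged with a category, the adding-up step doesn't need AI at all and is easy to verify by hand. A hypothetical sketch, where the item/category/price field names are my guess at what the extraction might return:

```python
# Post-extraction step for the receipt idea: the model labels each line
# item with a category; plain code does the totalling deterministically.
from collections import defaultdict

def totals_by_category(items: list[dict]) -> dict[str, float]:
    """Sum line-item prices per category."""
    sums: dict[str, float] = defaultdict(float)
    for item in items:
        sums[item["category"]] += item["price"]
    return dict(sums)

# Example of what the model's structured output might look like:
extracted = [
    {"item": "milk", "category": "groceries", "price": 3.49},
    {"item": "shampoo", "category": "household", "price": 5.99},
    {"item": "bread", "category": "groceries", "price": 2.50},
]
```

Keeping the arithmetic out of the model also means a hallucinated total can't slip through: the only thing to spot-check is the per-line categorization.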

All of these are cool applications on their own, but when you realize they're also improving your health ... clear win.

> I'm going to play with getting Gemini to look at the photo of the receipt and categorize and add up the categories for me.

FWIW, I have a pet project for a family recipe book. I normalize all recipes to a steps/instructions/ingredients JSON object. A webapp lets me snap photos of my old recipes and AI reliably yields perfectly structured objects back. The only thing I've had to fix is odd punctuation. For production, use is low, so `gemini-2.5-flash` works great and the low rate limits are fine. For development the `gemma-3-27b-it` model has MUCH higher limits and still does surprisingly well.
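A cheap way to make that workflow reliable is to validate whatever the model returns before saving it. The exact field names below are my guess at the steps/ingredients structure, not the author's actual schema:

```python
# Minimal validator for a normalized recipe object returned by the model.
# REQUIRED_KEYS and the field names are illustrative assumptions.
from typing import Any

REQUIRED_KEYS = {"title", "ingredients", "steps"}

def validate_recipe(obj: dict[str, Any]) -> list[str]:
    """Return a list of problems; an empty list means the object is usable."""
    problems = [f"missing key: {k}" for k in REQUIRED_KEYS - obj.keys()]
    if not isinstance(obj.get("ingredients"), list):
        problems.append("ingredients should be a list")
    if not isinstance(obj.get("steps"), list):
        problems.append("steps should be a list")
    return problems

sample = {
    "title": "Grandma's soup",
    "ingredients": [{"item": "carrot", "quantity": "2"}],
    "steps": ["Chop the carrots.", "Simmer for an hour."],
}
```

Rejecting malformed objects and re-prompting is usually cheaper than manually fixing a silently-corrupted recipe later.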

I'd bet you can pull this off and be very happy with the result.

For projects, it's also a licensing issue. You don't own the copyright on AI-generated code (no one does), so it can't be licensed.

This isn't an issue of "nobody can use this" but an "everyone can use this", i.e. projects can use AI generated code just fine and they own the copyright to any modifications they do to it.

Think of it like random noise in an image editor: you do own the random pixels since they're generated by the computer, but you can still use them as part of making your art - you do not lose copyright to your art because you used a random noise filter.

Only if the generated text has no inherited copyright from the source data.

Which it might. And needs to be judged on a case-by-case basis, under current copyright law.

I'm in a very similar situation: I have RSI and smarter-autocomplete style AI is a godsend. Unlike you I haven't found more complex AI (agent mode) particularly useful though for what I do (hard realtime C++ and Rust). So I avoid that. Plus it takes away the fun part of coding for me. (The journey matters more than the destination.)

The accessibility angle is really important here. What we need is a way to stop people who make contributions they don't understand and/or cannot vouch for as the author (the license question is still very murky, and no, what the US Supreme Court said doesn't matter here in the EU). This is difficult, though.

Fwiw, I try to make sure we have an accessibility focused talk every year (if possible) at the Carolina Code Conference. Call for Speakers is open right now if you'd be interested in submitting something on your story.

Accessibility is an angle that rarely comes up in these debates, and it's a strong one.

If you sign off the code and put your expertise and reputation behind it, AI becomes just an advanced autocomplete tool and, as such, should not count in “no AI” rules. It’s ok to use it, if that enables you to work.

This sounds reasonable, but in practice people will simply sign off on anything without having thoroughly reviewed it.

I agree with you that there's a huge distinction between code that a person understands as thoroughly as if they wrote it, and vibecoded stuff that no person actually understands. But actually doing something practical with that distinction is a difficult problem to solve.

Unless the code is explicitly signed off as an AI auto-commit, you cannot really tell whether it was reviewed by a human. So it essentially becomes a task of detecting a specific AI code smell, which is barely noticeable in code reviewed by an experienced engineer. Very subjective, and probably does not make sense at all.

> If you sign off the code and put your expertise and reputation behind it, AI becomes just an advanced autocomplete tool and, as such, should not count in “no AI” rules.

No, it's not that simple. AI-generated code isn't owned by anyone; it can't be copyrighted, so it cannot be licensed.

This matters for open source projects that care about licensing. It should also matter for proprietary code bases, as anyone can copy and distribute "their" AI generated code for any purpose, including to compete with the "owner".

> No, it's not that simple. AI-generated code isn't owned by anyone; it can't be copyrighted, so it cannot be licensed.

There is no way to reliably identify code as AI-generated unless it is explicitly labelled so. Good code produced by AI is no different from good code produced by a software engineer, so copyright is the last thing I would be worried about. Especially given the fact that reviewing all pull requests is substantial curation work on the side of maintainers: even if submitted code is not copyrightable, the final product is.

This is equivalent to claiming that automation has no negative side effects at all.

We do often choose automation when possible (especially in computer realms), but there are endless examples in programming and other fields of not-so-surprising-in-retrospect failures due to how automation affects human behavior.

So it's clearly not true. What we're debating is the amount of harm, not whether there is any.

Putting aside the specifics for a second, I'm sorry to hear about your injury and glad you've found workarounds. I also think high-quality voice transcription might end up being a big thing for my health (there's no way typing as much as I do, in the positions I do, is good).

Much appreciated. What I find is that referencing code in conversation is hard -- e.g. "underscore foo bar" vs `_fooBar`, "this dot Ls" vs `this.els`, etc. happens often. Lower-powered models especially struggle with this, and make some frustrating assumptions. Premium models do way better, and at times are shockingly good. They just aren't remotely economically viable for me.

My solution so far is to use my instructions to call out the fact that my comments are transcribed and full of errors. I also focus more on "plan + apply" flows that guide agents to search out and identify code changes before anything is edited to ensure the relevant context (and any tricky references) are clearly established in the chat context.

It's kinda like learning vim (or emacs, if you prefer). First it was all about learning shortcuts and best practices to make efficient use of the tool. Then it was about creating a good .vimrc file to further reduce the overhead of coding sessions. Then it was about distributing that .vimrc across machines (and I did a LOT of ssh-based work) for consistency. Once that was done, it became unimaginable to code any other way.

It has been even more true here: agent-based workflows are useless without significant investment in creating and maintaining good project documentation, agent instructions, and finding ways to replicate that across repos (more microservice hell! :D) for consistency. There is also some conflict, especially in corporate environments, with where this information needs to live to be properly maintained.

Best of luck!

>Personally, I love the "hallucinations" as they help me fine-tune my prompts, base instructions, and reinforce intentionality

This reads almost like satire of an AI power user. Why would you like it when an LLM makes things up? Because you get to write more prompts? Wouldn't it be better if it just didn't do that?

It's like saying "I love getting stuck in traffic because I get to drive longer!"

Sorry, but that one sentence really stuck out to me.

You've worked with people before, haven't you? Sometimes they make stuff up, or misremember stuff. Sometimes people who do this are brilliant and you end up learning a lot from them.

I appreciate the feedback.

I like it because I have no expectation of perfection-- out of others, myself, and especially not AI. I expect "good enough" and work upwards from there, and with (most) things, I find AI to be better than good enough.

Yeah, if RSI is an issue why would you want to be forced to type more?

> I'd MUCH rather see a holistic embrace and integration of these tools into our ecosystems.

I understand that your use case is different, so AI may help handicapped people. Nothing wrong with that.

The problem is that the term AI encompasses many things, and a lot of AI led to quality decay. There is a reason why Microsoft is now called Microslop. Personally I'd much prefer for AI to go away. It won't go away, of course, but I still would like to see it gone, even if I agree that the use case you described is objectively useful and better for you (and others who are handicapped).

> I also think it's incorrect to look at it from a perspective of "does the good outweigh the bad?". Relevant, yes, but utilitarian arguments often lead to counter-intuitive results and end up amplifying the problems they seek to solve.

That is the same for every technology, though. You always have a trade-off. So I don't think the question is incorrect at all; it applies to AI just as it does to any other technology. I also disagree that utilitarian arguments by their intrinsic nature lead to counter-intuitive results. Which result would be counter-intuitive when you analyse a technology for its pros and cons?

> There is a reason why Microsoft is now called Microslop.

Because young people repeat things they see on social media?

A few years ago I was in a place where I couldn't type on a computer keyboard for more than a few minutes without significant pain, and I fortunately had shifted into a role where I could oversee a bunch of junior engineers mostly via text chat (phone keyboard didn't hurt my hands as much) and occasional video/voice chat.

I'm much better now after tons of rehab work (no surgery, thankfully), but I don't have the stamina to type as much as I used to. I was always a heavy IDE user and a very fast coder, but I've moved platforms too many times and lost my muscle memory. A year ago I found the AI tools to be basically time-wasters, but now I can be as productive as before without incurring significant pain.

This is a bit of a straw man. The harms of AI in OSS are not from people needing accessibility tooling.

I disagree. I've done nothing to argue that the harm isn't real, nor have I downplayed or misrepresented it.

I do agree that at large, the theoretical upsides of accessibility are almost certainly completely overshadowed by obvious downsides of AI. At least, for now anyway. Accessibility is a single instance of the general argument that "of course there are major upsides to using AI", and there's a good chance the future only gets brighter.

My point, essentially, is that I think this is (yet another) area in life where you can't solve the problem by saying "don't do it", and enforcing it is cost-prohibitive. Saying "no AI!" isn't going to stop PR spam. It's not going to stop slop code. What is it going to stop (see edit)? "Bad" people won't care, and "good" people (who use/depend-on AI) will contribute less.

Thus I think we need to focus on developing robust systems around integrating AI. Certainly I'd love to see people adopt responsible disclosure policies as a starting point.

--

[edit] -- To answer some of my own question: there are obvious legal concerns that frequently come up. I have my opinions, but as in many legal matters, especially around IP, the water is murky, opinions are strongly held at both extremes, and all too often having to fight a legal battle *at all* is immediately a loss regardless of outcome.

> I've done nothing to argue that the harm isn't real, nor have I downplayed or misrepresented it.

You're literally saying that the upsides of hallucinogenic gifts are worth the downside of collapsing society. I'd say that that is downplaying and misrepresenting the issue. You even go so far as to say

>Telling people "no AI!" (even if very well defined on what that means) is toothless against people with little regard for making the world (or just one specific repo) a better place.

These aren't balanced arguments taking both sides into consideration. It's a decision that your mindset is the only right one and anyone else is opposing progress.

>You're literally saying that the upsides of hallucinogenic gifts are worth the downside of collapsing society.

No, literally, he didn't.

Yes, I literally quoted it.

> are worth the downside of collapsing society.

At least in the US, society has been well on its way to collapse since before LLMs came out. "Fake news" is a great example of this.

>It's a decision that your mindset is the only right one and anyone else is opposing progress.

So pretty much every religious group that's ever existed for any amount of time. Fundamentalism is totally unproblematic, right?

> At least in the US, society has been well on its way to collapse since before LLMs came out. "Fake news" is a great example of this.

IMO you can blame this on ML and the ability to microtarget[1] constituencies with propaganda that's been optimized, workshopped, focus grouped, etc to death.

Proto-AI got us there, LLMs are an accelerator in the same direction.

[1] https://en.wikipedia.org/wiki/Microtargeting

Sure. I always said AI was a catalyst. It could have made society build up faster and accelerated progress, definitely.

But as modern society stands, it is simply accelerating its low-trust factors and collapsing jobs (even if it can't do them yet), because that's what was already happening. But hey, assets also accelerated up. For now.

>So pretty much every religious group that's ever existed for any amount of time. Fundamentalism is totally unproblematic, right?

Religion is a very interesting factor. I have many thoughts on it, but for now I'll just say that a good 95% of the religiously devout utterly fail at following what their relevant scriptures say to do. We can extrapolate the meaning of that in so many ways from there.

It's absolutely not a straw man, because OP and people like OP will be affected by any policy which limits or bans LLMs. Whether or not the policy writer intended it. So he deserves a voice.

He doesn't think others deserve a voice, so why should I consider his?

The fact that you are engaging in this thread shows me you have considered my opinions, even if you reject them. I think that's great, even in the face of being told I advocate for the collapse of civilization and that I want others to shut up and not be heard.

It is a bit insulting, but I get that these issues are important and people feel like the stakes are sky-high: job loss, misallocation of resources, enshittification, increased social stratification, abrogation of personal responsibility, runaway corporate irresponsibility, amplification of bad actors, and just maybe that `p(doom)` is way higher than AI-optimists are willing to consider. Especially as AI makes advances into warfare, justice, and surveillance.

Even if you think AI is great, it's easy to acknowledge that all it may take is zealotry and the rot within politics to turn it into a disaster. You're absolutely right to identify that there are some eerie similarities to the "guns don't kill people, people kill people" line of thinking.

There IS a lot to grapple with. However, I disagree with these conclusions (so far) and especially that AI is a unique danger to humanity. I also disagree that AI in any form is our salvation and going to elevate humanity to unfathomable heights (or anything close to that).

But, to bring it back to this specific topic, I think OSS projects stand to benefit (increasingly so as improvements continue) from AI and should avoid taking hardline stances against it.

The premise that LLMs are "AI" is false, but they are good at problems like context search and isomorphic plagiarism.

Relying on public and chat-user data, repackaged and sold to other users without compensation, raises a number of issues:

1. Copyright: LLM-generated content can't be assigned copyright (in the USA), and thus may contaminate licensing agreements. It is likely public domain, but may also conflict with the GPL/LGPL when stolen IP bleeds through weak obfuscation. The risk has zero precedent cases so far (the Disney case differs slightly), but is likely a legal liability waiting to surface eventually.

2. Workmanship: All software is terrible, but some of it is useful. People who don't care about black-box obfuscated generated content are also a maintenance and security liability. Seriously, folks should just retire if they can't be arsed to improve readable source tree structure.

3. Repeatability: As models started consuming other LLM content, the behavioral vectors often also changed the content output. Humans know when they don't know something, but an LLM will inject utter random nonsense every time. More importantly, the energy cost to get that error rate lower balloons exponentially.

4. Psychology: People do not think critically when something seems right 80% of the time. LLM accuracy depends mostly on stealing content, but that stops working when there is nothing left to commit theft of service on. The web is now >53% slop and growing. Only human users' chat data is worth stealing now.

5. Manipulation: Bad bots frequently astroturf forums with poisoned discourse, biasing the delusional. Some react emotionally instead of engaging the community in good faith, or shill hard for their cult of choice.

6. Sustainability: FOSS, like all ecosystems, is vulnerable to peer-review exhaustion, as the recent xz CVE fiasco showed. The LLM hidden hostile agent problem is currently impossible to solve, and thus cannot be trusted in hostile environments.

7. Ethics: Every LLM ruined town economic simulations, nuked humanity 94% of the time in every war game, and encouraged the delusional to kill IRL.

While I am all for assistive technologies like better voice recognition, TTS, and individual computer-user interfaces, most will draw a line at slop code and branch to a less chaotic source tree to work on.

I think it is hilarious some LLM proponents immediately assume everyone also has no clue how these models are implemented. =3

"A Day in the Life of an Ensh*ttificator "

https://www.youtube.com/watch?v=T4Upf_B9RLQ

Fantastic point. I do think there was a bit of an overcorrection toward AI hostility because capitalism, and for good reason, but it did almost make it taboo to talk about legitimate use cases that are not related to bad AI use cases like instigating nuclear wars in war game simulations.

I think the ugly unspoken truth whether Mozilla or Debian or someone else, is that there are going to be plausible and valuable use cases and that AI as a paradigm is going to be a hard problem the same way that presiding over, say, a justice system is a hard problem (stay with me). What I mean is it can have a legitimate purpose but be prone to abuse and it's a matter of building in institutional safeguards and winning people's trust while never fully being able to eliminate risk.

It's easy for someone to roll their eyes at the idea that there's utility, but accessibility is a perfect and clear-eyed use case that makes it harder to simply default to reflexive skepticism against any and all AI applications. I actually think it could have huge implications for leveling the playing field in the browser wars for my particular pet issue.

I think generating slop and having others review it is bad even if you are disabled. I say this as a disabled person myself.