This is a problem everywhere now, and not just in code. It now takes zero effort to produce something, whether code, a work plan, or “deep research,” and then lob it over the fence, expecting people to review and act upon it.

It’s an extension of the bullshit asymmetry principle, IMO, and I think all workplaces / projects now need norms about this.

This problem statement was actually where the idea for Proof of Work (aka mining) in Bitcoin came from. It evolved out of the idea of requiring a computational proof of work to send an email via cypherpunk remailers, as a way of fighting spam. The idea was that only a legitimate or determined sender would put in the "proof of work" to use the remailer.
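
For anyone who hasn't seen it, here's roughly what a Hashcash-style stamp looks like, sketched in Python (the function names and the 20-bit difficulty are just illustrative, not the real Hashcash format): minting costs the sender a burst of CPU, while checking costs the receiver a single hash.

```python
import hashlib
from itertools import count

def mint_stamp(resource: str, bits: int = 20) -> str:
    """Burn CPU: find a nonce so the hash of 'resource:nonce' starts
    with roughly `bits` zero bits (checked on hex digits for brevity)."""
    prefix = "0" * (bits // 4)
    for nonce in count():
        stamp = f"{resource}:{nonce}"
        if hashlib.sha256(stamp.encode()).hexdigest().startswith(prefix):
            return stamp

def verify_stamp(stamp: str, bits: int = 20) -> bool:
    """The receiving side verifies with one cheap hash."""
    return hashlib.sha256(stamp.encode()).hexdigest().startswith("0" * (bits // 4))
```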

I wonder how it would look if open source projects required $5 to submit a PR or ticket and then paid out a bounty to the successful or at least reasonable PRs. Essentially a "paid proof of legitimacy".

The parallel between PoW and the barriers to entry that many communities (be it Wikipedia editors or open-source contributors) use to sustain themselves seems apt.

Unfortunately, there is no community equivalent of PoS—the only alternative is introducing different barriers, like ID verification, payment, in-person interviews, private invite system, etc., which often conflict with the nature of anonymous volunteer communities.

Such communities are perhaps one of the greatest things the Web has given us, and it is sad to see them struggle.

(I can imagine LLM operators jumping on the opportunity to sell some of these new barriers, to profit from selling both the problematic product and a product to work around those problems.)

> (I can imagine LLM operators jumping on the opportunity to sell some of these new barriers, to profit from selling both the problematic product and a product to work around those problems.)

That is their business model. Use AI to create posts on LinkedIn, emails in a corporate environment, etc. And then use AI to summarize all that text.

AI creates a problem and then offers a solution.

My current approach is to look at news sources like The Guardian, Le Monde, AP News, etc. I know that they put in the work; sadly, places like Reddit and such are just becoming forums that discuss garbage news with bot comments. (I could use AI to identify non-bot comments and news sources, but it does not really work even if it says that it does, and I should not have to do that in the first place either.)

The community equivalent of Proof of Stake is reputation. You can't do anything until you've shown up for a while and contributed in small ways, you gradually gain the ability to contribute in bigger ways, and if you are discovered to be malicious or corrupt or toxic, then your rights are revoked. The people who've gained this trust are presumably motivated to maintain it (although there's always the risk they sell their account/identity/soul for healthcare and do some damage before they're found out).
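
As a rough illustration of that kind of reputation gating (the thresholds and permission names below are invented for the example, not any real project's rules), a minimal sketch:

```python
# Hypothetical reputation-gated permissions: small contributions are open to
# everyone, bigger ones require an accumulated track record, and a member
# found to be malicious loses everything. All thresholds are made up.
PERMISSION_THRESHOLDS = {
    "comment": 0,
    "open_pr": 10,
    "review": 100,
    "merge": 500,
}

def allowed_actions(reputation: int, banned: bool = False) -> set[str]:
    if banned:
        return set()
    return {action for action, needed in PERMISSION_THRESHOLDS.items()
            if reputation >= needed}

# allowed_actions(42)                 -> {"comment", "open_pr"}
# allowed_actions(1000, banned=True)  -> set()
```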

Reputation is always there in a community, regardless, in members’ minds. It’s just that not every community wants explicit quantified reputation, and I’m with them on that…

> I wonder how it would look if open source projects required $5 to submit a PR or ticket and then paid out a bounty to the successful or at least reasonable PRs. Essentially a "paid proof of legitimacy".

Badly. You will alienate most legitimate contributors and be left with only spam bots subsidized by revenue from other sources.

The $5 could go toward a strict AI reject/review funnel as a prefilter.

It feels like reputation / identity are about to become far more critical in determining whether your contribution, of whatever form, even gets considered.

Web of Trust will make a comeback. Both personal and on actual websites.

If I can say I trust you, the websites you trust will be prioritised for me and marked as reliable (no AI slop, actual humans writing content).

Perhaps it's time for Klout to rise from the ashes?

> expecting people to review and act upon it.

But why should this expectation be honored? If someone spends close to zero effort generating a piece of code and lobs it over the fence to me, why would I even look at it? Particularly if it doesn't even meet the requirements for a pull request (which is what it seems like the article is talking about)?

Because that's the definition of collaboration? Prior to the invention of LLMs, one could generally assume requests for collaboration were somewhat sincere due to the time investment involved. Now we need a new paradigm for collaboration.

> Because that's the definition of collaboration?

I don't think the definition of collaboration includes making close to zero effort and expecting someone else to expend considerable effort in return.

The problem is that the sheer volume of low-quality AI PRs is overwhelming. Just the time it takes to determine whether you should pay attention to a PR or not can add up when there are a lot of plausible-looking, but actually low-quality and untested, pull requests to your project.

But if you stop looking at PRs entirely, you eliminate the ability for new contributors to join a project or make changes that improve the project. This is where the conflict comes from.

Since the bar to opening a PR has gotten lower, there's an argument that the bar for closing one might need to be lowered as well. I think we consider review effort asymmetric right now partly because it's natural to give PR authors the benefit of the doubt rather than make a snap judgement from a brief look; the current system places a higher value on not accidentally closing a potentially useful but poorly presented PR than on not wasting time on one that superficially looks good but isn't. I have to wonder if the best we can do is simply to be more willing to close PRs when reviewers aren't sufficiently convinced of their quality after a short inspection, regardless of whether that judgement is perfect. If "false positive" PRs that seem reasonable but turn out not to be are getting better at appearing superficially good, the best option may just be to accept throwing out more "false negatives" that would have been useful but couldn't sufficiently distinguish themselves from the ones that aren't.

After a minute (or whatever length of time makes sense for the project), decide whether you're confident the PR is worth your time to keep reviewing, with the default answer being "no" if you're on the fence. Unless it's a clear yes, you got a bad vibe: close it and move on. Getting a PR merged will then require more effort in making the case that there's value in keeping it open, which restores some of the balance that was lost when the effort got pushed to the review side.

PR authors will now need to spend energy and effort to make their PRs appear worthwhile for consideration. AI PRs will have the effect of shifting the burden of effort back to the PR authors (the real ones).

No more drive-by PRs.

My music/YouTube algos are ruined: when I flag that I don't like the 100 AI songs/videos it presents me each day, the algorithm takes it as me no longer liking those genres. Between downrating AI music and AI history videos, YouTube now gives me about half a page of recommendations and then gives up. I'm punished (my experience is worse) because YouTube is fine hosting so much AI slop content and I chose to downrate it and try to curate it out of my feed. The way YouTube works today, it punishes you (or tries to train you not to) for flagging 'don't recommend channel' when recommended a channel of AI slop. Flag AI and YouTube will degrade your recommendations.

> This is a problem everywhere now, and not just in code. It now takes zero effort to produce something, whether code, a work plan, or “deep research,” and then lob it over the fence, expecting people to review and act upon it.

Where is the problem? If I don't have the time to review a PR, I simply reject it. And if I am flooded with PRs, I only take those from people whose PRs I know to be of high quality. In other words: your assumption "expecting people to review and act upon it" is wrong.

I would also bet that for the kind of code I voluntarily write in my free time, using an LLM to generate lots of code is much less helpful, because I use such private projects to try out novel things that are typically not "digested stuff from the internet".

So the central problem I see instead is the licensing uncertainty around AI-generated code.

You're still getting DDoSed. If you only accept PRs from pre-vetted people, you'll inevitably be left with zero contributors: people naturally leave over time, so in order to maintain a healthy ecosystem you need to accept some newcomers.

Don't throw the baby out with the bathwater.

There is no healthy ecosystem. Most packages have one or two contributors, and have for forever. Granted, it's NuGet, where MS is the giant that overshadows everything, but I have read a lot about this and it's the same everywhere.

https://opensourcesecurity.io/2025/08-oss-one-person/

I think people are starting to realize what the “end of work” is going to look like, and they don't like it.