If AI actually has hit the levels that Sequoia, Anthropic, et al claim it has, then autonomous AI agents should be forking projects and making them so much better that we'd all be using their vastly improved forks.
Why isn't this happening?
I dunno about autonomous, but it is happening at least a bit from human pilots. I've got a fork of a popular DevOps tool that I doubt the maintainers would want to upstream, so I'm not making a PR. I wouldn't have bothered before, but I believe LLMs can help me manage a deluge of rebases onto upstream.
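For anyone unfamiliar with that workflow, a minimal sketch of keeping a fork rebased onto upstream (the repository URL and branch name are placeholders for whatever project you forked; the conflict-resolution step is the part an LLM can help with):

```shell
# One-time setup: point a remote at the original project
# (URL below is a placeholder, not a real repo)
git remote add upstream https://github.com/example/devops-tool.git

# Each sync: fetch upstream and replay your local patches on top
git fetch upstream
git rebase upstream/main   # assumes upstream's default branch is "main"

# If the rebase stops on conflicts, resolve them (or have an agent
# propose resolutions), then continue:
#   git add <resolved-files> && git rebase --continue
```

The appeal of rebase over merge here is that your niche patches stay as a small, readable stack of commits sitting on top of upstream, which is also the easiest shape for an LLM to reason about when conflicts appear.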
same, i run quite a few forked services on my homelab. it's nice to be able to add weird niche features that only i would want. so far, LLMs have been easily able to manage the merge conflicts and issues that can arise.
The agents aren't that good yet on their own, but with human supervision they're already there.
I've forked a couple of npm packages, and have agents implement the changes I want plus keep them in sync with upstream. Without agents I wouldn't have done that because it's too much of a hassle.
couldn't get espanso to work with my ABNT2 keyboard. a few cc sessions later I had a completely new program doing only what I wanted from espanso, and working perfectly with my keyboard. I've also forked cherri and voxd, but it's all vibe coded, so I'm not publishing or open-sourcing any of it for now (maybe in the future, if I don't have more interesting things to build - which is unlikely)
Because those levels are pure PR fiction.
I do this all the time. I just keep them to myself. Nobody wants my AI slop fork even if it fixes the issues of the original.
Do you think you'd ever feel confident enough to submit non-slop patches in the future? I feel like that way, at least the project gains a potential maintainer.
I already do that, but only on projects where I actually wrote the code. I don’t see a future where I would submit something AI fully wrote even if I understood it.