Are there maintainers of mature open source projects who can share their AI coding workflows?
The bias in AI coding discussions heavily skews greenfield. But I want to hear more from maintainers. By their nature they're more conservative and have to balance more varied constraints (security, performance, portability, code quality, etc.) against a very specific vision shaped by the history of their project. They think of their work more as gradually and safely evolving something foundational than as always inventing something new.
Many of these issues don’t yet matter to new projects. So it’s hard to really compare the greenfield with a 20 year old codebase.
I mean, I have grabbed random non-greenfield projects and added features to them for my temporary/personal needs with Claude Code. The key thing is setting it up. The biggest win is adopting good programming principles like breaking up god classes. Things that help human devs consume code more easily turn out to help LLMs too.
I have done this sort of thing too. I’m curious about big, mature projects like numpy or the Linux kernel.
It seems the users of those projects are so varied that refactors like the ones you describe would have to be rolled out far more gradually than the usual AI workflow allows.
I mean, you could do it. My concern is with projects you have total control of that have files much larger than your model can hold in its context window. If you break up a legacy codebase so more files have more structure, it can work better. The other alternative is to make "map" files like TS has, or like headers in C that are just full of method signatures and really short descriptions: basically map out your entire codebase in easier-to-digest files to let the model find the functions it's looking for. I usually have Claude give itself a summary of what's where in the codebase in the instructions.md file, so it knows where to go instead of grepping around and wasting tokens.