The bottlenecks today are:
* understanding the problem
* modelling a solution that is consistent with the software's existing modelling and architecture, and moves them in the right direction
* verifying that the implementation of the solution doesn't introduce accidental complexity
These are the things LLMs can't do well yet, and that's where contributions will be most appreciated. Producing code won't be it; maintainers have their own LLM subscriptions.
This is the part people keep missing. The bottleneck was never typing code. It's understanding the problem deeply enough to know what the right solution looks like. AI made the cheap part cheaper.
I don't know, I've had good experiences getting LLMs to understand and follow architecture and style guidelines. It may depend on how modular your codebase already is, since modularity by itself would focus and minimize any changes.
I still think there is value in external contributors solving problems using LLMs, assuming they do the research and know what they are doing. Getting a well-written and tested solution from an LLM is not as easy as writing a good prompt; it's a much longer, iterative process.
> assuming they do the research and know what they are doing.
This is the assumption that has almost always failed, and it has led a lot of projects to ban AI-generated code altogether.