> In fact, LLMs will be better than humans in learning new frameworks.

LLMs don't learn, though: the neural networks are trained just once, before release, and it's an extremely expensive process.

Have you tried using one on your existing codebase, which is basically a framework for whatever business problem you're solving? Did it figure it out automagically?

They know react.js and nest.js and next.js and whatever.js because they had humans correct them and billions of lines of public code to train on.

If it's on GitHub, it will eventually cycle into the training data. I've also seen Claude pull down code from GitHub to look at.

How much proprietary business logic is in public GitHub repos?

I'm not talking about "build me this little solo-founder SaaS thing". I'm talking about working on existing codebases running specialized stuff for a functioning company or companies.

Wouldn't there be a chicken-and-egg problem once humans stop writing new code directly? Who would write the code using a new framework? Are the examples written by the framework's creators enough to train an AI?

There's tooling out there that is 100% vibe coded and used by tens of thousands of devs daily. If that codebase found its way into training data, would it somehow ruin everything? I don't think this is really a problem. The real problem will be that people need to distinguish good codebases from bad ones; pointing out which code is bad during training makes a difference. There's a LOT of writing out there about how to write better code that I'm sure is already part of the training data.