I find that instructing the AI to use established frameworks yields better code and sets the project up for a better outcome.

I use Claude Code with both Django and React, which it's surprisingly good with. I'd rather use software that's tried and tested. The only time I let it write its own is when I want ultra-minimal CSS.

This. For areas where you can use tried-and-tested libraries (or tools in general), LLMs will generate better code when they use them.

In fact, LLMs will be better than humans at learning new frameworks. It could end up being the opposite of what people expect: frameworks and libraries become more important with LLMs, not less.

> In fact, LLMs will be better than humans at learning new frameworks.

LLMs don't learn? The neural networks are trained just once, before release, and it's a hugely expensive process.

Have you tried using one on your existing code base, which is basically a framework for whatever business problem you're solving? Did it figure it out automagically?

They know react.js and nest.js and next.js and whatever.js because they had humans correct them and billions of lines of public code to train on.

If it's on GitHub, it will eventually cycle into the training data. I have also seen Claude pull down code from GitHub to look at.

How much proprietary business logic is on public github repos?

I'm not talking about "build me this little solo-founder SaaS thing". I'm talking about working on existing codebases running specialized stuff for a functioning company or companies.

Wouldn't there be a chicken and egg problem once humans stop writing new code directly? Who would write the code using this new framework? Are the examples written by the creators of the framework enough to train an AI?

There's tooling out there that is 100% vibe-coded and used by tens of thousands of devs daily. If that codebase found its way into the training data, would it somehow ruin everything? I don't think this is really a problem. The real problem will be identifying good codebases from bad ones; if you point out which code is bad during training, it makes a difference. There's a LOT of writing out there about how to write better code that I'm sure is already part of the training data.

Yeah, I don't know why you'd drop frameworks and libraries just because you're using an LLM. If you AREN'T using them, you're just loading a bunch of solved problems into the LLM's context so it can reinvent the wheel. I really love the LLM because now I don't need to learn the new frameworks myself. LLMs really remove all the bullshit I don't want to think about.

> LLMs will be better than humans at learning new frameworks.

I don't see a basis for that assumption. They're good at things like Django because there is a metric fuckton of existing open-source code out there that they can be trained on. They're already not great at less popular or even fringe frameworks and programming languages. What makes you think they'll be good at a new thing that there are almost no open resources for yet?

LLMs famously aren’t that good at using new frameworks/languages. Sure they can get by with the right context, but most people are pointing them at standard frameworks in common languages to maximize the quality of their output.

I asked Claude to use some Dlang libraries that even I had not heard of, and it built a full-blown proof-of-concept project for me, using obscure libraries nobody really knows. It just looked through the docs and source code. Maybe three years ago that would have been the case.

[deleted]

This is not my experience any longer. With a properly set up feedback loop and the framework's documentation, it does not seem to matter much whether they are working with completely novel stuff or not. Of course, when that isn't available they hallucinate, but who even works that way anymore? Anyone can see that LLMs are just glorified auto-complete machines, so you really have to put a lot of work into the environment they operate in and into quick feedback loops. (Just like with 90% of developers made of flesh...)
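
To make "quick feedback loop" concrete, here's a rough Python sketch of the kind of thing I mean. It assumes a pytest-based project; the file name and flags are just illustrative, not from any particular setup:

    # feedback_loop.py - sketch of a quick feedback loop for an agent.
    # Run the test suite, capture the output, and feed the failures
    # straight back to the model instead of letting it guess.
    import subprocess
    import sys

    def run_tests() -> str:
        """Run pytest and return the combined stdout/stderr."""
        result = subprocess.run(
            [sys.executable, "-m", "pytest", "-x", "--tb=short"],
            capture_output=True,
            text=True,
        )
        return result.stdout + result.stderr

    if __name__ == "__main__":
        # Pipe or paste this output into the model's next prompt.
        print(run_tests())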

Or you could use an off-the-shelf, popular framework in Python and save yourself some of the time spent curating context.
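
For a sense of how much "solved problem" that buys you, here's a made-up sketch inside an already-configured Django app (the Invoice model is invented for illustration, not from any real codebase):

    # models.py - made-up example. The framework already handles field
    # validation, persistence, and migrations, so none of that has to be
    # hand-rolled or loaded into the model's context.
    from django import forms
    from django.db import models

    class Invoice(models.Model):
        customer = models.CharField(max_length=200)
        amount = models.DecimalField(max_digits=10, decimal_places=2)
        paid = models.BooleanField(default=False)
        created = models.DateTimeField(auto_now_add=True)

    class InvoiceForm(forms.ModelForm):
        class Meta:
            model = Invoice
            fields = ["customer", "amount", "paid"]

Everything behind those few lines is exactly the wheel you'd otherwise be asking the LLM to reinvent.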

How will LLMs become better than humans at learning new frameworks when automated/vibe coders never manually write code using those new frameworks?