Gotta love a world in which a tool which has ingested "all the world's libraries" is now trotted out as a solution to replace those libraries.
You know what would happen if all the people who handwrote and maintained those libraries revoked their code from the training datasets and forbade its use by the models?
:clown face emoji:
This LLM-maxxing is always a myopic one-way argument. The LLMs steal logic from the humans who invent it, then people claim those humans are no longer required. Yet, in the end, it's humans all the way down. It's never not.
> You know what would happen if all the people who handwrote and maintained those libraries revoked their code from the training datasets and forbid their use by the models?
MCP servers combined with agentic search have addressed this possibility, just this year largely superseding RAG methods, though all techniques have their place. I don't see much of a future for RAG, though, given its computational cost.
Long story short, training and fine-tuning are no longer necessary for an LLM to understand the latest libraries, so the "permission" to train wouldn't even be applicable to debate.
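To make the claim concrete, here's a minimal sketch of the agentic pattern being described: instead of relying on weights baked in at training time, the model calls a documentation-search tool at inference time and grounds its answer in the fetched text. The `search_docs` function stands in for an MCP server's search tool; all names here are illustrative, not a real MCP API.

```python
# Hypothetical sketch of tool-grounded answering, not a real MCP client.

def search_docs(query: str) -> str:
    # In a real setup this would query a docs MCP server;
    # stubbed here with a tiny in-memory lookup.
    docs = {
        "requests.get": "requests.get(url, params=None, **kwargs) -> Response",
    }
    return docs.get(query, "no result")

def answer_with_tools(question: str) -> str:
    # One step of an agentic loop: the model decides to look up the
    # symbol, then answers from the fetched text rather than from
    # whatever was in its training data.
    signature = search_docs("requests.get")
    return f"Per current docs: {signature}"

print(answer_with_tools("How do I call requests.get?"))
```

The point of the pattern is that the library's current docs are consulted live, so whether the library was ever in a training set stops mattering for this use case.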
It's a fast-moving field; best not to have a strong opinion about anything.
LLMs write superior code. Maybe they learned from humans, but they have seen further while standing on those shoulders.
> LLMs write superior code.
How would they know what superior code is? They're trained on all code. My expectation and experience have been that they write median code in the best-case scenario (small greenfield projects, deeply specified, etc.).
Then you have access to better models than I do (4.6/5.3).
The code is mostly not bad, but most programmers I have worked with write far better code.
The AI maximalists are really out in force in this thread.