Even as the field evolves, the phone-home telemetry of closed models creates a centralized intelligence monopoly. If open source atrophies, we lose the public square of architectural and design reasoning, the decision graph that is often just as important as the code itself. The labs won't just pick up new patterns; they will define them, effectively becoming the high priests of a new closed-loop ecosystem.

However, the risk isn't just a loss of "truth," but model collapse. Without the divergent, creative, and often weird contributions of open-source humans, AI risks stagnating into a linear combination of its own previous outputs. In the long run, killing the commons doesn't just make the labs powerful. It might make the technology itself hit a ceiling because it's no longer being fed novel human problem-solving at scale.
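
The stagnation has a simple mechanical core. Here's a toy sketch in Python (an illustration only, not a claim about any real training pipeline): treat each model generation as a bootstrap resample of the previous generation's output, a crude stand-in for "fit a model to the data, then emit from it." Distinct contributions can vanish but never reappear, so diversity only ratchets downward:

```python
import numpy as np

rng = np.random.default_rng(0)

# Generation 0: a corpus of 1000 distinct "ideas" contributed by humans.
corpus = np.arange(1000)

for gen in range(1, 21):
    # Each new generation trains only on the previous generation's output.
    # A bootstrap resample is a crude stand-in for "fit, then emit":
    # anything not sampled this round is gone for good.
    corpus = rng.choice(corpus, size=corpus.size, replace=True)
    print(f"gen {gen:2d}: {np.unique(corpus).size} distinct ideas survive")
```

The first resample already drops about a third of the distinct values (1 − 1/e of them survive), and the attrition compounds from there. The rare, weird contributions go first, and nothing inside the loop can regenerate them; only fresh outside input does.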

Humans will likely continue to drive consensus-building around standards. The governance and reliability benefits of open source should grow in value in an AI-codes-it-first world.

> It might make the technology itself hit a ceiling because it's no longer being fed novel human problem-solving at scale.

My read of the recent discussion is that people assume the work of a far smaller number of elites will define the patterns for the future. For instance, an implementation of low-level networking code can be a recombination of the patterns in ZeroMQ. The underlying assumption is that most people don't know how to write high-performance concurrent code anyway, so why not just have them command the AI instead.

> The underlying assumption is that most people don't know how to write high-performance concurrent code anyway, so why not just have them command the AI instead.

The reflexive data economics of LLM inputs means that when you shrink the future volume of those inputs down to the few experts who "know how to write X anyway," the labs have just lost one of their most important feeds. All those non-experts who voted with their judgement, leaving a wake of effort behind them as they used the expert-written code, were grist for the mill that grinds out LLM weights.

I find it is usually the non-experts who run into the sharp operational edges the experts didn't think of. When you throw the non-experts out of the marketplace of ideas, you're often left with hazardous tooling that would just as soon cut your hand off as help you. It would be a hoot if the LLMs and the experts decided to do all their output and training in Common Lisp, though.

If, handed just Babbage's Difference Engine, or the PDP-11 Unix V7 source code and nothing else, LLMs could speed-run their way to re-deriving the analogs of Zig, ffmpeg, YouTube, and themselves, then I'll grant that "just let them cook with the experts" is a valid strategy. The information imparted by the activity around source code is deeply recursive, and absent that activity I'm not sure how the labs escape the local minimum they're digging themselves into by materially shrinking it. If my hypothesis is correct, the LLM labs are strip-mining, at industrial scale, the very topsoil their products rely upon, and it's a cheap single-turn game that becomes enormously more expensive in later iterations, once they have to manufacture synthetic topsoil.

> My read of the recent discussion is that people assume the work of a far smaller number of elites will define the patterns for the future.

Even if we assume that's true, what will prevent the skillset from atrophying among the elites, given such a small pool of practitioners?