The current LLMs are not constantly learning. They're static models that require megatons of coal to retrain.
Now if the LLMs could modify their own nets, and improve themselves, then that would be immensely valuable for the world.
But as of now, it's a billionaire's wet dream: a way to threaten all workers and replace labor.
How would you have done it differently? It’s clear that even if this builds up more billionaire wealth, it still benefits everyone. Just like any other technology?
So would you rather stop billionaires from doing it?
> It’s clear that even if this builds up more billionaire wealth, it still benefits everyone.
That is by no means clear. I have yet to see any benefit from AI. I have no doubt in my mind that I'm not alone. So how is "everyone" benefiting from this trend?
Is this actually true? Anyone care to comment on the claim (from some quarters) that offering free access to ChatGPT allows OpenAI to 'collect a gold mine of real-world interaction data to improve the underlying language model. Every conversation users have with ChatGPT – every question asked and every task requested – provides valuable training data for refining' the model.
If false, I'm thinking - there's me, thinking I'm doing my bit to help . . .
Corporate LLMs also are going to be the absolute biggest rug-pull.
The billionaire companies responsible for this claim these models can be used for intellectual and artistic labor. In reality, they're the biggest piracy and plagiarism engines ever built.
The next phase comes once people start losing the skills the LLMs were trained on. And that's to ratchet up prices to capture what would otherwise have been paid to humans.
It will be the biggest wealth transfer we will have ever seen.
This whole socio-economic equation would be different if we all actually gained. A rising tide lifts all boats, but sinking ships get gobbled up faster and faster.
Think bigger, think of the entire system outside of just the single LLM: the interplay of capital, human engineering, and continual improvement of LLMs. First-gen LLMs used 100% human coding. The previous gen was ~50% human coding. The current gen is ~10% human coding (practically all OpenAI/Anthropic engineers admit they're relying heavily on Claude Code / Codex for code). What happens when we're at 1% human coding, then 0%? The recursive self-improvement is happening; it's just indirect for now.
And if everybody could thrive, then I say go for it.
But that's nowhere near the system we have. Instead, this is serving to take all intellectual and artistic labor and lower everyone into the "ditch digger and garbage man" class, aka the low-wage class.
LLMs were never about building upwards. It's about corporate plundering from the folks who get paid well: the developers, the artists, the engineers, you name it. And it's a way to devalue everyone by pretending they can be replaced with the biggest plagiarism machine ever created.
https://www.reddit.com/r/economicCollapse/comments/1hspiym/t...
is correct in its take. Why else would we see a circular economy of shoving AI everywhere? Because it serves to eliminate wages paid to humans.
This stratification, cementing social classes nearly permanently, is a path toward the decline of humanity itself. But hey, shareholder value was at an all-time high for a while.
> if the LLMs could modify their own nets ... then that would be immensely valuable for the world.
Not sure :)
I expect different things; I don't think Wall Street allows good things to happen to ordinary people.