Many of us on HN are beneficiaries of the standing world order and American hegemony.
I see the developments in LLMs not as getting us close to AGI, but more as destabilizing the status quo and potentially handing control of the future to a handful of companies rather than securing it in the hands of people. It is an acceleration of the already incipient decay.
I agree. You wouldn't see incredibly powerful and wealthy people frothing at the mouth to build this technology if that wasn't true, in my opinion.
People who like Curtis Yarvin's ramblings.
No one needs Curtis Yarvin, or any other commentator of any political stripe, to tell them that they'd like more money and power, and that they'd like to get it before someone else locks it in.
We should be so lucky as to only have to worry about one particular commentator's audience.
Are you seeing a moat develop around LLMs, indicating that only a small number of companies will control it? I'm not. It seems that there's nearly no moat at all.
The moat is around capital. For thousands of years most people were slaves or peasants whose cheap fungible labor was exploited.
For a brief period intellectual and skilled work has (had?) been valued and compensated, giving rise to a somewhat wealthy and empowered middle class. I fear those days are numbered and we’re poised to return to feudalism.
What is more likely, that LLMs lead to the flourishing of entrepreneurship and self determination? Or a burgeoning precariat of gig workers barely hanging on? If we’re speaking of extremes, I find the latter far more likely.
> The moat is around capital.
Not really. I can run some pretty good models on my high-end gaming PC. Sure, I can't train them. But I don't need to. All that has to happen is for at least one group to release a frontier model as open source, and the world is good to go; no feudalism needed.
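To make the "pretty good models on a gaming PC" claim concrete, here is a minimal local-inference sketch using llama-cpp-python with a quantized open-weight GGUF model. The model file name, context size, and prompt are illustrative assumptions, not something the commenter specified.

    # Minimal sketch: run a quantized open-weight model locally with llama-cpp-python.
    # Assumes `pip install llama-cpp-python` and a downloaded GGUF file; the path
    # below is a hypothetical placeholder, not a specific recommended model.
    from llama_cpp import Llama

    llm = Llama(
        model_path="./models/open-weights-8b-q4_k_m.gguf",  # hypothetical local file
        n_gpu_layers=-1,  # offload every layer to the consumer GPU if VRAM allows
        n_ctx=4096,       # modest context window to fit gaming-PC memory
    )

    out = llm("Explain in one paragraph why open-weight models weaken an LLM moat:",
              max_tokens=128)
    print(out["choices"][0]["text"])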
> What is more likely, that LLMs lead to the flourishing of entrepreneurship and self determination
I'd say what's more likely is that whatever we are seeing now continues. And the current situation is a massive startup boom running on open-source models that are nearly as good as the private ones, while GPUs are being widely distributed.
I am also not seeing a moat on LLMs.
It seems like the equilibrium point a few years out will be that most people can run good-enough LLMs on local hardware: the models don't seem to be getting much better due to input-data exhaustion, while various forms of optimization increasingly let them run on lesser hardware.
But I still have generalized lurking amorphous concerns about where this all ends up because a number of actors in the space are certainly spending as if they believe a moat will magically materialize or can be constructed.
LLMs as we know them have no real moat, but few people genuinely believe that LLMs are sufficient as a platform for AGI. Whatever it takes to add object permanence and long-term memory assimilation to LLMs may not be so easy to run on your 4090 at home.
> Whatever it takes to add object permanence and long-term memory assimilation to LLMs may not be so easy to run on your 4090 at home.
Today, yes, but extrapolate GPU/NPU/CPU improvement out a decade.
I'm pretty skeptical "the people" are smart enough to control their own destiny anymore. We've deprioritized education so heavily in the US that it may be better to have a ruling class of corporations and elites. At least you know where things stand and how they'll operate.
> it may be better to have a ruling class of corporations and elites.
Given that the outcome of that so far has been to deprioritize education so heavily in the US that one becomes skeptical that the people are smart enough to control their own destiny anymore, while simultaneously shoving the planet towards environmental calamity, I’m not sure doubling down on that strategy is the best bet.
Or we could, you know, prioritize education.
The standing world order was already dead well before AI; it ended back in the 2010s, which is when the US last had an opportunity to maybe resist the change, and we're just watching the inevitable consequences play out. They no longer have the economic weight to maintain control over Asia, even assuming China is overstating their income by 2x. The Ukraine war has been a bloodier path than we needed to travel to make the point, but if they can't coerce Russia there is an open question of whom they can; Russia isn't a particularly impressive power.
With that backdrop, it is hard to see what difference AI is supposed to make to people who are reliant on US hegemony. They should probably already be looking for something more reliable to rely on.
It is not decay. People are just more conscious than previous generations ever were about how the world works. And that leads to confusion and misunderstandings if they are only exposed to herd think.
The chicken doesn't understand it has to lay a certain number of eggs a day to be kept alive on the farm. It hits its metrics because it has been programmed to hit them.
But once it gets access to ChatGPT and develops consciousness of how the farm works, the questions it asks slowly evolve over time.
Initially it's all fear-driven - how do we get a say in how many eggs we need to lay to be kept alive? How do we keep the farm running without relying on the farmer? etc etc
Once the farm animal begins to realize the absurdity of such questions, new questions emerge - how come the crow is not a farm animal? Why is the shark not used as a circus animal? etc etc
And through that process, whose steps cannot be skipped, the farm animal begins to realize certain things about itself which no one, least of all the farmer, has any incentive to encourage.
Are you entertaining the idea that ChatGPT should become Pastoral Care for the masses? Sounds like an easier target than AGI.
(Stoics have already taken issue with the notion that fear is the motive for all human action, and yes, consciousness is a part of their prescription)
Separately,
"Hard work seems to lead to things getting better"
sounds like an unsung (fully human) impulse
https://geohot.github.io/blog/jekyll/update/2025/10/24/gambl...
truly, nobody ever asked such questions until they had access to the world’s most sycophantic dumb answer generating machine
Ideology is a -10 modifier on Intelligence
Are you implying that there are people who don't have ideology or that they're somehow capable of reasoning and acting independently of their ideology?
I'm implying that some people put ideology above everything else, including data, experience, and context. Most people don't, but some do. People of course have biases based on experience, but they make a good-faith attempt to be accurate. Others are completely blinded to reality by ideology. Pretending this doesn't happen just because people have different opinions is dishonest.