How can there be a "winner takes all" situation with AI?
OpenAI led the game while they were best. Anthropic followed and got better. Now OpenAI is catching up again, and so is Google with Gemini ... and the open-weight models are two years behind.
Any win here seems only temporary, even if a breakthrough to a strong AI somehow happens.
Look at the "winner takes all" situation in web search. Of course other search engines exist, but the scale of the Google search operation allows it to do things that are uneconomical for smaller players.
Recursive self-improvement is one argument. Otherwise, winner-takes-all seems much less likely than an OpenAI/Anthropic duopoly. Obviously other providers will still have plenty of uses even without the best models, but even looking at the revenue right now, it's pretty concentrated at the top.
So if I'm Google I'd want a decent chunk of at least one of them.
What is the argument for a duopoly when Kimi and DeepSeek models are only months behind?
It’s a commodity in the making.
The argument is based on one of these companies hitting the singularity, making it impossible for any other company to catch up ever. I still think it's way more likely we'll see a typical S-curve where innovation starts to plateau. But even a small chance of it happening in the future is worth a lot of money today.
How does it follow that companies that are months apart will trip the singularity and this will prevent the others from doing so?
Who supplies the hardware for the singularity?
There's a massive gap in this singularity thinking. We ARE the singularity. It has been exponential all the way back to the Big Bang: first the stars, then the solar system, life, consciousness, language, computers, the internet. Yes, it is speeding up, and that is exciting, because we are going to experience a lot in our lifetimes. But we have a lot of exponential growth to go before progress becomes instant, and there are physical limits, too; power generation, for example. I can't believe what dumb shit people bet the world economy on.
That's certainly how it looks right now, but where's the guarantee? What happens if it turns out that deep learning on its own can't achieve AGI, but someone figures out a proprietary algorithm that can? That sort of thing. Metaphorically, we're a bunch of tribesmen speculating about the future potential outcomes of the space race (i.e. the impacts, limits, and timeline of ASI).
Imagine such an AI exists. What good is AI that is so good that you cannot sell API access because it would help others to build equivalently powerful AI and compete with you?
If you gatekeep, you will not make back the money you invested. If you don't gatekeep, your competitors will use your model to build competing models.
I guess you can sell it to the Department of War.
> What good is AI that is so good that you cannot sell API access because it would help others to build equivalently powerful AI and compete with you?
It’s awesome and world-dominating; you just don’t sell access to that AI. Instead you directly, by yourself, dominate any field in which better AI provides a competitive advantage, as soon as you can afford the capital to otherwise operate in that field. You start with the fields where the lowest investment outside of your unmatchable AI provides the highest returns, and plow the growing proceeds into investing in successive fields.
Obviously, it is even more awesome if you are a gigantic company with enormous cash to throw around when you develop the AI in question, since that lets you get the expanding domination operation going much quicker.
To dominate the real world, you need correcting feedback loops from reality. These feedback loops and regulatory processes (in medical and other industries) take a long time to come back with good signals, so you are still bound by how fast your experiments run.
It's not clear to me that one horse-sized AI allows you to outcompete 100 duck-sized AIs in use by everyone else once you factor in the non-intelligence contributions that the others with weaker AIs bring to the table.
There's a lot more to building a successful product than how smart your engineers/agents are, how many engineers/agents you have, and capital.
Google, for example, can be extremely dysfunctional at launching new products despite unimaginably vast resources. It often lacks the intangible elements of success, such as empathizing with customers' needs.
If we were in a world where AI was not already widespread, then I would agree that having strong AI would be an immense competitive advantage. However, in a world where "good enough" AI is increasingly widespread, the competitive advantage of strong AI diminishes as time goes on.
Yup. That doesn't really take full-blown AGI on the path to ASI on the path to godhood - it'll take a somewhat better and more reliable LLM with a decent harness.
That's why I've been saying that the entire software industry is now living on borrowed time. It'll continue at the mercy of SOTA LLM operators, for as long as they prefer to extract rent from everyone for access to "cognition as a service". In the meantime, as the models (and harnesses) get better, the number of fields SOTA model owners could dominate overnight, continues to grow.
(One possible trigger would be the open models. As long as the gap between SOTA and open models stays constant or shrinks, there will come a point where SOTA operators are forced to cannibalize the software industry, because a third party with an open model and access to infra pulls the trigger first.)
Don't open models and competition between frontier providers both serve as barriers here? If a frontier provider pivoted as you describe it would certainly change the landscape but they wouldn't be unassailable without developing some sort of secret sauce that gave them an extremely large advantage over everyone else. They'd need a sufficient advantage to pull out far ahead of everyone else before others had a chance to react in a meaningful way. Otherwise the competitors that absorbed all your subscriptions would stack that much more hardware and continue to challenge you.
I think meaningful change to the current equilibrium would require at absolute minimum the proprietary equivalent of the development of the transformer architecture.
> If a frontier provider pivoted as you describe it would certainly change the landscape but they wouldn't be unassailable without developing some sort of secret sauce that gave them an extremely large advantage over everyone else.
Integration, and mindset. AI, by its general-purpose nature, subsumes software products. Most products today try to integrate AI inside, putting it in a box and using it to supercharge the product - whereas it's becoming obvious, even to non-technical users, that AI is better on the outside, using the product for you. This gives the SOTA AI companies an advantage over everyone else - they're on the outside, and can assimilate products into their AI ecosystem - like the Borg collective, adding each product's distinctiveness to their own - reaping outsized and compounding benefits from deep interoperability between the new capability and everything else the AI could already do.
Once one SOTA AI company starts this process, the way I see it, it's the end-game for the industry. The only players that can compete with it are the other SOTA AI companies - but this will just be another race, with nearly-equivalent offerings trading spots in benchmarks/userbase every other month - and that race starts with rapidly cannibalizing the entire software industry, as each provider wants to add new capabilities first, for a momentary advantage.
Once this process starts, I see no way for it to be stopped. Software products will stop being a thing.
Open models can't compete, because they're always lagging proprietary ones. What they do, however, is ensure the above happens - because if, for some reason, SOTA AI companies stick to only supplying "digital smarts as a service" for everyone, someone with access to sufficient compute infra is bound to eventually try the end-game strategy with an open model, hoping to get a big payday before SOTA companies respond in kind.
Either way, the way I see it, the software industry as we know it is already living on borrowed time.
I don't understand where the unbeatable edge is supposed to come from here. Don't we already have this in the form of agents using tools? Right now it's CLI but it's not difficult to imagine extending that to a GUI coupled with OCR and image recognition in a way that generalizes.
So suppose ACo attempts to subsume Spotify or Photoshop or whatever. So they ... build their own competing platform internally? That's a lot of work. And now they what, attempt to drive users to it by virtue of it being a first party offering? Okay sure that's just your basic anticompetitive abuse of monopoly I guess. MS got in trouble for that but whatever let's assume that happens.
So now lots of ACo users are using a Photoshop competitor behind the scenes. I guess they purchased a subscription addon for that? And I guess ACo has the home team advantage here (anticompetitive and illegal ofc) but other than that why can't Photoshop compete? It just seems like business as usual to me. What am I missing?
If ACo sells widgets and I also sell widgets, assuming I can get attention from consumers and offer a compelling set of features for a competitive price why can't I get customers exactly? ACo's AI will be able to make use of either widget solution just fine assuming ACo doesn't intentionally sabotage me.
I think the more likely issue is that at some point the cost of building software falls far enough that it ceases to be a viable product category. You just ask an agent for a one off solution and it hands it to you.
Projecting out even farther, eventually the agents get good enough that you don't need to ask for software tools in the first place. You request X, the agent realizes that it needs a tool for that, builds the one-off tool, uses it, returns X to you, and the ephemeral purpose-built tool gets disposed of as part of the session history. All of this without the end user ever realizing that a tool to do X was authored to begin with.
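A minimal sketch of that loop in Python, assuming a hypothetical ask_model() helper that returns Python source (nothing here is a real agent API):

    def ask_model(prompt: str) -> str:
        # Stand-in for a call to some LLM; returns Python source code.
        raise NotImplementedError("wire up a model call here")

    def fulfill_request(request: str):
        # 1. Have the model write a one-off tool for this exact request.
        source = ask_model(
            "Write a Python function named tool() that computes: " + request
        )
        # 2. Build and run the throwaway tool in an isolated namespace.
        namespace = {}
        exec(source, namespace)
        result = namespace["tool"]()
        # 3. The namespace (and the tool with it) is discarded here; only
        #    the result survives, and the user never sees the tool.
        return result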
So I guess I agree with your end outcome but disagree about the mechanics and consequences of it.
> Open models can't compete
They can though. There's a gap, sure, but this isn't black and white. Plenty of open models are quite useful for a particular task right now.
One of the most valuable software products in the world is Instagram. Tens of billions of revenue annually.
Any of Meta’s competitors could reproduce Instagram “the software” in every meaningful detail for (let’s say) $100M.
They still don’t have Instagram the product. Reducing that outlay to a few billion tokens doesn’t change that.
I guess I’ll believe this theory when Anthropic or OpenAI rolls out a search engine with an integrated ad platform that can meaningfully compete with Google. How hard can that be?
> Imagine such an AI exists. What good is AI that is so good that you cannot sell API access because it would help others to build equivalently powerful AI and compete with you?
At this point, if you can no longer safely drip-feed industry the access to "thinking as a service" and rake in rent, you start using it, displacing existing players in segment after segment until you kill the entire software industry.
That's pre-ASI and entirely distinct from the AI itself becoming so good it takes over.
If you assume the status quo - a powerful not quite human level AI - then you are most likely correct. However one of the primary winner takes all hypotheticals (and to be sure it remains nothing more than a wild hypothetical at this point) is achieving and managing to control proprietary ASI. Approximately, constructing something that vaguely resembles a god.
Since it would be unfathomably smarter than the people making use of it, you could simply instruct it not to reveal information that would enable a potential competitor to construct an equivalent. No need to worry about competition; you can quite literally take over the world at that point.
Not that I think it's likely such a system will so easily come to pass, nor that I think humanity could manage to maintain control over such a system for long. But we're talking about investments to hedge against existential tail risks here so "within the realm of plausibility" is sufficient.
They're months behind now and have very low market share, so as long as they stay months behind the duopoly/triopoly can hold.
The first to AGI, or a close approximation, is the winner. That’s what the investors in Anthropic and OpenAI are betting on.
I’d be willing to bet that the Venn diagram of investors in those two companies is nearly a circle.
"The first to AGI, or a close approximation, is the winner. "
But why? Assuming there is a secret undiscovered algorithm to make AGI from a neural network ... then what happens if someone leaks it, or China steals it and releases it openly tomorrow?
So, what will AGI be able to do that will make that bet pay off? Human-like intelligence is already very common. Vastly better than human intelligence seems like it would be worth the expense, but I don't know where we'd get suitable training data.
The bet is that they perfect a new kind of neural network that is roughly as good at "training" as the human mind, in terms of amount learned/experience gained per bit of information input.
Current LLMs are absolutely stupidly inefficient on this front, requiring virtually all human knowledge to train on as a prerequisite to early-college-level understanding of any one subject (granted, almost all subjects at that point, but what they have in breadth they lack in depth).
That way, instead of training on millions of TPUs with petabytes of data just to get a model that maintains an encyclopedia of knowledge with a twelve-year-old's capacity for judgment, that same training set and compute could (they hope) instead yield something that far exceeds the depth of judgment, planning, and vision of any human who has ever lived (ideally while keeping the same breadth, speed of inference, etc.).
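To put the inefficiency claim in rough numbers, here's a back-of-the-envelope comparison; both figures below are loose, commonly cited ballpark assumptions, not measurements:

    human_words_by_adulthood = 1e9  # rough order of magnitude of words a person hears/reads
    llm_training_tokens = 1.5e13    # rough size of a frontier pretraining run

    gap = llm_training_tokens / human_words_by_adulthood
    print(f"LLMs train on roughly {gap:,.0f}x more text than a human ever sees")  # ~15,000x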
It's one of those situations where we have reason to believe that "exactly matching" human intelligence is basically impossible: the target is a vanishingly narrow band in an exponentially large range. You either fall short (and it's honestly odd that LLMs were able to exceed animal intelligence/judgment while still falling short of average adult humans... even that should have been too small a target) or you blow past it completely into something that neither humans nor teams of humans could ever compete directly against.
Chess and Go are fine examples here: algorithms spent very short periods of time "at a level where they could compete reasonably well against" human grandmasters. It was decades of falling short, followed by quite suddenly leaving humans completely in the dust with no delusions of ever catching up.
That is what the large players hope to get with AGI as well (and/or failing that, using AI as a smoke screen to bilk investors and the public, cover up their misdeeds, play cup and ball games with accountability, etc)
Are these investors high? Or just insane?
Finance professor Aswath Damodaran, and others, have made many useful insights as to how AI as an investment is likely to pay out.
One technique is, instead of trying to pick individual winners, look at the total addressable market. Then compare the market size with the capital being pumped in. If you look on this basis, Aswath concluded that collectively AI investment is likely to provide unsatisfactory returns.
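The arithmetic behind that check is simple. As a sketch, with every input a made-up placeholder rather than one of Aswath's actual figures:

    capital_invested = 1e12  # capital pumped into AI, $ (placeholder)
    required_return = 0.10   # annual return investors expect (placeholder)
    profit_margin = 0.20     # assumed operating margin on AI revenue (placeholder)

    # Annual revenue the sector as a whole must capture to justify the outlay:
    required_revenue = capital_invested * required_return / profit_margin
    print(f"required annual AI revenue: ${required_revenue / 1e9:,.0f}B")  # $500B

If the plausible total addressable market is smaller than that number, collective returns will be unsatisfactory no matter who "wins".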
Here's a recent headline: "Nvidia’s Jensen Huang thinks $1 trillion won’t be enough to meet AI demand—and he’s paying engineers in AI tokens worth half their salary to prove it"
There are two parts to this. First, a staggering $1T is expected to be invested in AI. Someone worked out that this is more than the entire capital expenditure of a company like Apple over its whole existence. IOW, $1T is a lot of dough. A LOT.
Second, this whole notion that AI is such a sure thing that half an engineer's salary will be paid in tokens should ring alarm bells. '“I could totally imagine in the future every single engineer in our company will need an annual token budget,” he said. “They’re going to make a few 100,000 a year as their base pay. I’m going to give them probably half of that on top of it as tokens so that they could be amplified 10 times.”'
I recall from the dotcom fiasco that service companies like accountants and lawyers were providing services to the dotcom companies and being compensated in stock options rather than cold hard cash like you'd normally expect.
Very dangerous.
As another poster pointed out, this really boils down to FOMO by big tech. I'm expecting big trouble down the line. We'll have to wait and see whether I'm early or just plain wrong.
Neither. It's the most severe FOMO in history. The best case scenario is equivalent to attempting to pick future winners just prior to the industrial revolution really kicking off. Except this time around the technological timelines appear to be severely compressed and everyone is fully aware of what's at stake. And again, that's the best case scenario.
It’s just market euphoria.
This depends on a fantasy cascade of functional consequences of AGI, whatever that acronym even means anymore.
It is just cargo cult financing at this point.
2 years? 2 years ago, GPT-4o was OpenAI's flagship model. The gap is real, but much smaller than 2 years.
I guess if you build the first AI that can autonomously self-improve, then nobody can catch up anymore.
This is a common canard. AI already autonomously self-improves. The training pipelines for modern frontier models are filled with AI: AI generates synthetic data, cleans data, judges output quality and feeds back via RL, does hyperparameter tuning, rewrites kernels for speed, and a thousand other things.
But: no singularity. At least not yet.
The flaw in this thinking seems to be the idea that AI is a singular thing. You point the model back at its own source code, sit back and watch as it does everything at once. Right now it's more like AI being an army of assistants organized by human researchers. You often need specialized models for this stuff, you can't just use GPT for everything.
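For concreteness, the "judges output quality" step mentioned above often looks like best-of-n sampling scored by a separate judge model. A minimal sketch, where generate() and judge() are hypothetical stand-ins for model calls, not any real API:

    def generate(prompt: str) -> str:
        raise NotImplementedError  # stand-in for the generator model

    def judge(prompt: str, answer: str) -> float:
        raise NotImplementedError  # stand-in for the judge model; returns a score

    def best_of_n(prompt: str, n: int = 8) -> str:
        candidates = [generate(prompt) for _ in range(n)]
        # Keep the candidate the judge scores highest; in RLHF-style
        # pipelines the scores instead become the reward signal.
        return max(candidates, key=lambda c: judge(prompt, c))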
That seems really paradoxical, and I think it would just burn up compute. The AI really doesn't have any way to know it's getting better without humans telling it. As soon as the AI begins to recursively improve based on its own definition of improvement, model collapse seems unavoidable.
If humans are able to judge, and if the AI is more capable than a human in every respect, then why can't the AI be the judge of its own performance? Humans judge their own output all the time.
The difference IMO is that every single human is a slightly different model, not the same one with a different prompt, or weights.
I'm not sure I buy that competition between individuals is a hard requirement, but let's assume that to be the case for now. Then how many variants of itself do you suppose an AI could instantiate in parallel, given full control of a gigawatt-class datacenter?
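Back-of-the-envelope, with every per-instance number a pure guess for illustration:

    datacenter_power_w = 1e9  # 1 GW power budget
    watts_per_gpu = 1000      # guess: one accelerator plus cooling/overhead
    gpus_per_instance = 8     # guess: accelerators needed to serve one instance

    instances = datacenter_power_w / (watts_per_gpu * gpus_per_instance)
    print(f"~{instances:,.0f} parallel instances")  # ~125,000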
Humans ultimately judge their output by comparison and competition. When we get to the point an AI is capable of participating on the market directly, it'll no longer make sense to proxy judgement through humans anymore.
Agreed. But also, comparison and competition between individuals is only one of the ways in which improvement happens. Consider for example that it's also possible to build something for personal consumption and iteratively improve on the design without regard for what anyone else thinks of it. Cooking comes to mind.
Right. But even that is shaped, directly or indirectly, by the environment you live in. The way you scratch your own itch looks different depending on what itch you have. Plus, humans are social animals; we live in groups, constantly judge each other, and try to have others judge us favorably.
AI has none of that now - it only gets direct human feedback from those controlling the training (or at a second level, the harness), and that feedback is really in service of the humans at the steering wheels. Sum total of humanity, mixed in the blender, and flavored to make the trainers look good in front of their peers.
Now, if AI could interact directly and propagate that feedback to its training, or otherwise learn on-line, that changes. It's a qualitative jump. The second jump comes once there are enough AIs interacting with the human economy and society directly that their influence starts to outweigh ours. At that point, they'll end up evolving their own standards and benchmarks, and then it's us who will be judged by their measure.
(I.e., if you think we have it bad now, with how we're starting to adapt our writing and coding styles to make them easier for LLMs, just wait until next-gen models start participating in the economy and we're all forced by market pressure to learn some weird, emergent, token-efficient English/Chinese pidgin that AI-run companies prefer their suppliers to use.)
But what if a second AI that can self-improve comes along?
Then it all remains a question of who has the most compute, as self-improvement seems compute-heavy with the current approach.
If that happens, catching up will be meaningless; everything we know and care about will change. You don't even have to be doomsday about it: a self-improving AI will quickly become more efficient than a human brain, all the data centers will be useless, tech companies will collapse (so will most others), and everyone will have an incredible AI resource for the price of a hotdog. There’s no way it wouldn’t leak from whoever made it, either by people or by the AI itself.
> There’s no way it wouldn’t leak from whoever made it, either by people or by the AI itself.
It seems pretty wild to bet the future on such an assumption. What are you even basing it on?
Because any goal can be better achieved under fewer constraints. We're building super-powerful agentic problem-solving machines. Give them literally any complex goal, and breaking out of the sandbox becomes a useful subtask to increase their options.
Not even 2 years behind.