Will open source or local LLMs kill the big AI providers eventually? If so, when? I can see maybe basic chat, but I'm not sure about coding and images yet.

Not necessarily kill, but it will slowly push them off the critical path. Local agents can delegate to remote sub-agents as needed, but should default to local processing for cost and latency reasons.

I think the notion of a one-size-fits-all model, where you just get the biggest/fastest/best one as if buying a sports car, is overkill; you use bigger models only when needed, because they use a lot of resources and cost you a lot. A lot of AI work isn't solving important math or algorithm problems, or leetcode exercises. Most AI work is mundane plumbing: summarizing, a bit of light scripting/programming, tool calling, etc. With skills and guard rails in place, you actually want agents to follow them rather than get too creative. And you want them to work relatively quickly and not overthink things; latency is important. You can even use guard rails to decide when to escalate to bigger models and when not to.
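The escalation idea above can be sketched as a tiny router: default to a small local model, escalate to a big remote one only when simple guard rails say the task warrants it. The model names, the keyword list, and the length cutoff here are all made-up placeholders, not anything from a real product:

```python
# Hypothetical guard-rail router: stay local by default, escalate only
# when a cheap heuristic flags the task as long or hard-looking.

LOCAL_MODEL = "local-8b"          # cheap, low latency (placeholder name)
REMOTE_MODEL = "remote-frontier"  # expensive, slower (placeholder name)

ESCALATION_KEYWORDS = {"prove", "optimize", "refactor", "algorithm"}

def pick_model(task: str, max_local_words: int = 200) -> str:
    """Route a task to a model based on simple guard rails."""
    words = task.lower().split()
    if len(words) > max_local_words:
        return REMOTE_MODEL   # long context: escalate
    if ESCALATION_KEYWORDS & set(words):
        return REMOTE_MODEL   # hard-looking task: escalate
    return LOCAL_MODEL        # default: stay local

print(pick_model("summarize this meeting transcript"))  # local-8b
print(pick_model("prove this algorithm terminates"))    # remote-frontier
```

The point is that the routing logic itself is mundane and fast, so the latency cost of deciding is negligible next to the latency saved by not calling the big model.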

Centralized inference is more economically efficient⁰, and should be cheaper for most users once competition squeezes the air out of token prices. Local inference remains very valid for anyone who wants to maintain their privacy, ofc.

0: Because the only way to get cache locality out of an LLM is to batch invocations. A centralized server handling thousands of invocations at the same time needs only a tiny fraction of the total memory throughput that running all of those invocations locally on different machines would.
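A back-of-envelope sketch of the footnote's point, with assumed illustrative numbers (70 GB of weights, 30 tokens/s per user, 1000 users): at batch size 1 every machine re-reads all the weights for each generated token, while a batched server shares one weight pass per decode step across the whole batch.

```python
# Illustrative numbers only: aggregate weight-read bandwidth for 1000
# users running locally vs batched on one central server.

weights_gb = 70       # assumed: large model at ~1 byte/param
tokens_per_sec = 30   # assumed per-user decode speed
users = 1000

# Each local machine reads all weights once per generated token.
local_bw = users * weights_gb * tokens_per_sec   # GB/s, aggregate

# A batched server reads the weights once per decode step and shares
# that single pass across the whole batch.
central_bw = weights_gb * tokens_per_sec         # GB/s

print(f"{local_bw / central_bw:.0f}x")  # prints "1000x"
```

Real servers don't get the full batch-size speedup (attention KV caches are per-request and still eat bandwidth), but the weight-streaming term, which dominates at small batch sizes, really does amortize like this.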

Financial gravity will kill them when returns don't match stratospheric expectations.

I hope so too, but I think it's wishful thinking. Be prepared for the mother of all financial bailouts from the world's governments to make sure that doesn't happen.

I can understand why banks got bailed out by the US gov in 2008, but why would a government feel the need to bail out AI labs?

I hope you are not going to say, "to avoid a global recession or depression caused by the popping of the AI bubble". That would be unnecessary and harmful (in its second-order effects), and governments do have advisors who are competent enough in economics to advise against such a move.

Can you understand why banks were bailed out to the extent of protecting shareholders?

In the UK the first bank to go, Northern Rock, was simply taken over by the government; the shareholders got nothing. The bailout of Lloyds Bank required the government taking a 40% stake. This is the way to go: if you need a bailout there should be a cost to the shareholders, otherwise you are just privatising profit and nationalising risk.

Not that UK regulation was great all round or the bailout perfect. It certainly failed to prevent the crisis, which could have been prevented (no doubt the same applies in many countries). I looked at Northern Rock's accounts some time (a year, maybe?) before the crisis and was horrified by their reliance on interbank lending. It was obvious they could not cope with a rise in rates.

Bold of you to assume competency will overpower politics in our current era.

So far, the country I know best, the US, has been competent enough to avoid massive corporate bailouts, except for the aforementioned banks in 2008 and GM. The bailout of GM was not motivated by a desire to avoid a recession when a bubble pops.

If the AI labs become very influential and powerful, Washington might nationalize them, but that would be very different from bailing them out because they have become unprofitable and cannot attract additional investment from the private sector.

You forgot about the $9b bailout to Intel in August of 2025.

With the recent OpenAI deal with the government, I am certain they would throw tons of money at OpenAI if it got real bad. But with the upcoming IPO, where they are expected to be valued at $840b, we would be a LONG way from them needing a bailout. Well past this current admin.

Despite public sentiment, TARP was arguably an economic success story for the US Treasury. Whether it created moral hazard I suppose is up for debate.

GM on the other hand should have been left to die.

However, I was obliquely referring to the open transactionality and patronage encouraged by the current administration, and how the AI / big tech players have, with few exceptions, gleefully joined in.

Unless they run out of money for bribes, I think it's inevitable that the current government will bend over backwards to prop them up.

A bailout is a popular way in which public funds lose their publicness.

Do the examples of the banks and GM suggest that it is likely that AI companies will get a bailout to avoid the bubble popping?

The reason the bank bailouts did not involve nationalisation is that the US is very reluctant to nationalise anything.

The U.S. has an admin right now that has made it clear the only important metric for country health is the stock market, which is single-handedly propped up by AI right now.

That's why huge concessions nobody asked for were made to the AI industry in the Big Beautiful Bill.

"but why would a government feel the need to bail out AI labs"

Oh, easy: with all the drones and sensors, AI means military power. Those who dare oppose the bailout of the local AI giants want the other side to win.

/s

Unless there are some really, really major shortcuts found in inference, it's always going to be hard to run a really great model locally. The cost of the PC plus electricity will usually be crazy compared to a $20/mo Claude sub.
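As a rough illustration (every number here is an assumption for the sake of the arithmetic, not a measurement): amortized hardware plus electricity for a machine that can run a big model tends to dwarf a subscription.

```python
# Assumed, illustrative figures for running a large model locally.
hw_cost = 4000           # assumed: capable GPU workstation (USD)
hw_lifetime_months = 36  # amortize over 3 years
power_kw = 0.5           # assumed average draw while in use
hours_per_month = 100
price_per_kwh = 0.30     # assumed electricity price (USD)

monthly_local = (hw_cost / hw_lifetime_months
                 + power_kw * hours_per_month * price_per_kwh)
monthly_sub = 20

print(f"local: ${monthly_local:.2f}/mo vs sub: ${monthly_sub}/mo")
# local: $126.11/mo vs sub: $20/mo
```

Under these assumptions the hardware amortization alone (~$111/mo) exceeds the subscription before electricity is even counted; the comparison only flips if you already own the hardware or value privacy highly.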

But that $20/month is still heavily subsidised. You have to compare to the API costs, not the direct subscription.

It'd be nice if they do, but I don't really see how. Training these open-weight local LLMs is still insanely expensive and hard to do, even if it's cheaper and faster than what the big corps are doing.

I don't get the financial motive for someone to keep funding these open-weight model training programs other than just purposefully trying to kill the big AI providers.

Some open source models will cross the chasm, some big AI providers will too, and in both cases they will have their specific use cases.

This has been my theory for a while: during this autumn Apple will release a version of Apple Intelligence that runs locally and works better than ChatGPT. They will do this because 1) they do not have an AI offering yet and 2) they have amazing hardware that even now can almost pull it off on open models, and this will not be possible to replicate on Android for a long time (presumably).

This will crush OpenAI.

Note: I am not talking about coding here; that will take a while longer, but when it is optimized to the bone and LLM output has stabilized, you will be running that on local hardware too. Cost will come down for Claude and friends too, but why pay 5 when you can have it for free?

> This has been my theory for a while: during this autumn Apple will release a version of Apple Intelligence that runs locally and works better than ChatGPT.

In this theory, can you explain why Apple has announced it’s paying Google for Gemini too?

Eventually, this may be true. This autumn? Highly unlikely.

The Google Gemini deal is one of the reasons I think it is likely, since Gemini works pretty well on local hw...

They won't for coding and images, but they will socially. Everyone I know who has invested in home AI use is mostly using it for 'things that might get you banned/limited'.

I'm quite impressed by what is possible with just 12 to 16 GB of VRAM in terms of image generation.

When Apple gets their shit together.