> or in the cloud, but way more expensive than it is today.

Why? It's widely understood that the big players are making a profit on inference. The only reason they still have losses is that training is so expensive, and you need to pay for that no matter whether the models run in the cloud or on your device.

If you think about it, it's always going to be cheaper and more energy-efficient to have dedicated cloud hardware to run models. Running them on your phone, even if possible, is just going to suck up your battery life.

> It's widely understood that the big players are making profit on inference.

This is most definitely not widely understood. We don't know yet. There are endless discussions where people disagree on whether it really is profitable. Unless you have proof, don't say "this is widely understood".

You can look at open-source models hosted by various companies that have no reason to host them at a loss.

Uber ran their ridesharing at a loss for years. This is a very common way to gain market share.

What market share? We are talking about commodity models, where the host does not matter at all on OpenRouter etc.

Uber had massive VC investment and a moat. The companies he's referring to likely don't have much VC investment and zero moat.

I recently had Codex working for 80+ hours non-stop (as in, that was literally a single running session in response to a single prompt!).

Even at a $200 monthly subscription, that kind of usage burns through tokens at a rate where it's very difficult to believe they are even breaking even, never mind making a profit.

That's nuts. What was it doing for 80 hours?

Probably asked what’s the Answer to the Ultimate Question of Life, the Universe, and Everything

The reality is we can’t trust accounting earnings anyway.

We need to see the cash flows.

I don’t have “proof” but the existence of so many providers of free models on OpenRouter strongly suggests inference is running at a profit. There’s no winner-takes-all angle to being a faceless provider there (often the consumer doesn’t know who fulfilled the request), so there’s just no incentive at all for these small provider companies to exist unless inference is profitable under the right conditions.

>but the existence of so many providers of free models on OpenRouter strongly suggests inference is running at a profit

I don't think it suggests a profit, but rather a _hope_ for a _future_ profit, and a commitment to a strategy that may or may not pan out. Capitalism rewards those who are early to the party and commit to their bit.

The big players are plausibly making profits on raw API calls, not subscriptions. API calls are quite costly compared to third-party inference on open models, but even setting that up is a hassle, and you as an end user aren't getting any subsidy. Running inference locally will make a lot of sense for most light and casual users once the subsidies for subscription access cease.

Also, while datacenter-based scale-out of a model across multiple GPUs running large batches is more energy-efficient, it ultimately creates a single point of failure you may wish to avoid.

> It's widely understood that the big players are making profit on inference.

If you add in the cost of training, it’s not profitable.

Not including the cost of training is a bit like saying the only cost of a cup of coffee is the paper cup it’s in. The only way OpenAI gets to charge for inference is by selling a product people can’t get elsewhere for much cheaper, which means billions in R&D costs. But because of competition, each model effectively has a “shelf life”.

At least Anthropic claims to be profitable on a per-model basis. But since both revenue and training costs are growing exponentially, and they have to pay for model N's training today while only collecting revenue from model N-1 today, the offset makes things look worse than they are.

Obviously that doesn’t help them turn a profit, until they can stop growing training costs exponentially.

So it’s really a race to see whether growth in revenue or training costs decelerates first.
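The offset described above can be sketched numerically. A minimal toy model, with a made-up growth rate and made-up dollar amounts (nothing here is real Anthropic data):

```python
# Hypothetical sketch of the model-N vs model-N-1 offset: every model is
# profitable in isolation, yet current cash flow stays negative because
# today's (larger) training bill is paid out of yesterday's (smaller) revenue.
# GROWTH and all dollar figures are invented for illustration.

GROWTH = 3  # assumed per-generation growth of both training cost and revenue

train_cost = [100 * GROWTH**n for n in range(4)]  # $M to train model n
revenue = [150 * GROWTH**n for n in range(4)]     # $M lifetime revenue of model n

for n in range(1, 4):
    cash_flow = revenue[n - 1] - train_cost[n]  # paying for N, earning on N-1
    per_model = revenue[n] - train_cost[n]      # model n judged on its own
    print(f"gen {n}: cash flow {cash_flow:+} $M, per-model profit {per_model:+} $M")
```

Under these assumptions every generation shows a per-model profit, but the period-by-period cash flow is negative and growing; it only flips positive once training-cost growth slows below revenue growth, which is exactly the race in question.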


They will always be training new models, so if training is expensive, that's just part of the business they are in.

Vast amounts of capital have been poured in, but they continue to raise more. Presumably because they need more.

Is the capital being invested without any expectation of ROI?


Laptop/desktop could work. Most systems are on the charger most of the time anyway.

> It's widely understood that the big players are making profit on inference.

I love the whole “they are making money if you ignore training costs” bit. It's always great to see somebody say something like “if you look at the amount of money they're spending it looks bad, but if you look away it looks pretty good”, like it's the money version of a solar eclipse.

The reason it matters is that if they are making a profit on inference, then when people use their services more, it cuts their losses. They might even break even eventually and start making a profit without raising the price.

But if they're losing money on inference, they will lose more money when people use their services more. There's no way to turn that around at that price.
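That asymmetry is easy to see with a toy unit-economics function. All prices and volumes below are hypothetical, not real provider numbers:

```python
# Toy sketch: annual result = usage * per-unit inference margin - fixed costs
# (training, R&D, salaries). Every number here is invented for illustration.

def annual_result(thousand_requests, revenue_per_k, inference_cost_per_k, fixed_costs):
    """Profit (negative = loss); revenue/cost are dollars per 1k requests."""
    return thousand_requests * (revenue_per_k - inference_cost_per_k) - fixed_costs

FIXED = 1_000_000_000  # $1B of usage-independent costs (training, R&D)

# Positive inference margin ($10 revenue vs $6 cost per 1k requests):
# more usage shrinks the loss, and enough of it flips the sign.
print(annual_result(10**6, 10, 6, FIXED))  # 1B requests  -> loss of ~$996M
print(annual_result(10**8, 10, 6, FIXED))  # 100B requests -> loss of ~$600M

# Negative inference margin: the same growth only deepens the loss.
print(annual_result(10**8, 6, 10, FIXED))  # 100B requests -> loss of ~$1.4B
```

With a positive per-request margin, scale is a path out of the hole; with a negative one, scale digs it faster, and only a price increase or a cost reduction can change that.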

We don't even have any evidence inference excluding training is actually profitable.

It is called sunk cost. The marginal cost is what sets the lower limit. They will always be able to sell at the marginal cost of inference.

> It's widely understood that the big players are making profit on inference.

Are they? Or are they just saying that to make their offerings more attractive to investors?

Plus I think most people using agents for coding are using subscriptions, which are definitely not profitable.

Locally running models that are snappy and mostly as capable as current sota models would be a dream. No internet connection required, no payment plans or relying on a third party provider to do your job. No privacy concerns. Etc etc.

> Plus I think most people using agents for coding are using subscriptions, which are definitely not profitable.

Where on earth do people get this idea? Subscriptions based around obscure, vendor-defined "credits" are the perfect business model for vendors. They can change the amount you can use whenever they want.

It's likely they occasionally make a loss on some users, but in general they are highly profitable for AI companies:

> Anthropic last month projected it would generate a 40% gross profit margin from selling AI to businesses and application developers in 2025

and

> OpenAI projected a gross margin of around 46% in 2025, including inference costs of both paying and nonpaying ChatGPT users.

https://archive.is/aKFYZ#selection-1075.0-1083.119
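Note that a gross margin only nets inference and similar cost-of-revenue against revenue. A quick sketch with invented numbers (only the 40% figure comes from the quote above) shows how a healthy gross margin and a large net loss can coexist once training and R&D are added back:

```python
# Gross margin vs net result, illustrative integers in $M.
revenue = 4_000                  # hypothetical annual revenue
gross_margin_pct = 40            # the projected gross margin quoted above
gross_profit = revenue * gross_margin_pct // 100  # revenue minus inference cost
training_and_rnd = 5_000         # hypothetical training + R&D spend
net = gross_profit - training_and_rnd             # net loss despite the 40% margin
print(gross_profit, net)
```

So "40% gross margin" and "losing billions" are not contradictory claims; they describe different lines of the same income statement.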

Both of those companies are losing hella money, dude. Just cuz they say they “expect” to be profitable doesn’t mean they are.

You can pick models that are snappy, or models that are as capable as SOTA. You don't really get both unless you spend extremely unreasonable amounts of money on what is essentially a datacenter-scale inference platform of your own, meant to service hundreds of users at once. (I don't care how many agent harnesses you spin up at once, you aren't going to get the same utilization as hundreds of concurrent users.)

This assessment might change if local AI frameworks start working seriously on support for tensor-parallel distributed inference, then you might get away with cheaper homelab-class hardware and only mildly unreasonable amounts of money.