So yes, but that doesn't negate the circular investment aspect, for most intents and purposes.

The risk from this structure mostly has to do with how it affects market cap. Companies using the value of their shares to fund demand for their services.

That's a risk.

I feel like the whole market at this point is just AI, since big tech other than Apple is massively invested in it. Everyone owns either the S&P or a total-world ETF, both of which are heavily skewed towards big tech and this trade - so literally everybody is in it. It might go well for a few more quarters/years, but once something breaks or gets exponentially cheaper, this will take down the whole market with it.

It's just hard to tell the difference between "real" demand and "circular." That's the concern.

PG had an essay about this during the dotcom era, when he worked at Yahoo. IIRC... Yahoo's share price and other big successes in the space attracted investment into startups. Startups used that money to advertise on Yahoo. Yahoo bought some of these startups.

So... a lot of the revenue used to analyze companies for investment was actually a 2nd order side effect of these investments.

Here the risk is that we have AI investments servicing AI investments for other AI investments.

Google buys Nvidia chips to sell Anthropic compute. Anthropic sells coding assistance to AI companies (including Google and Nvidia). They buy Anthropic's services with investor money that is flowing because of all this hype.

Imo the general risk factor is trying to get ahead of actual worldly use.

The AI optimists have a sense that AI produces things that are valuable (like software) at massive scale... that is output.

But... even if true, it will take a lot of time, and a lot of software, for the economy to discover this, go through the path dependencies and actually produce value.

The most valuable, known software has already been written. The stuff that you could do but haven't yet is stuff that hasn't made the cut. Value isn't linear.

I'm starting to transition how we build software at our company due to the power of AI. No more: five code monkey contractors under a lead. Two top-notch devs are all that is needed now, unrestrained by sprints and mindless ceremonies. There is going to be a giant sucking sound in India.

I can't continue the current model. The dev that gets AI is done in five hours; the ones that don't are thrashing for the next two weeks. I have to unleash the good AI dev. I have the Product team handing us markdown files now with an overview of the project and all the details and stories built into them. I'm literally transforming how a billion dollar company works right now because of this. I have Codex, Claude and GitHub Copilot enterprise accounts on top of Office 365. Everyone is being trained right now, as most devs are behind.

Ok... but extrapolating from this to "whole market" paradigms is speculative.

The (imo) question isn't how you produce software, but what the value of this software is. Are you going to make more/better software such that customers pay more, or buy more? Are those customers getting value of this kind?

The answer may be yes. But... it's not an automatic yes.

Instead of programming think of accounting. Say you experience what you are experiencing, but as an accountant. 6 person team replaced by 2-3 hotshots.

So... Maybe you can sell more/better accounting for a higher price. But... potential is probably pretty limited. Over time, maybe business practices will adjust and find uses for this newly abundant capacity.

Maybe you lower prices. Maybe the two hotshots earn as much as the previous team.

If you are reducing team size, and that's the primary benefit... the fired employees need to find useful employment elsewhere in the economy for surplus value to be realized.

Mediating all this is the law of diminishing returns. At any given moment, new marginal resources have less productive value than the current allocation.

And the day you don't have that drug what do you do? If anything you are training people to become dependent on one or more subscription services.

Except the dev that gets AI done in 5 hours will have a poorer mental model of the code. Whether that's important depends on whether that bites you in the ass at some point.

Don’t really agree with this.

That dev is productive with AI precisely _because_ they have a good mental model.

AI like other tools is a multiplier - it doesn’t make bad devs good, but it makes good devs significantly more productive.

Don't agree - the dev is productive because they have a good mental model of the problem space and can cajole the agent into producing code that agrees with the spec. The trend is for devs to become more like product managers (which is why you see some whip-smart product managers able to build products _without_ human devs)

But does it matter?

If you write a program in Python or JavaScript, you have a terrible mental model for how that code is actually executed in machine code. It's irrelevant though, you figure it out only when it's a problem.

Even if you don't have a great mental model, now you have AI to identify the problems and generate an explanation of the structure for you.

No, but you have a great mental model of the interface between your problem domain and the code, which is where you can effect change.

Outsourcing that to an AI SaaS might be ok, I guess. Given past form, there's going to be a rug-pull/bait-and-switch moment once dividends need to start paying out.

> It's irrelevant though, you figure it out only when it's a problem.

For the past decade people have been clawing their eyes out over how sluggish their computers have become due to everything becoming a bloated Electron app. It's extremely relevant. Meanwhile, here you are seemingly trying to suggest that not only should everything be a bloated, inefficient mess, it should also be buggy and inscrutable, even moreso than it already is. The entire experience of using a computer is about to descend into a heretofore unimaginable nightmare, but hey, at least Jensen Huang got his bag.

> literally everybody

I personally make sure I really diversify, so that when I buy funds, I buy those with stocks of EU companies which pay dividends. AFAICT there are 0 European AI companies that pay dividends.

There are zero US pure-play AI companies which pay dividends, right?

You have to go pretty far down the list of holdings (under "Holding details") to find any big bets on AI:

https://www.vanguardinvestor.co.uk/investments/vanguard-ftse...

>Companies using the value of their shares to fund demand for their services.

That's not what's happening here though. Google isn't using the value of its shares to fund demand. Google is using its own cash flow to fund this demand from Anthropic.

The question is whether Anthropic has demand from end users for the capacity they are buying from Google (that's a yes I guess) and whether that demand is profitable for Anthropic (that's a question mark).

True.

Regardless, (a) its ability/desire to make such investments is still driven by stock-driven optimism, and (b) these transactions' "signal" can have a similar, warping effect.

In this case the transaction creates demand for Google's services and also funds Anthropic's growth... which represents demand for Google's services.

"Loop" is an approximation of an analogy. The risk is that enough of such transactions create a dynamic that distorts feedbacks.

>(a) its ability/desire to make such investments is still driven by stock-driven optimism

I don't think it has much to do with the stock price at all. Current platform oligopolists fear the rise of new platforms. They want a foot in the door for strategic reasons.

What could happen is that frontier labs like Anthropic and OpenAI never become platforms and turn out to be providers of a largely commoditised, low margin service.

In that event, current valuations are too high. But Anthropic's valuation doesn't seem extreme to me. Their $30bn annual run rate is valued at $380bn.

Given this price and Anthropic's strategic value, Google's investment seems reasonable.

But OpenAI/Anthropic are not selling the compute as they're buying that from Google/Amazon/etc.

So they're selling the transformation, or the model. Or the ability to make a model. And their brand and their harness.

And it seems like the model is definitely not worth 380 billion. Models depreciate incredibly fast. There are lots of models and the other models aren't that far behind.

And it seems like the harness is not worth much as there's already open source alternatives that people claim are better.

And all these companies are paying lots of money for these AI training experts.

But I suspect that any regular Hacker News reader of 10 years dev experience could become a training expert in months if allowed to play with a load of compute and a lot of data for a bit.

Just like any of us could have become a data scientist, this stuff is not particularly hard. Random horny dudes on the internet are putting out LoRAs and quantized models in days against the open source image models.

So what's worth 380 billion exactly? The brand?

These valuations just look really off. Not by one order of magnitude, but more like by three. Like 380 million might be a reasonable valuation, but not billion.
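As a quick back-of-the-envelope check on the figures cited upthread (the $380bn valuation and $30bn run rate), the implied revenue multiple and the size of the claimed gap work out like this:

```python
import math

valuation = 380e9   # Anthropic valuation cited upthread ($380bn)
run_rate = 30e9     # annual revenue run rate cited upthread ($30bn)

# Implied revenue multiple at the cited valuation
multiple = valuation / run_rate
print(f"revenue multiple: {multiple:.1f}x")  # ~12.7x revenue

# Gap between $380bn and a hypothetical "reasonable" $380m valuation
orders = math.log10(380e9 / 380e6)
print(f"orders of magnitude: {orders:.0f}")  # 3 orders of magnitude
```

So $380bn over $380m is a factor of 1,000, i.e. three orders of magnitude.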

What I also don't get is that it's pretty obvious to me that the Europeans should all be spinning up their own, not necessarily massive, data centers and throwing a few billion at some guys in Cambridge or Stockholm or London or Berlin to make their own AI models.

Only the French have done it.

But instead the rest seem to be trying to court Anthropic or OpenAI to build data centers. Which is just stupid politics given what's happening in the world right now.

The technical task is not the business task... unless the task really is a commodity.

Coding facebook isn't rocket surgery either. Neither is Visa, Salesforce or many other tech-centric companies. Replicating their business model is.

Those are locked in by network effects. Path dependencies and suchlike can play a role. But... the upshot is that Anthropic, OpenAI and whatnot have the model people are using for work.

A government-sponsored model isn't a bad thing to have, but I think it's unlikely (but possible) that it will also be the product people want to use or the business that succeeds.

>So what's worth 380 billion exactly? The brand?

Whatever it is that leads to a $30bn run rate, growing >200%. Right now it's having the better model and being able to show how to use it in specific verticals.

But I suspect in the long run only platforms have high margins (and they will need margins not just revenues to justify their valuation). Are they becoming platforms? Google seems to think (or fear) that they might.

The tech industry goes through investment phases to produce oligopolies it turns around and enshittifies, parasitizing income off what it has built. Venture capital, acquisitions, acquihires, circular investments - It’s been incestuous for years. The question is whether competition from China’s sophisticated tech sector, which already surpasses the US in many areas, will put a pin in these plans this time round.

I don't agree with the "full cynicism" POV, but I do agree that TechnoChina's existence is a potential paradigm shifter.

But generally speaking, AI is currently pretty competitive and robust. Straightforward business models, where users pay money and select the best deal, are central. Market power is relatively dispersed.

So... Idk. Nvidia doesn't have competition. But Intel didn't have much competition either, and they drove the Moore's law bus for a long time.

Hardware has been less prone to enshittification. Maybe it's because the demand curve for compute doesn't have natural limits: drive down the price, and demand grows by enough that the total market grows.

Nvidia clearly has competition, that's what this deal with Google is about (TPUs).