OK investors, time to pull out of OpenAI and move all your money to ChatJimmy.

A related argument I raised a few days back on HN:

What's the moat with these giant data centers being built with hundreds of billions of dollars of Nvidia chips?

If such chips can be built this easily, and offer this insane level of performance at 10x efficiency, then one thing is 100% sure: more such startups are coming... and with them, an entire new ecosystem.

RAM hoarding is, AFAICT, the moat.

lol... true that for now though

Yeah, just because Cisco had a huge market lead in telecom in the late '90s doesn't mean they kept it.

(And people nowadays: "Who's Cisco?")

You'd still need those giant data centers for training new frontier models. These Taalas chips, if they work, seem to handle inference well, but training will still require general-purpose GPU compute.

Next up: wire up a specialized chip to run the training loop of a specific architecture.

I think their hope is that they'll have the "brand name" and expertise to have a good head start when real inference hardware comes out. It does seem very strange, though, to have all this massive infrastructure investment in what is ultimately going to be useless prototyping hardware.

Tools like openclaw start making the models a commodity.

I need some smarts to route my question to the correct model. I won't care which one that is. Selling commodities is notorious for slow and steady growth.
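The routing idea above can be sketched as a trivial dispatcher: classify the question, pick a backend, and the caller never needs to know which model answered. The model names and keyword rules here are entirely hypothetical, just to illustrate the shape:

```python
# Minimal sketch of "route my question to the correct model".
# Model names and routing heuristics are hypothetical, for illustration only.
def route(question: str) -> str:
    """Pick a model name based on crude keyword heuristics."""
    q = question.lower()
    if any(kw in q for kw in ("prove", "integral", "equation")):
        return "math-tuned-model"   # hypothetical specialist
    if any(kw in q for kw in ("def ", "compile", "stack trace")):
        return "code-tuned-model"   # hypothetical specialist
    return "general-model"          # cheap commodity default


print(route("Why does this stack trace mention a null pointer?"))
# → code-tuned-model
```

A real router would use a small classifier or an LLM call instead of keywords, but the economics are the same: the models behind the dispatcher become interchangeable commodities.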

Nvidia bought up all the fab capacity, so competitors' chips can't be manufactured at scale.

If I am not mistaken, this chip was built specifically for the Llama 8B model. Nvidia chips are general purpose.

You mean Nvidia?