Nvidia could flip a switch and start competing with former customers. They have the talent, the models, the HW, and they know how to quickly build out DCs.
That would not go well for Nvidia. Why would they want to enter and compete in a market where everybody is losing money? And in so doing alienate the people who make them profitable?
They don't, not directly. That's why they've generally been winding down DGX Cloud. However, lots of folks dramatically underestimate their frontier models (Nemotron family), and they license those models (free) for embedding in MANY other, large tech companies' own products and platforms, which either directly or indirectly consume massive quantities of GPU time.
Nvidia is best known for selling huge volumes of GPUs to the hyperscalers & neoclouds, but I don't think lots of folks appreciate how many GPUs ISVs like Snowflake, Databricks, Teradata, etc consume, too, just by virtue of designing much of their internal products around CUDA & Nemotron.
If those customers end up profitable, it could be tempting for nVidia to vertically integrate.
I don't think it's as easy as others say, though.
Presumably Nvidia would have a significant advantage on hardware costs, and could run custom hardware that the rest of the market will never see
And maybe TSMC should make cellphones? If you're higher up the supply chain why go downstream into risk? It's financially irresponsible.
> If you're higher up the supply chain why go downstream into risk? It's financially irresponsible.
Well, classically, to capture more margin for yourself. In business school they call this Vertical Integration. Samsung did exactly this. AWS too.
They don't have the talent.
Are you suggesting Nvidia doesn't have talent in the AI industry?
NVIDIA has released Deep Learning Super Sampling (DLSS), a Frame Generation model, and RTX Video Super Resolution (VSR), with DLSS being the most popular/well-known of the bunch. (DLSS is outstanding technology, despite the sometimes misleading marketing.)
Nvidia has released countless models:
- Alpamayo 1 (car navigation model)
- Cosmos-Reason2 (reasoning vision-language model)
- Nemotron 3 (large language model series)
- Llama-Nemotron (large language model series)
- Isaac GR00T (VLA models)
- Nemotron OCR (optical character recognition models)
Take a look at their HuggingFace Collections, almost 100 different collections with countless models inside each collection: https://huggingface.co/nvidia/collections
> Are you suggesting Nvidia doesn't have talent in the AI industry?
nVidia has an open position for system architect, orbital station AI datacenter
Sounds like you get to sit on the roof all day looking up.
Parakeet (speech recognition) is also a frontier model
NV has a massive amount of AI talent, and a lot of them have PhDs.
Are you suggesting they're lacking on the ultra-high-end? That is: $5-10M+ in comp to sign a single researcher/IC; industry rock star territory.
Major frontier AI labs do tend to have that type of talent in abundance. I'm sure NV has the equivalent when it comes to hardware design. Surely in AI research too, but perhaps not in the same quantities.
That's not how a smart business runs, and that's not how Nvidia operates.
Jensen is smart. He's gone through over 30 years of tech cycles.
Nvidia actively commoditizes the LLM models. Look at Nemotron. They've avoided making a SOTA model solely to keep the hyperscalers (aka crack addicts) coming back for more GPUs.
As soon as the bubble bursts, they can release some open weight NemoMambaDiffusiontron and keep folks buying GPUs to run the damn thing.
It still wouldn't be smart to do so, as this would fall into the common business pitfall of thinking you could easily do the next stack layer of work.
And they have a moat: they can control the quantity and quality of hardware flowing to their competitors.
Competing with your own customers is not a good idea, especially before the bubble pops.