This article is so dumb. NVIDIA delivered what the market wanted: gamers don't need FP64, so they don't waste silicon on it. Now enterprise doesn't want FP64 anymore either, and they're cutting the silicon for it there too.
Weird way to frame delivering exactly what the consumer wants as some big market-segmentation, fuck-the-user conspiracy.
Your framing is what's backwards. NVIDIA artificially nerfed FP64 for a long time before they started making multiple specialized variants of their architectures. It's not a conspiracy theory; it's historical fact that they shipped the same die with drastically different levels of FP64 capability depending on the product line (GK110, for example, went into both the GTX 780 and the Tesla K20). In a very real way, consumers were paying for transistors they couldn't use, subsidizing the pro parts.
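You don't have to take that on faith; the runtime will tell you the ratio outright. Here's a minimal sketch using the CUDA runtime (the attribute is a real one; hardcoding device 0 and skipping error handling are simplifications on my part):

    // Query the FP32:FP64 performance ratio the driver reports for device 0.
    // Build with: nvcc -o perfratio perfratio.cu
    #include <cstdio>
    #include <cuda_runtime.h>

    int main() {
        int ratio = 0;
        cudaDeviceGetAttribute(&ratio,
                               cudaDevAttrSingleToDoublePrecisionPerfRatio,
                               /*device=*/0);
        // GeForce parts typically report 32 or 64 here; the compute parts
        // built on the same big dies reported 2 or 3.
        printf("FP32:FP64 perf ratio = %d:1\n", ratio);
        return 0;
    }

For the shared-die era, that difference was policy, not physics.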
> subsidizing the pro parts.
You've got this the wrong way around. It's the high-margin (pro) products subsidizing the low-margin (consumer) products.
In general, yes, but when consumer parts spend silicon area on features their buyers can't use, some of the subsidy runs in the other direction too.
This isn't really true, and it wouldn't be a big deal even if it were.
Consumer dies have been smaller than datacenter dies for a few generations now. They can't possibly be the same chips, because they're physically different sizes: the lowest-end consumer dies are less than a quarter the area of the datacenter dies, and even the highest-end consumer dies are only around 80% of that area. So there must be some nontrivial differentiation going on at the silicon level.
Secondly, you aren't paying for die area anyway. Whether a chip was made specifically for that exact GPU model, or was binned after possibly-defective areas got fused off, you're paying for the end product. If that product meets the expected performance, it's doing its job. That isn't a subsidy (at least, not in that direction); the die is just one small part of what makes a usable GPU card, and excess die area left dark isn't even pure waste, since it helps with heat dissipation.
The fact that NVIDIA excludes decent FP64 from all of its prosumer offerings (*) can still be called "artificial" insofar as it really is done on purpose for market segmentation, but it's not some trivial fuse trick: they're simply not putting the FP64 units into the silicon (you can measure this yourself; see the sketch after the footnote). By now that's been the case for longer than it wasn't.
* = The "professional" Quadro line of workstation cards nowadays is just consumer cards with ECC RAM and special drivers
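To make the "not in the silicon" point measurable, here's a rough microbenchmark sketch of my own (launch dimensions and iteration counts are arbitrary choices, not a rigorous methodology): it times dependent FMA chains in FP32 and FP64 and prints the ratio. On a GeForce part you'd expect a ratio in the tens; on a GP100/GV100-class compute part, roughly 2.

    // fp64ratio.cu -- rough FP32 vs FP64 FMA throughput comparison.
    // Build with: nvcc -O2 -o fp64ratio fp64ratio.cu
    #include <cstdio>
    #include <cuda_runtime.h>

    // Each thread runs `iters` dependent FMAs; writing the result out
    // keeps the compiler from deleting the loop.
    template <typename T>
    __global__ void fma_loop(T* out, T seed, int iters) {
        T a = seed + (T)threadIdx.x;
        T b = (T)1.000001;
        T c = (T)0.000001;
        for (int i = 0; i < iters; ++i)
            a = a * b + c;  // one FMA = 2 flops
        out[blockIdx.x * blockDim.x + threadIdx.x] = a;
    }

    template <typename T>
    double run(const char* label) {
        const int blocks = 1024, threads = 256, iters = 1 << 16;
        T* out;
        cudaMalloc(&out, (size_t)blocks * threads * sizeof(T));

        cudaEvent_t start, stop;
        cudaEventCreate(&start);
        cudaEventCreate(&stop);

        fma_loop<T><<<blocks, threads>>>(out, (T)1.0, iters);  // warm-up
        cudaEventRecord(start);
        fma_loop<T><<<blocks, threads>>>(out, (T)1.0, iters);
        cudaEventRecord(stop);
        cudaEventSynchronize(stop);

        float ms = 0.0f;
        cudaEventElapsedTime(&ms, start, stop);
        double gflops = 2.0 * blocks * threads * (double)iters / (ms * 1e6);
        printf("%s: %.0f GFLOP/s\n", label, gflops);

        cudaFree(out);
        cudaEventDestroy(start);
        cudaEventDestroy(stop);
        return gflops;
    }

    int main() {
        double f32 = run<float>("FP32");
        double f64 = run<double>("FP64");
        printf("measured FP32:FP64 ratio ~ %.0f:1\n", f32 / f64);
        return 0;
    }

The dependent chain keeps the compiler honest, and with this many threads in flight the handful of FP64 units a consumer die has (or the full complement on a compute die) gets saturated either way.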
> consumers were paying for transistors they couldn’t use
This is Econ 101 these days. It's cheaper to design and manufacture one product than two. Many, many products have features that only get enabled for higher-paying customers, from software to kitchen appliances to cars, and much more.
The combined product design is also subsidizing some of the costs for everyone, so be careful what you wish for. If you could use all the transistors you have, you'd be paying more either way: either because design and production costs go up, or because you're buying the higher-end model and being the one who subsidizes the high-end transistors other people don't use.