QLC NAND
The datasheet shows 3GB/s sequential write, which for 245.76TB means writing the whole drive takes around 22h45m. Odd that the endurance is specified as "1.0 DWPD" (drive writes per day), which is almost meaningless since the drive takes roughly a full day to write at full speed.
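The arithmetic above checks out. A quick sketch, assuming decimal TB and GB as datasheets typically use:

```python
# Time to write the full drive once at the rated sequential write speed.
capacity_bytes = 245.76e12   # 245.76 TB (decimal units assumed)
write_speed = 3e9            # 3 GB/s sequential write

seconds = capacity_bytes / write_speed          # 81920 s
hours, rem = divmod(seconds, 3600)
print(f"{int(hours)}h{int(rem // 60)}m")        # → 22h45m
```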
At scale, 1.9 times more energy is required for an HDD deployment.
...but those HDDs are going to hold data for far more than twice as long. It's especially infuriating to see such secrecy and vagueness around the real endurance/retention characteristics for SSDs as expensive as these.
On the other hand, 60TB of SLC for the same price would probably be a great deal.
This is an ad, not a spec sheet. The vast majority of the people buying this understand the endurance and retention characteristics of this type of device. It isn't going on Amazon or Best Buy, and the target market knows how to ask the right questions.
Perhaps their usual buyers just care less about retention?
Those drives aren't going to be used for cold storage, and it is basically a guarantee that there will be checksums and some form of redundancy. Who cares whether the data is retained for 10 or for 15 years after writing when you can do a low-priority background scrub of the entire drive once a month, and when there are already mechanisms in place to account for full-drive failure?
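A monthly full-drive scrub is also cheap in bandwidth terms. A rough sketch, assuming a 30-day month:

```python
# Average read rate needed to scrub the entire drive once per month.
capacity_bytes = 245.76e12          # 245.76 TB (decimal units assumed)
month_seconds = 30 * 24 * 3600      # ~2.59M seconds in a 30-day month

scrub_rate = capacity_bytes / month_seconds
print(f"{scrub_rate / 1e6:.0f} MB/s")   # ~95 MB/s, a tiny fraction of the drive's read speed
```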
QLC retention is reported to be around 1 year in an unpowered state. I would assume the drive does background refresh, though. No idea what effect that has on total drive lifetime. It is still mean that if you use it for cold storage, it has to be powered.
Why is it mean? Why would you want to use a technology that is unsuitable for cold storage for cold storage? You won't even get the power / IOPS benefit if all it does is an infrequent replication of data and is then switched off.
What kind of usage do you envision for 245TB drive with read speed of 3GB/sec?
I believe it has read speeds of 13GB/s, not 3 (unless you are referring to an equivalent array of 10 HDDs). It will almost certainly be used to store training datasets and model weights, which I assume are good use cases for fast sequential reads.
You can trivially modulate flash endurance by tweaking the reported space - the less space you report, the more spares you have.
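A first-order sketch of that trade-off, ignoring the additional write-amplification benefit that extra spare area brings (the numbers below are illustrative, not from any datasheet):

```python
# Same raw flash and total-bytes-written budget, reported as smaller logical
# drives: the DWPD rating scales up proportionally with over-provisioning.
raw_tb = 245.76
years = 5
budget_tb = raw_tb * 1.0 * 365 * years   # endurance budget at 1.0 DWPD over 5 years

for reported_tb in (245.76, 200.0, 122.88):
    dwpd = budget_tb / (reported_tb * 365 * years)
    print(f"report {reported_tb:7.2f} TB -> {dwpd:.2f} DWPD")
# Reporting half the raw capacity doubles the rated DWPD.
```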