I remember it differently. CUDA was built with the intention of finding/enabling something like deep learning. I thought it was unrealistic too and took it on faith in people more experienced than me, until I saw deep learning work.
Some of the near misses I remember included Bitcoin. Many of the other attempts never saw the light of day.
Luck in English often means success by chance rather than through one's own efforts or abilities. I don't think that characterizes CUDA. I think it was eventual success in the face of extreme difficulty, many failures, and sacrifices. In hindsight, I'm still surprised that Jensen kept funding it as long as he did. I've never met a leader since who I think would have done that.
Nobody cared about deep learning back in 2007, when CUDA was released. It wasn't until the 2012 AlexNet milestone that deep neural nets came back into vogue.
I clearly remember CUDA being made for HPC and scientific applications. They added actual operations for neural nets years after it was already a boom. Both instances were reactions: people were already using graphics shaders for scientific purposes and CUDA for neural nets, and in both cases Nvidia was like, oh cool, money to be made.
Parallel computing goes back to the 1960s (at least). I've been involved in it since the 1980s. Generally you don't create an architecture and its associated tooling for some specific application. The people creating the architecture have only a sketchy understanding of application areas and their needs. What you do is have a bright idea or pet peeve. Then you get someone to fund building that thing you imagined. Then marketing people scratch their heads over who they might sell it to. It's at that point that you observed "this thing was made for HPC, etc.," because the marketing folks put out stories and material that said so. But really it wasn't. And as you note, it wasn't made for ML or AI either. That said, in the 1980s we had "neural networks" as a potential target market for parallel processing chips, so it's always there as a possibility.
CUDA was profitable very early because of oil and gas code, like reverse time migration and the like. There was no act of incredible foresight from Jensen. In fact, I recall him threatening to kill the program if the large projects that were keeping it from being profitable, like the Titan supercomputer at Oak Ridge, failed.
So it could just as easily have been Intel or AMD, despite them not having CUDA or any interest in that market? Pure luck that the one large company that invested to support a market reaped most of the benefits?