There's a lot of pressure on the CUDA ecosystem at this point:

- Most of the trillion-dollar companies have their own chips with AI features (Apple, Google, Microsoft, Amazon, etc.). GPUs and AI training are among their biggest expenses, so they are highly motivated not to donate major chunks of their revenue to Nvidia.

- Mac users generally don't use Nvidia hardware anymore, and Apple's chips are a popular platform for doing AI work.

- AMD, Intel, and other manufacturers want in on the action.

- Chinese companies and others face export restrictions on Nvidia's GPUs.

- Platforms like Mojo (a natively compiled superset of Python with additional language features for AI) and others are gaining traction.

- A lot of the popular AI libraries support backends other than Nvidia at this point (PyTorch, for example, runs on AMD's ROCm and Apple's MPS).

This just adds to that pressure. Nvidia may have to open up CUDA to stay relevant. They do have a performance advantage, but forcing people to choose inevitably leads to plenty of choices becoming available. And the more users choose differently, the less relevant CUDA becomes.