I am not sure what context Jensen said that in. But Midjourney uses TPUs. Apple uses TPUs. There are no other frontier labs that use them, but Google + Anthropic makes 2 out of 3 frontier labs, so.....

You could reasonably say that "a majority of frontier labs use TPUs to train and serve their models."

Afaik, TPUs are only used for inference, not training. Maybe that was also what the quote referred to.

Mayhaps! But I think as far as Google, Anthropic [1], and Apple [2] go, they do use TPUs for training. Ofc v4 and v5 (older generations of TPUs) were more specialized for search-related embedding workloads, and I could see people not using those for training.

[1]: "We train and run Claude on a range of AI hardware—AWS Trainium, Google TPUs" - Anthropic, April 6th, on the Google and Broadcom partnership

[2]: "[Apple foundation model]... builds on top of JAX and XLA, and allows us to train the models with high efficiency and scalability on various training hardware and cloud platforms, including TPUs and both cloud and on-premise GPUs" - Apple, 2024
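For context on the [2] quote: JAX programs compile through XLA to whatever backend is available, which is what lets the same training code target TPUs or GPUs without changes. A minimal sketch of what that hardware-agnosticism looks like (my own illustration, not Apple's code):

```python
# Minimal sketch: the same JAX program runs on TPU or GPU,
# because XLA compiles it for whichever backend is present.
import jax
import jax.numpy as jnp

print(jax.devices())  # e.g. TPU devices on a TPU VM, CUDA devices on a GPU box

@jax.jit  # XLA-compiled; backend is picked automatically
def loss(params, x, y):
    pred = x @ params["w"] + params["b"]
    return jnp.mean((pred - y) ** 2)

params = {"w": jnp.zeros((8, 1)), "b": jnp.zeros((1,))}
x, y = jnp.ones((32, 8)), jnp.ones((32, 1))
grads = jax.grad(loss)(params, x, y)  # identical call on any hardware
```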
