ROCm usually only supports two generations of consumer GPUs, and sometimes the latest generation is slow to gain support. Currently only RDNA 3 and RDNA 4 (RX 7000 and 9000) are supported: https://rocm.docs.amd.com/projects/install-on-linux/en/lates...

It's not ideal. By comparison, CUDA still supports Turing (two years older than RDNA 2), and if you drop down one version to CUDA 12, it has some support for Maxwell (~2014).

Worse, RDNA 3 and RDNA 4 aren't fully supported, and probably won't be, as AMD only focuses on the chips that make them more money. If we didn't have Vulkan, every nerd in the world would demand either a Mac or an Intel box with an Nvidia chip. AMD keeps leaving money on the table.

Up until recently they didn't even properly support their cash cow, the Ryzen AI Max+ 395. I don't buy the argument that they only care about certain chips.

If you are on an unsupported AMD GPU, why would you ever consider switching to a newer AMD GPU, considering you know that it will reach the same sorry state as your current GPU?

Especially when, as you say, the latest generation is slow to gain support while they are simultaneously dropping old generations, leaving you with a 1-2 year window of support.

It's pretty crazy that the 6900 XT and 6950 XT aren't supported.

Eh, YMMV. I was using ROCm for minor AI things as far back as 2023 on an "unsupported" 6750 XT [0]. Even trained some LoRAs. Mostly the issues were how many libs were CUDA-only.

[0] https://news.ycombinator.com/item?id=43207015

I have an RX 6700 XT, damn. AMD is shooting themselves in the foot.

Try it before you give up; I got plenty of AI stuff working on my 6750 XT years ago.
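For anyone wanting to try this: a sketch of the usual workaround (not spelled out in the thread, so treat the details as my assumption) is to set ROCm's `HSA_OVERRIDE_GFX_VERSION` environment variable so the runtime treats an unsupported RDNA 2 part like the 6700 XT/6750 XT (gfx1031) as the officially supported gfx1030 ISA:

```shell
# Hedged sketch: make the ROCm runtime treat a gfx1031 card (RX 6700 XT /
# 6750 XT) as gfx1030 (RX 6800/6900 series), which shares the same ISA.
# 10.3.0 is the gfx version string corresponding to gfx1030.
export HSA_OVERRIDE_GFX_VERSION=10.3.0

# Then run the workload as usual, e.g. with a ROCm build of PyTorch
# (commented out here since it assumes torch is installed):
#   python -c "import torch; print(torch.cuda.is_available())"
```

This works because ROCm's kernel binaries are compiled per ISA, and gfx1030/gfx1031 are close enough that the gfx1030 binaries generally run; it is an unsupported hack, so expect occasional breakage.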