I have an RTX 6000 Pro Max-Q, which has 96GB of VRAM. The tool identified the hardware correctly but reported only 4GB, at least if I'm interpreting the RAM dropdown correctly.
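For reference, the real number is easy to get from `nvidia-smi` rather than guessing from the device name. A minimal sketch (function names are mine, not from the tool):

```python
import subprocess

def query_vram_mib():
    """Return total VRAM per GPU in MiB via nvidia-smi, or [] if unavailable."""
    try:
        out = subprocess.check_output(
            ["nvidia-smi", "--query-gpu=memory.total",
             "--format=csv,noheader,nounits"],
            text=True,
        )
    except (OSError, subprocess.CalledProcessError):
        return []
    return [int(line) for line in out.splitlines() if line.strip()]

def mib_to_gib(mib):
    """Convert MiB to GiB, rounded to the nearest whole GiB."""
    return round(mib / 1024)
```

On this card that should round to 96, not 4.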
It then recommends full-resolution models, which are unnecessary for quality inference. Quantized models are routine for local inference, and the tool should account for that.
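To make the point concrete, here's a rough weight-only memory estimate (my own back-of-the-envelope helper, ignoring KV cache and runtime overhead): a 70B model at fp16 needs ~130 GiB and won't fit in 96GB, while the same model at 4-bit quantization needs ~33 GiB and fits easily.

```python
def approx_weight_gib(n_params_billion, bits_per_weight):
    """Rough weight-only memory footprint in GiB: params * bits / 8 bytes.
    Ignores KV cache, activations, and runtime overhead."""
    return n_params_billion * 1e9 * bits_per_weight / 8 / 2**30

# 70B at fp16 vs. 4-bit quantization
print(round(approx_weight_gib(70, 16), 1))  # ~130.4 GiB: does not fit in 96GB
print(round(approx_weight_gib(70, 4), 1))   # ~32.6 GiB: fits comfortably
```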
Needs work.