For those who use Blender, from the article's section on Blender:
> We hope that, in the future, there will be real options other than NVIDIA for GPU-based rendering, as it is an area where competition is nearly non-existent.
And checking opendata.blender.org, an NVIDIA GeForce RTX 4080 Laptop GPU scores 5301.8, while the Intel Arc Pro B70 is still at 3824.64.
So Intel GPUs still have some way to go before they perform close to NVIDIA's.
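A quick sanity check on the gap implied by those two Open Data numbers (median samples/minute as quoted above; higher is better):

```python
# Ratio check on the two Blender Open Data scores quoted above
# (median benchmark scores; higher is better).
rtx_4080_laptop = 5301.8
arc_pro_b70 = 3824.64

ratio = rtx_4080_laptop / arc_pro_b70
print(f"RTX 4080 Laptop is {ratio:.2f}x the Arc Pro B70 "
      f"(~{(ratio - 1) * 100:.0f}% faster)")
# -> roughly 1.39x, i.e. about 39% faster
```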
Also the first section I jumped to :) To Intel's credit, it seems they're slowly improving; the section starts with:
> Over the last year or two, Intel has worked to deliver serious optimizations for and compatibility with Blender GPU rendering on its Arc GPUs. Although NVIDIA has long held an advantage in the application, our last time looking at Intel’s cards indicated ongoing improvements. This round of testing is no different. We found that the Arc Pro B70 provided more than twice the performance of the B50, also beating the R9700 by 9%.
Is this because Blender is in fact using CUDA?
Blender's Cycles supports CUDA, HIP, oneAPI, and Metal backends. So Intel GPUs are performing poorly even when using their native API (oneAPI), not CUDA.
The key feature on Intel platforms is the hardware denoise acceleration (NVIDIA OptiX also works well). Note, AMD OpenCL works quite well for some renders, but Blender's Flamenco render farm likes consistent cluster hardware.
For 8K HDR10 media or 3+ screens, the RTX 5090 32GB model is going to be the minimum card people should buy. Just because you see 4 DisplayPort connectors doesn't mean the card can push the bit rates needed to drive an HDR10 display above 60Hz.
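The bandwidth point is easy to verify with back-of-envelope math. A minimal sketch, assuming an uncompressed 4:4:4 RGB signal at 10 bits per channel and ignoring blanking intervals (real link overhead is somewhat higher); the DP 1.4 payload figure is the standard HBR3 number:

```python
# Back-of-envelope: raw pixel data rate for an 8K HDR10 display stream.
# HDR10 = 10 bits per channel, 3 channels (RGB 4:4:4, no chroma subsampling).
def video_gbps(width, height, bits_per_channel, channels, refresh_hz):
    """Raw pixel data rate in Gbit/s (ignores blanking/protocol overhead)."""
    return width * height * bits_per_channel * channels * refresh_hz / 1e9

rate = video_gbps(7680, 4320, 10, 3, 60)
print(f"8K HDR10 @ 60 Hz: {rate:.1f} Gbit/s")  # ~59.7 Gbit/s

# DisplayPort 1.4 (HBR3) carries about 25.92 Gbit/s of payload per port,
# so an 8K60 HDR10 signal needs DSC compression or a DP 2.x (UHBR) link.
DP14_PAYLOAD_GBPS = 25.92
print("fits DP 1.4 without DSC?", rate <= DP14_PAYLOAD_GBPS)  # False
```

So even before pushing past 60Hz, 8K HDR10 already roughly doubles what an older DP 1.4 port can carry uncompressed.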
The Mac Studio with >512GB of unified RAM/VRAM is a better LLM lab solution (Apple recently nerfed it to 256GB). Who cares if a task completes a bit slower; it doesn't matter given the lower error rates... and it doesn't cost $14k like an RTX 6000. =3
Great tutorial on getting Blender to behave on mid-grade PCs, laptops, etc.:
https://www.youtube.com/watch?v=a0GW8Na5CIE