> The efficiency is a red herring - you will still use every CPU core for IO threads in ffmpeg, if you don't configure that away, which you do not. And it requires really annoying setup and premium features on stuff like Plex. It just makes no sense!
I would love to learn more about this! What can I do to fully optimize ffmpeg hardware encoding?
My use case is transcoding a massive media library to AV1 for the space gains. I am aware this comes with a slight drop in quality (which I would also be keen to learn how to minimize), but so far, in my testing, GPU encoding has been the fastest and most efficient, especially with Nvidia cards.
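For reference, a minimal NVENC AV1 invocation might look like the sketch below. This assumes an ffmpeg build with `av1_nvenc` support and a GPU that has an AV1 encoder (RTX 40-series or newer); the `-cq` value and preset are illustrative starting points, not tuned recommendations:

```shell
# Decode and encode on the GPU; frames stay in GPU memory between the two.
# -preset p7 is the slowest/highest-quality NVENC preset; -cq 30 -b:v 0
# requests constant-quality mode. Audio is passed through untouched.
ffmpeg -hwaccel cuda -hwaccel_output_format cuda -i input.mkv \
  -c:v av1_nvenc -preset p7 -cq 30 -b:v 0 \
  -c:a copy output.mkv
```

Lower `-cq` means higher quality and larger files; you'd want to sweep it against your own media rather than trust any single number.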
GPU encoding is fast, but it usually produces poorer quality results because it skips the expensive search paths that are hard to do quickly on dedicated encoding hardware.
If you want to optimise, try different encoders (sounds like you've already done some of this) and lots of different settings. It'll take a lot of tuning to find the right balance between quality, speed, and size for your particular media, while also keeping your machine working as hard as possible.
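When sweeping settings, it helps to score each candidate encode against the source rather than eyeballing it. One common approach is VMAF, which ffmpeg exposes as a filter if it was built with libvmaf; a rough sketch, assuming `encoded.mkv` and `source.mkv` are the hypothetical test files:

```shell
# First input is the distorted (encoded) file, second is the reference.
# libvmaf prints an aggregate VMAF score to the log; -f null - discards
# the video output since we only want the metric.
ffmpeg -i encoded.mkv -i source.mkv \
  -lavfi libvmaf -f null -
```

Running this across a grid of CRF/CQ values and presets on a few representative clips gives you a quality-vs-size curve to pick from, instead of transcoding the whole library on a guess.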
Driveby 2c as a video industry person: don't retranscode your media unless it's in a really space-inefficient codec and you're seriously hurting for space. You'll burn a lot of power retranscoding; are you actually saving useful $$$ of storage in exchange for that spend? Storage is cheap, and there's always a better codec coming along that you could retranscode into to save some more space. It's a vicious cycle: each generation has to encode the artifacts from the previous generations.
Ideally you'd use the full system, saturating both CPU and GPU, which on consumer NVIDIA cards also means lifting the cap on simultaneous NVENC sessions. That said, software AV1 looks a lot better than hardware AV1 per bit.
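For comparison with the hardware path, a software AV1 encode via SVT-AV1 might look like this sketch. It assumes an ffmpeg build with `libsvtav1`; the preset and CRF are illustrative, with lower presets being slower and higher quality:

```shell
# SVT-AV1 software encode: -preset ranges roughly 0 (slowest) to 13
# (fastest); -crf controls quality, lower = better. This will happily
# saturate every CPU core by default.
ffmpeg -i input.mkv \
  -c:v libsvtav1 -preset 6 -crf 30 \
  -c:a copy output.mkv
```

At the same file size this will generally beat an NVENC AV1 encode on quality, at the cost of much longer encode times and higher power draw, which is exactly the trade-off being argued about above.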