> By running all encoder instances in parallel, better parallelism can be obtained overall.
This makes a lot of sense for the live-streaming use case, and some sense for generally transcoding a video into multiple formats. But I would love to see time-axis parallelization in ffmpeg: quickly split the input video into keyframe-aligned chunks, then encode each chunk in parallel. This would allow excellent parallelization even when producing only a single output, and without the quality loss that most intra-frame parallelization incurs.
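That chunked approach can roughly be approximated with today's CLI; here is a sketch (filenames, segment length, and encoder settings are all illustrative, and `-c copy` segmenting only cuts at existing keyframes):

```shell
# Split on keyframe boundaries without re-encoding
ffmpeg -i input.mp4 -c copy -f segment -segment_time 10 \
       -reset_timestamps 1 chunk_%03d.mp4

# Encode the chunks in parallel, e.g. 4 at a time
ls chunk_*.mp4 | xargs -P 4 -I {} ffmpeg -i {} -c:v libx264 -crf 23 enc_{}

# Stitch the encoded chunks back together
for f in enc_chunk_*.mp4; do echo "file '$f'"; done > list.txt
ffmpeg -f concat -safe 0 -i list.txt -c copy output.mp4
```

One caveat with doing it this way: rate control runs independently per chunk, so overall bitrate allocation is worse than a single-pass encode of the whole file.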
Encoders do some interframe analysis (motion, etc) as part of encoding P/B-frames; I wonder if this work could be done once and reused for all the encodings.
I would guess that if you have it export multiple files from the same source in one go, it already does, but I could be wrong.
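For what it's worth, a single ffmpeg invocation can already produce several outputs from one decode; a sketch (encoder settings are illustrative):

```shell
# One input, decoded once; two independent encodes in the same process
ffmpeg -i input.mp4 \
       -map 0:v -c:v libx264 -b:v 4M out_1080p.mp4 \
       -map 0:v -c:v libx264 -b:v 1M -vf scale=-2:480 out_480p.mp4
```

The decode is shared, but each output still runs its own encoder instance with its own motion estimation, which is the duplicated work being discussed here.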
This post is about how Meta engineers recently submitted a patch that avoids starting a new process for every output encoding, so the outputs can share the decoding step. Maybe that also includes sharing the motion estimation step, but I would be careful making that assumption; FFmpeg has a lot of low-hanging optimization work that simply hasn't been done because no one has done it yet.