Bhyve bridges are inefficient: every packet traverses NIC → CPU → bridge → VM, adding unnecessary copies that kill throughput. Switching to SR-IOV eliminated that overhead and I saturated the 10 GbE link.

I do not see how that follows. Memory bandwidth is measured in the hundreds of Gb/s. You can issue tens of unnecessary full memory copies before you bottleneck at a paltry 10 Gb/s.
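A quick back-of-envelope sketch of that claim, with purely illustrative numbers (400 Gb/s of usable memory bandwidth is an assumption, and I'm counting each copy as one read plus one write):

```python
# Back-of-envelope: how many full packet copies fit in memory bandwidth
# before a 10 Gb/s link becomes the bottleneck? Numbers are illustrative.
link_gbps = 10        # NIC line rate
mem_bw_gbps = 400     # assumed usable memory bandwidth (varies by platform)
cost_per_copy = 2     # each copy reads the data once and writes it once

max_copies = mem_bw_gbps // (link_gbps * cost_per_copy)
print(max_copies)  # ~20 full copies of every byte before memory saturates
```

Even halving the assumed bandwidth leaves an order of magnitude of headroom over the one or two extra copies a bridge adds.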

It is much more likely that something else is terribly wrong in the network stack if it cannot even drive a measly 10 Gb/s.

That assumes memory bandwidth is the issue, and not latency and/or CPU.

You used the new optimized bridges on FreeBSD 15?

On Linux?