I've never been convinced by an explanation of how QUIC applies to flows in the data center.
Ossification doesn't apply (or it shouldn't, IMHO: the point of open-source software is that you can change it to fit your needs; if you don't like what upstream is doing, you can run a local fork that does what you want. Sure, it's nicer if it's upstreamed, but try running a local fork of Windows or macOS). You can make congestion control work for you when you control both sides, and enterprise switches and routers aren't messing with TCP flows. If you're pushing enough traffic for this to matter, the cost of QUIC seems far too high to justify, even if it helps with some issues.
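For what it's worth, tuning congestion control when you own both endpoints doesn't require touching the protocol at all. Here's a minimal sketch, assuming a Linux host with the DCTCP module loaded (the algorithm name and the fallback behavior are my assumptions, not anything from the comment above):

```c
/* Minimal sketch: pick a congestion control algorithm per socket on Linux.
 * Assumes the kernel has DCTCP available; if not, setsockopt fails and the
 * socket keeps the system default. */
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <netinet/in.h>
#include <netinet/tcp.h>
#include <unistd.h>

int main(void) {
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0) { perror("socket"); return 1; }

    /* Request DCTCP for this connection only; no fleet-wide sysctl needed. */
    const char algo[] = "dctcp";
    if (setsockopt(fd, IPPROTO_TCP, TCP_CONGESTION, algo, strlen(algo)) < 0)
        perror("setsockopt(TCP_CONGESTION)");

    /* ... connect()/send() as usual; congestion control is now per-flow ... */
    close(fd);
    return 0;
}
```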
I don't see why this exception to the end-to-end principle should exist. At the scale of a single host today, with hundreds of CPUs and hundreds of tenants sharing one kernel, the kernel itself becomes an unwanted middlebox.