Why would anyone want to use a complex kludge like QUIC and be at the mercy of broken TLS libraries, when WireGuard implementations are ~5k LOC and easily auditable?
Have all the bugs in OpenSSL over the years taught us nothing?
FWIW QUIC enforces TLS 1.3 and modern crypto. That's a much smaller attack surface with far fewer foot-guns. Combined with memory-safe TLS implementations in Go and Rust, I think it's fair to say things have changed since the Heartbleed days.
> I think it's fair to say things have changed since the heartbleed days.
The Linux Foundation is still funding OpenSSL development after a scathing review of the codebase[1], so I think it's fair to say things haven't changed a bit.
1: https://www.openbsd.org/papers/bsdcan14-libressl/
QUIC allows identities to be signing keys, which are what you use to build public key infrastructure. You need to be able to sign things to do web-of-trust or to make arbitrary attestations.
WireGuard has a concept of identity as long-term key pairs, but since the protocol is based on Diffie-Hellman, arriving at a shared ephemeral secret, those keys are only useful for establishing active connections. A post-quantum version of WireGuard would use KEMs, which also don't work for general-purpose PKI.
What we really need is a signature-based handshake and simple VPN solution (like what WireGuard builds on the Noise Protocol Framework) that a stream-multiplexing protocol can be layered on top of. QUIC gets the layers right, in the right order (first encrypt, then deal with transport features), but annoyingly none of the QUIC implementations make it easy to take one layer without the other.
"Have all the bugs in OpenSSL over the years taught us nothing?"
TweetNaCl to the rescue.