HTTP's semantics are useful for anyone developing a web app, but the wire protocol of HTTP itself is awful. Multiplexing didn’t arrive until HTTP 2.0 for example. So using HTTP for communication between a reverse proxy and a backend is very wasteful. There are also security issues, such as different parsers disagreeing on where the boundaries of a request end.
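That parser-disagreement problem is the basis of HTTP request smuggling. As a rough sketch (illustrative bytes only, not any real host): a request carrying both Content-Length and Transfer-Encoding can be measured two different ways, so a lenient front end and a back end can split the stream at different points:

```python
# Classic "CL.TE" shape: RFC 9112 says Transfer-Encoding wins when both
# headers are present, but a front end that honors Content-Length instead
# will disagree with the back end about where the body ends.
request = (
    b"POST / HTTP/1.1\r\n"
    b"Host: example.com\r\n"
    b"Content-Length: 13\r\n"
    b"Transfer-Encoding: chunked\r\n"
    b"\r\n"
    b"0\r\n"
    b"\r\n"
    b"SMUGGLED"
)

body = request.split(b"\r\n\r\n", 1)[1]

# A Content-Length parser consumes exactly 13 body bytes...
assert len(body) == 13

# ...while a chunked parser stops at the terminating "0\r\n\r\n",
# leaving the trailing bytes to be read as the start of the NEXT request.
_, leftover = body.split(b"0\r\n\r\n", 1)
print(leftover)  # b'SMUGGLED'
```

The leftover bytes get prepended to whoever's request comes next on the shared backend connection, which is why unambiguous framing between proxy and backend matters.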
Google, for example, has long wrapped HTTP in its own Stubby protocol between its frontline web servers and applications; it's much faster and more featureful than the raw HTTP wire protocol. A typical company doesn't need this, but as scale increases it becomes worthwhile to adopt a different wire protocol and develop all the tooling around it.
Won't argue with that, but it's a classic example of "Worse is better" [1]. It was simple and "good enough". Being ubiquitous is often more important than being efficient.
Most of the arguments for using HTTP reverse proxying over FastCGI or SCGI came down to ubiquity. It let you do things (like connect directly to your app servers with a web browser) that you couldn't do with FastCGI.
[1] https://dreamsongs.com/RiseOfWorseIsBetter.html
> Multiplexing didn’t arrive until HTTP 2.0 for example. So using HTTP for communication between a reverse proxy and a backend is very wasteful.
HTTP 2.0 multiplexing is tcp in tcp; it's asking for trouble. Just open more connections and let TCP be your multiplexer. Depending on your connection rate, you can't really sustain 64k connections from one frontend IP to each service ip:port, but if your rate isn't too high, 20-30k is feasible. Most HTTP-based applications don't need or benefit from anywhere near that level of frontend-to-backend concurrency. But if it's not enough, you can add more IPs to the frontend or backend, or more ports to the backend.
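The back-of-envelope behind those numbers, as a sketch: a TCP connection is identified by the (src ip, src port, dst ip, dst port) 4-tuple, so each (frontend IP, backend ip:port) pair gets its own ephemeral-port space, and adding IPs or ports multiplies capacity:

```python
# Linux's default ephemeral range (net.ipv4.ip_local_port_range) is
# 32768-60999, which is where "20-30k per frontend ip" comes from;
# the theoretical 4-tuple ceiling is ~64k ports.
EPHEMERAL_PORTS = 60999 - 32768 + 1

def max_connections(frontend_ips, backend_ips, backend_ports):
    # Each (frontend ip, backend ip, backend port) triple has an
    # independent ephemeral-port space, so capacity is multiplicative.
    return frontend_ips * backend_ips * backend_ports * EPHEMERAL_PORTS

print(max_connections(1, 1, 1))  # 28232 -- one IP pair, one backend port
print(max_connections(2, 1, 2))  # 112928 -- 2 frontend IPs x 2 backend ports
```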
I'm pretty sympathetic to the argument for FastCGI or similar as the frontend-to-backend protocol, though; keeping client-set headers clearly separate from frontend-set headers is very nice, and unambiguous agreement on message boundaries is of obvious value. Unless you're just doing a straight TCP proxy, in which case the PROXY protocol is good enough to transfer the original IPs and then pass data as-is.
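For reference, the human-readable v1 form of the PROXY protocol is just one line prepended to the stream, after which the proxy passes bytes through untouched. A minimal sketch (the addresses are placeholders):

```python
# PROXY protocol v1 header, per the HAProxy spec: a single CRLF-terminated
# line carrying the original client and destination addresses.
def proxy_v1_header(src_ip, dst_ip, src_port, dst_port, family="TCP4"):
    return f"PROXY {family} {src_ip} {dst_ip} {src_port} {dst_port}\r\n".encode()

print(proxy_v1_header("203.0.113.7", "10.0.0.5", 51234, 443))
# b'PROXY TCP4 203.0.113.7 10.0.0.5 51234 443\r\n'
```

The backend reads that one line, records the real client address, and treats everything after it as the opaque proxied stream.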
> HTTP 2.0 multiplexing is tcp in tcp
It’s not. It doesn’t literally run another TCP congestion-control algorithm inside a TCP tunnel. However, I do agree that the implementation of multiplexing in HTTP/2.0 isn’t the best: a single lost TCP packet stalls every multiplexed stream (head-of-line blocking), which is part of what HTTP/3 over QUIC was designed to fix.
Don’t forget http pipelining!