The proxy vs packet capture debate is a bit of a non-debate in practice — the moment TLS is on (and it should always be on), packet capture sees nothing useful. eBPF is interesting for observability but it works at the network/syscall level — doing actual SQL-level inspection or blocking through eBPF would mean reassembling TCP streams and parsing the Postgres wire protocol in kernel space, which is not really practical.

I've been building a Postgres wire protocol proxy in Go and the latency concern is the thing people always bring up first, but it's the wrong thing to worry about. A proxy adds microseconds, your queries take milliseconds. Nobody will ever notice. The actual hard part — the thing that will eat weeks of your life — is implementing the wire protocol correctly. Everyone starts with simple query messages and thinks they're 80% done. Then you hit the extended query protocol (Parse/Bind/Execute), prepared statements, COPY, notifications, and you realize the simple path was maybe 20% of what Postgres actually does. Once you get through that though, monitoring becomes almost a side effect. You're already parsing every query, so you can filter them, enforce policies, do tenant-level isolation, rotate credentials — things that are fundamentally impossible with any passive approach.
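For anyone curious what the protocol work actually looks like: the framing itself is the easy part (every regular message is one type byte followed by a big-endian int32 length that includes the length field itself); the hard part is the zoo of message types on top. A minimal sketch in Go of that framing, assuming you've already handled the startup/SSLRequest messages, which have no type byte:

```go
package main

import (
	"encoding/binary"
	"fmt"
)

// Msg is one framed protocol message: a one-byte type tag followed by
// a big-endian int32 length that includes the 4 length bytes themselves.
type Msg struct {
	Type byte
	Body []byte
}

// parseMsg decodes a single regular-protocol message from buf and
// returns the message plus the number of bytes consumed. Startup and
// SSLRequest messages have no type byte and must be handled before
// traffic reaches this path.
func parseMsg(buf []byte) (Msg, int, error) {
	if len(buf) < 5 {
		return Msg{}, 0, fmt.Errorf("need at least 5 bytes, have %d", len(buf))
	}
	length := int(binary.BigEndian.Uint32(buf[1:5])) // includes the 4 length bytes
	total := 1 + length
	if len(buf) < total {
		return Msg{}, 0, fmt.Errorf("incomplete message: need %d bytes, have %d", total, len(buf))
	}
	return Msg{Type: buf[0], Body: buf[5:total]}, total, nil
}

func main() {
	// A simple Query message: 'Q', length, then the SQL text, NUL-terminated.
	sql := "SELECT 1"
	body := append([]byte(sql), 0)
	frame := make([]byte, 5+len(body))
	frame[0] = 'Q'
	binary.BigEndian.PutUint32(frame[1:5], uint32(4+len(body)))
	copy(frame[5:], body)

	m, n, err := parseMsg(frame)
	fmt.Println(string(m.Type), n, err == nil)           // Q 14 true
	fmt.Printf("%q\n", m.Body[:len(m.Body)-1])            // "SELECT 1"
}
```

Once you have this loop, the simple query path ('Q' messages) falls out quickly; the extended protocol is where each of Parse, Bind, and Execute gets its own body layout and its own state to track.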

You can decode TLS traffic with a bit of effort, though you have to control the endpoints, which makes it a bit moot: if you control them you can just... enable query logging.

True, but logging tells you what happened; a proxy lets you decide what's allowed to happen before it reaches the database. Policy enforcement, tenant isolation, that kind of thing. They're complementary, really.
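To make the "decide before it hits the database" point concrete, here's a toy sketch of that kind of pre-forward check. The tenant names and keyword allowlist are made up for illustration; a real proxy would decide from a parsed statement, not a string prefix:

```go
package main

import (
	"fmt"
	"strings"
)

// policy is a toy per-tenant allowlist keyed by the leading SQL keyword.
// A real proxy would work from a proper parse tree, not prefixes.
var policy = map[string]map[string]bool{
	"tenant-a": {"SELECT": true, "INSERT": true},
	"tenant-b": {"SELECT": true},
}

// allowQuery decides, before the query is forwarded to Postgres,
// whether this tenant may run this statement. Unknown tenants and
// empty statements are denied by default.
func allowQuery(tenant, sql string) bool {
	fields := strings.Fields(strings.ToUpper(sql))
	if len(fields) == 0 {
		return false
	}
	return policy[tenant][fields[0]]
}

func main() {
	fmt.Println(allowQuery("tenant-b", "select * from users")) // true
	fmt.Println(allowQuery("tenant-b", "DELETE FROM users"))   // false
}
```

The point is just that the decision happens in the proxy, before any bytes reach the backend, which is exactly what a log can never give you.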

Also, just to add to this: for eBPF's "compile once, run everywhere" (CO-RE) to actually work, you need a BTF-enabled kernel.
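Quick way to check from userspace whether your kernel qualifies: CO-RE programs look for the kernel's type information at /sys/kernel/btf/vmlinux. A trivial Go sketch:

```go
package main

import (
	"fmt"
	"os"
)

// hasBTF reports whether the kernel exposes its type information at
// the given path. CO-RE ("compile once, run everywhere") eBPF programs
// need this; the canonical location is /sys/kernel/btf/vmlinux.
func hasBTF(path string) bool {
	_, err := os.Stat(path)
	return err == nil
}

func main() {
	fmt.Println("kernel BTF available:", hasBTF("/sys/kernel/btf/vmlinux"))
}
```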

Exactly, and that's one more reason I went with a userspace proxy — no kernel deps, runs anywhere, way easier to debug.

TLS for your database? Are you connecting outside of the local machine or VPN?

Yeah, more and more. Zero-trust is pushing TLS everywhere, even inside VPNs — lateral movement is a real thing. And several compliance frameworks now expect encryption in transit regardless of network topology. With connection pooling the overhead is basically zero anyway.

Indeed, if you're running the db in production and aren't using TLS, you're doing it wrong nowadays. Nearly every compliance framework will require it, and it's a very good idea anyway even if you don't care about compliance.

... but if it's over a VPN it's already encrypted in transit?

Encrypted in transit yes, but only between the VPN endpoints. Anything already inside the network (compromised host, rogue container, bad route) sees your queries in cleartext. TLS on the connection itself gives you end-to-end encryption between your app and Postgres, no matter what's going on in the network in between. Same reason people moved to HTTPS everywhere instead of just trusting the corporate firewall. And with connection pooling you pay the TLS handshake once and reuse it, so the overhead is basically nothing.
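For the Go folks, "end-to-end" here concretely means the client verifies the server's certificate chain *and* hostname, not just that the bytes are encrypted (libpq calls this sslmode=verify-full). A sketch of what that client config looks like; the CA bytes and server name are placeholders, and the throwaway self-signed cert in main is only there to keep the demo self-contained:

```go
package main

import (
	"crypto/ecdsa"
	"crypto/elliptic"
	"crypto/rand"
	"crypto/tls"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"fmt"
	"math/big"
	"time"
)

// buildTLS returns a client config that verifies the server certificate
// against a pinned CA and checks the hostname — the verify-full behavior
// in libpq terms, as opposed to "encrypt but trust anyone".
func buildTLS(caPEM []byte, serverName string) (*tls.Config, error) {
	pool := x509.NewCertPool()
	if !pool.AppendCertsFromPEM(caPEM) {
		return nil, fmt.Errorf("no usable CA certificates in PEM input")
	}
	return &tls.Config{
		RootCAs:    pool,
		ServerName: serverName, // hostname verification, not just encryption
		MinVersion: tls.VersionTLS12,
	}, nil
}

func main() {
	// Throwaway self-signed CA so the demo runs without real cert files.
	key, _ := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
	tmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "demo-ca"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().Add(time.Hour),
		IsCA:                  true,
		BasicConstraintsValid: true,
	}
	der, _ := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	caPEM := pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der})

	cfg, err := buildTLS(caPEM, "db.internal")
	fmt.Println(err == nil, cfg.ServerName) // true db.internal
}
```

With a compromised host on the same segment, encryption alone isn't enough; it's the ServerName check against a CA you control that stops someone terminating the connection in the middle.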

Maybe we're talking about different things. If there's a VPN link between the two servers, there shouldn't be any "network in between".

Fair point, if it's a true point-to-point VPN between just the two boxes, there's not much "in between" to worry about. TLS on top is mostly defense in depth at that point. What I had in mind was the more common setup where your app and DB sit on a shared network (VPC, corporate LAN). The traffic between them is unencrypted, and you're trusting every piece of infrastructure in that path (switches, hypervisors, sidecar containers) to not be compromised.