I agree with nearly everything except your point (1).

Periodic polling is awkward on both ends: you add arbitrary latency _and_ you add database load that grows in proportion to the number of interested clients.

Events, and ideally coalesced events, serve the same purpose as interrupts in a uniprocess (versus distributed) system, even if you don't want a proper queue. At minimum they tell you _when_ to poll, and they let you set and adjust policy on how much your software should give a shit at any given time.
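(In Postgres itself, LISTEN/NOTIFY is the built-in version of this. As a language-agnostic sketch of the coalescing idea, here's roughly the pattern in Python, using a plain `threading.Event` as the wakeup primitive; all names here are illustrative, not from any particular library.)

```python
import threading

class CoalescedWakeup:
    """Sketch of a coalesced event: producers signal that *something*
    changed, and the consumer polls once per wakeup instead of once
    per event or once per timer tick."""

    def __init__(self):
        self._event = threading.Event()

    def notify(self):
        # Idempotent: ten notifies before the consumer wakes still
        # produce exactly one wakeup.
        self._event.set()

    def wait(self, timeout=None):
        # Returns True if woken by a notify, False on timeout.
        # The timeout doubles as the "how much do we care" policy knob.
        # We clear *before* the caller polls, so a notify that lands
        # during the poll just triggers one more (cheap) poll.
        woke = self._event.wait(timeout)
        self._event.clear()
        return woke

# Producers call notify() after enqueueing work; the consumer loops
# on wait() and hits the database only when woken (or on timeout, as
# a backstop).
wakeup = CoalescedWakeup()
for _ in range(10):                # ten events...
    wakeup.notify()
print(wakeup.wait(timeout=0.1))    # ...one wakeup: True
print(wakeup.wait(timeout=0.05))   # no new events: times out, False
```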

From a database load perspective, Postgres can get you pretty far. The read triggered by each poll should be a trivial index-only scan served straight out of RAM, and even a modest Postgres instance should be able to handle thousands of those per second.
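Concretely, something like this (the `jobs` table and column names are made up for illustration): a partial covering index over the pending rows means the poll query never needs to touch the heap, as long as the visibility map is reasonably current.

```sql
-- Hypothetical queue table; names are illustrative.
CREATE TABLE jobs (
    id        bigserial PRIMARY KEY,
    run_after timestamptz NOT NULL DEFAULT now(),
    done      boolean NOT NULL DEFAULT false,
    payload   jsonb NOT NULL
);

-- Partial index covering only pending rows. It stays tiny even if
-- the table grows, and the poll below can be an index-only scan.
CREATE INDEX jobs_pending_idx ON jobs (run_after) WHERE NOT done;

-- The per-poll query: is there anything ready to run?
SELECT EXISTS (
    SELECT 1 FROM jobs WHERE NOT done AND run_after <= now()
);
```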

The limiting factor for most workloads will probably be the number of connections and the read/write mix. Once you get into hundreds or thousands of pollers and are writing many items to the queue per second, Postgres is going to lose its luster for sure.
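(If you do go down this road with multiple workers, the standard Postgres idiom for dequeueing without workers trampling each other is `FOR UPDATE SKIP LOCKED`. A rough sketch, again assuming a hypothetical `jobs(id, done, payload)` table:)

```sql
BEGIN;

-- Claim one pending job; rows already locked by a concurrent worker
-- are skipped rather than blocked on.
SELECT id, payload
FROM jobs
WHERE NOT done
ORDER BY id
FOR UPDATE SKIP LOCKED
LIMIT 1;

-- ...do the work, then mark it done ($1 = the id selected above):
UPDATE jobs SET done = true WHERE id = $1;

COMMIT;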

But in my experience at small and medium companies, a lot of workloads fit very, very comfortably into what Postgres can handle.