> Do you really need a queue? (Alternative: periodic polling of a DB)
In my experience it’s not the reads but the writes that are hard to scale. Reading is cheap and can sometimes be done off a replica. Writing to PostgreSQL at a high sustained rate requires careful tuning and design. A stream of UPDATEs can be very painful, INSERTs aren’t cheap, and even batched COPY blocks can be tricky.
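To make the COPY point concrete: batching rows through COPY instead of issuing per-row INSERTs is usually the first lever for sustained write throughput. A rough sketch with psycopg2 (the "events" table, its columns, and the DSN are made up for illustration):

    import io
    import psycopg2

    def copy_batch(conn, rows):
        # Stream a batch of (id, payload) tuples through COPY instead of
        # row-by-row INSERTs; values must not contain tabs/newlines here.
        buf = io.StringIO()
        for row_id, payload in rows:
            buf.write(f"{row_id}\t{payload}\n")
        buf.seek(0)
        with conn.cursor() as cur:
            cur.copy_from(buf, "events", columns=("id", "payload"))
        conn.commit()

    conn = psycopg2.connect("dbname=app")  # hypothetical DSN
    copy_batch(conn, [(1, "created"), (2, "updated")])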
Plus, of course, you can take out the primary even with a read from a replica. It's not a trivial feat, but you can manage it with the combination of streaming replication and an hours-long read from the replica for a massive analytical workload. For large reads Postgres will spill to temporary files as needed, and once the replica falls far enough behind, the cascading effect of replication backpressure will cause the primary to block further writes from getting through...
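It doesn't prevent that failure mode, but watching replica lag from the primary at least makes it visible before commits start stalling. A minimal monitoring sketch (the 512 MB threshold and the DSN are arbitrary), assuming PostgreSQL 10+ for pg_wal_lsn_diff:

    import psycopg2

    LAG_LIMIT_BYTES = 512 * 1024 * 1024  # arbitrary alert threshold

    conn = psycopg2.connect("dbname=app")  # connect to the primary; hypothetical DSN
    with conn.cursor() as cur:
        cur.execute("""
            SELECT application_name,
                   pg_wal_lsn_diff(pg_current_wal_lsn(), replay_lsn) AS replay_lag_bytes
            FROM pg_stat_replication
        """)
        for name, lag in cur.fetchall():
            if lag is not None and lag > LAG_LIMIT_BYTES:
                print(f"replica {name!r} is {lag} bytes behind; a long read may be holding back replay")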
The scars from that kind of outage will never truly heal.
IME (...don't ask) it's easy enough if you forget to set idle_in_transaction_session_timeout, though I haven't... tried... on replicas.
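For anyone searching for it, the GUC is idle_in_transaction_session_timeout (available since 9.6); a quick sketch of setting it from application code, with purely illustrative values and a hypothetical DSN:

    import psycopg2

    conn = psycopg2.connect("dbname=app")  # hypothetical DSN
    conn.autocommit = True
    with conn.cursor() as cur:
        # End sessions that sit idle inside an open transaction for too long
        cur.execute("SET idle_in_transaction_session_timeout = '60s'")
        # Bound individual statements as well
        cur.execute("SET statement_timeout = '30s'")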
Postgres’ need (modulo running it on ZFS) for full-page writes [0], coupled with devs’ apparent need to use UUIDv4 everywhere - along with over-indexing - is a recipe for dragging writes down to the floor, yes.
0: https://www.enterprisedb.com/blog/impact-full-page-writes
Have you tried UUIDv7 yet?
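For context on why that helps: UUIDv7 prefixes a millisecond timestamp, so new keys land on the same few B-tree pages instead of splattering writes (and full-page images) across the whole index. A minimal generator sketch per RFC 9562, in case your environment has no native support:

    import os
    import time
    import uuid

    def uuid7() -> uuid.UUID:
        # 48-bit unix timestamp in milliseconds, then 80 random bits
        ts_ms = int(time.time() * 1000) & ((1 << 48) - 1)
        value = (ts_ms << 80) | int.from_bytes(os.urandom(10), "big")
        # Stamp the version (7) and variant (0b10) fields per RFC 9562
        value = (value & ~(0xF << 76)) | (0x7 << 76)
        value = (value & ~(0x3 << 62)) | (0x2 << 62)
        return uuid.UUID(int=value)

    print(uuid7())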