I don't disagree, and I am trying to argue for it myself; I have used postgres as a "queue" or as the backlog of events to be sent (like the outbox pattern). But what if I have 4 services that need to know X happened to customer Y? I feel like event delivery through postgres quickly becomes cumbersome when you have to make sure every consumer gets the events it needs. The posted link at least tries to address this.
The standard approach, which Kafka also uses beneath all the libraries hiding it from you, is:
The publisher has a set of tables (topics and partitions) of events, ordered, with each event assigned a sequence number.
Publisher stores no state for consumers in any way.
Instead, each consumer keeps a cursor (a variable holding an event sequence number) indicating how far it has read for each event log table it is reading.
The consumer can then advance (or rewind) its own cursor in whatever way it wishes. The publisher is oblivious to any consumer-side state.
This is the fundamental piece of how event log publishing works (as opposed to queues, which are something else entirely; the article talks about both use cases).
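Concretely, in Postgres that might look something like the sketch below. This is only an illustration: the `events`/`cursors` table names and columns, the `handle()` function, and the psycopg2 driver are my assumptions, not anything from the article.

```python
import time

import psycopg2  # assumption: any Postgres client library would do

# Assumed (illustrative) schema:
#   events(topic text, seq bigserial, payload jsonb,
#          published_at timestamptz, PRIMARY KEY (topic, seq))  -- publisher writes
#   cursors(consumer text, topic text, last_seq bigint,
#           PRIMARY KEY (consumer, topic))                      -- consumer owns

FETCH_BATCH = """
    SELECT seq, payload FROM events
     WHERE topic = %s AND seq > %s
     ORDER BY seq LIMIT %s
"""
LOAD_CURSOR = "SELECT last_seq FROM cursors WHERE consumer = %s AND topic = %s"
SAVE_CURSOR = """
    INSERT INTO cursors (consumer, topic, last_seq) VALUES (%s, %s, %s)
    ON CONFLICT (consumer, topic) DO UPDATE SET last_seq = EXCLUDED.last_seq
"""


def handle(payload):
    # Hypothetical per-event business logic.
    print("got event:", payload)


def run_consumer(dsn, consumer, topic, batch_size=100, poll_secs=1.0):
    conn = psycopg2.connect(dsn)

    # Load our own cursor; the publisher knows nothing about it.
    with conn, conn.cursor() as cur:
        cur.execute(LOAD_CURSOR, (consumer, topic))
        row = cur.fetchone()
    last_seq = row[0] if row else 0  # "rewind" = just set this lower

    while True:
        with conn, conn.cursor() as cur:
            cur.execute(FETCH_BATCH, (topic, last_seq, batch_size))
            rows = cur.fetchall()
        for seq, payload in rows:
            handle(payload)
            last_seq = seq  # advance the cursor only after handling
        with conn, conn.cursor() as cur:
            cur.execute(SAVE_CURSOR, (consumer, topic, last_seq))
        if not rows:
            time.sleep(poll_secs)  # back off when caught up
```

Persisting the cursor only after handling gives you at-least-once delivery: a consumer that crashes simply re-reads from its last saved position.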
Call me dumb - I'll take it! But if we really are trying to keep it simple simple...
Then you just query from the event_receiver_svcX side for events where published > datetime and event_receiver_svcX = FALSE; once read, set it to TRUE.
To mitigate too many active connections, have a polling/backoff strategy and place a proxy in front of the actual database to proactively throttle where needed (rough sketch after the table below).
The event table would look like:
| event_id | event_msg_src | event_msg | event_msg_published | event_receiver_svc1 | event_receiver_svc2 | event_receiver_svc3 |
|----------|---------------|---------------------|---------------------|---------------------|---------------------|---------------------|
| evt01 | svc1 | json_message_format | datetime | TRUE | TRUE | FALSE |
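A rough sketch of the poll/mark loop for one of the services (svc3 here), reusing the columns from the table above. The table name `event_table`, the psycopg2 driver, and the use of UPDATE ... RETURNING are my own additions, not part of the comment:

```python
import time

import psycopg2  # assumption: any Postgres client library would do

# Flip this service's flag and return the claimed rows in one statement,
# so the read and the TRUE-marking happen atomically.
CLAIM_FOR_SVC3 = """
    UPDATE event_table
       SET event_receiver_svc3 = TRUE
     WHERE event_receiver_svc3 = FALSE
       AND event_msg_published > %s
    RETURNING event_id, event_msg
"""


def poll_svc3(dsn, since, backoff_secs=2.0):
    conn = psycopg2.connect(dsn)
    while True:
        with conn, conn.cursor() as cur:
            cur.execute(CLAIM_FOR_SVC3, (since,))
            rows = cur.fetchall()
        for event_id, event_msg in rows:
            print("svc3 handling", event_id, event_msg)  # hypothetical handler
        if not rows:
            time.sleep(backoff_secs)  # polling/backoff keeps connection count down
```

One trade-off with flipping the flag as part of the read: if the service crashes mid-batch, those events are already marked TRUE, so you give up at-least-once delivery in exchange for simplicity.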