Think about this for a second. Kafka offsets are a thing, and consumer groups are a thing. It's trivial to ensure that each message is delivered to only one consumer if that's what you want. Consumer groups track their offset and commit it; the message stays in Kafka, but it won't be read again.
This IMO is better behaviour than RabbitMQ since you can always re-read messages once they have been processed, whereas with most MQ systems the message is then marked for deletion and asynchronously deleted.
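For concreteness, here's a minimal sketch of that pattern with the Java client; the broker address, topic name, group id, and `process()` handler below are all placeholders, not anything from a real deployment:

```java
import java.time.Duration;
import java.util.List;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;

public class GroupConsumer {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092"); // placeholder broker
        props.put("group.id", "billing-service");         // consumer group: each message goes to one member
        props.put("enable.auto.commit", "false");         // we commit offsets ourselves
        props.put("key.deserializer",
                  "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer",
                  "org.apache.kafka.common.serialization.StringDeserializer");

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(List.of("events")); // placeholder topic
            while (true) {
                ConsumerRecords<String, String> records = consumer.poll(Duration.ofSeconds(1));
                for (ConsumerRecord<String, String> record : records) {
                    process(record);
                }
                // Commit the group's offset. The records stay in Kafka (subject to
                // retention), but this group won't be handed them again.
                consumer.commitSync();
            }
        }
    }

    private static void process(ConsumerRecord<String, String> record) {
        System.out.printf("partition=%d offset=%d value=%s%n",
                record.partition(), record.offset(), record.value());
    }
}
```

Note that because the commit happens after processing, a crash between `process()` and `commitSync()` means the batch is redelivered on restart, i.e. this on its own is at-least-once, not exactly-once.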
> It's trivial to ensure that each message is delivered to only one consumer
Exactly-once delivery is one of the hardest distributed systems problems. If you’ve “trivially” solved it, please show us your solution.
> It's trivial to ensure that each message is delivered to only one consumer if that's what you want. Consumer groups track their offset and commit it; the message stays in Kafka, but it won't be read again. This IMO is better behaviour than RabbitMQ
The trivial solution is to use Kafka. They're clearly saying that Kafka makes it trivial, not that it's trivial to solve from scratch.
What the parent poster described isn’t what makes Kafka’s “exactly once” semantics work. It’s the use of an idempotency token associated with each publication, which turns “at-least-once” semantics into effectively “exactly once” via deduplication.
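Concretely, in Kafka's Java client this is a producer-side config: with `enable.idempotence=true`, the broker assigns the producer an ID and tracks per-partition sequence numbers, dropping duplicates caused by retries. A minimal sketch (broker and topic names are placeholders); note that full end-to-end exactly-once processing additionally needs transactions (`transactional.id`) plus `isolation.level=read_committed` consumers:

```java
import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;

public class IdempotentProducer {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092"); // placeholder broker
        // Broker-side deduplication of retried sends: producer ID + sequence
        // numbers guarantee each record is written to the log at most once.
        props.put("enable.idempotence", "true");
        props.put("acks", "all"); // required by (and implied with) idempotence
        props.put("key.serializer",
                  "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer",
                  "org.apache.kafka.common.serialization.StringSerializer");

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            // Even if this send is retried internally after a network hiccup,
            // the broker deduplicates and writes the record once.
            producer.send(new ProducerRecord<>("events", "key", "value")); // placeholder topic
        }
    }
}
```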
> better behaviour than RabbitMQ since you can always re-read messages once they have been processed
I can imagine it: a billion-dollar transaction accidentally gets processed by ten thousand client nodes due to a client-app synchronization bug, and the company rethinks its dumb data-dumper server strategy... news at 11.