I really believe this is the way: Event log tables in SQL. I have been doing it a lot.
A downside is the lack of client-side tooling. For many, using Kafka is worth it simply for the consumer-side library tooling.
If you just want to write an event handler function, there is a lot of boilerplate to manage around it (persisting read cursors, etc.); see the sketch below.
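To make that boilerplate concrete, here is a minimal sketch in Python with SQLite of what the consumer side of a SQL event log can look like: an append-only event table, a per-consumer cursor row, and a polling loop that advances the cursor only after the handler succeeds. All table, column, and function names here are hypothetical, not taken from the FeedAPI spec below.

```python
import sqlite3

db = sqlite3.connect("events.db")
db.executescript("""
CREATE TABLE IF NOT EXISTS event_log (
    seq     INTEGER PRIMARY KEY AUTOINCREMENT,  -- monotonic position in the log
    payload TEXT NOT NULL
);
CREATE TABLE IF NOT EXISTS read_cursor (
    consumer TEXT PRIMARY KEY,    -- one row per consuming service
    last_seq INTEGER NOT NULL     -- last event fully processed by that consumer
);
""")

def poll_once(consumer, handle):
    """Fetch unseen events, run the handler, and persist the read cursor."""
    row = db.execute(
        "SELECT last_seq FROM read_cursor WHERE consumer = ?", (consumer,)
    ).fetchone()
    last_seq = row[0] if row else 0
    events = db.execute(
        "SELECT seq, payload FROM event_log WHERE seq > ? ORDER BY seq LIMIT 100",
        (last_seq,),
    ).fetchall()
    for seq, payload in events:
        handle(payload)  # the one function you actually wanted to write
        # The boilerplate: advance the cursor only after the handler succeeds.
        db.execute(
            "INSERT INTO read_cursor (consumer, last_seq) VALUES (?, ?) "
            "ON CONFLICT(consumer) DO UPDATE SET last_seq = excluded.last_seq",
            (consumer, seq),
        )
        db.commit()
```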
We introduced a company standard for one service pulling events from another service, which fits well with events stored in SQL:
https://github.com/vippsas/feedapi-spec
It is nowhere close to Kafka's maturity in client-side tooling, but it is an approach for how a library stack could be built on top, making this convenient and letting the same library toolset support many storage engines. (On the server/storage side, Postgres is of course as mature as Kafka...)
With the advent of tools like LLMs in editors, it is now viable to create clients and close these gaps quite easily. It feels like the next low-hanging fruit in the many places that are not client-friendly enough.
I for one really dislike Kafka, and this looks like a great alternative.
I'll soon get to make technology choices for a project (context: we need an MQTT broker), and Kafka is one of the options, but I have zero experience with it. Aside from the obvious red flag of using something for the first time in a real project, what is it that you dislike about Kafka?
Note: by "client" I mean "a consuming application reading from a Kafka topic".
Not your parent poster, but Kafka is often treated like a message broker, and it ain't that. Specifically, it has no concept of NACK-ing messages; a message is either processed or not processed. There's no way for the client to say "skip this message and hand it to another worker" or "I have this weird message but I don't know how to process it, can you take it back?".
What people very commonly do instead is move the unprocessed message to a dead-letter queue, which at least clears the upstream queue but means you have to sift through the dead-letter queue afterwards and figure out how to rescue the messages.
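As a rough illustration of that workaround (not the only way to do it), here is the dead-letter-queue pattern with the confluent-kafka Python client; the topic names, config, and process() handler are made up for the example:

```python
from confluent_kafka import Consumer, Producer

conf = {"bootstrap.servers": "localhost:9092"}
consumer = Consumer({**conf, "group.id": "orders", "enable.auto.commit": False})
producer = Producer(conf)
consumer.subscribe(["orders"])

while True:
    msg = consumer.poll(timeout=1.0)
    if msg is None or msg.error():
        continue
    try:
        process(msg.value())  # hypothetical application handler
    except Exception as exc:
        # There is no NACK, so park the failed message on a side topic instead.
        producer.produce("orders-dlq", value=msg.value(),
                         headers=[("error", str(exc).encode())])
        producer.flush()
    # The offset advances either way; the DLQ is now the only record of failure.
    consumer.commit(message=msg, asynchronous=False)
```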
Also, people often think "I can read 100 messages in a batch and handle them individually in the client" without considering that if some of the messages fail to process (or crash the client, losing the entire batch), Kafka isn't monitoring to say "hey, you haven't verified that messages 12 and 94 got processed correctly; do you want to keep working on them, or should I assign them to someone else?"
Basically, in Kafka the offset pointer should only be incremented after the client is 100% sure it is done with the message and, if you care about the outcome, the output has been written to durable storage. Otherwise you risk "skipping" messages because the client crashed or otherwise burped while trying to process them.
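A minimal sketch of that discipline with the confluent-kafka Python client, assuming write_to_database() is the durable step (everything here is illustrative):

```python
from confluent_kafka import Consumer

consumer = Consumer({
    "bootstrap.servers": "localhost:9092",
    "group.id": "billing",
    "enable.auto.commit": False,  # don't let the offset advance behind your back
    "auto.offset.reset": "earliest",
})
consumer.subscribe(["invoices"])

while True:
    msg = consumer.poll(timeout=1.0)
    if msg is None or msg.error():
        continue
    write_to_database(msg.value())  # hypothetical durable write
    # Only now is it safe to move the offset: a crash before this line means
    # redelivery, not a silently skipped message.
    consumer.commit(message=msg, asynchronous=False)
```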
Also, Kafka topic partitions are semi-parallel streams that are not necessarily time-ordered relative to each other... it's just another pinch point.
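Since ordering is only guaranteed within a single partition, a producer that cares about the relative order of related messages has to key them to the same partition; for example (topic and key invented for illustration):

```python
from confluent_kafka import Producer

producer = Producer({"bootstrap.servers": "localhost:9092"})

# All events for account "42" hash to the same partition and stay in order;
# events for other keys may land on other partitions and interleave in time.
for event in (b"opened", b"deposited", b"closed"):
    producer.produce("account-events", key=b"42", value=event)
producer.flush()
```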
Consider exploring NATS JetStream and its MQTT 3.1.1 mode to see if it suits your MQTT needs. Also, I love Bento for declarative, robust streaming ETL.