My approach is: select max(id) and commit with id = max(id) + 1. If the commit worked, all good. If it failed because of a unique index violation, repeat the transaction from the beginning. I think this works correctly with a proper transaction isolation level.
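
A minimal sketch of that retry loop, assuming PostgreSQL with psycopg2 and a made-up events(id, payload) table with a primary key on id:

    import psycopg2

    def insert_with_sequential_id(conn, payload, max_retries=10):
        """Insert a row with id = max(id) + 1, retrying on unique-key collisions."""
        for _ in range(max_retries):
            try:
                with conn.cursor() as cur:
                    # Read the current maximum; COALESCE covers the empty-table case.
                    cur.execute("SELECT COALESCE(MAX(id), 0) + 1 FROM events")
                    next_id = cur.fetchone()[0]
                    cur.execute(
                        "INSERT INTO events (id, payload) VALUES (%s, %s)",
                        (next_id, payload),
                    )
                conn.commit()
                return next_id
            except psycopg2.IntegrityError:
                # Another writer grabbed the same id first; roll back and retry.
                conn.rollback()
        raise RuntimeError("gave up after repeated id collisions")

Under READ COMMITTED the primary-key violation is what catches two writers picking the same max; under SERIALIZABLE you would also have to retry on serialization failures.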

That limits you to a few tens of TPS since everything is trying to write the same row which must happen serially. I wouldn't start out with that solution since it'll be painful to change to something more scalable later. Migrating to something better will probably involve more writes per txn during the migration, so it gets even worse before it gets better.

The separate counter table used in the article also serializes all writers to the table. Probably better than the max() approach, but still serial.
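
For reference, the counter-table variant usually looks roughly like this (a sketch, again assuming PostgreSQL/psycopg2 and a hypothetical counters(name, value) table); the row lock taken by the UPDATE is held until commit, which is exactly what serializes the writers:

    def insert_with_counter(conn, payload):
        """Bump the counter row and insert in the same transaction."""
        with conn.cursor() as cur:
            # The UPDATE locks the counter row, so every concurrent writer
            # queues here until this transaction commits or rolls back.
            cur.execute(
                "UPDATE counters SET value = value + 1 "
                "WHERE name = %s RETURNING value",
                ("events",),
            )
            next_id = cur.fetchone()[0]
            cur.execute(
                "INSERT INTO events (id, payload) VALUES (%s, %s)",
                (next_id, payload),
            )
        conn.commit()
        return next_id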

Serialization has to happen somewhere; either the writers or the readers end up waiting for their turn.

What Kafka "is" in my view is simply the component that assigns sequential event numbers. So if you publish to Kafka, Kafka takes the same locks...

The way to increase throughput is to add more partitions (shards) to the topic.
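
To make that concrete, here is a small sketch assuming the kafka-python client and a hypothetical "events" topic: the broker assigns each record the next offset in its partition, so within a partition the numbering is sequential, and adding partitions buys throughput at the cost of a single global ordering.

    from kafka import KafkaProducer

    producer = KafkaProducer(bootstrap_servers="localhost:9092")

    # The broker, not the client, assigns the offset: it is the next slot in
    # that partition's log, so consumers see a monotonically increasing
    # sequence number per partition.
    metadata = producer.send("events", value=b"order-created").get(timeout=10)
    print(metadata.partition, metadata.offset)

    producer.flush()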

Does the additional read query cause concern, or is this mostly OK? (I'm sure the answer depends on scale.)