I once used object storage as a queue. You can implement queue semantics at the application level, with one object per entry.

But the application was fairly low volume in data and usage, so eventual consistency and capacity were not issues. And yes, timestamp monotonicity is not guaranteed when multiple clients upload at the same time, so each client was given a unique ID at startup and embedded it in entry names to guarantee uniqueness. Metadata and prefixes were used to indicate the state of an object during processing.
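A minimal sketch of that naming scheme, with hypothetical names and layout (the original's actual key format isn't specified): a per-process client ID makes object names collision-free even when two clients hit the same millisecond, and a state prefix lets consumers list entries by processing stage.

```python
import time
import uuid

# Assigned once per client process; guarantees name uniqueness
# across clients even when timestamps collide.
CLIENT_ID = uuid.uuid4().hex

def entry_key(seq: int, state: str = "pending") -> str:
    """Build an object key: state prefix, client-local timestamp,
    client ID, and a per-client sequence number."""
    ts = int(time.time() * 1000)  # client-local clock, not monotonic across clients
    return f"{state}/{ts:013d}-{CLIENT_ID}-{seq:06d}"
```

Moving an entry to a new state would then mean copying it under a different prefix (or updating metadata) rather than mutating it in place.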

Not ideal, but it was cheaper than a DB or a dedicated MQ. The application did not last, but I would try the approach again if it suited the situation.

The application I'm interested in is a log-based artifact registry. Volume would be very low. Much more important is the immutability and durability of the log.

I was thinking that writes could be indexed/prefixed into timestamp buckets according to the client's local clock. This can't be trusted, of course, but consumers could detect and reject any write whose server-side upload timestamp exceeds a fixed delta from the bucket it was uploaded to. That allows arbitrary seeking to any point in the log.
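A sketch of that validation, under assumed parameters (hourly buckets, a five-minute skew tolerance; both are placeholders, not values from the original). The consumer derives the bucket's time window from the prefix and rejects writes whose store-recorded upload time falls outside it by more than the allowed delta:

```python
from datetime import datetime, timedelta, timezone

# Assumed tolerance for client clock drift; tune per deployment.
MAX_SKEW = timedelta(minutes=5)

def bucket_for(ts: datetime) -> str:
    """Prefix a write into an hourly bucket, e.g. '2024/06/01/13'."""
    return ts.strftime("%Y/%m/%d/%H")

def accept(bucket: str, uploaded_at: datetime) -> bool:
    """Consumer-side check: reject writes whose upload time
    drifts more than MAX_SKEW outside the bucket's window."""
    bucket_start = datetime.strptime(bucket, "%Y/%m/%d/%H").replace(tzinfo=timezone.utc)
    bucket_end = bucket_start + timedelta(hours=1)
    return bucket_start - MAX_SKEW <= uploaded_at <= bucket_end + MAX_SKEW
```

Since buckets sort lexicographically, seeking to any point in the log is a prefix listing from the corresponding bucket onward.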