Seeing a ton of adoption of this after the Minio debacle

https://www.repoflow.io/blog/benchmarking-self-hosted-s3-com... was useful.

RustFS also looks interesting but for entirely non-technical reasons we had to exclude it.

Anyone have any advice for swapping this in for Minio?
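
My working assumption is that, since both speak the S3 API, it's mostly a matter of repointing clients at a new endpoint with new keys, along the lines of this minio-go sketch (the hostname, port, keys, and region below are placeholders; the region has to match whatever Garage is configured with). Is there more to it than that?

```go
package main

import (
	"context"
	"log"

	"github.com/minio/minio-go/v7"
	"github.com/minio/minio-go/v7/pkg/credentials"
)

func main() {
	// Placeholder endpoint and keys -- point at the Garage S3 API instead
	// of the old Minio one; the rest of the client code stays the same.
	client, err := minio.New("garage.internal:3900", &minio.Options{
		Creds:  credentials.NewStaticV4("GK_PLACEHOLDER", "SECRET_PLACEHOLDER", ""),
		Secure: false,
		Region: "garage", // must match the region name configured on the Garage side
	})
	if err != nil {
		log.Fatal(err)
	}

	// Smoke test: list buckets through the new endpoint.
	buckets, err := client.ListBuckets(context.Background())
	if err != nil {
		log.Fatal(err)
	}
	for _, b := range buckets {
		log.Println(b.Name)
	}
}
```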

I have not tried either myself, but I wanted to mention that Versity S3 Gateway looks good too.

https://github.com/versity/versitygw

I am also curious how Ceph S3 gateway compares to all of these.

When I was there, DigitalOcean was writing a complete replacement for the Ceph S3 gateway because its performance under high concurrency was awful.

They swapped the whole service out of the stack and rewrote it in Go because the concurrency management was so much better, and Ceph's team and C++ codebase were too resistant to change.

Unrelated, but one of the more annoying aspects of whatever software they use now is lack of IPv6 for the CDN layer of DigitalOcean Spaces. It means I need to proxy requests myself. :(
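
For reference, the proxying itself is simple enough; here's a minimal sketch in Go of the kind of shim I mean, with a placeholder bucket/CDN hostname:

```go
package main

import (
	"log"
	"net/http"
	"net/http/httputil"
	"net/url"
)

func main() {
	// Placeholder hostname -- substitute your own Spaces CDN endpoint.
	target, err := url.Parse("https://example-bucket.nyc3.cdn.digitaloceanspaces.com")
	if err != nil {
		log.Fatal(err)
	}

	proxy := httputil.NewSingleHostReverseProxy(target)
	defaultDirector := proxy.Director
	proxy.Director = func(req *http.Request) {
		defaultDirector(req)
		// Make sure the upstream sees the CDN hostname, not ours.
		req.Host = target.Host
	}

	// Listen on IPv6 (dual-stack on most systems) and forward to the
	// IPv4-only CDN endpoint.
	log.Fatal(http.ListenAndServe("[::]:8080", proxy))
}
```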

I'd be curious to know how versitygw compares to rclone serve s3.

Disclaimer: I work on SeaweedFS.

Why skip SeaweedFS? It ranks #1 on all the benchmarks and has a lot of features.

I can confirm this. I used SeaweedFS to serve 1M daily users with 56 million images / ~100TB on just 2 servers with HDDs only, which Minio couldn't do. SeaweedFS's performance is much better than Minio's. The only problem is that the SeaweedFS documentation is hard to understand.

SeaweedFS is also heavily optimized for small objects; it can't store larger objects (max 32 GiB [1]).

Not a concern for many use-cases, just something to be aware of as it's not a universal solution.

[1]: https://github.com/seaweedfs/seaweedfs?tab=readme-ov-file#st...

Not correct. The files are chunked into smaller pieces and spread to all volume servers.
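
To make that concrete, here's a toy illustration of the chunking idea (the chunk size and server count are made-up numbers for illustration, not SeaweedFS defaults):

```go
package main

import "fmt"

func main() {
	// Illustration only: 4 MiB chunks and 2 volume servers are assumptions.
	var (
		objectSize int64 = 64 << 30 // a 64 GiB object, well past 32 GiB
		chunkSize  int64 = 4 << 20  // hypothetical chunk size
		servers          = []string{"volume-1", "volume-2"}
	)

	chunks := (objectSize + chunkSize - 1) / chunkSize
	fmt.Printf("%d chunks of %d MiB each\n", chunks, chunkSize>>20)

	// Chunks are distributed across volume servers, so no single server
	// has to hold the whole object.
	for i := int64(0); i < 3; i++ {
		fmt.Printf("chunk %d -> %s\n", i, servers[i%int64(len(servers))])
	}
}
```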

Well, then I suggest updating the incorrect readme. It's why I've ignored SeaweedFS.

SeaweedFS is very nice, and it takes quite an effort to lose data with it.

Can you link the benchmarks?

It is in the parent comment.

> but for entirely non-technical reasons we had to exclude it

Able/willing to expand on this at all? Just curious.

They seem to have gone all-in on AI, for commits and ticket management. Not interested in interacting with that.

Otherwise, the built-in admin in a single executable was nice, as was the support for tiered storage, but single-node parallel write performance was pretty unimpressive and started throwing strange errors (investigating those led to the AI ticket discovery).

Not the same person you asked, but my guess would be that it is seen as a Chinese product.

RustFS appears to be very early-stage with no real distributed systems architecture: https://github.com/rustfs/rustfs/pull/884

I'm not sure if it even has any sort of cluster consensus algorithm? I can't imagine it not eating committed writes in a multi-node deployment.

Garage and Ceph (well, radosgw) are the only open-source S3-compatible object stores that have undergone serious durability/correctness testing. Anything else will most likely eat your data.

What is this based on? Honest question, as I don't get that impression from the landing page. Are many of the committers China-based?

https://rustfs.com.cn/

> Beijing Address: Area C, North Territory, Zhongguancun Dongsheng Science Park, No. 66 Xixiaokou Road, Haidian District, Beijing

> Beijing ICP Registration No. 2024061305-1

Oh, I misread the initial comment and thought they had to exclude Garage. Thanks!

From what I have seen in previous discussions here (both before and after the Minio debacle) and at work, Garage is a solid replacement.

Seaweed looks good in those benchmarks, I haven't heard much about it for a while.
