First off, I don't think there is anything wrong with MinIO closing down its open source. There are simply too many people globally who use open source without being willing to pay for it. I started testing various alternatives a few months ago, and I still believe RustFS will emerge as the winner after MinIO's exit. I evaluated Garage, SeaweedFS, Ceph, and RustFS. Here are my conclusions:
1. RustFS and SeaweedFS are the fastest in the object storage field.
2. The installation for Garage and SeaweedFS is more complex compared to RustFS.
3. The RustFS console is the most convenient and user-friendly.
4. Ceph is too difficult to use; I wouldn't dare deploy it without a deep understanding of the source code.
Although many people criticize RustFS, suggesting its CLA might be "bait," I don't think such a requirement is excessive for open source software, as it helps mitigate their own legal risks.
Furthermore, Milvus gave RustFS a very high official evaluation. Based on technical benchmarks and other aspects, I believe RustFS will ultimately win.
https://milvus.io/blog/evaluating-rustfs-as-a-viable-s3-comp...
GPL for open source and commercial license for the enterprise lawyers.
Unfortunately, a majority seems to hate GPL these days even though it prevents most of the worst corporate behaviors.
Minio was AGPL, which was a perfectly fine tradeoff IMO. But apparently that wasn't good enough.
AGPL doesn't help when you want to kill your free offering to move people onto the paid tier. But quite frankly, that isn't a problem GPL is meant to solve.
Elvin here from RustFS. Appreciate the feedback, especially coming from the Milvus team—we’ve followed your work for a long time.
You’re right about the "tension" in OSS. That’s exactly why we are pledging to keep the RustFS core engine permanently open source. We want to provide the solid, open foundation you mentioned so that teams like yours don't feel forced to build and maintain a storage layer from scratch.
On the sustainability question—you've described the challenge better than most. We're still figuring out the right model, and I don't think anyone has a perfect answer yet. What we do know is that we're building something technically excellent first, and we're committed to doing it in a way that keeps the core open.
Huge thanks for your contributions to the open-source world! Milvus is an incredibly cool product and a staple in my daily stack.
It’s been amazing to watch Milvus grow from its roots in China to gaining global trust and major VC backing. You've really nailed the commercialization, open-source governance, and international credibility aspects.
Regarding RustFS, I think that—much like Milvus in the early days—it just needs time to earn global trust. With storage and databases, trust is built over years; users are naturally hesitant to do large-scale replacements without that long track record.
Haha, maybe Milvus should just acquire RustFS? That would certainly make us feel a lot safer using it!
Garage installation is easy.
1. Download or build the single binary into your system (install like `/usr/local/sbin/garage`)
2. Create a file `/etc/garage.toml`:
3. Start it with `garage server`, or just have an AI write an init script or unit file for you. (You can `pkill -f /usr/local/sbin/garage` to shut it down.)

Also, NVIDIA has a phenomenal S3-compatible system that nobody seems to know about, named AIStore: https://aistore.nvidia.com/ It's a bit more complex, but very powerful and fast (faster than MinIO, though slightly less space-efficient, because it maintains a complete copy of each object on a single node so the object doesn't have to be reconstituted the way it would on MinIO). It can also act as a proxy in front of other S3 systems, including AWS S3, GCS, etc., and offer a single unified namespace to your clients.
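Step 2 above leaves the file contents implied. A minimal sketch, loosely based on the Garage quick-start docs (key names and all values here are illustrative, not authoritative; older releases use `replication_mode` instead of `replication_factor`, so check the reference for your version):

```shell
# Write a minimal config to the current directory for illustration;
# per step 2 above it would actually live at /etc/garage.toml.
# The rpc_secret is a placeholder -- generate a real one with:
#   openssl rand -hex 32
cat > garage.toml <<'EOF'
metadata_dir = "/var/lib/garage/meta"
data_dir = "/var/lib/garage/data"
db_engine = "lmdb"

replication_factor = 1

rpc_bind_addr = "[::]:3901"
rpc_public_addr = "127.0.0.1:3901"
rpc_secret = "0000000000000000000000000000000000000000000000000000000000000000"

[s3_api]
s3_region = "garage"
api_bind_addr = "[::]:3900"
root_domain = ".s3.garage.localhost"
EOF
```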
IMO, SeaweedFS is still too much of a personal project. It's fast for small files, but keep good, frequent backups in a different system if you choose it.
I personally will avoid RustFS. Even if it were totally amazing, the Contributor License Agreement makes me feel like we're getting into the whole MinIO rug-pull situation all over again, and you know what they say about doing the same thing and expecting a different result.
If you are on Hetzner, I created a ready-to-use Terraform module that spins up a single-node GarageFs server https://pellepelster.github.io/solidblocks/hetzner/web-s3-do...
As someone about to learn the basics of Terraform, with an interest in geo-distributed storage, and with some Hetzner credit sitting idle... I came across the perfect comment this morning.
I might extend this with ZeroFS too.
Garage is indeed an excellent project, but I think it has a few drawbacks compared to the alternatives:
1. Metadata backend: It relies on SQLite. I have concerns about how well this scales or handles high concurrency with massive datasets.
2. Admin UI: The console is still not very user-friendly/polished.
3. Deployment complexity: You are required to configure a "layout" (regions/zones) to get started, whereas MinIO doesn't force this concept on you for simple setups.
4. Design philosophy: While Garage is fantastic for edge/geo-distributed use cases, I feel its overall design still lags behind MinIO and RustFS. There is a higher barrier to entry because you have to learn specific Garage concepts just to get it running.
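To be fair, the layout requirement boils down to a few commands once the daemon is running. A rough sketch (flags follow the Garage quick-start docs, so double-check against your version; `<node_id>` and the zone name `dc1` are placeholders):

```shell
garage status                                # prints this node's ID
garage layout assign -z dc1 -c 1G <node_id>  # assign a zone and capacity
garage layout apply --version 1              # commit the staged layout
```

It's an extra concept compared to MinIO's zero-config start, but for a single node it's three commands, not a cluster-design exercise.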
It uses LMDB by default, but CAN use SQLite as an alternative [1]. LMDB is already used by OpenLDAP [2][3] and seems pretty bullet-proof.
[1] https://garagehq.deuxfleurs.fr/documentation/cookbook/real-w... [2] https://www.symas.com/mdb [3] https://www.openldap.org/
Okay, I'll correct my mistake, thx.
Regarding AIStore: the recommended prod configuration is Kubernetes, which brings in a huge amount of complexity. Also, one person (Alex Aizman) has about half of the total commits in the project, so it seems like the bus factor is 1.
I could see running AIStore in single-binary mode for small deployments, but for anything large and production-grade I would not touch AIStore. Ceph is going to be the better option IMO; it is a truly collaborative open source project developed by multiple companies with a long track record.
> RustFS and SeaweedFS are the fastest in the object storage field.
I'm not sure SeaweedFS is comparable. It's based on Facebook's Haystack design, which addresses a very specific use case: minimizing the I/Os, in particular the metadata lookups, needed to access individual objects. This leads to many trade-offs. For instance, its main unit of operation is the volume: data is appended to a volume, erasure coding is done per volume, updates happen at the volume level, and so on.
On the other hand, a general object store goes beyond needle-in-a-haystack type of operations. In particular, people use an object store as the backend for analytics, which requires high-throughput scans.
> 4. Ceph [...]
MinIO was more for the "mini" use case (or more like "anything not large scale", with a very broad definition of large scale). Here "works out of the box" is paramount.
And Ceph is more for the maxi use case, where in-depth fine-tuning and highly complex, distributed setups are the norm. Hence the out-of-the-box small-scale setup experience is barely relevant.
So they really don't fill the same space, even though their functionality overlaps.
Definitely, ceph shines in the 1-100 petabyte range whereas minio excelled in the 0-1 petabyte range.
I want to like RustFS, but it feels like there's so much marketing attached to the software that it turns me off a little. Even the little rocket emoji and benchmark on the GitHub about page. Sometimes less is more. Look at the ty GitHub home page: one benchmark on the main page, and the description is just "An extremely fast Python type checker and language server, written in Rust."
Haha, +1. I really like RustFS as a product, but the marketing fluff and documentation put me off too. It reads like non-native speakers relying heavily on AI, which explains a lot. Honestly, they really need to bring in some native English speakers to overhaul the docs. The current vibe just doesn't land well with a US audience.
> too many people globally who use open source without being willing to pay for it.
That's an odd take... open source is a software licensing model, not a business model.
Unless you have some knowledge that I don't, MinIO never asked for nor accepted donations from users of their open source offerings. All of their funding came from sales and support of their enterprise products, not their open source one. They are shutting down their own contributions to the open source code in order to focus on their closed enterprise products, not due to lack of community engagement or (as already mentioned) community funding.
> That's an odd take... open source is a software licensing model, not a business model.
Yes, open-source is a software license model, not a business model. It is also not a software support model.
This change is them essentially declaring that MinIO is EOL and will not have any further updates.
For comparison, Windows 10, paid software released in 2015 (the same year as the first MinIO release), is already EOL.
>This change is them essentially declaring that MinIO is EOL and will not have any further updates.
Just fork it!
Simply forking it won't work. The legal risks have been well-documented. Under their AGPL + Commercial model, the moment your fork gets too popular, MinIO can just shut you down. This is exactly why the smart money and talent have already moved on to systems like RustFS, SeaweedFS, and Garage instead of trying to maintain a doomed fork.
The only risk is if you’re trying to bootstrap your competitors with their open source contributions while having paid private integrations.
AGPL means you cannot do this. This is less of a risk than it is the explicit intention of the license.
I respectfully disagree with the notion that open source is strictly a licensing model and not a business model. For an open-source project to achieve long-term reliability and growth, it must be backed by a sustainable commercial engine. History has shown that simply donating a project to a foundation (like Apache or CNCF) isn't a silver bullet; many projects under those umbrellas still struggle to find the resources they need to thrive.

The ideal path, and the best outcome for users globally, is a "middle way" where:
1. The software remains open and maintained.
2. The core team has a viable way to survive and fund development.
3. Open code ensures security, transparency, and a trustworthy software supply chain.

However, the way MinIO has handled this transition is, in my view, the most disappointing approach possible. It creates a significant trust gap. When a company pivots this way, users are left wondering about the integrity of the code, whether it's the potential for "backdoors" or undisclosed data transmission. I hope to see other open-source object storage projects mature quickly to provide a truly transparent and reliable alternative.
> For an open-source project to achieve long-term reliability and growth, it must be backed by a sustainable commercial engine
You mean like Linux, Python, PostgreSQL, Apache HTTP Server, Node.js, MariaDB, GNU Bash, GNU Coreutils, SQLite, VLC, LibreOffice, OpenSSH?
Actually, Linux reinforces my point. It isn't powered solely by volunteers; it thrives because the world's largest corporations (Intel, Google, Red Hat, etc.) foot the bill. The Linux Foundation is massively funded by corporate members, and most kernel contributors are paid engineers. Without that commercial engine, Linux would not have the dominance it does today. Even OpenAI had to pivot away from its original non-profit, open principles to survive and scale.

There is nothing wrong with making money while sustaining open source. The problem is MinIO's specific approach. Instead of a symbiotic relationship, they treated the community as free QA testers and marketing pawns, only to pull up the ladder later. That’s a "bait-and-switch," not a sustainable business model.
> Actually, Linux reinforces my point.
Not many open source projects are Linux-sized. Linux is worth billions of dollars and enabled Google and Redhat to exist, so they can give back millions, without compulsion, and in a self-interested way.
Random library maintainer dude should not expect their (very replaceable) library to print money. The cool open source tool/utility could be a 10-person company, maybe 100 tops, but people see dollar-signs in their eyes based on number of installs/GitHub stars, and get VC funding to take a swing for billions in ARR.
I remember when (small scale) open source was about scratching your own itch without making it a startup via user-coercion. It feels like "open source as a growth hack" has metastasized into "now that they are hooked, the entire user base is morally obligated to give me money". I would have no issue if a project included this before it gets popular, but that may prevent popular adoption. So it rubs me the wrong way when folk want to have their cake and eat it.
> Even OpenAI had to pivot away from its original non-profit, open principles to survive and scale.
Uh, no, OpenAI didn't pivot from being open in order to survive.
They survived for 7 years before ChatGPT was released. When it was, they pivoted the _instant_ it became obvious that AI was about to be a trillion-dollar industry and they weren't going to miss the boat of commercialization. Yachts don't buy themselves, you know!
> Yachts don't buy themselves, you know!
No but open source rugpulls do!
> Although many people criticize RustFS, suggesting its CLA might be "bait," I don't think such a requirement is excessive for open source software, as it helps mitigate their own legal risks.
What legal risks does it help mitigate?
RustFS has rug-pull written all over it. You can bookmark this comment for the future. 100% guaranteed it will happen. Only question is when.
I’m Elvin from the RustFS team in the U.S. Thanks for pointing out the issues with our initial CLA. We realized the original wording was overreaching and created a lot of distrust about the project's future.
We’ve officially updated the CLA to a standard License Grant model. Under these new terms, you retain full ownership of your contributions, and only grant us a non-exclusive license to use them. You can check the updated CLA here: https://github.com/rustfs/rustfs/blob/main/CLA.md.
More importantly, the RustFS team is officially pledging to keep our core repository permanently open-source. We are committed to an open-core engine for the long term, not a "bait and switch."
It is better, but FYI for context:
Lol, maybe you should fund the RustFS team yourself or sponsor a top-tier legal team for them. If you can help them rewrite their CLAs and guarantee they'll never face any IP risks down the road, then sure, you're 100% right.
Interesting that all your comments are shilling for RustFS
Fair point on the frequency of my comments, but there’s a nuance to the CLA discussion. Even with Apache 2.0, many major projects (like those under the CNCF or Apache Foundation) require a CLA to ensure the project has the legal right to distribute the code indefinitely.
My focus on the CLA is about building a solid foundation for RustFS so it doesn't face the licensing "re-branding" drama we've seen with other storage projects recently. It’s about long-term stability for the community, not just a marketing ploy.
And again - what IP risk does a CLA solve, that a DCO wouldn't? Like, IANAL so I certainly could be missing something, but I'd like to hear what it might be.
I’m also maintaining an open-source project and have spent significant time drafting our CLA, so I completely understand the concerns surrounding them.
While DCO is excellent for tracking provenance, we opted for a CLA primarily to address explicit patent grants and sublicensing rights—areas where a standard DCO often lacks the comprehensive legal coverage that a formal agreement provides.
It’s a common and sustainable practice in the industry to keep the core code open-source while developing enterprise features. Without a solid CLA in place, a project faces massive legal hurdles later on—whether that’s for future commercialization or even the eventual donation of the project to an open-source foundation like the CNCF or Apache Foundation. We're just trying to ensure long-term legal clarity for everyone involved.
I run Ceph in my k8s cluster (using rook) -- 4 nodes, 2x 4TB enterprise SSDs on each node. It's been pretty bulletproof; took some time to set up and familiarize with Ceph but now it's simple to operate.
Claude Code is amazing at managing Ceph, restoring, fixing CRUSH maps, etc. It's got all the Ceph motions down to a tee.
With the tools at our disposal nowadays, saying "I wouldn't dare deploy it without a deep understanding of the source code" seems like an exaggeration!
I encourage folks to try out Ceph if it supports their usecase.
Considering the hallucinations I routinely deal with about databases, there isn’t a chance in hell I would trust an LLM to manage my storage for me.
If you set up Ceph correctly (multiple failure domains, correct replication rules across failure domains, monitors spread across failure domains, OSDs not force-purged), it is actually pretty hard to break. Rook helps a lot too, as Rook makes it easier to set up Ceph correctly.
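For the "correct replication rules across failure domains" part, the knobs are a handful of CLI calls. A hedged sketch using the stock Ceph CLI (the rule name `across-racks` and pool name `mypool` are made up for illustration; verify the commands against your release's docs):

```shell
# Replicated CRUSH rule that places each copy in a different rack
ceph osd crush rule create-replicated across-racks default rack
ceph osd pool set mypool crush_rule across-racks
ceph osd pool set mypool size 3      # three replicas total
ceph osd pool set mypool min_size 2  # stay writable with one rack down
```

Rook handles most of this for you via the CephCluster/CephBlockPool CRDs, which is a big part of why it's harder to misconfigure.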
It looks like this article is biased. It only benchmarked RustFS.
In my experience, SeaweedFS has at least 3–5× better performance than MinIO. I used MinIO to host 100 TB of images to serve millions of users daily.
Gosh, Ceph, what a PITA. Never again, LOL. I wouldn't even want an LLM to suffer working on it.
Haha, totally get you! I think if you forced an LLM to manage a large-scale Ceph cluster, it would probably start hallucinating about retirement.