Completely different situations. None of the MinIO team worked for free. MinIO is a COSS company (commercial open source software). They give a basic version of it away for free hoping that some people, usually at companies, will want to pay for the premium features. MinIO going closed source is a business decision and there is nothing wrong with that.

I highly recommend SeaweedFS. I used it in production for a long time before partnering with Wasabi. We still run SeaweedFS for scorching-hot, 1 GiB/s colocated object storage, but Wasabi is our bread-and-butter object storage now.

> > Working for free is not fun. Having a paid offering with a free community version is not fun. Ultimately, dealing with people who don't pay for your product is not fun.

> Completely different situations. None of the MinIO team worked for free. MinIO is a COSS company (commercial open source software).

MinIO is dealing with two of those three issues, and the company is partially providing work for free; how is that "completely different"?

The MinIO business model was a freemium model (well, Open Source + commercial support, which is slightly different). They used the free OSS version to drive demand for the commercially licensed version. It’s not like they had a free community version with users they needed to support thrust upon them — this was their plan. They weren’t volunteers.

You could argue that they got to the point where the benefit wasn't worth the cost, but this was their business model. They would not have gotten to the point where they could run a commercial-only operation without the adoption and demand generated by the OSS version.

Running a successful OSS project is often a thankless job. Thanks for doing it. But this isn’t that.

> Running a successful OSS project is often a thankless job. Thanks for doing it. But this isn’t that.

No, even if you are being paid, it's a thankless, painful job to deal with demanding, entitled free users. It's worse if you are not being paid, but I'm not sure why you are asserting that dealing with bullshit is just peachy if you are being paid.

If that is the case, why did minio start with the open source version? If there were only downsides, that sounds like a stupid business plan.

They wanted adoption and a funnel into their paid offering. They were looking out for their own self-interest, which is perfectly fine; however, it’s very different from the framing many are giving in this thread of a saintly company doing thankless charity work for evil homelab users.

Where did I say there were only downsides? There are definitely upsides to this business model; I'm just refuting the idea that the downsides go away because there are for-profit motives.

I hate it when people mistreat the people who provide services to them: it doesn't matter if it's a volunteer, an underpaid waitress, or a well-paid computer programmer. The mistreatment doesn't become "ok" because the person being mistreated is paid.

I doubt that minio pulled the open source version because they were mistreated. Yes, there are some projects where this is a problem, but it's mostly because the project only has a single maintainer.

People are angry about minio, but that's because of their rugpull.

The minio people did a lot of questionable things even before the rugpull. They tried to claim that the AGPL infects software over the network, on a previous version of https://min.io/compliance:

> Combining MinIO software as part of a larger software stack triggers your GNU AGPL v3 obligations. The method of combining does not matter. When MinIO is linked to a larger software stack in any form, including statically, dynamically, pipes, or containerized and invoked remotely, the AGPL v3 applies to your use. What triggers the AGPL v3 obligations is the exchanging data between the larger stack and MinIO.

> No, even if you are being paid, it's a thankless, painful job to deal with demanding, entitled free users.

So… aren't they providing (paid) support? Same thing…

Absurd comparison.

“I don’t want to support free users” is completely different than “we’re going all-in on AI, so we’re killing our previous product for both open source and commercial users and replacing it with a new one”

I can also highly recommend SeaweedFS for development purposes, where you want to test general behaviour when using S3-compatible storage. That's what I mainly used MinIO for before, and SeaweedFS, especially with their new `weed mini` command that runs all the services together in one process, is a great replacement for local development and CI purposes.
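
To make that concrete, here's a minimal sketch of the kind of smoke test you could run in CI against such a local instance, assuming boto3 and SeaweedFS's usual default S3 port of 8333; the port and the dummy credentials are assumptions to adjust for your own config:

```python
# Minimal CI smoke test against a locally running SeaweedFS S3 gateway
# (e.g. one started via `weed mini`). The endpoint port and credentials
# are assumptions; 8333 is SeaweedFS's usual S3 default.
import boto3

s3 = boto3.client(
    "s3",
    endpoint_url="http://localhost:8333",  # assumed local S3 endpoint
    aws_access_key_id="test",              # placeholder credentials
    aws_secret_access_key="test",
    region_name="us-east-1",
)

s3.create_bucket(Bucket="ci-test")
s3.put_object(Bucket="ci-test", Key="hello.txt", Body=b"hello")
obj = s3.get_object(Bucket="ci-test", Key="hello.txt")
assert obj["Body"].read() == b"hello"
```

Since the client only needs an endpoint URL, the same test runs unmodified against MinIO, real S3, or anything else S3-compatible.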

I've been using rustfs for some very light local development and it looks... fine :)

Ironically rustfs.com is currently failing to load on Firefox, with 'Uncaught TypeError: can't access property "enable", s is null'. They shoulda used a statically checked language for their website...

The site loads fine in my Firefox. The version is 147.0.3 (aarch64).

I'm running Firefox 145.0.2 on amd64.

It seems like the issue may be that I have WebGL disabled. The console includes messages like "Failed to create WebGL context: WebGL creation failed: * AllowWebgl2:false restricts context creation on this system."

Oh well, guess I can't use rustfs :}

I just disabled webgl on my firefox and it worked fine.

Your problems could be caused by a whiny fan. Here is the source: https://github.com/rustfs/rustfs

I like the way multiple people feel the need to defend a buggy website with their anecdotal n=1 evidence.

It’s not difficult to make a website that works for everyone.

Oh, is it the website that's failing? I kind of assumed it was the web UI for the software. Workarounds sort of kind of make sense there... maybe. But the website? That's bad.

Can vouch for SeaweedFS; I've been using it since the time it was called weedfs, and my managers were like, "are you sure you really want to use that?"

Not seeing anyone else comment about it, but I would caution against relying on Wasabi primarily. They actively and silently corrupted a lot of my data and still billed me for it. You'll just start seeing random 500s when trying to get data down from your bucket, and the data is just gone, no recovery, but it still counts as stored data, so you're still paying for it.

Nothing wrong? Does minio grant the basic freedoms of being able to run the software, study it, change it, and distribute it?

Did minio create the impression to its contributors that it will continue being FLOSS?

Yes the software is under AGPL. Go forth and forkify.

The choice of AGPL tells you that they wanted to be the only commercial source of the software from the beginning.

> the software is under AGPL. Go forth and forkify.

No, what was minio is now aistor, closed-source proprietary software. Tell me how to fork it and I will.

> they wanted to be the only commercial source of the software

The choice of AGPL tells me nothing more than what is stated in the license. And I definitely don't intend to close the source of any of my AGPL-licensed projects.

> Tell me how to fork it and I will.

https://github.com/minio/minio/fork

The fact that new versions aren't available does nothing to stop you from forking versions that are. Or were - they'll be available somewhere, especially if it got packaged for OS distribution.

The only packages I can find of aistor are binary packages. Not only that, the aistor license agreement explicitly states the following:

> You may not modify, reverse engineer, decompile, disassemble, or create derivative works of the Software.

Do you consider this a breach of the AGPL?

So fork the last minio, and work from there... nobody is stopping you.

aistor is proprietary software[1]. Having an old version of your software be open source does not make your software open-source. Why does this need an explanation?

[1] https://www.min.io/legal/aistor-free-agreement

You aren't entitled to the product of someone else's work even if they gave away older versions of that work... What is so hard for you to understand about that?

No, I no longer am, because aistor/minio decided they no longer respect their users' freedom. It's as simple as that -- aistor is unethical and borders on malware.

> And I definitely don't intend to close the source of any of my AGPL-licensed projects.

If a commercial company has a "core" version under AGPL, it usually means their free version is an extended demo of the commercial product.

Wasabi looks like a service.

Any recommendation for an in-cluster alternative in production?

Is that SeaweedFS?

I've never heard of SeaweedFS, but the Ceph cluster storage system has an S3-compatible layer (the Object Gateway).

It's used by CERN to build petabyte-scale storage capable of ingesting data from particle collider experiments; they're now up to 17 clusters and 74 PB, which speaks to its production stability. Apparently people use it down to 3-host Proxmox virtualisation clusters, in a similar role to VMware VSAN.

Ceph has been pretty good to us for ~1 PB of scalable backup storage for many years, except that it's a non-trivial system administration effort and needs good hardware and networking investment, and my employer wasn't fully backing that commitment. (We're moving off it to Wasabi for S3 storage.) It also leans more towards data integrity than performance: it's great at being massively parallel and not so quick at single-threaded, high-IOPS work.

https://ceph.io/en/users/documentation/

https://docs.ceph.com/en/latest/

https://indico.cern.ch/event/1337241/contributions/5629430/a...
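
For what it's worth, "S3-compatible" here really does mean stock S3 SDKs work unchanged against the Object Gateway. Here's a minimal sketch with boto3, where the endpoint URL and the key names are placeholders for whatever your RGW deployment provides:

```python
# Sketch: using a stock S3 client against a Ceph Object Gateway (RGW).
# The endpoint URL and credentials below are placeholders, not real values.
import boto3

s3 = boto3.client(
    "s3",
    endpoint_url="https://rgw.example.internal",  # placeholder RGW endpoint
    aws_access_key_id="RGW_ACCESS_KEY",           # e.g. from radosgw-admin
    aws_secret_access_key="RGW_SECRET_KEY",
)

# Most of the familiar S3 API applies, including presigned URLs:
url = s3.generate_presigned_url(
    "get_object",
    Params={"Bucket": "backups", "Key": "db-dump.tar.zst"},
    ExpiresIn=3600,  # link valid for one hour
)
print(url)
```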

Ceph is a non-starter for me because you cannot have an existing filesystem on the disk. Previously I used GlusterFS on top of ZFS and made heavy use of gluster's async geo-replication feature to keep two distant storage arrays in sync over a slow link. I set that up after getting fed up with rsync being so slow and always thrashing the disks, since it had to scan many TBs every day.

While there is a geo-replication feature for Ceph, I cannot keep using ZFS at the same time, and gluster is no longer developed, so I'm currently looking for an alternative that would work for my use case, if anyone knows of a solution.

> "Ceph is a non-starter for me because you cannot have an existing filesystem on the disk. Previously I used GlusterFS on top of ZFS"

I became a Ceph admin by accident, so I wasn't involved in choosing it and I'm not familiar with other things in that space. It's a much larger project than a clustered filesystem: you give it disks and it distributes storage over them, and on top of that you can layer things like the S3 storage layer, its own filesystem (CephFS), or block devices which can be mounted on a Linux server and formatted with a filesystem (including ZFS, I guess, but that sounds like a lot of layers).
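
As a sketch of that layering, this is roughly what it looks like through Ceph's own Python bindings (the rados and rbd modules ship with Ceph); the pool name, image name, and size here are made up for illustration:

```python
# Connect to the cluster (RADOS), open a pool, and create a block
# device image (RBD) on top of it. Pool name, image name, and size
# are illustrative assumptions.
import rados
import rbd

cluster = rados.Rados(conffile="/etc/ceph/ceph.conf")
cluster.connect()
try:
    ioctx = cluster.open_ioctx("rbd")  # assumes a pool named "rbd" exists
    try:
        rbd.RBD().create(ioctx, "demo-image", 4 * 1024**3)  # 4 GiB image
        image = rbd.Image(ioctx, "demo-image")
        try:
            image.write(b"hello ceph", 0)  # raw write at byte offset 0
        finally:
            image.close()
    finally:
        ioctx.close()
finally:
    cluster.shutdown()
```

Mapping such an image on a Linux host (via `rbd map`) is what gives you the plain block device you can then put a filesystem on.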

> "While there is a geo-replication feature for Ceph"

Several; the data cluster layer can do it in two ways (stretch clusters and stretch pools), the block device layer can do it in two ways (journal based and snapshot based), the CephFS filesystem layer can do it with snapshot mirroring, and the S3 object layer can do it with multi-site sync.

I've not used any of them, they all have their trade-offs, and this is the kind of thing I was thinking of when saying it requires more skills and effort. For simple storage requirements, use a traditional SAN or a server with a bunch of disks, or pay a cheap S3 service to deal with it. Only if you have a strong need for scalable clusters, a team with storage/Linux skills, a pressing need to do it yourself, or a use for many of its features would I go in that direction.

https://docs.ceph.com/en/latest/rados/operations/stretch-mod...

https://docs.ceph.com/en/latest/rbd/rbd-mirroring/

https://docs.ceph.com/en/latest/cephfs/cephfs-mirroring/

https://docs.ceph.com/en/latest/radosgw/multisite/

Ceph is a non-starter because you need a team of people managing it constantly.

I'm not posting to convince people they should use it, just that it's a really cool piece of open source infrastructure that I think is less well known, and I respect it. It is very configurable and tunable, has a lot of features, command-line tools, and things to learn, and that does need people with skills and time.

That said, it doesn't need constant management; it's excellent at staying up even while damaged. As long as the cluster has enough free space, it will rebuild around any hardware failure without human intervention, and it doesn't need hot spares; if you plan it carefully, it has no single point of failure. (The original creator explains the design choice of 'placement groups' and its tradeoffs in this video[1].)

Most of the management time I've spent has been on ageing hardware flaking out without actually failing: old disks erroring on read, controllers failing and temporarily dropping all their disks, causing tens of seconds of read latency with knock-on effects, or the time we filled it too full and it went read-only. Other management work has been learning my way around it, upgrades, changing the way we use it for different projects, and onboarding and offboarding services that use it, all of which will vary with what you actually do with it.

I've spent less time with VMware VSAN, but VSAN does a lot less: it takes your disks and gives you a VMFS datastore and maybe an iSCSI target. There can't be many alternatives which do what Ceph does, require less skill and effort, and don't involve paying a vendor to manage it for you and give you a web interface?

[1] https://www.youtube.com/watch?v=PmLPbrf-x9g

That was not my experience. Deploying and configuring ceph was a nightmare due to the mountain of options and considerations, but once deployed, it's extremely hands-off and resilient.

Yeah sure. I manage a ceph cluster (4PB) and have a few other responsibilities at the same time.

I can tell you that ceph is something I don't need to touch every month. Other things I have to baby more regularly.