I just need something that can do S3 and is reliable and not slow.
Oh, just that. I'm a simple man, I just need edge-delivered CDN content that never fails and responds within 20ms.
I don't think that is what they are looking for. They just want something with an S3-compatible API they can run on their local network, or maybe even on the same host.
So, why not write to a shared wrapper/facade?
If you split the interaction API out into an interface covering the program's actual use of the service, you can write an S3 backend and an FS backend. Then you can plug in whichever backend you like and keep the application code backend-agnostic.
Personally I end up there anyway, once I start testing and specifying the third-party failure modes.
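The facade idea could look something like this (a minimal sketch; the names `ObjectStore`, `FSStore`, and `S3Store` are made up for illustration, and the S3 backend assumes boto3 and an existing bucket):

```python
from pathlib import Path
from typing import Protocol


class ObjectStore(Protocol):
    """The facade: only the operations the application actually needs."""

    def put(self, key: str, data: bytes) -> None: ...
    def get(self, key: str) -> bytes: ...


class FSStore:
    """Filesystem backend, handy for local dev and tests."""

    def __init__(self, root: str) -> None:
        self.root = Path(root)
        self.root.mkdir(parents=True, exist_ok=True)

    def put(self, key: str, data: bytes) -> None:
        path = self.root / key
        path.parent.mkdir(parents=True, exist_ok=True)
        path.write_bytes(data)

    def get(self, key: str) -> bytes:
        return (self.root / key).read_bytes()


class S3Store:
    """S3 backend sketch via boto3; assumes credentials and bucket exist."""

    def __init__(self, bucket: str) -> None:
        import boto3  # only needed when this backend is actually used

        self.client = boto3.client("s3")
        self.bucket = bucket

    def put(self, key: str, data: bytes) -> None:
        self.client.put_object(Bucket=self.bucket, Key=key, Body=data)

    def get(self, key: str) -> bytes:
        return self.client.get_object(Bucket=self.bucket, Key=key)["Body"].read()


def save_report(store: ObjectStore, name: str, payload: bytes) -> None:
    # Application code talks to the facade, never to a concrete backend.
    store.put(f"reports/{name}", payload)
```

Swapping backends is then a one-line change at wiring time, and the FS backend doubles as a fast, dependency-free test double.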
what's the point then? Just api around FS?
For a lot of projects that would be sufficient. I've worked on projects that "required" an S3 storage solution. Not because they actually did, but because they needed some sort of object/file storage that could be accessed from somewhere: might be a Java application running in JBoss, might be a Spring Boot application in a container, on Kubernetes, Nomad or just on a VM.
Like it or not, S3 has become the de facto API for object storage for many developers. From the operations side of things, managing files is easier and already taken care of by your storage solution, be it a SAN, NAS or something entirely different. Being able to back up and manage whatever is stored in S3 with your existing setup is a direct saving.
If you actually use a large subset of S3's features this might not be a good solution, but in my experience you have a few buckets and a few limited ACLs, and that's it.
would you not just say "edge delivered content"?