The primary benefit of containerization is isolation. Before Docker, you'd drop all your code on a shared host, so you had to manage your dependencies carefully. Specifically, I remember having to fight with the mysql gem a lot to make sure there were no conflicts between installed versions. With Docker, you build your image, test it, and ship it.
We had VM-per-app before Docker, so it was still build the image, test it, and ship it, but that image actually had everything it needed inside the VM.
Docker helps with portability because of how ubiquitous it is now, but it's not like the VM requirement went away: the Docker image still generally runs in a VM in any serious environment, and a lot more attention has to be paid to the VM:Docker pairing than to the previous hypervisor:VM pairing.
I haven't shipped to a shared host since the 2000s. We deployed to isolated VMs a decade before Docker.
It is very funny to me that the sibling comment calls this "a very broken setup" while for you "it doesn't sound like a big deal".
It's all about perspectives, or you really just never had to deal with it.
The happy path ain't a big deal. But think of the unhappy ones:
* What if a server gets rebooted (maybe it crashed) for any reason, at any point in the process? Maybe you lost internet while doing the update. Were you still dropping tarballs? Did the server get them? Did it start the new version while the other servers are still on the old one?
* What about a broken build (maybe a gem problem, maybe a migration problem, maybe something else)? All your servers are on it, or only one? How do you revert (push an older tarball)?
A lot more manual processes, depending on the tooling you had. Good tooling to handle this is much more prevalent nowadays.
I use Kubernetes for almost everything (including my pet projects) and I see the value it brings, even if it comes with increased complexity (although k3s is a pretty good middle ground). But none of the things you mentioned are unsolvable or require manual intervention.
> What if a server gets rebooted
Then the rsync/scp would fail and I would notice it in the deployment logs. Or it should be straightforward to monitor the current version across a fleet of bare-metal machines.
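To make that concrete, here is roughly what "monitor the current version across a fleet" can look like. This is just a sketch: the hostnames, the version-file path, and key-based SSH access are all assumptions on my part (Capistrano-style deploys, for example, drop a REVISION file in each release, which is the kind of marker I'm leaning on here).

```python
# Minimal sketch: read a version marker from each host over ssh and flag
# mixed versions. Hosts and the file path are hypothetical.
import subprocess

HOSTS = ["app1.example.com", "app2.example.com", "app3.example.com"]  # hypothetical fleet
VERSION_FILE = "/srv/app/current/REVISION"                            # hypothetical path

def deployed_versions(hosts):
    """Return {host: version-or-error} by reading the version file over ssh."""
    results = {}
    for host in hosts:
        proc = subprocess.run(
            ["ssh", "-o", "ConnectTimeout=5", host, f"cat {VERSION_FILE}"],
            capture_output=True, text=True,
        )
        results[host] = proc.stdout.strip() if proc.returncode == 0 else f"ERROR: {proc.stderr.strip()}"
    return results

if __name__ == "__main__":
    versions = deployed_versions(HOSTS)
    for host, version in versions.items():
        print(f"{host}: {version}")
    # More than one distinct successful version means the fleet is mixed.
    if len({v for v in versions.values() if not v.startswith("ERROR")}) > 1:
        print("WARNING: fleet is running mixed versions")
```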
> Maybe you lost internet while doing the update
True, but even Ansible recommends running a controller closer to target machines.
> What about a broken build
That's what tests are for.
> maybe migration problem
That's trickier, but unrelated to deployment method.
> How do you revert (push an older tarball)
By... pushing an older tarball?
I never said they were unsolvable. You asked for elaboration on the pains of back then, before lots of the tools most people take for granted existed. You seem to think we are talking about massive problems, but it's more about a thousand papercuts.
> What if a server gets rebooted
You push the image again.
> What about a broken build. All your servers are on it, or only one?
The ones you pushed the image to are on the new image; the ones you didn't are still on the old one.
> How do you revert (push an older tarball)
Yes, exactly, you push the older version.
The command pushes a version to the servers. It does exactly what it says. There's nothing complicated to invent here.
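As a sketch of those semantics (everything here is made up for illustration: the host list, the paths, the service name), the whole "command" can be a short script, and reverting really is just running it again with an older version:

```python
# Sketch of "push a version to the servers": copy the artifact, unpack it,
# flip the `current` symlink, restart. Rollback = same command, older version.
import subprocess
import sys

HOSTS = ["app1.example.com", "app2.example.com"]   # hypothetical fleet
RELEASE_DIR = "/srv/app/releases"                  # hypothetical layout

def push(version):
    tarball = f"build/app-{version}.tar.gz"        # assumed local build artifact
    for host in HOSTS:
        subprocess.run(["scp", tarball, f"{host}:{RELEASE_DIR}/"], check=True)
        subprocess.run([
            "ssh", host,
            f"mkdir -p {RELEASE_DIR}/{version} && "
            f"tar xzf {RELEASE_DIR}/app-{version}.tar.gz -C {RELEASE_DIR}/{version} && "
            f"ln -sfn {RELEASE_DIR}/{version} /srv/app/current && "
            "sudo systemctl restart app",          # assumed service name
        ], check=True)

if __name__ == "__main__":
    # `python push.py 1.2.3` deploys; `python push.py 1.2.2` reverts.
    push(sys.argv[1])
```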
All the interpreted-language frameworks use the same semantics, because it works extremely well. It tends to work much better than container orchestration, that's for sure.
> A lot more manual processes.
It's only manual if it's not automated... exactly like creating a container, by the way.