I've been writing Rails code since 2007. There's a reason the stack has gotten more complicated with time, and virtually no team has ever done it right by this definition.

The trouble with an omakase framework is not just that you have to agree to the initial set of choices but that you have to agree with every subsequent choice that's made, and you have to pull your entire dev team along for the ride. It's a very powerful framework, but the maintainers are generally well-meaning humans who do not possess a crystal ball, and many choices were made that were subsequently discarded. Consequently, my sense is that there are very few vanilla Rails apps in the wild anywhere.

(I'm old enough to remember what it was like to deploy a Rails application pre-Docker: rsyncing or dropping a tarball into a fleet of instances and then `touch`ing the requisite file to get the app server to reset. Docker and k8s bring a lot of pain. It's not worse than that was.)
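For anyone who never lived it, the flow described above can be sketched in plain Ruby. The hosts, paths, and the Passenger-style `tmp/restart.txt` convention here are all illustrative, and the sketch builds the shell commands rather than executing them:

```ruby
# Rough sketch of a pre-Docker "push a tarball, touch a file" deploy.
# Host names, paths, and the Passenger-style tmp/restart.txt restart
# convention are illustrative; real scripts varied wildly shop to shop.

HOSTS   = %w[app1.example.com app2.example.com]
RELEASE = "myapp-2007-06-01.tar.gz"
APP_DIR = "/var/www/myapp"

# Build the shell commands we'd run against one host. Returning the
# strings (instead of shelling out) keeps the sketch testable.
def deploy_commands(host, release: RELEASE, app_dir: APP_DIR)
  [
    # Copy the build artifact out to the box...
    "scp #{release} deploy@#{host}:/tmp/#{release}",
    # ...unpack it over the app directory...
    "ssh deploy@#{host} 'tar -xzf /tmp/#{release} -C #{app_dir}'",
    # ...and touch the magic file so the app server reloads the code.
    "ssh deploy@#{host} 'touch #{app_dir}/tmp/restart.txt'"
  ]
end

HOSTS.each do |host|
  deploy_commands(host).each { |cmd| puts cmd } # system(cmd) in real life
end
```

Note there is no coordination here: each host is updated independently, which is exactly where the "fleet drifts out of sync" pain discussed further down comes from.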

> I'm old enough to remember what it was like to deploy a Rails application pre-Docker: rsyncing or dropping a tarball into a fleet of instances and then `touch`ing the requisite file to get the app server to reset.

If this is what you remember, then you remember a very broken setup. Even an “ancient” Capistrano deployment system is better than that.

Or there was “git push heroku main” or whatever it was back in the day. Had quite a moment when I first did that from a train – we take such things for granted now of course...

Honestly this is still a great way to deploy apps and still some of the best DX there is, IMO.

Costs a crap ton for what it is, but it is nice.

Yeah, it also wasn’t difficult to do the equivalent without heroku via post-commit hook.
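The hook trick was usually a server-side `post-receive` hook on a bare repo, and since git hooks can be any executable, a Ruby sketch works. Branch name, work tree path, and restart convention below are illustrative, and the commands are returned rather than run:

```ruby
# Sketch of the server-side half of a Heroku-style "git push" deploy:
# a post-receive hook on a bare repo that deploys when the main branch
# is pushed. Branch, paths, and restart convention are illustrative.

DEPLOY_BRANCH = "main"
WORK_TREE     = "/var/www/myapp"

# git feeds a post-receive hook lines of "old-sha new-sha ref-name"
# on stdin, one per updated ref.
def deploy_ref?(line, branch: DEPLOY_BRANCH)
  _old_sha, _new_sha, ref = line.split
  ref == "refs/heads/#{branch}"
end

# The commands the hook would run for a qualifying push.
def hook_commands(work_tree: WORK_TREE, branch: DEPLOY_BRANCH)
  [
    # Check the pushed revision out into the live app directory...
    "git --work-tree=#{work_tree} checkout -f #{branch}",
    # ...and poke the app server to pick it up.
    "touch #{work_tree}/tmp/restart.txt"
  ]
end

# In the real hook:
#   $stdin.each_line { |l| run(hook_commands) if deploy_ref?(l) }
```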

Honestly, even setting up autoscaling via AMIs isn’t that hard. Docker is in many ways the DevOps equivalent of the JS front end world: excessive complexity, largely motivated by people who have no idea what the alternatives are.

I was working on Rails apps before AMIs or Heroku.

Me too. I'm not responding specifically to you with the parent comment. That said, "autoscaling", as a concept, didn't really exist prior to AWS AMIs (or Heroku, I guess).

My point is that a lot of devs reach to Docker because they think they need it to do these "hard" things, and they immediately get lost in the complexity of that ecosystem, having never realized that there might be a better way.

My recollection is that this is what many Capistrano setups were doing under the covers. Capistrano was just an orchestration framework for executing commands across multiple machines.

More than that, I worked for many enterprises that were using Rails but had their own infrastructure conventions and requirements, and were unable or unwilling to explore tools like Capistrano or (later) Heroku.

> More than that, I worked for many enterprises that were using Rails but had their own infrastructure conventions and requirements, and were unable or unwilling to explore tools like Capistrano or (later) Heroku.

Well, OK, so you remember a bad setup that was bad for whatever reason. My point is that there's nothing about your remembered system that was inherent to Rails, and there were (and are) tons of ways to deploy that didn't do that (just like any other framework).

Capistrano can do whatever you want it to do, of course, so maybe someone wrote a deployment script that rsynced a tarball, touched a file, etc., to restart a server, but it's not standard. The plain vanilla Cap deploy script, IIRC, does a git pull from your repo to a versioned directory, runs the asset build, and restarts the webserver via signal.

This was before Git! (Subversion had its meager charms.) Even after Git became widespread, some infra teams were uncomfortable installing a dev tool like Git on production systems, so a git pull was out of the question.

The main issue that, while not unique to Rails, plagued the early interpreted-language webapps I worked on was that the tail end of early CI pipelines didn't spit out a unified binary, just a bag of blessed files. Generating a tarball helped, but you still needed to pair it with some sort of unpack-and-deploy mechanism in environments that wouldn't or couldn't work with stock cap deploy, like the enterprise. (I maintained CC.rb for several years.) Docker was a big step up IMV because all of a sudden the output could be a relatively standardized binary artifact.

This is fun. We should grab a beer and swap war stories.

If you call a stable, testable, and "reproducible" (by running locally or on some dev machine) tarball worse than git pull, then you're dismissing solutions that worked in an unpredictable and unsafe world. I think beer with story-swapping is a good idea, because I would love to learn what to avoid.

Capistrano lost its meaning when autoscaling went mainstream (which was around 15 years ago now), yet people kept using it in elastic environments with poor results.

The parent wasn’t describing an autoscaling deployment system.

Rails has a container-based deployment if you actually need that level of complexity.

GP was talking about pre-Docker deployments. You could totally deploy immutable Rails AMIs without either Docker or Capistrano.

AMIs were still pretty novel at the time I started (around 2007 like the GP). The standard deployment in the blogs/books was using Capistrano to scp the app over to like a VPS (we did colo) and then run monit or god to reboot the mongrels. We have definitely improved imho!

Totally, around that time I did that too (although I was working with LAMP stacks, so no Capistrano), but with the rise of AWS, Capistrano got outdated. I know that not everyone jumped on board with the cloud that early, and even for the ones that did, there was an adaptation period where EC2 machines were treated just like colo machines. But Ruby also used to be the hipster thing before 2010 so... :)

Anyway, I never liked Capistrano, so I'm probably biased.

> rsyncing or dropping a tarball into a fleet of instances

Could you elaborate? Doesn't sound like a big deal.

The primary benefit of containerization is isolation. Before Docker, you'd drop all your code on a shared host, so you had to manage your dependencies carefully. Specifically, I remember having to fight with the mysql gem a lot to make sure there were no conflicts between installed versions. With Docker, you build your image, test it, and ship it.

We had vm-per-app before docker, so it was still build the image, test, and ship, but it actually had everything it needed inside the vm.

Docker helps with portability because of how ubiquitous it is now, but it's not like the VM requirement went away: the Docker image still generally runs in a VM in any serious environment, and a lot more attention has to be paid to the VM:Docker pairing than to the previous hypervisor:VM pairing.

I haven't shipped to a shared host since the 00's. We deployed to isolated VMs a decade before docker.

It is very funny to me that the sibling comment calls this "a very broken setup" while to you it "doesn't sound like a big deal".

It's all about perspectives, or you really just never had to deal with it.

The happy path ain't a big deal. But think of the unhappy ones:

* What if a server gets rebooted (maybe it crashed) at any point in the process? Maybe you lost internet while doing the update. Were you still dropping tarballs? Did that server get the new one? Did it start with the new version while the other servers were still on the old one?

* What about a broken build (maybe a gem problem, maybe a migration problem, maybe something else)? Are all your servers on it, or only one? How do you revert (push an older tarball)?

A lot more manual processes, depending on the tooling you had. Good tooling to handle this is much more prevalent nowadays.

I use Kubernetes for almost everything (including my pet projects) and I see the value it brings, even at the cost of increased complexity (although k3s is a pretty good middle ground). But none of the things you mentioned are unsolvable or require manual intervention.

> What if a server gets rebooted

Then the rsync/scp would fail and I would notice it in the deployment logs. Or it should be straightforward to monitor the current version across a fleet of bare-metal servers.

> Maybe you lost internet while doing the update

True, but even Ansible recommends running a controller closer to target machines.

> What about a broken build

That's what tests are for.

> maybe migration problem

That's trickier, but unrelated to deployment method.

> How do you revert (push an older tarball)

By... pushing an older tarball?

Never said they were unsolvable. You asked for elaboration about pains of back then before lots of the tools most take for granted existed. You seem to think we are talking about massive problems, but it's more about a thousand papercuts.

> What if a server gets rebooted

You push the image again.

> What about a broken build. All your servers are on it, or only one?

The ones you pushed the image to are on the new image; the ones you didn't push to are on the old image.

> How do you revert (push an older tarball)

Yes, exactly, you push the older version.

The command pushes a version to the servers. It does exactly what it says. There's nothing complicated about it.

All the interpreted frameworks use the same semantics, because it works extremely well. It tends to work much better than container orchestration, that's for sure.

> A lot more manual processes.

It's only manual if it's not automated... exactly like creating a container, by the way.

This is why I've always had a soft spot for Sinatra