> I'm old enough to remember what it was like to deploy a Rails application pre-Docker: rsyncing or dropping a tarball into a fleet of instances and then `touch`ing the requisite file to get the app server to reset.
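It went something like this, reconstructed from memory as a Rake task; the hosts, paths, and the Passenger-style restart file are all made up, not anyone's real script:

```ruby
# lib/tasks/old_deploy.rake -- a rough reconstruction of that era's "deploy";
# hosts, paths, and the Passenger-style restart file are placeholders
HOSTS    = %w[app1.example.com app2.example.com]
APP_PATH = "/var/www/my_app"

desc "Tar up the app, push it to every instance, unpack, and poke the app server"
task :old_deploy do
  sh "tar czf /tmp/my_app.tar.gz --exclude=.git ."
  HOSTS.each do |host|
    sh "rsync -az /tmp/my_app.tar.gz #{host}:/tmp/"
    # unpack over the running checkout, then touch the file Passenger watches
    sh "ssh #{host} 'tar xzf /tmp/my_app.tar.gz -C #{APP_PATH} && touch #{APP_PATH}/tmp/restart.txt'"
  end
end
```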
If this is what you remember, then you remember a very broken setup. Even an “ancient” Capistrano deployment system is better than that.
Or there was `git push heroku main`, or whatever it was back in the day. I had quite a moment the first time I did that from a train – we take such things for granted now, of course...
Honestly, this is still a great way to deploy apps, and some of the best DX there is, IMO.
Costs a crap ton for what it is, but it is nice.
Yeah, it also wasn’t difficult to do the equivalent without Heroku via a post-commit hook.
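The usual DIY version was a server-side post-receive hook on a bare repo; roughly something like this sketch, with invented paths and a hardcoded branch:

```ruby
#!/usr/bin/env ruby
# hooks/post-receive -- a sketch of DIY "push to deploy", assuming a bare repo
# on the server and a working tree at /var/www/my_app (both invented paths)
GIT_DIR  = "/srv/git/my_app.git"
APP_PATH = "/var/www/my_app"

# Force-check the pushed branch out into the working tree
system("git", "--git-dir=#{GIT_DIR}", "--work-tree=#{APP_PATH}",
       "checkout", "-f", "main") or abort("checkout failed")

Dir.chdir(APP_PATH) do
  system("bundle", "install", "--deployment") or abort("bundle install failed")
  # Same Passenger-style restart mentioned upthread
  system("touch", "tmp/restart.txt")
end
```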
Honestly, even setting up autoscaling via AMIs isn’t that hard. Docker is in many ways the DevOps equivalent of the JS front end world: excessive complexity, largely motivated by people who have no idea what the alternatives are.
I was working on Rails apps before AMIs or Heroku.
Me too. I'm not responding specifically to you with the parent comment. That said, "autoscaling", as a concept, didn't really exist prior to AWS AMIs (or Heroku, I guess).
My point is that a lot of devs reach for Docker because they think they need it to do these "hard" things, and they immediately get lost in the complexity of that ecosystem, having never realized that there might be a better way.
My recollection is that this is what many Capistrano setups were doing under the covers. Capistrano was just an orchestration framework for executing commands across multiple machines.
More than that, I worked for many enterprises that were using Rails but had their own infrastructure conventions and requirements, and were unable or unwilling to explore tools like Capistrano or (later) Heroku.
> More than that, I worked for many enterprises that were using Rails but had their own infrastructure conventions and requirements, and were unable or unwilling to explore tools like Capistrano or (later) Heroku.
Well, OK, so you remember a bad setup that was bad for whatever reason. My point is that there's nothing about your remembered system that was inherent to Rails, and there were (and are) tons of ways to deploy that didn't do that (just like any other framework).
Capistrano can do whatever you want it to do, of course, so maybe someone wrote a deployment script that rsynced a tarball, touched a file, etc., to restart a server, but it's not standard. The plain vanilla Cap deploy script, IIRC, does a git pull from your repo to a versioned directory, runs the asset build, and restarts the webserver via signal.
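For reference, a bare-bones Capistrano 3 `config/deploy.rb` looks roughly like this (the app name, repo URL, and paths are placeholders; I've shown the Passenger touch-file restart, whereas a Puma setup would signal the master process instead):

```ruby
# config/deploy.rb -- a minimal sketch of a stock Capistrano 3 setup; the app
# name, repo URL, and deploy path are placeholders
set :application, "my_app"
set :repo_url,    "git@example.com:me/my_app.git"  # cap fetches each release from here
set :deploy_to,   "/var/www/my_app"                # releases/, shared/, current/ live here
set :keep_releases, 5

# With capistrano/rails required in the Capfile, asset precompilation and
# migrations are hooked into the deploy flow automatically.

namespace :deploy do
  desc "Tell the app server to pick up the new release"
  task :restart do
    on roles(:app) do
      # Passenger-style restart; a signal to the app server is the other common variant
      execute :touch, current_path.join("tmp/restart.txt")
    end
  end
  after :publishing, :restart
end
```

You then run the whole thing with `cap production deploy`.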
This was before Git! (Subversion had its meager charms.) Even after Git became widespread, some infra teams were uncomfortable installing a dev tool like Git on production systems, so a git pull was out of the question.
The main issue that, while not unique to Rails, plagued the early interpreted-language webapps I worked on was that the tail end of early CI pipelines didn't spit out a unified binary, just a bag of blessed files. Generating a tarball helped, but you still needed to pair it with some sort of unpack-and-deploy mechanism in environments that wouldn't or couldn't work with a stock `cap deploy`, like the enterprise. (I maintained CC.rb for several years.) Docker was a big step up IMV because all of a sudden the output could be a relatively standardized binary artifact.
This is fun. We should grab a beer and swap war stories.
If you call a stable, testable, and "reproducible" (by running it locally or on some dev machine) tarball worse than a git pull, then you are the one killing the solutions that work in an unpredictable and unsafe world. I think a beer and swapping stories is a good idea, because I would love to learn what to avoid.
Capistrano stopped making sense once autoscaling went mainstream (around 15 years ago now), yet people kept using it in elastic environments, with poor results.
The parent wasn’t describing an autoscaling deployment system.
Rails has a container-based deployment if you actually need that level of complexity.
GP was talking about pre-Docker deployments. You could totally deploy immutable Rails AMIs without either Docker or Capistrano.
AMIs were still pretty novel at the time I started (around 2007, like the GP). The standard deployment in the blogs/books was using Capistrano to scp the app over to something like a VPS (we did colo) and then running monit or god to restart the mongrels. We have definitely improved, IMHO!
Totally, around that time I did that too (although I was working with LAMP stacks, so no Capistrano), but with the rise of AWS, Capistrano got outdated. I know not everyone jumped on board with the cloud that early, and even for the ones that did, there was an adaptation period where EC2 machines were treated just like colo machines. But Ruby also used to be the hipster thing before 2010, so... :)
Anyway, never liked Capistrano so I'm probably biased