I think 1 is a myth. It’s easy to deploy as long as you don’t care about atomic updates, like the newly uploaded version of foo.php importing bar.php which hasn’t been uploaded yet. Solve that, say with a DAG of which files to upload in which order, and it’s no longer easier than anything else.

Like many other things, PHP makes it easier to do the wrong thing than other languages which make you do the same thing correctly.

I worked at a place that used git pull as the release process. It was a big site, but I never heard of there being any issues (though the code was on life support, so no huge changes were happening).

They switched to blue/green deploys for the new site (which I suspect was done at the server level, not with symlinks or the like).

> It’s easy to deploy as long as you don’t care about atomic updates

Does that matter if a bit of downtime is acceptable?

No, but it's moving the goalpost quite a bit. "Just copying a bunch of files around" is definitely easier than, say, deploying a new Docker container containing a Python app or a Rust or Go binary, etc. But neither is it nearly so robust.

Wouldn't that be better solved by uploading everything to a v2 directory and then renaming the directories?

Maybe. You could probably get pretty far with atomically moving a symlink so that the filesystem view always looks at either all the old or all the new files.
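That symlink swap can be sketched in a few shell commands (a minimal sketch; the releases/ layout and the paths are illustrative, in the style of tools like Capistrano or Deployer):

```shell
#!/bin/sh
set -eu

APP=$(mktemp -d)                        # stand-in for the app root
mkdir -p "$APP/releases/v1" "$APP/releases/v2"
echo 'v1' > "$APP/releases/v1/index.php"
echo 'v2' > "$APP/releases/v2/index.php"

# Initial deploy: "current" points at v1.
ln -s "$APP/releases/v1" "$APP/current"

# Upload v2 fully, then swap. Note that a plain `ln -sfn` unlinks and
# recreates the link, which is NOT atomic; creating a temp link and
# renaming it over the old one is, because rename(2) is atomic.
ln -s "$APP/releases/v2" "$APP/current.tmp"
mv -T "$APP/current.tmp" "$APP/current"

readlink "$APP/current"                 # now points at releases/v2
```

The rename at the end is what makes the swap atomic; the link and its replacement have to live on the same filesystem for that to hold (and `mv -T` is GNU coreutils).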

However, even that doesn't handle in-flight requests that have their view of the files swapped out from under them. Yes, that's a small time window for an error to happen, but it's definitely not instantaneous.

The safer solution would be to update the server config to point at the new directory and reload the webserver, but now you're way past just uploading the new files.

It's pretty much instant. An in-flight request still finishes with the old version, since the code that's being run is already in memory.

I don't think it's very different from changing the proxy to point to a different port.

That's not quite right. Imagine some (horrid) code like:

  $conn->query('SELECT * FROM giant_table ORDER BY foo LIMIT 1'); // long-running query
  require 'old.php'; // not resolved until execution reaches this line
such that there's a significant interval between the request being spawned and it later including another file. The duration of the query is the opportunity for 'old.php' to go away, which would cause a 500 error.

The difference is that you can have 2 ports listening at once and can close the first once it's drained of connections.

There's no fundamentally safe way to upgrade a bucket-of-files PHP app without tooling complex enough to rival another language's deployment.

I don't believe that's how PHP works (at least not anymore). When the request is made, the code is first compiled to opcodes, and only after that's done are the opcodes run. In most production environments these opcodes are even cached, so even if you delete the project it will run.

In any case, you would have to hit some few-millisecond window in this opcache generation to break a single request, and even that might be unlikely thanks to how filesystems read files?

In that example, I'm pretty sure that the 'require' line is compiled to opcodes, but not executed, until that line is reached. Supporting evidence: https://stackoverflow.com/questions/37880749/when-is-a-php-i...

So if there's a 10 second gap between the start of execution and the 'require' line being reached and evaluated, then any incompatible changes to the file being required within that 10 seconds will cause an error.

That actually makes sense, because the code path could be huge, with huge surfaces of unused code.

With opcache this could be solved, so I guess the lesson for me is: deploy like this with opcache on.
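For what it's worth, an opcache setup along those lines might look something like this (these are real php.ini directives, but the values are illustrative, not a recommendation):

```ini
; php.ini – illustrative opcache settings for symlink-style deploys
opcache.enable=1
; Never stat source files per request; serve compiled opcodes from cache.
opcache.validate_timestamps=0
; Give the cache enough room for the whole codebase (in MB).
opcache.memory_consumption=256
```

The catch: with validate_timestamps=0, a deploy has to explicitly reset the cache (e.g. by reloading php-fpm) after the symlink swap, or PHP will happily keep serving the old code.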

Well, now you just have to manage cache invalidation. Piece of cake!

I kid, I kid, but seriously, now you have a different set of issues.

This is how it's done in many deploy tools in the PHP world, with the help of git. I think it works so well that nobody even thinks about how it works.

That's a perfectly reasonable approach, so long as you understand why it's a risky operation and can tolerate the consequences, including customers seeing errors in their browser. If that's OK for your use case, then rock on! If you can't tolerate that, then you have to switch to a more complex upgrade system, like blue-green deploys behind a load balancer or such. In other words, the deployment method of a Rust or Go or Python or Java app.

In a sense that's blue-green deployment, just at the filesystem level? PHP is always run behind a proxy/webserver (mostly nginx nowadays).

But you are right, there is no reason why you couldn't have two instances of the PHP app running and switch between them. For some reason the PHP deployment services I've used seem to use the filesystem approach, and I doubt it's laziness or incompetence.

I'd contend that it's out of ignorance, and I don't mean that in a mean or nasty way. I've heard lots of pushback from PHP devs that it's way easier to update than sites written in other languages are, but I think it's genuinely due to a lack of understanding of why those languages recommend other upgrade processes. Those processes solve real, genuine problems that also affect PHP, but they're dismissed as overkill or enterprisey or overly complicated.

And all that may be true for a trivial website. If you've written a personal project with 10,000 hits per year, YOLO. Go for it. The odds of it affecting one of those users is vanishingly tiny, and so what if it does? But if you're hosting something like a WordPress site for a large company with lots of traffic, it's crucial to understand why "just rsync the files over" is not an acceptable deployment method.

Sorry, but we were not talking about “rsyncing the files over”. We are talking about what the services I've used, like Forge or Ploi, do: you deploy the project into a separate folder and then switch a symlink. You can even roll it back.

I have a feeling you want to dunk on the poor dumb PHP developers, but Forge is by the people who created Laravel. I believe they put some thought into it. Maybe, just maybe, a small chance of one bad request is not such a bad deal.

It is literally exactly the same issue, just with slightly less of an error window. I don't think those devs are poor and dumb, but I do think it's likely they've been working in environments where production errors are more tolerated than in other environments.

> Maybe, just maybe, a small chance of one bad request is not such a bad deal.

If your company is OK with that, seriously, sincerely, right on! Keep doing this and move on to other problems.

I have thought about it, and you are just pulling my leg.

If you have a very long database query and you update your app in the middle of it using a blue-green load balancer, you get the same production error. It is the same thing, just implemented slightly differently: PHP's characteristics allow this approach, and with different systems you have to use a different strategy.

So yeah, I feel good about us PHP devs and our supposedly bad deployment strategies.

That is… exactly wrong. I encourage you to consider why that would not be the case.

It is not the same issue, due to how opcache works. No one remotely competent runs PHP without opcache in 2025.