Maybe. You could probably get pretty far with atomically moving a symlink so that the filesystem view always looks at either all the old or all the new files.
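
That is, build the new symlink off to the side and then rename it over the old one, since rename() is atomic. A rough sketch in PHP (all paths invented; a shell deploy script using ln -s plus mv -T does the same thing):

  <?php
  // deploy.php - rough sketch, paths are hypothetical
  $release = '/var/www/releases/2024-05-01';  // freshly uploaded files
  $tmp     = '/var/www/current.tmp';
  $live    = '/var/www/current';              // existing symlink the webserver docroot points at

  @unlink($tmp);            // clear any leftover temp link from a failed deploy
  symlink($release, $tmp);  // build the new symlink off to the side
  rename($tmp, $live);      // rename() swaps it over the old link in one atomic step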

However, even that doesn't handle in-flight requests that have their view of the files swapped out from under them. Yes, that's a small time window for an error to happen, but it's definitely not instantaneous.

The safer solution would be to update the server config to point at the new directory and reload the webserver, but now you're way past just uploading the new files.

It's pretty much instant. An in-flight request still finishes with the old version, since the code that's running is already in memory.

I don't think it's very different from changing the proxy to point to a different port.

That's not quite right. Imagine some (horrid) code like:

  $conn->query('SELECT * FROM giant_table ORDER BY foo LIMIT 1');
  require 'old.php';
such that there's a significant interval between the request being spawned and it later including another file. The duration of the query is the opportunity for 'old.php' to go away, which would cause a 500 error.

The difference is that you can have 2 ports listening at once and can close the first once it's drained of connections.

There's no fundamentally safe way to upgrade a bucket-of-files PHP app without tooling complex enough to rival another language's deployment.

I don't believe that's how PHP works (at least not anymore). When the request comes in, the code is first compiled to opcodes, and only after that's done are the opcodes run. In most production environments these opcodes are even cached, so the code will keep running even if you delete the project files.

In any case you would have to hit a window of a few milliseconds during opcode generation to break a single request, and even that might be unlikely given how filesystems read files?

In that example, I'm pretty sure the 'require' line is compiled to opcodes up front, but not executed until that line is reached. Supporting evidence: https://stackoverflow.com/questions/37880749/when-is-a-php-i...

So if there's a 10 second gap between the start of execution and the 'require' line being reached and evaluated, then any incompatible changes to the file being required within that 10 seconds will cause an error.
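
A toy way to see this (just a sketch, nothing special assumed about the setup): delete or replace old.php while the sleep below is running, and the require fails, even though this script itself was fully compiled before execution started.

  <?php
  // race.php - toy demonstration of the window described above
  sleep(10);          // stands in for the slow query
  require 'old.php';  // only resolved when execution actually reaches this line;
                      // if old.php vanished during the sleep, this is a fatal error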

That actually makes sense, because the code path could be huge, with large swathes of code that never get loaded for a given request.

With OPcache this could be solved, so I guess the lesson for me is: deploy like this with OPcache on.

Well, now you just have to manage cache invalidation. Piece of cake!

I kid, I kid, but seriously, now you have a different set of issues.
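
For example, if the cache is configured not to re-check file timestamps (opcache.validate_timestamps=0, a common production setting), it never notices the new files on its own, so the deploy has to clear it explicitly from inside the web SAPI. A rough sketch of such a hook (file name and token are invented):

  <?php
  // opcache-reset.php - hypothetical deploy hook; file name and token are made up.
  // Has to be requested through the webserver: the CLI uses a separate opcode
  // cache, so calling this from a deploy shell script would clear the wrong one.
  if (($_GET['token'] ?? '') !== 'made-up-deploy-secret') {
      http_response_code(403);
      exit;
  }
  opcache_reset();  // discard all cached opcodes so the new files get recompiled
  echo "opcache cleared\n";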