The author makes a fair point that the language is no longer the fractal of bad design it was in 2009, but doesn't make the case for starting a green field project with it in 2025.
What does it do better than other languages? The article mentions features that sound like parity with other modern languages, but nothing that stands out.
> What does it do better than other languages?
Shared nothing architecture. If you're using e.g. fastapi you can store some data in memory and that data will be available across requests, like so
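A minimal plain-Python sketch of the pattern (the handler names and the `cache` dict are invented for illustration; in FastAPI these would be decorated route functions, but the mechanism is just module-level state in a long-lived worker process):

```python
# Module-level state: lives as long as the worker process, not the request.
cache = {}

def handle_write(key, value):
    # Request 1 stashes something in process memory.
    cache[key] = value

def handle_read(key):
    # A completely unrelated later request still sees it,
    # because the worker process outlives each request.
    return cache.get(key)

handle_write("user", "alice")
print(handle_read("user"))  # -> alice
```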
This is often the fastest way to solve your immediate problem, at the cost of making everything harder to reason about. PHP persists nothing between requests, so all data that needs to survive between requests must be explicitly persisted to some specific external data store. Non-PHP toolchains, of course, offer the same upsides if you hold them right. PHP is harder to hold wrong in this particular way, though, and in my experience the upside of eliminating that class of bug is shockingly large compared to how rarely I would naively have expected to see it in codebases written by experienced devs.
I hadn't really thought about PHP through this lens. But it's so much a part of where it came from as a preprocessor for text. It was a first-class part of the stateless design of the OG internet. Now everyone wants everything persisted all the time, and that leads to crazy state problems.
Also because it's a language for the web, and HTTP is stateless.
But that's Python, no?
Edit: Oh, you showed an example against Python! Now I get it!
I think the advantages are pretty much what they always were.
1. Easy deployment - especially on shared hosting
2. Shared nothing between requests means easy concurrency AND parallelism
3. Mixing with HTML means you do not need a separate template language
Not everyone will see the third as an advantage, and many web frameworks, including PHP ones, prefer a separate, more restrictive, template language. It can be a footgun, but it is very convenient sometimes.
I think #1 has been outdated for a long time. FTP or copying files through cPanel isn't more convenient than workflows like `fly deploy`, where you don't even need to know about a remote file system or about a server that's already running. And php-fpm being called by nginx is also more complicated than just "node script.js" or running a compiled Go binary.
While I never actually wanted it, #2 was kinda cool spiritually. Same with CGI or a Cloudflare edge worker.
People who manually copy files are not going to use anything more sophisticated. It was not what I had in mind, and I do not think it's a fair comparison, nor is comparing against a pricey, proprietary platform.
These days I imagine people are more likely to be using git pull or rsync anyway.
> And php-fpm being called by nginx is also more complicated than just "node script.js" or running a compiled Go binary.
Apache with mod_php is still an option AFAIK. It is also definitely easy to find everything pre-configured on shared hosting. Then there is FrankenPHP.
Might not be the easiest option for everyone, but it is going to be for some people.
I think 1 is a myth. It’s easy to deploy as long as you don’t care about atomic updates, like the newly uploaded version of foo.php importing bar.php which hasn’t been uploaded yet. Solve that, say with a DAG of which files to upload in which order, and it’s no longer easier than anything else.
Like many other things, PHP makes it easier to do the wrong thing than other languages which make you do the same thing correctly.
I worked at a place that did git pull as the release process - it was a big site but I never heard of there being any issues (though the code was on life support so no huge changes were happening).
They switched to blue/green deploys for the new site (which I suspect was done at the server level, not with symlinks or the like).
> It’s easy to deploy as long as you don’t care about atomic updates
Does that matter if a bit of downtime is acceptable?
No, but it's moving the goalpost quite a bit. "Just copying a bunch of files around" is definitely easier than, say, deploying a new Docker container containing a Python app or a Rust or Go binary, etc. But neither is it nearly so robust.
Wouldn't that be better solved by uploading everything to a v2 directory and then renaming the directories?
Maybe. You could probably get pretty far with atomically moving a symlink so that the filesystem view always looks at either all the old or all the new files.
However, even that doesn't handle in-flight requests that have their view of the files swapped out from under them. Yes, that's a small time window for an error to happen, but it's definitely not instantaneous.
The safer solution would be to update the server config to point at the new directory and reload the webserver, but now you're way past just uploading the new files.
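The symlink trick can be sketched like this (paths are invented; `mv -T` is GNU coreutils, and the underlying `rename(2)` is what makes the swap atomic):

```shell
set -e
dir=$(mktemp -d); cd "$dir"
mkdir -p releases/v1 releases/v2
echo "v1" > releases/v1/index.php
echo "v2" > releases/v2/index.php
ln -s releases/v1 current        # initial deploy: 'current' points at v1
ln -s releases/v2 current.tmp    # stage a symlink to the new release
mv -T current.tmp current        # rename(2) atomically repoints 'current'
cat current/index.php            # -> v2
```

Note the `-T`: a plain `mv` would follow the existing symlink and move `current.tmp` *inside* the old release directory instead of replacing the link.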
It's pretty instant. An in-flight request still finishes with the old version, since the code that's running is already in memory.
I don't think it's very different from changing a proxy to point to a different port.
That's not quite right. Imagine some (horrid) code like:
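A hypothetical sketch (the file name and the delay are invented to illustrate the described failure):

```php
<?php
sleep(10);          // stand-in for a long-running database query
require 'old.php';  // 'old.php' is only read when this line executes;
                    // if a deploy removed it during the sleep, this
                    // request dies with a fatal error
```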
such that there's a significant interval between the request being spawned and it later including another file. The duration of the query is the opportunity for 'old.php' to go away, which would cause a 500 error.

The difference is that you can have 2 ports listening at once and can close the first once it's drained of connections.
There's no fundamentally safe way to upgrade a bucket-of-files PHP app without tooling complex enough to rival another language's deployment.
I don't believe that's how PHP works (at least not anymore). When a request is made, the code is first compiled to opcodes, and only after that's done are the opcodes run. In most production environments these opcodes are even cached, so even if you delete the project it will still run.
In any case, you would have to hit a window of a few milliseconds during this opcache generation to break a single request, and even that might be unlikely thanks to how filesystems read files?
In that example, I'm pretty sure that the 'require' line is compiled to opcodes but not executed until that line is reached. Supporting evidence: https://stackoverflow.com/questions/37880749/when-is-a-php-i...
So if there's a 10 second gap between the start of execution and the 'require' line being reached and evaluated, then any incompatible changes to the file being required within that 10 seconds will cause an error.
That actually makes sense, because the code path could be huge, with large surfaces of unused code.
With opcache this could be solved, so I guess the lesson for me is: deploy like this with opcache on.
Well, now you just have to manage cache invalidation. Piece of cake!
I kid, I kid, but seriously, now you have a different set of issues.
This is how it's done in many deploy tools in the PHP world, with the help of git. I think it works so well that nobody even thinks about how it works.
That's a perfectly reasonable approach, so long as you understand why it's a risky operation and can tolerate the consequences, including customers seeing errors in their browser. If that's OK for your use case, then rock on! If you can't tolerate that, then you have to switch to a more complex upgrade system, like blue-green deploys behind a load balancer or such. In other words, the deployment method of a Rust or Go or Python or Java app.
In a sense that's blue-green deployment, just at the filesystem level? PHP is always run behind a proxy/webserver (mostly nginx nowadays).
But you are right: there is no reason why you couldn't have two instances of the PHP app running and switch between them. For some reason the PHP deployment services I've used seem to use the filesystem approach, and I doubt it's laziness or incompetence.
I'd contend that it's out of ignorance, and I don't mean that in a mean or nasty way. I've heard lots of pushback from PHP devs that it's way easier to update than sites written in other languages are, but I think it's genuinely due to a lack of understanding of why those languages recommend other upgrade processes. Those processes solve real, genuine problems that also affect PHP, but they're dismissed as overkill or enterprisey or overly complicated.
And all that may be true for a trivial website. If you've written a personal project with 10,000 hits per year, YOLO. Go for it. The odds of it affecting one of those users is vanishingly tiny, and so what if it does? But if you're hosting something like a WordPress site for a large company with lots of traffic, it's crucial to understand why "just rsync the files over" is not an acceptable deployment method.
Sorry, but we were not talking about "rsyncing the files over". We are talking about what the services I've used, like Forge or Ploi, do: you deploy the project into a separate folder and then switch a symlink. You can even roll it back.
I have a feeling you want to dunk on the poor dumb PHP developers, but Forge is by the people who created Laravel. I believe they would have put some thought into it. Maybe, just maybe, a small chance of one bad request is not such a bad deal.
It is literally exactly the same issue, just with slightly less of an error window. I don't think those devs are poor and dumb, but I do think it's likely they've been working in environments where production errors are more tolerated than in other environments.
> Maybe just maybe small chance of one bad request is not such a bad deal.
If your company is OK with that, seriously, sincerely, right on! Keep doing this and move on to other problems.
I have thought about it, and you are just pulling my leg.
If you have a very long database query and you update your app in the middle of it using a blue-green load balancer, you get the same production error. It is the same thing, just implemented slightly differently: PHP's characteristics allow this approach, and with different systems you have to use a different strategy.
So yeah, keep feeling good about us PHP devs having bad deployment strategies.
That is… exactly wrong. I encourage you to consider why that would not be the case.
It is not the same issue, due to how opcache works. No one remotely competent runs PHP without opcache in 2025.
The comparison would be towards other languages in its class: Python, Ruby, JavaScript.
Besides the shared nothing architecture mentioned by sibling:
- A more mature community and ecosystem for open source packages e.g. basics like following semver
- One single clear option for package management, which is also by far best in class
- Simply better performance, except maybe compared to JavaScript
While the rest of the options may tick one of the above boxes, none of them ticks all 3.
Maybe it is a good base for vibe coding since there is a lot of code around?
There's a lot of bad code around, and I think the protections of a (good) static type system are likely of particular use in that scenario. I'd be interested in reading a test of that prediction if anyone has done it (but not interested enough to do it myself).
Honestly the author doesn't even make a great case that PHP has improved since 2009. His arguments mostly seemed to be "don't use the old busted way, there's a better way now". But if you have to go out of your way to remember to not use the old busted way, sooner or later you will shoot yourself in the foot. Having good defaults matters, and the author seems to ignore that.
I think you're underestimating how hard it is to shoot yourself in the foot when using the PHP language defaults and the defaults for any modern PHP framework - it's genuinely hard to do.
I still don't think PHP is a good idea for a greenfield project or anything, but they have done a good job of hiding all the footguns.
> I think you're underestimating how hard it is to shoot yourself in the foot when using the PHP language defaults and the defaults for any modern PHP framework - it's genuinely hard to do.
Agreed. I remember happily starting a couple of new PHP projects in the last decade and the frameworks felt like working in any other programming language.