I'm all-in on SSR. The client shouldn't have any state other than the session token, current URL and DOM.
Networks and servers will only get faster. The speed of light is constant, but we aren't even using its full capabilities right now. Hollow-core fiber promises upwards of a 30% reduction in latency for everyone using the internet. There are RF-based solutions that provide some of this promise today. Even with a wild RTT of 500ms, an SSR page rendered in 16ms would feel relatively instantaneous next to any of the mainstream web properties online today if delivered on that connection.
I propose that there is little justification for taking longer than a single 60 Hz frame (~16 ms) to render a client's HTML response on the server. A Zen 5 core can serialize something like 30-40 megabytes of JSON in that timeframe. From the server's perspective, this is all just a really fancy UTF-8 string. You should be measuring this stuff in microseconds, not milliseconds. The transport delay being "high" is not a good excuse to get lazy with CPU time. Using SQLite is the easiest way I've found to get out of millisecond jail; any hosted SQL provider is like a ball & chain when you want to get under 1ms.
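To put a rough number on the "microseconds" claim, here is the kind of thing I mean. This is only a sketch, assuming Node with better-sqlite3 and a local database file; the table and column names are made up:

    // Time the query + render for one page, assuming a local SQLite file.
    // NOTE: real code would HTML-escape title/body before interpolating.
    const Database = require("better-sqlite3");
    const db = new Database("app.db", { readonly: true });
    const listPosts = db.prepare("SELECT title, body FROM posts LIMIT 20");

    function renderPage() {
      const start = process.hrtime.bigint();
      const rows = listPosts.all();
      const html = "<!doctype html><body>" +
        rows.map(r => `<article><h2>${r.title}</h2><p>${r.body}</p></article>`).join("") +
        "</body>";
      const micros = Number(process.hrtime.bigint() - start) / 1000;
      console.log(`query + render: ${micros.toFixed(1)} us`);
      return html;
    }

On a warm page-cache and a prepared statement, that loop is the sort of thing you can reasonably expect to stay well under a millisecond.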
There are even browser standards that can mitigate some of the navigation delay concerns:
https://developer.mozilla.org/en-US/docs/Web/API/Speculation...
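For example, the SSR'd page can carry a speculation rules block asking the browser to prefetch or prerender likely next navigations. A sketch, with the URL pattern and eagerness chosen arbitrarily:

    <script type="speculationrules">
    {
      "prerender": [
        { "where": { "href_matches": "/articles/*" }, "eagerness": "moderate" }
      ]
    }
    </script>

That way the next navigation can already be rendered by the time the user clicks, which hides a lot of the RTT.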
> networks and servers will only get faster
This isn't an argument for SSR. In fact, there's hardly a universal argument for SSR. You're thinking of a specific use case where there's more compute capacity on the server, where logic can't be easily split, etc. There are plenty of examples where client-side rendering is faster.
Rendering logic can be disproportionately complex relative to the data size. Moreover, client resources may actually be larger in aggregate than the server's. If SSR were the only reasonable game in town, we wouldn't have the excitement around WebAssembly.
Also take a look at the local-computation post https://news.ycombinator.com/item?id=44833834
The reality is that you can't know which one is better and you should be able to decide at request time.
If you could simply drop in a library to any of your existing SSR apps that:
- is 50kb (gzipped)
- requires no further changes from you (either now or in the future)
- enables offline/low bandwidth use of your app with automatic state syncing and zero UX degradation
would you do it?
The problem I see with SSR evangelism is that it assumes that compromising that one use case (offline/low bandwidth use of the app) is necessary to achieve developer happiness and a good UX. And in some cases (like this) it goes on to justify that compromise with promises of future network improvements.
The fact is, a low bandwidth requirement will always be a valuable feature, no matter the context. It's especially valuable to people in third-world countries, in remote locations, or being served by Comcast (note I'm being a little sarcastic with that last one).
> - enables offline/low bandwidth use of your app with automatic state syncing and zero UX degradation
> would you do it?
No, because the "automatic state syncing and zero UX degradation" is a "draw the rest of the owl" exercise wherein the specific implementation details are omitted. Everything is domain specific when it comes to sync-based latency hiding techniques. SSR is domain agnostic.
> low bandwidth requirements
If we are looking at this from a purely information-theoretical perspective, the extra 50kb gzipped is starting to feel kind of heavy compared to my ~8kb (plaintext) HTML response. If I am being provided internet via avian species in Africa, I would also prefer the entire webpage be delivered in one simple response body without any dependencies. It is possible to use so little JavaScript and CSS that it makes more sense to inline it. SSR enables this because you can simply use multipart form submissions for all of the interactions. The browser already knows how to do this stuff without any help.
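A sketch of what I mean, with plain Node and a made-up counter. I'm handling an ordinary POST here for brevity, but from the browser's point of view a multipart form works the same way:

    // One self-contained response: inline CSS, no external scripts, and the
    // interaction is a plain form submission the browser already understands.
    const { createServer } = require("node:http");

    let count = 0; // stand-in for whatever state actually lives in SQLite

    createServer((req, res) => {
      if (req.method === "POST") count++; // the "interaction"
      res.writeHead(200, { "content-type": "text/html; charset=utf-8" });
      res.end(`<!doctype html>
    <html><head><style>body{font:16px sans-serif;margin:2rem}</style></head>
    <body>
      <p>Clicked ${count} times.</p>
      <form method="post"><button>Click</button></form>
    </body></html>`);
    }).listen(8080);

One round trip per interaction, nothing else to fetch.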
I just want to point out that your argument is now contradictory.
You're stating that networks and latency will only improve, and that this is a reason to prefer SSR.
You're also stating that 50kb feels too heavy.
But at 8kb of SSR'd plaintext, you're ~6 page loads away from breaking even with the 50kb of content that will be cached locally, and you yourself are arguing that the transport for that 50kb is only getting better.
Basically: you're arguing it's not a problem to duplicate all the data for the layout on every page load because networks are good and getting better. But also arguing that the network isn't good enough to load a local-first layout engine once, even at low multiples of your page size.
So which is it?
---
Entirely separate from the "rest of the owl" argument, with which I agree.
> the extra 50kb gzipped is starting to feel kind of heavy compared to my ~8kb (plaintext) HTML response
I thought we were living in a utopia where high-speed internet was ubiquitous? What's the fuss over 50kb? 5mb should be fine in this fantasy world.
> "automatic state syncing and zero UX degradation" is a "draw the rest of the owl" exercise
That's an assumption you're making, but that doesn't necessarily have to be true. I offered you what amounts to a magic button (drop this script in, done), not a full implementation exercise.
If it really were just a matter of dropping a 50kb script in (nothing else) would you do it? Where's the size cutoff between "no" and "yes" for you?
> Everything is domain specific when it comes to sync-based latency hiding techniques.
Yes and no. To actually add it to your app right now would most likely require domain-specific techniques. But that doesn't imply that a more general technique won't appear in the future, or that an existing technique can't be sufficiently generalized.
> the extra 50kb gzipped is starting to feel kind of heavy
Yeah - but we can reasonably assume it's a one-and-done cached asset that effectively only has to be downloaded once for your app.
And when your avian ISP dies, how do you make requests?
If everything you need - application logic, css, media, data etc... - is cached on your device, you can carry on.
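Roughly what that looks like in a service worker. This is a sketch, and the asset paths are hypothetical:

    // sw.js -- cache the app shell on install, fall back to the cache when the
    // network is unreachable.
    const CACHE = "app-shell-v1";
    const ASSETS = ["/", "/app.js", "/app.css", "/logo.svg"];

    self.addEventListener("install", (event) => {
      event.waitUntil(caches.open(CACHE).then((cache) => cache.addAll(ASSETS)));
    });

    self.addEventListener("fetch", (event) => {
      event.respondWith(
        fetch(event.request).catch(() => caches.match(event.request))
      );
    });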
You're being far too myopic about all of this. There are many different use cases and solutions, all of which have their tradeoffs. Sometimes SSR is appropriate, other times local-first is. You can even do both - SSR the HTML in a service worker with either isomorphic JavaScript or WASM (see the sketch below).
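A sketch of the service-worker flavor, assuming the server's template function is bundled into the worker as /render.js (the names here are hypothetical):

    // sw.js -- if the network is down, synthesize the HTML response locally
    // using the same renderPage() the server uses (the "isomorphic" part).
    importScripts("/render.js"); // hypothetical shared renderer

    self.addEventListener("fetch", (event) => {
      event.respondWith(
        fetch(event.request).catch(async () => {
          const cached = await caches.match("/state.json"); // last synced state
          const state = cached ? await cached.json() : {};
          return new Response(renderPage(state), {
            headers: { "content-type": "text/html; charset=utf-8" },
          });
        })
      );
    });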
We can all agree, though, that React and its derivatives are never appropriate.
RightToolForTheRightJob!
Would you try to write/work on a collaborative text document (i.e. Google Docs or Sheets) by editing a paragraph/sentence that's server-side rendered, and hope nobody changes the paragraph mid-work, because the developers insisted on SSR?
These kinds of tools (Docs, Sheets, Figma, Linear, etc.) work well because individual changes have little impact, but conflict resolution is better avoided altogether: users notice that someone else is working on the same section and hopefully just get realtime updates.
Then again, hotel booking or similar has no need for something like that.
Then there's been middle ground, like an enterprise logistics app that had some badly YOLO'd syncing: it kinda needed some of it, but there was no upfront planning, and it took a long time to retrofit a sane design since there were so many domain- and system-specific things lurking with surprises.
This is called happy-path engineering, and it's really frustrating for people who don't live on the happy path.
Latency is additive, so all that copper coax and mux/demux gear sitting between a sizeable chunk of Americans and the rest of the internet means you're looking at a minimum round-trip latency of 30ms even if the server is in the same city. Most users are also on Wi-Fi, which adds an additional mux/demux + rebroadcast step on top of that. And most people do not have the latest CPU. Not to mention mobile users over LTE.
Sorry, but this is 100% a case of privileged developers thinking their compute infrastructure situation generalizes: it doesn't, and it is a mistake to take shortcuts that assume it does.
uh have you ever tried pinging a server in your same city? It's usually substantially <30ms. I'm currently staying at a really shitty hotel that has 5mbps wifi, not to mention I'm surrounded by other rooms, and I can still ping 8.8.8.8 in 20ms. From my home internet, which is /not/ fiber, it's 10ms.
If you've ever used speedtest.net, you've almost certainly been benchmarking against a server in your own city, or at least as close as your ISP's routing will allow. Ookla servers are often specifically optimized for, and sometimes hosted/owned by, ISPs to give the best possible speeds. Google's DNS servers use anycast magic to get similar results. Basically no service you actually use, outside of very, very large providers, is likely to be anywhere near you, and you won't get that kind of latency even with a very good ISP and LAN.
10ms is a best case for DOCSIS 3.0/3.1; it means you have near-optimal routing and infrastructure between you and the node, or you're using some other transport like ethernet that is then fed by fiber. I currently get 24ms to my local Ookla host a couple of miles away over a wired connection with a recent DOCSIS 3.1 modem. Hotel internet is likely to be backed by business fiber. They're likely throttling you.
I worked for an ISP for several years, there's a huge range of service quality even within the same provider, zipcode, and even same location depending on time of day.
If I'm able to ping 8.8.8.8 in 10ms, doesn't that mean my RTT to get out of the DOCSIS part of the network is 10ms, and that any latency beyond that will be the same regardless of DOCSIS vs. fiber?
Yes, the majority of that latency is likely coming from DOCSIS. You're likely in a major city, large enough that Google has an anycast DNS server nearby.
latency != throughput
The use case for SSR, now and in the future, is the initial page load, especially on mobile.
After that, with competent engineering, everything should be faster on the client, since it only needs state updates, not a complete re-render.
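i.e. after the SSR'd page arrives, the client only needs something on the order of this (a sketch; the endpoint and element ids are made up):

    // Patch only what changed instead of re-rendering the whole page.
    async function refresh() {
      const res = await fetch("/api/state");  // hypothetical endpoint
      const state = await res.json();
      document.getElementById("unread-count").textContent = String(state.unread);
      document.getElementById("status").textContent = state.status;
    }
    setInterval(refresh, 5000);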
If you don't have competent engineering, SSR isn't going to save you