A web request to a data center, even with a very fast backend server, will struggle to beat 8ms (a 120Hz display) or even 16ms (a 60Hz display), the budget for painting a navigation on the next frame. You need to have the data local to the device, and ideally already in memory, to hit 8ms navigation.
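For reference, those budgets are just the reciprocal of the display's refresh rate:

```ts
// Frame budget in milliseconds for a given display refresh rate.
const frameBudgetMs = (hz: number) => 1000 / hz;

console.log(frameBudgetMs(120)); // ~8.3ms per frame on a 120Hz display
console.log(frameBudgetMs(60));  // ~16.7ms per frame on a 60Hz display
```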

This is not the point, or rather: other numbers matter more than yours.

In 2005 we wrote entire games for browsers without any frontend framework (jQuery wasn't invented yet) and managed to generate responses in under 80ms in PHP. Most users had their first bytes within 200ms, and it felt instant to them, because browsers are incredibly fast when treated right.

So the Internet was indeed much faster then than it is now. Just look at GitHub. They used to be fast. Now they've rewritten their frontend in React and it feels sluggish and slow.

> Now they've rewritten their frontend in React and it feels sluggish and slow.

I find this is a common sentiment, but is there any evidence that React itself is actually the culprit behind GH's supposed slowdown? GH has updated its architecture many times over and its scale has increased by orders of magnitude; it is quite literally serving over a billion git repos.

Not to mention that the implementation details of any React application can make or break its performance.

Modern web tech often becomes a scapegoat, but the web today enables experiences that were simply impossible in the pre-framework era. Whatever frustrations we have with GitHub’s UI, they don’t automatically indict the tools it’s built with.

It's more of a "holding it wrong" situation with the datastores used alongside React, rather than with React itself: updated data gets accessed too high in the tree, causing large chunks of the page to re-render unnecessarily.
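A minimal sketch of the difference, with made-up component names and plain React context standing in for whatever datastore is actually involved:

```tsx
import React, { createContext, useContext } from "react";

// Hypothetical example: a value that updates frequently (e.g. a live status).
const StatusContext = createContext("idle");

// Anti-pattern: the page root subscribes to the status, so every change
// re-renders Header, FileTree and everything else below it.
function PageRootTooHigh() {
  const status = useContext(StatusContext);
  return (
    <main>
      <Header />
      <FileTree />
      <span>{status}</span>
    </main>
  );
}

// Cheaper: only the leaf that displays the status subscribes to it, so
// Header and FileTree are untouched when the status changes.
function StatusBadge() {
  const status = useContext(StatusContext);
  return <span>{status}</span>;
}

function PageRoot() {
  return (
    <main>
      <Header />
      <FileTree />
      <StatusBadge />
    </main>
  );
}

// Stand-ins so the sketch is self-contained.
function Header() { return <header>…</header>; }
function FileTree() { return <nav>…</nav>; }
```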

This was actually the recommended way to do it for years: the atom/molecule/organism/section/page style of organizing React components intentionally moves data access up the tree, into organisms and above. I don't know what the current recommendations are.

I don't see how GH's backend serving a billion repos would affect the speed of their frontend javascript. React is well known to be slow, but if you need numbers, you can look at the js-framework-benchmark and see how many React results are orange and red.

https://github.com/krausest/js-framework-benchmark

Sure, React has overhead. No one disputes that. But pointing to a few red squares on a synthetic benchmark doesn’t explain the actual user experience on GitHub today. Their entire stack has evolved, and any number of architectural choices along the way could impact perceived performance.

Used properly, React’s overhead isn’t significant enough on its own to cause noticeable latency.

> Now they've rewritten their frontend in React and it feels sluggish and slow.

And they decided to drop legacy features such as <a> tags, breaking browser navigation in their new code viewer. Right-clicking a file to open it in a new tab doesn't work.
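The frustrating part is that keeping those features is cheap even with client-side routing: render real anchors and only intercept unmodified left clicks. A sketch with a hypothetical navigate() helper (not GitHub's actual code):

```tsx
import React from "react";

// Hypothetical client-side navigation helper.
declare function navigate(href: string): void;

// A real <a href> keeps "open in new tab", middle-click and copy-link
// working, because only plain left clicks are hijacked.
function AppLink({ href, children }: { href: string; children: React.ReactNode }) {
  const onClick = (e: React.MouseEvent<HTMLAnchorElement>) => {
    const modified = e.metaKey || e.ctrlKey || e.shiftKey || e.altKey;
    if (e.button !== 0 || modified) return; // let the browser handle it
    e.preventDefault();
    navigate(href); // client-side route change for plain left clicks only
  };
  return (
    <a href={href} onClick={onClick}>
      {children}
    </a>
  );
}
```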

Unless you are running some really complicated globally distributed backend your roundtrip will always be higher than 80ms for all users outside your immediate geographical area. And the techniques to "fix" this usually only mitigate the problem in read-scenarios.

The techniques Linear uses are not so much about backend performance and are applicable to any client-server setup, really. It's not a JS/web-specific problem.

My take is that a performant backend gets you so much runway that you can reduce a lot of complexity in the frontend. And yes, sometimes that means having globally distributed databases.

But the industry is going the other way: building frontends that try to hide slow backends, and in doing so handling so much state (and visual fluff) that they get fatter and slower every day.

This is an absolutely bonkers tradeoff to me. Globally distributed databases are either 1. a very complex infrastructure problem (especially if you need multiple writable databases), or 2. a lock-in to a vendor's proprietary solution (like Cloudflare D1).

All to avoid writing a bit of JavaScript.

> Unless you are running some really complicated globally distributed backend your roundtrip will always be higher than 80ms for all users outside your immediate geographical area.

The bottleneck is not the roundtrip time. It is the bloated and inefficient frontend frameworks, and the insane architectures built around them.

Here's the creator of Datastar demonstrating a WebGL app being updated at 144FPS from the server: https://www.youtube.com/watch?v=0K71AyAF6E4&t=848

This is not magic. It's using standard web technologies (SSE), and a fast and efficient event processing system (NATS), all in a fraction of the size and complexity of modern web frameworks and stacks.
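Stripped down to its core, the idea is just an event stream from the server and a client that patches fragments into the DOM as they arrive. A toy sketch of that (the endpoint and payload shape are made up; Datastar's real wire format differs):

```ts
// The server pushes updates over SSE; the client patches each one into the DOM.
const source = new EventSource("/updates"); // hypothetical endpoint

source.onmessage = (event) => {
  // Assume each message carries { target: "#some-selector", html: "<span>…</span>" }.
  const { target, html } = JSON.parse(event.data) as { target: string; html: string };
  const el = document.querySelector(target);
  if (el) el.innerHTML = html; // swap just the affected fragment, no client framework
};
```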

Sure, we can say that this is an ideal scenario, that the server is geographically close, and that we can't escape the laws of physics. But there's a world of difference between a web UI updating even at 200ms and the abysmal state of most modern web apps. The UX can be vastly improved by addressing the source of the bottleneck, starting by rethinking how web apps are built and deployed from first principles, which is what Datastar does.

To see this first hand try this website if you're in Europe (maybe it's also fast in the US, not sure):

https://www.jpro.one/?

The entire thing is a JavaFX app (i.e. desktop app), streaming DOM diffs to the browser to render its UI. Every click is processed server side (scrolling is client side). Yet it's actually one of the faster websites out there, at least for me. It looks and feels like a really fast and modern website, and the only time you know it's not the same thing is if you go offline or have bad connectivity.

If you know how to use your database efficiently, e.g. with pipelining and stored procedures plus DB-enforced security, you can even let users run the whole GUI locally if they want to, and just have it do the underlying queries over the internet. So you get the best of both worlds.
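As a rough sketch of what that client looks like (hypothetical host, credentials and stored procedure; it assumes the database enforces permissions through per-user roles and row-level security):

```ts
import { Pool } from "pg"; // node-postgres

// The locally running GUI talks to the database directly over the network.
// Security lives in the database: each user connects with their own role,
// and row-level security plus stored procedures decide what they may touch.
const pool = new Pool({
  host: "db.example.com",            // hypothetical host
  user: process.env.DB_USER,         // per-user credentials, not a shared app login
  password: process.env.DB_PASSWORD,
  database: "app",
  ssl: true,
});

// Business logic stays in a stored procedure; the client just calls it.
// One round trip instead of a chain of ORM queries (the pipelining point).
export async function closeTicket(ticketId: number, resolution: string) {
  const { rows } = await pool.query(
    "SELECT * FROM close_ticket($1, $2)", // hypothetical DB function
    [ticketId, resolution],
  );
  return rows[0];
}
```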

There was a discussion yesterday on HN about the DOM and how it'd be possible to do better, but the blog post didn't propose anything concrete beyond simplifying CSS and splitting layout out from styling. The nice thing about JavaFX is that it's basically that post-DOM vision. You get a "DOM" of scene graph nodes that correspond to real UI elements you care about instead of a pile of divs; it's reactive in the Vue sense (you can bind any attribute to a lazily computed reactive expression or collection); and it has CSS, but a simplified version that fixes a lot of the problems with web CSS, and so on.

At least for me this site is completely broken on mobile. I'm not saying it's not possible to write sites for mobile using this tech... But it's not a great advert at all.

I haven't tried it on mobile. That could be, but my point was limited to latency and programming model.

You can use JavaFX to make mobile apps. So it's likely just that the authors haven't bothered to do a mobile friendly version.

Hardly a surprise, given that:

> The entire thing is a JavaFX app (i.e. desktop app)

Besides, this discussion is not about whether or not a site is mobile-friendly.

> Every click is processed server side

On this site, every mouse move and scroll is sent to the server. This is an incredibly chatty site--like, way more than it needs to be to accomplish this. Check the websocket messages in Dev Tools and wave the mouse around. I suspect that can be improved to avoid constantly transmitting data while the user is reading. If/when mobile is supported, this behavior will be murder for battery life.
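For what it's worth, that kind of chattiness is usually easy to cut down by throttling pointer events client-side before they ever reach the socket. A generic sketch (not jpro's actual code):

```ts
// Forward the pointer position at most every 100ms instead of on every
// mousemove, and send nothing at all while the pointer is still.
const socket = new WebSocket("wss://example.com/ui"); // hypothetical endpoint

let lastSent = 0;
document.addEventListener("mousemove", (e) => {
  const now = performance.now();
  if (now - lastSent < 100) return; // drop intermediate moves
  lastSent = now;
  socket.send(JSON.stringify({ type: "pointer", x: e.clientX, y: e.clientY }));
});
```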

>the only time you know it's not the same thing is if you go offline or have bad connectivity.

So, like most of the non-first world? Hell, I'm in a smaller town/village next to my capital city for a month and internet connection is unreliable.

Having said that, the website was usable for me - I wouldn't say it's noticeably fast, but it was not slow either.

I feel like it depends a lot on what kind of website you're using. Note taking app? Definitely should work offline. CRUD interface? You already need to be constantly online, since every operation needs to talk to the server.

I'm not impressed. On mobile, the docs are completely broken and unreadable. Visiting a different docs subpage breaks the back button.

Firefox mobile seems to think the entire page is a link. This means I can't highlight text for instance.

Clicking on things feels sluggish. The responses are fast, but still perceptible. Do we really need a delay for opening a hamburger menu?

> Unless you are running some really complicated globally distributed backend your roundtrip will always be higher than 80ms for all users outside your immediate geographical area.

Many of us don't have to worry about this. My entire country is within 25ms RTT of an in-country server. I can include a dozen more countries within an 80ms RTT. Lots of businesses focus just on their country and that's profitable enough, so for them they never have to think about higher RTTs.

If you put your server e.g. in Czechia you can provide ~20ms latency for the whole of Europe :)

Any providers in Czechia you'd recommend? It's not a market I know.

React isn't the problem. You can write a very fast interface in React. It's (usually) too many calls to the backend that slow everything to a crawl.
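The usual shape of that problem is sequential request waterfalls; fetching independent data in parallel removes most of it. A sketch with hypothetical endpoints:

```ts
// Waterfall: each request waits for the previous one, so three 80ms
// round trips cost ~240ms before anything can render.
async function loadSlow(userId: string) {
  const user = await fetch(`/api/users/${userId}`).then((r) => r.json());
  const repos = await fetch(`/api/users/${userId}/repos`).then((r) => r.json());
  const notifications = await fetch("/api/notifications").then((r) => r.json());
  return { user, repos, notifications };
}

// Parallel: the independent requests overlap, so the total cost is
// roughly one round trip instead of three.
async function loadFast(userId: string) {
  const [user, repos, notifications] = await Promise.all([
    fetch(`/api/users/${userId}`).then((r) => r.json()),
    fetch(`/api/users/${userId}/repos`).then((r) => r.json()),
    fetch("/api/notifications").then((r) => r.json()),
  ]);
  return { user, repos, notifications };
}
```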

Actually, if you live near a city, the edge network is a 6ms RTT ping away, i.e. 3ms in each direction. So if, for example, a virtual-scroll frontend is windowing over a server-side array retained in memory, you can get there and back over a websocket, inclusive of the windowing, streaming records in and out of the DOM at the edges of the viewport, and paint the frame, all within the 8ms 120Hz frame budget, with the device idle and only the visible result set in client memory. That's 120Hz network. Even if you don't live near a city, you can probably still hit 60Hz. It is not 2005 anymore. We have massively multiplayer video games and competitive multiplayer shooters, and we can render them in the cloud now. Linear is office software, not e-sports; we're not running it on the subway or in Africa. And AI happens in the cloud; Linear's website lead text is about agents.
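To make the windowing concrete, here's a rough sketch of the idea (hypothetical element ids and message shapes, not any product's actual protocol):

```ts
// The server retains the full result set in memory; the client only ever
// asks for the rows around the viewport and keeps nothing else locally.
const socket = new WebSocket("wss://example.com/rows");   // hypothetical
const container = document.querySelector("#issue-list")!; // hypothetical list element

const ROW_HEIGHT = 32; // px per row
const OVERSCAN = 20;   // extra rows requested above/below the viewport

function requestWindow(scrollTop: number, viewportHeight: number) {
  const first = Math.max(0, Math.floor(scrollTop / ROW_HEIGHT) - OVERSCAN);
  const count = Math.ceil(viewportHeight / ROW_HEIGHT) + 2 * OVERSCAN;
  socket.send(JSON.stringify({ type: "window", offset: first, limit: count }));
}

// Re-request the window as the user scrolls (throttling omitted for brevity).
container.addEventListener("scroll", () =>
  requestWindow(container.scrollTop, container.clientHeight),
);

socket.onmessage = (event) => {
  // Assume the server answers with { offset, rows: string[] }.
  const { offset, rows } = JSON.parse(event.data) as { offset: number; rows: string[] };
  renderRows(offset, rows); // swap rows in/out of the DOM at the viewport edges
};

// Hypothetical renderer; only the visible result set lives in client memory.
declare function renderRows(offset: number, rows: string[]): void;
```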

Those are theoretical numbers for a small elite. Real world numbers for most of the planet are orders of magnitude worse.

Those are my actual numbers from my house in the Philadelphia suburbs right now, 80 miles away from the EWR data center outside NYC. Feel free to double them; you're still inside the 60Hz frame budget with better-than-e-sports latency.

edit: I am 80 miles from EWR not 200

Like they said, for a small elite. If you don't see yourself as such, adjust your view.

what is your ping to fly.io right now?

90ms for me. My fiber connection is excellent and there is no jitter--fly.io's nearest POP is just far away. You mentioned game streaming so I'll mention that GeForce Now's nearest data center is 30ms away (which is actually fine). Who is getting 6ms RTT to a data center from their house, even in the USA?

More relevantly... who wants to architect a web app to have tight latency requirements like this, when you could simply not do that? GeForce Now does it because there's no other way. As a web developer you have options.

Who said anything about designing for tight latency requirements? My argument is that, for Linear's market - programmers and tech workers either in an office or working remotely near a city - the latency requirements are not tight at all relative to the baseline capacity. We live on Zoom! I have little patience for someone whose 400ms jitter is breaking up the Zoom call; I had better ping than that on AOL in 1999. If you want a tech career, you need good internet, and AI has just cemented this. I have cross-Atlantic Zoom calls with my team in Europe every day without perceptible lag or latency. We laugh, we joke, we crosstalk, all with realtime body language. If SF utilities have decayed to the point where you can't get fast internet living 20 miles from the backbone, then the jobs are going overseas. Eastern Europe has lower ping to Philly than jitter guy has to the edge. And people in this thread are lecturing me about privilege!

[deleted]

Mine's 167-481ms (high jitter). It's the best internet I can get right now, a few suburbs south of San Francisco. Comcast was okayish, lower mean latency, but it had enough other problems that T-Mobile home internet was a small improvement.

Update: it's Friday evening now and my RTT to EWR is 8.5ms (4.2ms each way), up from 6-point-something this morning.

From Philadelphia suburbs to my actual Fly app in:

EWR 8.5ms (NYC)

SJC 75ms (California)

CDG 86ms (France, cross atlantic)

GRU 126.2ms (Brazil)

HKG 225.3ms (Hong Kong)

Now try from Idaho or Botswana

I can't help but feel this is missing the point. Next-refresh click latency is a fantastic goal ideally; we're just not even close to it.

For me, on the web today, the click feedback for a large website like YouTube is 2 seconds for the first change and 4 seconds for content display. 4000 milliseconds. I'm not even on some bad connection in Africa. This is a gigabit connection with 12ms of latency according to fast.com.

If you can bring that down to even 200ms, that'll feel comparatively instantaneous for me. When the whole internet feels like that, we can talk about taking it to 16ms.

What does web request latency have to do with it? News articles or simple forms take 5 seconds to load. Why? This is not bounded by latency.