> For the uninitiated, Linear is a project management tool that feels impossibly fast. Click an issue, it opens instantly. Update a status and watch in a second browser, it updates almost as fast as the source. No loading states, no page refreshes - just instant interactions.

It says a lot about how garbage the web has become that a low-latency click action qualifies as "impossibly fast". This is ridiculous.

Hacker News comment sections are the only part of the internet that still feels "impossibly fast" to me. Even on Android, thousands of comments can scroll as fast as the OS permits, and the DOM is so simple that I've reopened day-old tabs to discover the page is still loaded. Even projects like Mastodon and Lemmy, which aren't beholden to modern web standards, have shifted to significant client-side scripting that lacks the finesse to appear performant.

The modern web browser trick of "you haven't looked at this tab in an hour, so we killed/unloaded it" is infuriating.

To be fair... Lots of people just never close their tabs. So there's very real resource limitations. I've seen my partner's phone with a few hundred tabs open.

I would like this feature if I had more control over it. The worst part is clicking a tab that was unloaded and having it fire off a new (fresh) web request when I don't want it to.

In Firefox it's possible to disable it: https://firefox-source-docs.mozilla.org/browser/tabunloader/ . Enabled is probably the reasonable default for it, though.

Particularly if you have maybe 40 tabs open and 128 GB of RAM.

Back in 2018 I worked for a client that required we use Jira. It was so slow that the project manager set everything up in Excel during our planning meetings. After the meeting she would manually transfer it to Jira. She spent most of her time doing this. Each click in the interface took multiple seconds to respond, so it was impossible to get into a flow.

Hm. While I'm not even remotely excited by Jira (or any other PM software), I've never noticed it being that bad. Annoying? Absolutely! But not that painfully slow.

Were some extras installed? Or is this one of those tools that needs a highly performant network?

The problem with Jira (and other tools) is it inevitably gets too many customizations: extra fields, plugins, mandatory workflows, etc. Instead of becoming a tool to manage work, it starts getting in the way of real work and becomes work itself.

I've seen on-prem Jira at large companies get that slow. I'm not sure if it's the plugins or just the company being stingy on hardware.

Yeah it’s probably both. Underfunded IT department, probably one or two people who aren’t allowed to say no.

I can easily believe either, but I am still curious what the failure mode(s) is (/are).

Underconfigured hardware and neglected old installations are the ones I've encountered.

Large numbers of custom workflows and rules can do it too, but most of what I've seen has been the former.

> I've never noticed it being that bad. Annoying? Absolutely! But not that painfully slow.

I have only seen a few self-hosted Jira instances, but all of those were mind-numbingly slow.

Jira Cloud, on the other hand, is faster now than it was in 2018 from what I remember. I still call it painful any time I am trying to be quick about something, though most of the time it is only annoying.

It is faster than it was back then - I've been using it for 10+ years. Hating every moment of it. But it is definitely better than it was.

At this point I think I would try to automate this pointless time sink with a script and the Jira API.

100%. Their API isn't even bad. I made a script to pull lots of statistics and stuff from Jira.
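Something like this works against the Jira Cloud REST search endpoint - a rough sketch, untested, and the JQL, field names, and environment variables below are just placeholders:

```typescript
// Sketch: count issues per assignee via Jira Cloud's REST search endpoint.
// JIRA_BASE, the credentials, and the JQL are placeholders, not anyone's real setup.
const JIRA_BASE = "https://your-org.atlassian.net";
const auth = Buffer.from(
  `${process.env.JIRA_EMAIL}:${process.env.JIRA_TOKEN}`,
).toString("base64");

async function searchIssues(jql: string, startAt = 0) {
  const res = await fetch(
    `${JIRA_BASE}/rest/api/2/search?jql=${encodeURIComponent(jql)}&startAt=${startAt}&maxResults=100`,
    { headers: { Authorization: `Basic ${auth}`, Accept: "application/json" } },
  );
  if (!res.ok) throw new Error(`Jira API error: ${res.status}`);
  return res.json() as Promise<{ issues: any[]; total: number }>;
}

async function issuesPerAssignee(jql: string) {
  const counts: Record<string, number> = {};
  let startAt = 0;
  while (true) {
    const page = await searchIssues(jql, startAt);
    for (const issue of page.issues) {
      const who = issue.fields.assignee?.displayName ?? "Unassigned";
      counts[who] = (counts[who] ?? 0) + 1;
    }
    startAt += page.issues.length;
    if (page.issues.length === 0 || startAt >= page.total) break; // done paginating
  }
  return counts;
}

issuesPerAssignee("project = FOO AND resolved >= -30d").then(console.table);
```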

Looking at software development today, it's as if the pioneers failed to pass the torch on to the next generation of developers.

While I see strict safety/reliability/maintainability concerns as a net positive for the ecosystem, I also find that we are dragged down by deprecated concepts at every step of our way.

There's an ever-growing disconnect. On one side we have what hardware offers for achieving top performance, be it specialized instruction sets or a completely different type of chip, such as TPUs and the like. On the other side live the denizens of the peak of software architecture, to whom all of that sounds like wizard talk. Time and time again, what is lauded as convention over configuration ironically becomes the very maintenance nightmare it tries to solve, because those conventions come with configuration for systems that do not actually exist. All the while, these conventions breed a generation of people who are not capable of understanding the underlying contracts and constraints within their systems, myself included. It became clear that, for example, there isn't much sense in learning a SQL engine's specifics when your job forces you to use Hibernate, which puts a lot of intellectual strain into following OOP, a movement characterized by deliberately departing from performance in favor of being more intuitive, at least in theory.

As limited as my years of experience are, I can't help but feel complacent in the status quo unless I take deliberate action to continuously deepen my knowledge and work on my social skills, to gain whatever agency and proficiency I can get my hands on.

People forget how hostile and small the old Internet felt at times.

Developers of the past weren't afraid to tell a noob (remember that term?) to go read a few books before joining the adults at the table.

Nowadays it seems like devs have swung the other way and are much friendlier to newbs (remember that distinction marking a shift?).

Stockholm syndrome

A web request to a data center, even with a very fast backend server, will struggle to beat 8ms (120Hz display) or even 16ms (60Hz display), the budget for painting a navigation on the next frame. You need to have the data local to the device, and ideally already in memory, to hit 8ms navigation.

This is not the point; or rather, other numbers matter more than yours.

In 2005 we wrote entire games for browsers without any frontend framework (jQuery wasn't invented yet) and managed to generate responses in under 80 ms in PHP. Most users had their first bytes in 200 ms and it felt instant to them, because browsers are incredibly fast when treated right.

So the Internet was indeed much faster then, as opposed to now. Just look at GitHub. They used to be fast. Now they've rewritten their frontend in React and it feels sluggish and slow.

> Now they've rewritten their frontend in React and it feels sluggish and slow.

I find this is a common sentiment, but is there any evidence that React itself is actually the culprit of GH's supposed slowdown? GH has updated their architecture many times over and their scale has increased by orders of magnitude, quite literally serving up over a billion git repos.

Not to mention that the implementation details of any React application can make or break its performance.

Modern web tech often becomes a scapegoat, but the web today enables experiences that were simply impossible in the pre-framework era. Whatever frustrations we have with GitHub’s UI, they don’t automatically indict the tools it’s built with.

It's more of a "holding it wrong" situation with the datastores used with React, rather than directly with React itself, with updated data being accessed too high in the tree and causing large chunks of the page to be unnecessarily rerendered.

This was actually the recommended way to do it for years, with the atom/molecule/organism/section/page style of organizing React components intentionally moving data access up the tree into organisms and higher. I don't know what the current recommendations are.
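To make the difference concrete, here's a minimal sketch using zustand as a stand-in for whatever store is actually involved (the issue shape and store are made up): subscribing to the whole collection near the top of the tree re-renders everything on any change, while a narrow selector keeps the update local to one row.

```tsx
import { create } from "zustand";

// Hypothetical issue store, purely for illustration.
type Issue = { id: string; title: string; status: string };
type Store = {
  issues: Record<string, Issue>;
  setStatus: (id: string, status: string) => void;
};

const useIssueStore = create<Store>((set) => ({
  issues: {},
  setStatus: (id, status) =>
    set((s) => ({ issues: { ...s.issues, [id]: { ...s.issues[id], status } } })),
}));

// Data access "too high": the selector returns a new object on every update,
// so the whole board (and every row under it) re-renders when any issue changes.
function BoardTooHigh() {
  const issues = useIssueStore((s) => s.issues);
  return <ul>{Object.values(issues).map((i) => <Row key={i.id} issue={i} />)}</ul>;
}

function Row({ issue }: { issue: Issue }) {
  return <li>{issue.title} ({issue.status})</li>;
}

// Narrow selector: only the row whose issue actually changed re-renders.
function RowById({ id }: { id: string }) {
  const issue = useIssueStore((s) => s.issues[id]);
  return <li>{issue.title} ({issue.status})</li>;
}
```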

I don't see how GH's backend serving a billion repos would affect the speed of their frontend javascript. React is well known to be slow, but if you need numbers, you can look at the js-framework-benchmark and see how many React results are orange and red.

https://github.com/krausest/js-framework-benchmark

Sure, React has overhead. No one disputes that. But pointing to a few red squares on a synthetic benchmark doesn’t explain the actual user experience on GitHub today. Their entire stack has evolved, and any number of architectural choices along the way could impact perceived performance.

Used properly, React’s overhead isn’t significant enough on its own to cause noticeable latency.

> Now they've rewritten their frontend in React and it feels sluggish and slow.

And decided to drop legacy features such as <a> tags and broke browser navigation in their new code viewer. Right click on a file to open in a new tab doesn’t work.

Unless you are running some really complicated globally distributed backend your roundtrip will always be higher than 80ms for all users outside your immediate geographical area. And the techniques to "fix" this usually only mitigate the problem in read-scenarios.

The techniques Linear uses are not so much about backend performance and can be applicable for any client-server setup really. Not a JS/web specific problem.

My take is that a performant backend gives you so much runway that you can reduce a lot of complexity in the frontend. And yes, sometimes that means having globally distributed databases.

But the industry is going the other way: building frontends that try to hide slow backends, and in doing so handling so much state (and visual fluff) that they get fatter and slower every day.

This is an absolutely bonkers tradeoff to me. Globally distributed databases are either 1. a very complex infrastructure problem (especially if you need multiple writable databases), or 2. lock you into a vendor's proprietary solution (like Cloudflare D1).

All to avoid writing a bit of JavaScript.

> Unless you are running some really complicated globally distributed backend your roundtrip will always be higher than 80ms for all users outside your immediate geographical area.

The bottleneck is not the roundtrip time. It is the bloated and inefficient frontend frameworks, and the insane architectures built around them.

Here's the creator of Datastar demonstrating a WebGL app being updated at 144FPS from the server: https://www.youtube.com/watch?v=0K71AyAF6E4&t=848

This is not magic. It's using standard web technologies (SSE), and a fast and efficient event processing system (NATS), all in a fraction of the size and complexity of modern web frameworks and stacks.

Sure, we can say that this is an ideal scenario, that the server is geographically close and that we can't escape the rules of physics, but there's a world of difference between a web UI updating at even 200ms, and the abysmal state of most modern web apps. The UX can be vastly improved by addressing the source of the bottleneck, starting by rethinking how web apps are built and deployed from first principles, which is what Datastar does.
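For what it's worth, the browser side of that kind of setup is just the standard EventSource API. A minimal sketch, with the endpoint, event name, and payload shape all made up (this is not Datastar's actual protocol):

```typescript
// Server-sent events driving targeted DOM updates.
// "/updates", the "patch" event, and the { selector, html } payload are placeholders.
const source = new EventSource("/updates");

source.addEventListener("patch", (e) => {
  const { selector, html } = JSON.parse((e as MessageEvent).data);
  const target = document.querySelector(selector);
  if (target) target.innerHTML = html; // swap only the affected fragment
});

source.onerror = () => {
  // EventSource reconnects automatically; this is just for visibility.
  console.warn("SSE connection lost, retrying...");
};
```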

To see this first hand try this website if you're in Europe (maybe it's also fast in the US, not sure):

https://www.jpro.one/?

The entire thing is a JavaFX app (i.e. desktop app), streaming DOM diffs to the browser to render its UI. Every click is processed server side (scrolling is client side). Yet it's actually one of the faster websites out there, at least for me. It looks and feels like a really fast and modern website, and the only time you know it's not the same thing is if you go offline or have bad connectivity.

If you have enough knowledge to efficiently use your database, like by using pipelining and stored procedures with DB enforced security, you can even let users run the whole GUI locally if they want to, and just have it do the underlying queries over the internet. So you get the best of both worlds.

There was a discussion yesterday on HN about the DOM and how it'd be possible to do better, but the blog post didn't propose anything concrete beyond simplifying and splitting layout out from styling in CSS. The nice thing about JavaFX is it's basically that post-DOM vision. You get a "DOM" of scene graph nodes that correspond to real UI elements you care about instead of a pile of divs, it's reactive in the Vue sense (you can bind any attribute to a lazily computed reactive expression or collection), it has CSS but a simplified version that fixes a lot of the problems with web CSS and so on and so forth.

At least for me this site is completely broken on mobile. I'm not saying it's not possible to write sites for mobile using this tech... But it's not a great advert at all.

I haven't tried it on mobile. That could be, but my point was limited to latency and programming model.

You can use JavaFX to make mobile apps. So it's likely just that the authors haven't bothered to do a mobile friendly version.

Hardly a surprise, given that:

> The entire thing is a JavaFX app (i.e. desktop app)

Besides, this discussion is not about whether or not a site is mobile-friendly.

> Every click is processed server side

On this site, every mouse move and scroll is sent to the server. This is an incredibly chatty site--like, way more than it needs to be to accomplish this. Check the websocket messages in Dev Tools and wave the mouse around. I suspect that can be improved to avoid constantly transmitting data while the user is reading. If/when mobile is supported, this behavior will be murder for battery life.
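If the authors wanted to cut the chatter, the usual fix is to coalesce input events on the client before they ever hit the socket. A rough sketch of what that could look like (the endpoint and message shape are assumptions, not what this site actually does):

```typescript
// Coalesce mousemove events and send at most one pointer update per animation frame.
const socket = new WebSocket("wss://example.com/ui"); // placeholder endpoint

let pending: { x: number; y: number } | null = null;
let scheduled = false;

document.addEventListener("mousemove", (e) => {
  pending = { x: e.clientX, y: e.clientY };
  if (scheduled) return;
  scheduled = true;
  requestAnimationFrame(() => {
    if (pending && socket.readyState === WebSocket.OPEN) {
      socket.send(JSON.stringify({ type: "pointer", ...pending }));
    }
    pending = null;
    scheduled = false;
  });
});
```

Even a per-frame cap still sends a lot while someone is just reading; a longer idle threshold would be kinder to mobile radios.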

> the only time you know it's not the same thing is if you go offline or have bad connectivity.

So, like most of the non-first world? Hell, I'm in a smaller town/village next to my capital city for a month and internet connection is unreliable.

Having said that, the website was usable for me - I wouldn't say it's noticeably fast, but it was not slow either.

I feel like it depends a lot on what kind of website you're using. Note taking app? Definitely should work offline. CRUD interface? You already need to be constantly online, since every operation needs to talk to the server.

I'm not impressed. On mobile, the docs are completely broken and unreadable. Visiting a different docs subpage breaks the back button.

Firefox mobile seems to think the entire page is a link. This means I can't highlight text for instance.

Clicking on things feels sluggish. The responses are fast, but still perceptible. Do we really need a delay for opening a hamburger menu?

> Unless you are running some really complicated globally distributed backend your roundtrip will always be higher than 80ms for all users outside your immediate geographical area.

Many of us don't have to worry about this. My entire country is within 25ms RTT of an in-country server. I can include a dozen more countries within an 80ms RTT. Lots of businesses focus just on their country and that's profitable enough, so for them they never have to think about higher RTTs.

If you put your server e.g. in Czechia you can provide ~20ms latency for the whole of Europe :)

Any providers in Czechia you'd recommend? It's not a market I know.

React isn't the problem. You can write a very fast interface in React. It's (usually) too many calls to the backend that slow everything to a crawl.
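The standard mitigation is to batch or coalesce those calls instead of firing one request per component. A sketch of the idea, against a hypothetical /api/issues?ids=... endpoint (untested, just to show the shape):

```typescript
// Coalesce individual issue fetches made in the same tick into one batched request.
const queue: { id: string; resolve: (issue: unknown) => void }[] = [];
let flushScheduled = false;

export function fetchIssue(id: string): Promise<unknown> {
  return new Promise((resolve) => {
    queue.push({ id, resolve });
    if (flushScheduled) return;
    flushScheduled = true;
    queueMicrotask(async () => {
      const batch = queue.splice(0); // take everything queued this tick
      flushScheduled = false;
      const ids = [...new Set(batch.map((b) => b.id))];
      const res = await fetch(`/api/issues?ids=${ids.join(",")}`);
      const data = (await res.json()) as { id: string }[];
      const byId = new Map(data.map((issue) => [issue.id, issue]));
      for (const { id, resolve } of batch) resolve(byId.get(id));
    });
  });
}
```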

Actually, if you live near a city the edge network is 6ms RTT ping away, that's 3ms each direction, so if e.g. a virtual scroll frontend is windowing over a server array retained in memory, you can get there and back over a websocket, inclusive of the windowing, streaming records in and out of the DOM at the edges of the viewport, and paint the frame, all in less than the 8ms 120Hz frame budget, and the device is idle, with only the visible resultset in client memory. That's 120Hz network. Even if you don't live near a city, you can probably still hit 60Hz. It is not 2005 anymore. We have massively multiplayer video games, competitive multiplayer shooters, and can render them in the cloud now. Linear is office software, it is not e-sports, we're not running it on the subway or in Africa. And AI happens in the cloud; Linear's website lead text is about agents.

Those are theoretical numbers for a small elite. Real world numbers for most of the planet are orders of magnitude worse.

Those are my actual numbers from my house in the Philadelphia suburbs right now, 80 miles away from the EWR data center outside NYC. Feel free to double them; you're still inside the 60Hz frame budget with better-than-e-sports latency.

edit: I am 80 miles from EWR not 200

Like they said, for a small elite. If you don't see yourself as such, adjust your view.

what is your ping to fly.io right now?

90ms for me. My fiber connection is excellent and there is no jitter--fly.io's nearest POP is just far away. You mentioned game streaming so I'll mention that GeForce Now's nearest data center is 30ms away (which is actually fine). Who is getting 6ms RTT to a data center from their house, even in the USA?

More relevantly... who wants to architect a web app to have tight latency requirements like this, when you could simply not do that? GeForce Now does it because there's no other way. As a web developer you have options.

who said anything about designing for tight latency requirements? My argument is that, for Linear’s market - programmers and tech workers either in an office or working remotely near a city - the latency requirements are not tight at all relative to the baseline capacity. We live on zoom! I have little patience for someone whose 400ms jitter is breaking up the zoom call, I had better ping than that on AOL in 1999, you want to have a tech career you need to have good internet, and AI has just cemented this. I have cross-atlantic zoom calls with my team in europe every day without perceptible lag or latency. We laugh, we joke, we crosstalk all with realtime body language. If SF utilities have decayed to the point where you can’t get fast internet living 20 miles from the backbone, then the jobs are going overseas. Eastern europe has lower ping to Philly than jitter guy has to the edge. And people in this thread are lecturing me about privilege!

[deleted]

Mine's 167-481ms (high jitter). It's the best internet I can get right now, a few suburbs south of San Francisco. Comcast was okayish, lower mean latency, but it had enough other problems that T-Mobile home internet was a small improvement.

Update: it's Friday evening now and my RTT to EWR is 8.5ms (4.2 each way), up from 6 point something this morning.

From Philadelphia suburbs to my actual Fly app in:

EWR 8.5ms (NYC)

SJC 75ms (California)

CDG 86ms (France, cross-Atlantic)

GRU 126.2ms (Brazil)

HKG 225.3ms (Hong Kong)

Now try from Idaho or Botswana

I can't help but feel this is missing the point. Ideally, next-refresh click latency is a fantastic goal; we're just not even close to that.

For me, on the web today, the click feedback for a large website like YouTube is 2 seconds for first change and 4 seconds for content display. 4000 milliseconds. I'm not even on some bad connection in Africa. This is a gigabit connection with 12ms of latency according to fast.com.

If you can bring that down to even 200ms, that'll feel comparatively instantaneous to me. When the whole internet feels like that, we can talk about taking it to 16ms.

What does web request latency have to do with it? News articles or simple forms take 5 seconds to load. Why? This is not bounded by latency.

I also winced at "impossibly fast" and realize that it must refer to some technical perspective that is lost on most users. I'm not a front end dev, I use linear, I'd say I didn't notice speed, it seems to work about the same as any other web app. I don't doubt it's got cool optimizations, but I think they're lost on most people that use it. (I don't mean to say optimization isn't cool)

> I'd say I didn't notice speed, it seems to work about the same as any other web app. I don't doubt it's got cool optimizations, but I think they're lost on most people that use it.

We almost forgot that's the point. Speed is good design, the absence of something being in the way. You notice a janky cross-platform app, a bad Electron implementation, or SharePoint because of how much speed has been taken away instead of how much has been preserved.

It's not the whole of good design though, just a pretty fundamental part.

Sports cars can go fast even though they totally don't need to, their owners aren't necessarily taking them to the track, but if they step on it, they go, it's power.

Second this. I use Linear as well and I didn't notice anything close to "impossibly fast"; it's faster than Jira for sure, but nothing spectacular.

If you get used to Jira, especially Ubisoft's internally hosted Jira (which ran on an oversubscribed 10-year-old server that was constantly thrashing and hosted half a world away)... well, it's easy for things to feel "impossibly fast".

In fact, at the Better Software Conference this year there were people discussing the fact that if you care about performance, people think your software didn't actually do the work, because they're not used to useful things being snappy.

Linear is actually so slow for me that I dread having to go into it and do stuff. I don’t care if the ticket takes 500ms to load, just give me the ticket and not a fake blinking cursor for 10 seconds or random refreshes while it (slowly) tries to re-sync.

Everything I read about Linear screams over-engineering to me. It is just a ticket tracker, and a rather painful one to use at that.

This seems to be endemic to the space though, e.g. Asana tried to invent their own language at one point.

Yeah, their startup times aren't great. They're making a trade-off by loading a ton of data up front, though to be fair a lot of the local-first web tooling didn't really exist when they were founded. The nascent Zero Sync framework's example project is literally a Linear clone that they use as their actual bug tracker; it loads way faster and has similarly snappy performance, so it seems clear that it can be done better.

That said at this point Linear has more strengths than just interaction speed, mainly around well thought out integrations.

Maybe it doesn't scale well then? I synced my Linear with GitHub. It has a few thousand issues. Lightning fast. Perhaps you guys have way more issues?

I hate to be a hacker news poster who responds to a positive post with negativity, but I was also surprised at the praise in the article.

I don't find Linear to be all that quick, but apparently macOS thinks it's a resource hog (or has memory leaks). I leave Linear open and it perpetually has a banner that tells me it was killed and restarted because it was using too much memory. That likely colors my experience.

Trite remark. The author was referring to behaviour that has nothing to do with “how the web has become.”

It is specifically to do with behaviour that is enabled by using shared resources (like IndexedDB across multiple tabs), which is not simple HTML.

To do something similar over the network, you have until the next frame deadline. That's 8-16ms for the full round trip, so roughly 4ms out and 4ms back, with 0ms budget for processing. Good luck!

Funny how reasonable performance is now treated as some impossible lost art on the web sometimes.

I posted a little clip [1] of development on a multiplayer IDE for tasks/notes (local-first+e2ee), and a lot of people asked if it was native, rust, GPU rendered or similar. But it's just web tech.

The only "secret ingredients" here are using plain ES6 (no frameworks/libs), having data local-first with background sync, and using a worker for off-UI-thread tasks. Fast web apps are totally doable on the modern web, and sync engines are a big part of it.

[1] https://x.com/wcools/status/1900188438755733857
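The worker part is the least obvious piece, so here's a minimal sketch of the pattern with made-up file names and message shapes (not the actual code from the clip): the UI thread only posts messages and paints results, while reading local data and syncing happen off-thread.

```typescript
// --- main.ts: keep the UI thread free; the worker owns data and sync ---
const worker = new Worker(new URL("./sync.worker.ts", import.meta.url), { type: "module" });

worker.postMessage({ type: "sync" }); // kick off a background sync

worker.onmessage = (e: MessageEvent) => {
  if (e.data.type === "changed") {
    renderTasks(e.data.tasks); // only touch the DOM with the already-computed result
  }
};

function renderTasks(tasks: { id: string; title: string }[]) {
  const list = document.querySelector("#tasks")!;
  list.innerHTML = tasks.map((t) => `<li>${t.title}</li>`).join("");
}

// --- sync.worker.ts: local-first reads plus background reconciliation ---
self.onmessage = async (e: MessageEvent) => {
  if (e.data.type === "sync") {
    const tasks = await loadLocalTasks(); // stand-in for an IndexedDB read
    postMessage({ type: "changed", tasks });
    // ...then reconcile with the server here, without ever blocking the UI thread.
  }
};

async function loadLocalTasks() {
  // Placeholder: real code would read from IndexedDB and merge pending sync state.
  return [{ id: "1", title: "example task" }];
}
```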

I was also surprised to read this, because Linear has always felt a little sluggish to me.

I just profiled it to double-check. On an M4 MacBook Pro, clicking between the "Inbox" and "My issues" tabs takes about 100ms to 150ms. Opening an issue, or navigating from an issue back to the list of issues, takes about 80ms. Each navigation includes one function call which blocks the main thread for 50ms - perhaps a React rendering function?

Linear has done very good work to optimise away network activity, but their performance bottleneck has now moved elsewhere. They've already made impressive improvements over the status quo (about 500ms to 1500ms for most dynamic content), so it would be great to see them close that last gap and achieve single-frame responsiveness.

150ms is sluggish? 4000ms is normal?

The comments are absolutely wild in here with respect to expectations.

150 ms is definitely on the “not instantaneous” side: https://ux.stackexchange.com/a/42688

The stated 500 ms to 1500 ms are unfortunately quite frequent in practice.

Interesting fact: the 50ms to 100ms grace period only works at the very beginning of a user interaction. You get that grace period when the user clicks a button, but when they're typing in text, continually scrolling, clicking to interrupt an animation, or moving the mouse to trigger a hover event, it's better to provide a next-frame response.

This means that it's safe for background work to block a web browser's main thread for up to 50ms, as long as you use CSS for all of your animations and hover effects, and stop launching new background tasks while the user is interacting with the document. https://web.dev/articles/optimize-long-tasks
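A sketch of what that looks like in practice, pieced together from the pattern in that article (the 50ms slice and the interaction cool-down are arbitrary numbers for illustration):

```typescript
// Break a long background task into ~50ms slices, yield between slices,
// and back off entirely while the user is actively interacting.
const yieldToMain = () => new Promise<void>((r) => setTimeout(r, 0));

let userIsInteracting = false;
for (const evt of ["pointerdown", "keydown", "wheel"]) {
  document.addEventListener(evt, () => {
    userIsInteracting = true;
    setTimeout(() => (userIsInteracting = false), 100); // simple cool-down window
  });
}

async function processInChunks<T>(items: T[], work: (item: T) => void) {
  let sliceStart = performance.now();
  for (const item of items) {
    while (userIsInteracting) await yieldToMain(); // let input and rendering win
    work(item);
    if (performance.now() - sliceStart > 50) {
      await yieldToMain(); // give the browser a chance to paint and handle events
      sliceStart = performance.now();
    }
  }
}
```

The newer scheduler.yield() / scheduler.postTask() APIs do this more cleanly where available, but the setTimeout version works everywhere.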

I think under 400ms is fast enough for loading a new page or dialog. For loading search suggestions or opening a date picker or similar, probably not.

Web applications have become too big and heavy. Corps want to control everything. A simple example would be a simple note taking app which apparently also has to sync throughout devices. They are going to store every note you take on their servers, who knows if they really delete your deleted notes. They'll also track how often you visited your notes for whatever reasons. Wouldn't surprise me if the app also required geolocation and stuff like that for whatever reason. Mix that with lots of users and you will have loading times unheard of with small scale apps. Web apps should scale down but like with everything we need more more more bigger better faster.

> a simple note taking app which apparently also has to sync throughout devices

that is the entire point of the app, surely! whether or not the actual implementation is bad, syncing across devices is what users want in a note taking app for the most part.

I only take notes on one device; if I can take them on my phone, I can pull out my phone and look at them while using other devices.

It is definitely ridiculous. It's not just a nitpick too, it's ludicrous how sloooow and laggy typing text in a monstrosity like Jira is, or just reading through an average news site. Makes everything feel like a slog.

Indeed. I have been using it for 5-6 months in a new job and I didn't notice it being faster than the typical web app.

If anything it is slow because it is a pain to navigate. I have browser bookmarks for my most frequented pages.

One of my day-to-day responsibilities involves using a portal tied to MSFT Dynamics on the back end, and it is the laggiest and most terrible experience ever. We used to have Java apps that ran locally and then moved to this in the name of cloud migration, and it feels like it was designed by someone whose product knowledge was limited to the first 2/5 lessons in a free Coursera (RIP) module.

Since it's so easy, then, I'm rooting for you to make some millions with performant replacements for other business tools. Should be a piece of cake.

I don't know if 'the web' in general is fair, here the obvious comparison is Jira, which is dog slow & clunky.