> Using Zero is another option, it has many similarities to Electric, while also directly supporting mutations.

The core differentiator of Zero is actually query-driven sync. We apparently need to make this clearer.

You build your app out of queries. You don't have to decide or configure what to sync up front. You can sync as much or as little as you want, just by deciding which queries to run.

If Zero doesn't have the data a query needs on the client, the query automatically falls back to the server. That data is then synced and available for the next query.

This ends up being really useful for:

- Any reasonably sized app. You can't sync all the data to the client.

- Fast startup. Most apps have publicly visible views that they want to load fast.

- Permissions. Zero doesn't require you to express your permissions in some separate system, you just use queries.

So the experience of using Zero is actually much closer to a reactive db, something like Convex or RethinkDB.

Except that it uses standard Postgres, and you also get the instant interactions of a sync engine.
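For a concrete flavor of that query-driven style, here is a rough TypeScript sketch. Treat it as illustrative only: the identifiers follow Zero's documented style from memory (useQuery from @rocicorp/zero/react, z.query chaining) and useZero is a hypothetical app-level hook, so the exact API may differ.

    // Illustrative sketch of the query-driven style described above.
    // API names follow Zero's documented style from memory and may be out of date;
    // useZero here is a hypothetical app-level hook that returns the Zero client.
    import { useQuery } from "@rocicorp/zero/react";
    import { useZero } from "./zero-setup";

    function OpenIssues() {
      const z = useZero();

      // The data you need is expressed purely as a query. Whatever the query
      // touches gets synced to the client; if it isn't cached yet, Zero falls
      // back to the server, syncs the result, and keeps it live from then on.
      const [issues] = useQuery(
        z.query.issue
          .where("status", "=", "open")
          .related("comments")
          .orderBy("modified", "desc")
          .limit(50)
      );

      return (
        <ul>
          {issues.map((issue) => (
            <li key={issue.id}>{issue.title}</li>
          ))}
        </ul>
      );
    }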

I developed an open-source task management software based on CRDT with a local-first approach. The motivation was that I primarily manage personal tasks without needing collaboration features, and tools like Linear are overly complex for my use case.

This architecture offers several advantages:

1. Data is stored locally, resulting in extremely fast software response times

2. Supports convenient full database export and import

3. Server-side logic is lightweight, requiring minimal performance overhead and development complexity, with all business logic implemented on the client

4. Simplified feature development, requiring only local logic operations

There are also some limitations:

1. Only suitable for text data storage; object storage services are recommended for images and large files

2. Synchronization-related code requires extra caution in development, as bugs could have serious consequences

3. Implementing collaborative features with end-to-end encryption is relatively complex

The technical architecture is designed as follows:

1. Built on the Loro CRDT open-source library, allowing me to focus on business logic development

2. Data processing flow: User operations trigger CRDT model updates, which export JSON state to update the UI. Simultaneously, data is written to the local database and synchronized with the server.

3. The local storage layer is abstracted through three unified interfaces (list, save, read), using platform-appropriate storage solutions: IndexedDB for browsers, file system for Electron desktop, and Capacitor Filesystem for iOS and Android.

4. Implemented end-to-end encryption and incremental synchronization. Before syncing, the system calculates differences based on server and client versions, encrypts data using AES before uploading. The server maintains a base version with its content and incremental patches between versions. When accumulated patches reach a certain size, the system uploads an encrypted full database as the new base version, keeping subsequent patches lightweight.
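To make points 3 and 4 a bit more concrete, here is a hypothetical TypeScript sketch of such a storage abstraction and of the patch-vs-snapshot decision. All names and thresholds are invented for illustration and are not taken from the hamsterbase/tasks codebase.

    // Hypothetical sketch of the ideas in points 3 and 4 above; not actual
    // hamsterbase/tasks code. Names and thresholds are invented for illustration.

    // Point 3: one storage interface, multiple platform backends.
    interface DocumentStorage {
      list(): Promise<string[]>;                    // ids of stored documents
      read(id: string): Promise<Uint8Array | null>; // raw CRDT bytes
      save(id: string, data: Uint8Array): Promise<void>;
    }
    // Implementations would wrap IndexedDB (browser), the file system
    // (Electron), or Capacitor Filesystem (iOS/Android) behind this interface.

    // Point 4: incremental, encrypted sync against a base version + patches.
    const FULL_SNAPSHOT_THRESHOLD = 1 * 1024 * 1024; // e.g. 1 MiB of accumulated patches

    async function pushChanges(
      doc: {
        exportPatchSince(version: string): Uint8Array;
        exportSnapshot(): Uint8Array;
      },
      server: {
        baseVersion(): Promise<string>;
        accumulatedPatchBytes(): Promise<number>;
        uploadPatch(encrypted: Uint8Array): Promise<void>;
        uploadSnapshot(encrypted: Uint8Array): Promise<void>;
      },
      encrypt: (plain: Uint8Array) => Promise<Uint8Array> // AES; key never leaves the client
    ): Promise<void> {
      const patchTotal = await server.accumulatedPatchBytes();

      if (patchTotal >= FULL_SNAPSHOT_THRESHOLD) {
        // Patches have grown large: upload a full encrypted snapshot as the new
        // base version so future patches stay lightweight.
        await server.uploadSnapshot(await encrypt(doc.exportSnapshot()));
      } else {
        // Normal case: only send the encrypted diff since the server's base version.
        const since = await server.baseVersion();
        await server.uploadPatch(await encrypt(doc.exportPatchSince(since)));
      }
    }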

If you're interested in this project, please visit https://github.com/hamsterbase/tasks

very cool!

I'm all-in on SSR. The client shouldn't have any state other than the session token, current URL and DOM.

Networks and servers will only get faster. Speed of light is constant, but we aren't even using its full capabilities right now. Hollow core fiber promises upward of 30% reduction in latency for everyone using the internet. There are RF-based solutions that provide some of this promise today. Even with a wild RTT of 500ms, an SSR page rendered in 16ms would feel relatively instantaneous next to any of the mainstream web properties online today if delivered on that connection.

I propose that there is little justification to take longer than a 60hz frame to render a client's HTML response on the server. A Zen5 core can serialize something like 30-40 megabytes of JSON in this timeframe. From the server's perspective, this is all just a really fancy UTF-8 string. You should be measuring this stuff in microseconds, not milliseconds. The transport delay being "high" is not a good excuse to get lazy with CPU time. Using SQLite is the easiest way I've found to get out of millisecond jail. Any hosted SQL provider is like a ball & chain when you want to get under 1ms.
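As one way to picture that "out of millisecond jail" point, here is a minimal sketch of an SSR handler reading synchronously from SQLite via better-sqlite3 (my choice for the example; the route and schema are invented, and HTML escaping is omitted).

    // Minimal sketch: server-rendering straight out of a local SQLite file.
    // Route and schema are invented; HTML escaping is omitted for brevity.
    import Database from "better-sqlite3";
    import http from "node:http";

    const db = new Database("app.db", { readonly: true });
    const getPost = db.prepare("SELECT title, body FROM posts WHERE id = ?");

    http
      .createServer((req, res) => {
        const id = Number(new URL(req.url ?? "/", "http://x").searchParams.get("id"));
        const post = getPost.get(id) as { title: string; body: string } | undefined;

        // Query + string assembly comfortably fit inside a 16ms frame budget;
        // from the server's perspective this is just a fancy UTF-8 string.
        res.setHeader("content-type", "text/html; charset=utf-8");
        res.end(
          post
            ? `<!doctype html><h1>${post.title}</h1><p>${post.body}</p>`
            : "<!doctype html><p>Not found</p>"
        );
      })
      .listen(3000);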

There are even browser standards that can mitigate some of the navigation delay concerns:

https://developer.mozilla.org/en-US/docs/Web/API/Speculation...

> networks and servers will only get faster

this isn't an argument for SSR. In fact there's hardly a universal argument for SSR. You're thinking of a specific use-case where there's more compute capacity on the server, where logic can't be easily split, etc. There are plenty of examples that make the client-side rendering faster.

Rendering logic can be disproportionately complex relative to the data size. Moreover, client resources may actually be larger in aggregate than the server's. If SSR were the only reasonable game in town, we wouldn't have excitement around Web Assembly.

Also take a look at the local-computation post https://news.ycombinator.com/item?id=44833834

The reality is that you can't know which one is better and you should be able to decide at request time.

If you could simply drop in a library to any of your existing SSR apps that:

- is 50kb (gzipped)

- requires no further changes from you (either now or in the future)

- enables offline/low bandwidth use of your app with automatic state syncing and zero UX degradation

would you do it?

The problem I see with SSR evangelism is that it assumes that compromising that one use case (offline/low bandwidth use of the app) is necessary to achieve developer happiness and a good UX. And in some cases (like this) it goes on to justify that compromise with promises of future network improvements.

The fact is, low bandwidth requirement will always be a valuable feature, no matter the context. It's especially valuable to people in third-world countries, in remote locations, or being served by Comcast (note I'm being a little sarcastic with that last one).

> - enables offline/low bandwidth use of your app with automatic state syncing and zero UX degradation

> would you do it?

No, because the "automatic state syncing and zero UX degradation" is a "draw the rest of the owl" exercise wherein the specific implementation details are omitted. Everything is domain specific when it comes to sync-based latency hiding techniques. SSR is domain agnostic.

> low bandwidth requirements

If we are looking at this from a purely information theoretical perspective, the extra 50kb gzipped is starting to feel kind of heavy compared to my ~8kb (plaintext) HTML response. If I am being provided internet via avian species in Africa, I would also prefer the entire webpage be delivered in one simple response body without any dependencies. It is possible to use so little javascript and css that it makes more sense to inline it. SSR enables this because you can simply use multipart form submissions for all of the interactions. The browser already knows how to do this stuff without any help.

I just want to point out that your argument is now contradictory.

You're stating that networks and latency will only improve, and that this is a reason to prefer SSR.

You're also stating that 50kb feels too heavy.

But at 8kb of SSR'd plaintext, you're ~6 page loads away from breaking even with the 50kb of content that will be cached locally, and you yourself are arguing that the transport for that 50kb is only getting better.

Basically: you're arguing it's not a problem to duplicate all the data for the layout on every page load because networks are good and getting better. But also arguing that the network isn't good enough to load a local-first layout engine once, even at low multiples of your page size.

So which is it?

---

Entirely separate from the "rest of the owl" argument, with which I agree.

> the extra 50kb gzipped is starting to feel kind of heavy compared to my ~8kb (plaintext) HTML response

I thought we were living in a utopia where fast high-speed internet was ubiquitous everywhere? What's the fuss over 50kb? 5mb should be fine in this fantasy world.

> "automatic state syncing and zero UX degradation" is a "draw the rest of the owl" exercise

That's an assumption you're making, but that doesn't necessarily have to be true. I offered you what amounts to a magic button (drop this script in, done), not a full implementation exercise.

If it really were just a matter of dropping a 50kb script in (nothing else) would you do it? Where's the size cutoff between "no" and "yes" for you?

> Everything is domain specific when it comes to sync-based latency hiding techniques.

Yes and no. To actually add it to your app right now would most likely require domain-specific techniques. But that doesn't imply that a more general technique won't appear in the future, or that an existing technique can't be sufficiently generalized.

> the extra 50kb gzipped is starting to feel kind of heavy

Yeah - but we can reasonably assume it's a one-and-done cached asset that effectively only has to be downloaded once for your app.

And when your avian ISP dies, how do you make requests?

If everything you need - application logic, css, media, data etc... - is cached on your device, you can carry on.

You're being far too myopic about all of this. There are many different use cases and solutions, all of which have their tradeoffs. Sometimes SSR is appropriate, other times local first is. You can even do both - SSR HTML in the service worker with either isomorphic JavaScript or WASM.

We can all agree, though, that React and its derivatives are never appropriate.

RightToolForTheRightJob!

Would you try to write/work on a collaborative text document (i.e. Google Docs or Sheets) by editing a paragraph/sentence that's server-side rendered, and hope nobody changes the paragraph mid-work, because the developers insisted on SSR?

These kinds of tools (Docs, Sheets, Figma, Linear, etc.) work well because changes have little impact, and conflict resolution is best avoided by users noticing that someone else is working on the same thing and simply getting realtime updates.

Then again, hotel booking or similar has no need for something like that.

Then there's been middle ground, like an enterprise logistics app that had some badly YOLO'd syncing. It kind of needed some of it, but there was no upfront planning, and it took time to retrofit a sane design since there were so many domain- and system-specific things lurking with surprises.

This is called happy-path engineering, and it's really frustrating for people who don't live on the happy path.

Latency is additive, so all that copper coax and mux/demux between a sizeable chunk of Americans and the rest of the internet means you're looking at a minimum roundtrip latency of 30ms even if the server is in the same city. Most users are also on Wi-Fi, which adds an additional mux/demux + rebroadcast step that adds even more. And most people do not have the latest CPU. Not to mention mobile users over LTE.

Sorry, but this is 100% a case of privileged developers thinking their compute infrastructure situation generalizes: it doesn't, and it is a mistake to take shortcuts that assume as such.

uh have you ever tried pinging a server in your same city? It's usually substantially <30ms. I'm currently staying at a really shitty hotel that has 5mbps wifi, not to mention I'm surrounded by other rooms, and I can still ping 8.8.8.8 in 20ms. From my home internet, which is /not/ fiber, it's 10ms.

If you've ever used speedtest.net, you've almost certainly been benchmarking against a server in your own city, or at least as close as your ISP's routing will allow. Ookla servers are often specifically optimized for, and sometimes hosted/owned by, ISPs to give the best possible speeds. Google's DNS servers use anycast magic to get similar results. Basically no service you actually use outside of very, very large providers is likely to be anywhere near you, and you won't get that kind of latency, even with a very good ISP and LAN.

10ms is a best case for DOCSIS3.0/3.1, it means you have near optimal routing and infrastructure between you and the node or are using some other transport like ethernet that is then fed by fiber. I currently get 24ms to my local Ookla host a couple of miles away over a wired connection with a recent DOCSIS3.1 modem. Hotel internet is likely to be backed by business fiber. They're likely throttling you.

I worked for an ISP for several years, there's a huge range of service quality even within the same provider, zipcode, and even same location depending on time of day.

if I'm able to ping 8.8.8.8 in 10ms, doesn't that mean that my rtt latency to get out of the docsis-part of the network is 10ms, and that any latency beyond that will be the same regardless of docsis/fiber?

Yes, the majority of that latency is likely coming from DOCSIS. You're likely in a major city, large enough that Google has an anycast DNS server nearby.

latency != throughput

The use case for SSR now and in the future is on initial page load, especially on mobile.

After that, with competent engineering everything should be faster on the client, since it only needs state updates, not a complete re-render

If you don't have competent engineering, SSR isn't going to save you

ElectricSQL and TanStack DB are great, but I wonder why they focus so much on local first for the web over other platforms, as in, I see mobile being the primary local first use case since you may not always have internet. In contrast, typically if you're using a web browser to any capacity, you'll have internet.

Also, the former technologies are local first in theory, but without conflict resolution they can break down easily. This has been my experience making mobile apps that need to be local first, which led me to using CRDTs for that use case.

Because building local first with web technologies is like infinity harder than building local first with native app toolkits.

Native app is installed and available offline by default. Website needs a bunch of weird shenanigans to use AppManifest or ServiceWorker which is more like a bunch of parts you can maybe use to build available offline.

Native apps can just… make files, read and write from files with whatever 30 year old C code, and the files will be there on your storage. Web you have to fuck around with IndexedDB (total pain in the ass), localStorage (completely insufficient for any serious scale, will drop concurrent writes), or OriginPrivateFileSystem. User needs to visit regularly (at least once a month?) or Apple will erase all the local browser state. You can use JavaScript or hit C code with a wrench until it builds for WASM w/ Emscripten, and even then struggle to make sync C deal with waiting on async web APIs.
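For anyone who hasn't touched those parts, here is roughly what they look like in practice: asking for persistent storage and writing a file into the Origin Private File System. A minimal sketch; support for these calls still varies by browser (createWritable, for example, has historically been unavailable in Safari's main thread).

    // Rough sketch of the web's "available offline" parts mentioned above.
    // Support varies by browser; treat this as illustrative, not portable.

    async function saveLocally(data: unknown): Promise<void> {
      // Ask the browser not to evict this origin's storage (it may still say no,
      // and Safari can purge state for sites not visited recently).
      if (navigator.storage?.persist) {
        const persisted = await navigator.storage.persist();
        console.log(persisted ? "storage marked persistent" : "eviction still possible");
      }

      // Origin Private File System: a private, origin-scoped file store.
      const root = await navigator.storage.getDirectory();
      const file = await root.getFileHandle("state.json", { create: true });
      const writable = await file.createWritable(); // not available everywhere
      await writable.write(JSON.stringify(data));
      await writable.close();
    }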

Apple has offered CoreData + CloudKit since 2015, a complete first-party solution for local apps that sync, no backend required. I'm not a Google enthusiast; maybe Firebase is their equivalent? Idk.

Well .... that's all true, until you want to deploy. Historically deploying desktop apps has been a pain in the ass. App stores barely help. That's why devs put up with the web's problems.

Ad: unless you use Conveyor, my company's product, which makes it as easy as shipping a web app (nearly):

https://hydraulic.dev/

You are expected to bring your own runtime. It can ship anything but has integrated support for Electron and JVM apps, Flutter works too although Flutter Desktop is a bit weak.

and if you didn't like or care to learn CoreData? Just jam a sqlite db in your application and read from it, it's just C. This was already working before Angular or even Backbone

Sure, they're harder to build, but my question is mainly why build them (for the web in particular)? I don't see the benefits for a web app where I'll usually be online, versus a mobile app where I may frequently have internet outages when out and about.

I don't think Apple's solution syncs seamlessly, I needed to use CRDTs for that, that's still an unsolved problem for both mobile and web.

> Because building local first with web technologies is like infinity harder than building local first with native app toolkits.

You just have to write one for every client, no big deal, right? Just 2-5 (depending on if you have mobile clients and if you decide to support Linux too) times the effort.

You even say it yourself, you'll have to use Apple's sync and data solutions, and figure it out for Windows, Android and maybe Linux. Should be easy to sync data between the different storage and sync options...

Oh, and you have to figure out how to build, sign and update for all OSes too. Pay the Apple fee, the Microsoft whatever nonsense to not get your software flagged as malware on installation. It's around a million times easier to develop and deploy a web application, and that's why most developers and companies are defaulting to that, unless they have very good reasons.

I think this is a fascinating and deep question, that I ponder often.

I don't feel like I know all the answers, but as the creator of Replicache and Zero here is why I feel a pull to the web and not mobile:

- All the normal reasons the web is great – short feedback loop, no gatekeepers, etc. I just prefer to build for the web.

- The web is where desktop/productivity software happens. I want productivity software that is instant. The web has many, many advantages and is the undisputed home of desktop software now, but ever since we went to the web the interaction performance has tanked. The reason is because all software (including desktop) is client/server now and the latency shows up all over the place. I want to fix that, in particular.

- These systems require deep smarts on the client – they are essentially distributed databases, and they need to run that engine client-side. So there is the question of what language to implement in. You would think that C++/Rust -> WASM would be obvious but there are really significant downsides that pull people to doing more and more in the client's native language. So you often feel like you need to choose one of those native languages to start with. JS has the most reach. It's most at home on the desktop web, but it also reaches mobile via RN.

- For the same reason as prev, the complex productivity apps that are often targeted by sync engines are often themselves written using mainly web tech on mobile. Because they are complex client-side systems and they need to pick a single impl language.

Mobile has really strong offline-primitives compared to the web.

But the web is primarily where a lot of productivity and collaboration happens; it’s also a more adversarial environment. Syncing state between tabs; dealing with storage eviction. That’s why local first is mostly web based.

I think the current crop of sync engines greatly benefit from being web-first because they are still young and getting lots of updates. And mobile updates are a huge pain compared to webapp updates.

The PWA capabilities of webapps are pretty OK at this point. You can even drive notifications from the iOS pinned PWA apps, so personally, I get all I need from web apps pretending to be mobile apps.

Yes. PWAs now only need dev adoption. It is up to us to fight the app store monopolies.

Because web apps run in a web browser, which is the opposite of a local first platform.

Local-first is actually the default in any native app

In this case it's not about being able to use the product at all, but the joy from using an incredibly fast and responsive product, which therefore you want to use local-first.

Not a lot of mention of the collaboration aspect that local-first / sync engines enable. I've been building a project using Zero that is meant to replace a Google Sheet a friend of mine uses for his business. He routinely gets on a Google Meet with a client, they both open the Sheet and then go through the data.

Before the emergence of tools like Zero I wouldn't have ever considered attempting to recreate the experience of a Google Sheet in a web app. I've previously built many live updating UIs using web sockets but managing that incoming data and applying it to the right area in the UI is not trivial. Take that and multiply it by 1000 cells in a Sheet (which is the wrong approach anyway, but it's what I knew how to build) and I can only imagine the mess of code.

Now with Zero, I write a query to select the data and a mutator to change the data, and everything syncs to anyone viewing the page. It is a pleasure to work with, and I enjoy building the application rather than sweating over applying hyper-specific incoming data changes.

I've been very impressed by Jazz -- it enables great DX (you're mostly writing sync, imperative code) and great UX (everything feels instant, you can work offline, etc).

Main problems I have are related to distribution and longevity -- as the article mentions, it only grows in data (which is not a big deal if most clients don't have to see that), and another thing I think is more important is that it's lacking good solutions for public indexes that change very often (you can in theory have a public readable list of ids). However, I recently spoke with Anselm, who said these things have solutions in the works.

All in all local-first benefits often come with a lot of costs that are not critical to most use cases (such as the need for much more state). But if Jazz figures out the main weaknesses it has compared to traditional central server solutions, it's basically a very good replacement for something like Firebase's Firestore in just about every regard.

Yeah, Jazz is amazing. The DX is unmatched. My issue when I used it was, they mainly supported passkey-based encryption, which was poorly implemented on windows. That made it kind of a non-starter for me, although I'm sure they'll support traditional auth methods soon. But I love that it's end-to-end encrypted and it's super fun to use.

Local-First & Sync-Engines are the future. Here's a great filterable datatable overview of the local-first framework landscape: https://www.localfirst.fm/landscape

My favorite so far is Triplit.dev (which can also be combined with TanStack DB); 2 more I like to explore are PowerSync and NextGraph. Also, the recent LocalFirst Conf has some great videos, currently watching the NextGraph one (https://www.youtube.com/watch?v=gaadDmZWIzE).

How is the database migration support for these tools?

Needing to support clients that don't phone home for an extended period and therefore need to be rolled forward from a really old schema state seems like a major hassle, but maybe I'm missing something. Trying to troubleshoot one-off front-end bugs for a single product user can be a real pain; I'd hate to see what it's like when you have to factor in the state of their schema as well.

I can't speak to the other tools, but we built PowerSync using a schemaless protocol under the hood, specifically for this reason. Most of the time you don't need to implement migrations at all. For example adding a new column just works, as the data is already there when the schema is rolled forward.

Reminds me of Meteor back in the day.

Whatever happened to meteor? They made it sound so great. What I didn't like was the tight coupling to mongodb.

For me it was the lack of confirmation from the backend. When it was the next big thing, it sent changes to the backend without waiting for a response. This made the interface crazy fast, but I just couldn't take the risk of the FE being out of sync with the backend. I hope they grew out of that model, but I never took it seriously for that one reason.

Yeah, I built my first startup on Meteor, and the prototype for my second one, but there were so many weird state bugs after it got more complicated that we eventually had to switch back to normal patterns to scale it.

Thank you for this, I'm going to have to check out Triplit. Have you tried InstantDB? It's the one I've been most interested in trying but haven't yet.

They're also the past...

I remember being literally 12 when google docs was launched, which featured real-time sync, and a collaborative cursor. I remember thinking that this is how all web experience will be in the future, at the time 'cloud computing' was the buzzword - I (incorrectly) thought realtime collaboration was the very definition of cloud computing.

And then it just... never happened. 20 years went by, and most web products are still CRUD experiences, this site included.

The funny thing is it feels like it's been on the verge of becoming mainstream for all this time. When meteor.js got popular I was really excited, and then with react surely it was gonna happen - but even now, it's still not the default choice for new software.

I'm still really excited to see it happen, and I do think it will happen eventually - it's just trickier than it looks, and it's tricky to make the tooling so cheap that it's worth it in all situations.

Real-time collaboration? Discord (not fundamentally different from IRC, which has been around since the 90s), Zoom (or any other teleconferencing software)

This site being a CRUD app is a feature. Sometimes simplicity is best. I wouldn't want realtime updates, too distracting.

I feel the same way. The initial magic of real-timeness felt like a glimpse into a future that... where is it?

I'm still excited about the prospects of it — shameless plug: actually building a tool with one-of-a-kind messaging experience that's truly real-time in the Google docs collaboration way (no compose box, no send button): https://kraa.io/hackernews

That's a really cool project! The realtime message aspect reminds me a bit of https://honk.me/, but for like, docs.

Agreed, I did a talk about exactly this earlier this year:

https://www.youtube.com/watch?v=RjV3Dm5giko

Interesting talk. I think it's just a matter of making tooling where it's so easy, cheap, and simple enough that doing realtime doesn't introduce any extra time cost to a business compared to CRUD.

The speed of light is rather unaccommodating.

We run into human-perceptible relativistic limits in latency. Light takes roughly 67ms to travel half the earth's circumference, and our signals are often worse off. They don't travel in an idealized straight path, get converted to electrons and radio waves, and have to hop through more and more hoops like load balancers and DDOS protections.

In many cases latency is worse than it used to be.

as you point out so vividly, the speed of light is actually not a problem given you can ping across an ocean in sub 100ms (not a laser beam, actual packets through underwater pipes). ~67ms is acceptable latency for realtime video

Man, why aren't couchdb / pouchdb listed? Still works like a charm!

"Works like a charm" / "not listed" does not really do it justice, its much worse. All of the mentioned "solutions" will inevitably lose data on conflicts one way or another and i am not aware of anything from that school of thought, that has full control over conflict resolution as couchdb / pouchdb does. Apparently vibe coding crdt folks do not value data safety over some more developer ergonomics. It is an tradeoff to make for your own hobby projects if you are honest about it but i don't understand how this is just completely ignored in every single one of these posts.

Meteor was/is a very similar technology. And I did some fairly major projects with it.

Meteor was amazing, I don't understand why it never got sustainable traction.

I think this blog post may provide some insight: https://medium.com/@sachagreif/an-open-letter-to-the-new-own...

Roughly: Meteor required too much vertical integration on each part of the stack to survive the strongly changing landscape at the time. On top of that, a lot of the team's focus shifted to Apollo (which, at least from a commercial point of view, seems to have been a good decision).

Seems like Meteor is still actively developed and is framework agnostic! https://github.com/meteor/meteor

Tight coupling to MongoDB, fragmented ecosystem / packages, and react came out soon after and kind of stole its lunch money.

It also had some pretty serious performance bottlenecks, especially when observing large tables for changes that need to be synced to subscribing clients.

I agree though, it was a great framework for its day. Auth bootstrapping in particular was absolutely painless.

Non-relational, document-oriented pubsub architecture based on MongoDB, good for not much more than chat apps. For toy apps (in 2012-2016), use Firebase (also for chat apps); for crud-spectrum and enterprise apps, use SQL. And then React happened and consumed the entire spectrum of frontend architectures, bringing us to GraphQL, which didn't, but the hype wave left little oxygen remaining for anything else. (Even if it had, Meteor was still not better.)

I'm the defacto maintainer of the Meteor MySQL integration. Since 2015, I've been involved in the design and maintenance of six different Meteor webapps for real-time geospatial applications built for B2B and B2C.

Given this, I reject your assertion that Meteor is limited to MongoDB and "toy apps".

Meteor is alive and well and actively maintained. It just doesn't get attention for some reason. Version 3.3.1 was released 4 days ago.

I really like the Electric approach and it has been on my radar for a long time, because it just leaves the write complexity to you and your API.

Most of the solutions with 2-way sync I see work great in simple REST and hobby "todo app" projects. Start adding permissions, evolving business logic, migrations, a growing product and such, and I can't see how they can hold up for very long.

Electric gives you the sync for reads with their "views", but all writes still happen normally through your existing api / rest / rpc. That also makes it a really nice tool to adopt in existing projects.

> Electric’s approach is compelling given it works with existing Postgres databases. However, one gap remains to fill, how to handle mutations?

Just to note that, with TanStack DB, Electric now has first class support for local writes / write-path sync using transactional optimistic mutations:

https://electric-sql.com/blog/2025/07/29/local-first-sync-wi...
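For readers who haven't seen the term, "transactional optimistic mutations" boils down to the pattern sketched below: apply the write to local state immediately, send it through your API, and roll back if the server rejects it. This is a generic hand-rolled illustration of the pattern, not TanStack DB's or Electric's actual API; the endpoint and types are invented.

    // Generic sketch of an optimistic mutation: not TanStack DB's or Electric's
    // API, just the underlying pattern they implement for you.
    type Todo = { id: string; title: string; done: boolean };

    const local = new Map<string, Todo>(); // stand-in for the synced local store

    async function toggleTodo(id: string): Promise<void> {
      const before = local.get(id);
      if (!before) return;

      // 1. Apply optimistically so the UI updates in the same frame.
      local.set(id, { ...before, done: !before.done });

      try {
        // 2. Send the write to the server through your normal API.
        const res = await fetch(`/api/todos/${id}/toggle`, { method: "POST" });
        if (!res.ok) throw new Error(`server rejected write: ${res.status}`);
        // 3. On success, the authoritative change flows back via the read-path
        //    sync and replaces the optimistic state.
      } catch (err) {
        // 4. On failure, roll back the optimistic change.
        local.set(id, before);
        console.error(err);
      }
    }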

My kingdom for a team organised by org mode files through a git repo

But how is conflicting data handled?

For instance, one user closes something and another aborts the same thing.

We're using dexie+rxjs. A killer combination.

Described here https://blog-doe.pages.dev/p/my-front-end-state-management-a...

I've already made improvements to that approach. Decoupling the backend and front end actually feels like you're reducing complexity.

Are you using the cloud sync with Dexie? I built an app on it, but it seems to have a hard time switching from local to cloud mode and vice versa. I'm not sure they ever thought people would want to, but why bother making cloud setup calls for users that didn't want it?

Nope, locally. Roughly I'm doing something like this: https://gist.github.com/vladmiller/0be83755e65cf5bd942ffba22...

The example is a bit bad, but it roughly shows how we're using it. I have built a custom sync API that accepts in the body a list of <object_id>:<short_hash> and returns a kind of json-list in this format:

    <id>:<hash>:<json_object>\n
    <id>:<hash>:<json_object>\n

The API compares what the client knows vs. the current state and only returns the objects that were updated/created, and separately the objects that were removed. Not ideal for large collections (but then again, why did I store 50mb of historical data on the client in the first place? :D)
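To make that exchange concrete, here is a hypothetical sketch of the client side; the endpoint, store interface and deletion handling are invented, not taken from the actual implementation.

    // Hypothetical client-side sketch of the sync exchange described above.
    // Endpoint, store shape and deletion handling are invented for illustration.
    interface LocalStore {
      all(): Promise<{ id: string; hash: string }[]>;
      put(id: string, hash: string, data: unknown): Promise<void>;
      remove(id: string): Promise<void>;
    }

    async function syncOnce(store: LocalStore): Promise<void> {
      // Tell the server what the client already knows: "<object_id>:<short_hash>" per line.
      const known = await store.all();
      const body = known.map((o) => `${o.id}:${o.hash}`).join("\n");

      const res = await fetch("/api/sync", { method: "POST", body });
      const text = await res.text();

      // The server answers only with objects that changed: "<id>:<hash>:<json_object>" per line.
      // (Removed ids would come back in a separate section, omitted here.)
      for (const line of text.split("\n")) {
        if (!line) continue;
        const sep1 = line.indexOf(":");
        const sep2 = line.indexOf(":", sep1 + 1);
        const id = line.slice(0, sep1);
        const hash = line.slice(sep1 + 1, sep2);
        const json = line.slice(sep2 + 1); // the JSON may itself contain ':'
        await store.put(id, hash, JSON.parse(json));
      }
    }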

I'm also building a local-first editor and rolling my own CRDTs. There are enormous challenges to making it work. For example, for the storage size issue mentioned in the blog, I ended up using yjs' approach, which only increases the clock on insertion; for deletions it removes the content and keeps only the deleted item ids, which can be efficiently compressed since most ids are contiguous.
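As a small illustration of why mostly-contiguous ids compress well, here is a generic run-length encoding sketch; it is not the exact encoding yjs or this editor uses.

    // Run-length encode a sorted list of deleted ids as [start, length] pairs.
    // Generic illustration of why mostly-contiguous ids compress well; not the
    // exact on-disk encoding yjs or this editor uses.
    function encodeDeletedIds(sortedIds: number[]): [start: number, len: number][] {
      const runs: [number, number][] = [];
      for (const id of sortedIds) {
        const last = runs[runs.length - 1];
        if (last && id === last[0] + last[1]) {
          last[1] += 1;       // extends the current contiguous run
        } else {
          runs.push([id, 1]); // starts a new run
        }
      }
      return runs;
    }

    // encodeDeletedIds([1, 2, 3, 4, 10, 11, 40]) -> [[1, 4], [10, 2], [40, 1]]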

In case you missed it and it's relevant, there was an automerge v3 announcement posted the other day here which claimed some nice compression numbers as well

As far as I know, automerge uses a DAG history log and garbage collects by comparing the version clock heads of 2 clients. That is different from yjs. I have not followed their compression approach in v3 yet; I will check when I have time.

I've been down this rabbit hole as well. Many of the sync projects seem great at first glance (and are very impressive technically) but perhaps a bit idealistic. Reactive queries are fantastic from a dx perspective, but any of the "real" databases running in the browser like sqlite or pglite store database pages in IndexedDB, as there are some data longevity issues with OPFS (IIRC Safari aggressively purges this after a week of inactivity). Maybe the solution is just storing caches in the users' home directory with the filesystem api, like a native application.

Long story short, if requirements aren't strictly real time collaborative and online-enabled, I've found rolling something yourself more in the vein of a "fat client" works pretty well too for a nice performance boost. I generally prefer using IndexedDB directly— well via Dexie, which has reactive query support.
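For anyone curious what that "fat client" setup looks like, here is a minimal sketch of the Dexie + liveQuery pattern; the schema is invented, and the typing assumes Dexie 4's EntityTable.

    // Minimal "fat client" sketch with Dexie: IndexedDB storage plus a reactive
    // query that re-emits whenever the underlying table changes.
    // The schema is invented; typing assumes Dexie 4's EntityTable.
    import Dexie, { liveQuery, type EntityTable } from "dexie";

    interface Task { id: number; title: string; done: 0 | 1 } // 0/1 so it stays indexable

    const db = new Dexie("tasks-db") as Dexie & { tasks: EntityTable<Task, "id"> };
    db.version(1).stores({ tasks: "++id, done" });

    // Reactive read: the observer runs again after any write that affects it.
    const subscription = liveQuery(() => db.tasks.where("done").equals(0).toArray())
      .subscribe((openTasks) => console.log("open tasks:", openTasks));

    // Local-first write: lands in IndexedDB immediately, UI updates via liveQuery.
    await db.tasks.add({ title: "try a fat client", done: 0 });

    // subscription.unsubscribe() when the view goes away.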

Local first is fantastic. But something that I can't figure out is why the OG of local first, RxDB, never gets any love.

As far as I can tell, it's VASTLY more capable than all of these new options. It has full-text search, all sorts of query optimizations, different storage backends in both the browser and server, and more.

RxDB is the OG? I thought it was PouchDB.

Fair enough, rxdb was originally built on pouchdb, but now doesn't support it anymore since that project has a lot of issues.

https://rxdb.info/rx-storage-pouchdb.html

My point/question still stands though - rxdb seems to be vastly more capable than all of the new tools that get all the attention. Very peculiar

Local-first buys you instant UX by moving state to the client, and then makes everything else a little harder

> instant UX

I do not get the hype. At all.

"Local first" and "instant UX" are the least of my concerns when it comes to project management. "Easy to find things" and "good visibility" are far more important. Such a weird thing to index on.

I might interact with the project management tool a few times a day. If I'm so frequently accessing it as an IC or an EM that "instant UX" becomes a selling point, then I'm doing something wrong with my day.

UI performance is "a weird thing to index on"?

Yes? If that's the primary selling point for a project manager versus being just a really damn good project manager with good visibility?

I've never used a project manager and thought to myself "I want to switch because this is too slow". Even Jira. But I have thought to myself "It's too difficult to build a good workflow with this tool" or "It's too much work to surface good visibility".

This is not a first-person shooter. I don't care if it's 8ms vs 50ms or even 200ms; I want a product that indexes on being really great at visibility.

It's like indexing your buying decision for a minivan on whether it can do the quarter mile at 110MPH @ 12 seconds. Sure, I need enough power and acceleration, but just about any minivan on the market is going to do an acceptable and safe speed and if I'm shopping for a minivan, its 1/4 mile time is very low on the list. It's a minivan; how often am I drag racing in it? The buyer of the minivan has a purpose for buying the minivan (safety, comfort, space, cost, fuel economy, etc.) and trap speed is probably not one of them.

It's a task manager. Repeat that and see how silly it sounds to sweat a few ms interaction speed for a thing you should be touching only a few times a day max. I'm buying the tool that has the best visibility and requires the least amount of interaction from me to get the information I need.

> any minivan on the market is going to do an acceptable and safe speed

Growing up, my folks had an old Winnebago van that took 2+ minutes to hit 60mph, which made highway merges a white-knuckle affair, especially uphill. Performance was a criterion they considered when buying their next minivan. Whereas modern minivans all have acceptable acceleration -- it's still important, it's just no longer something you need to think about.

However, not all modern interfaces provide an acceptable response time, so it's absolutely a valid criterion.

As an example, we switched to a SaaS version of Jira recently and things became about an order of magnitude slower. Performing a search now takes >2000ms, opening a filter dropdown takes ~1500ms, filtering the dropdown contents takes another ~1500ms. The performance makes using it a qualitatively different experience. Whereas people used to make edits live during meetings I've noticed more people just jotting changes down in notebooks or Excel spreadsheets to (hopefully remember to) make the updates after the meeting. Those who do still update it live during meetings often voice frustration or sometimes unintentionally perform an operation twice because there was no feedback that it worked the first time.

Going from ~2000ms to ~200ms per UI operation is an enormous improvement. But past that point there are diminishing returns: from ~200ms to ~20ms is less necessary unless it's a game or drawing tool, and going from 20ms to 2ms is typically overoptimization.

2000ms isn’t network latency, it’s the db query. Moving a slow query from the cloud (high compute, fast network under your control) to the client (low compute, unreliable network, not under your control) is not going to make it faster and you’ve damaged reliability. All to save 50ms network latency.

I urge you to try and set your display to 25 Hz. I don't quite feel it yet at 30 Hz, although the latter is more widely available as an option.

It all depends on what we do consider "good enough". 200ms total page render time would be "blazing fast" for me already. I've just clicked around Github (supposed to be globally fast, can we agree?) and the SPA page changes are 1-1.5s to complete.

To continue my example above, your computer peripherals are probably good enough. Have you considered what it would be like with a garbage-tier mouse? Similarly, maybe you wouldn't notice the difference with a better mouse. I do, because a standard office mouse is not the pace I'm moving at. (No, I'm not the Flash, I am just fast and precise with my mouse.)

If anything, this gives us a glimpse of what's possible. The latency benchmark[1] of text editors has given us something to think about. In the past decade (already?!) that article was probably the sole reason for drawing public attention to this topic[2]. For example, JetBrains have since put considerable work into improving their IDEs (IntelliJ IDEA etc). They had called it "zero latency" mode.

[1]: https://pavelfatin.com/typing-with-pleasure/

[2]: small study from 2023 https://dl.acm.org/doi/fullHtml/10.1145/3626705.3627784

I mostly agree with you on this, but JIRA tends to push the envelope in terms of unresponsiveness of its UX. As an IC I only really use it to create/update/search tickets, but I find myself waiting half a second to a couple of seconds for certain flows, especially for finding old tickets.

Not quite the same as responsiveness, but editing text fields in JIRA has a tendency of not saving in-progress work if you accidentally escape out. Also, hyperlinking between the visual and text modes is pretty annoying since you can easily forget which mode you're in.

Honestly as I type these out there are more and more frustrations I can think of with JIRA. Will we ever move away? Not anytime soon. It integrates with everything and that’s hard to replace.

It’s still frustrating though.

Yeah but Jira is a good example of where the culture building the software shows they don’t give a shit about performance or usability. They tend to go together. I agree the talk about Linear being good because it is fast is weird but I think a lot of people who say that are actually saying the workflows are sensible and the ux is good. The feeling of speed is partially because you don’t have to click 17 times to do something simple.

I think there is a mismatch between most commenters on HN and whoever is making purchasing decisions for something like Linear: it would be the PGM/TPM org or leadership pushing it, and they are touching the tool a lot more often. Even if a small speed-up ultimately doesn't make a difference in productivity, the perceived snappiness makes it feel "better/more modern" than what they currently have.

That said, I really enjoy Linear (it reminds me a lot of buganizer at Google). The speed isn't something I notice much at all, it's more the workflow/features/feel.

There is a certain point where UX responsiveness has a huge impact on how the product gets used.

I hate Jira with a burning passion simply because it is slow where I live (in China, with a VPN). Even minor interactions, like clicking on a task’s description to edit it, takes about 2 seconds. Opening a task from a list takes around 5 seconds.

The result is that I and my coworkers avoid using Jira unless we really have to. Ad-hoc work that wasn’t planned as part of the sprint just doesn’t get tracked because doing so is unreasonably painful.

Actually it's both instant UX and far simpler code. With Zero I define a table schema, relationships, and permissions in beautifully simple code, then I also write my queries including relations with a nice typed query-builder, and I get not just instant UX but also perfectly fetched and synced data with permissions and mutations that are fully consistent without any extra thought.

Doing that with any other system, sync-engine or not, requires a huge mess of code and ends up implementing some sort of ad-hoc glue code to make even a part of this work.

I'm building an app right now with it and I'm currently so much further ahead in development than I could be with any other setup, with no bugs or messy code.

Your choice is either 100% server-side like v1 Rails, or some sort of ad-hoc sync/update system. My argument is that you should either stick 100% server-side, or go all the way to the client properly with a good sync engine. It's the middle part that sucks, and while there's a chunk of apps that benefit from being fully server-side, that doesn't change the fact that you can build much faster-responding apps client-side, and that users generally prefer them, rightly so.

I'd say you are underreporting how much harder everything else becomes, but yes, definitely agreed.

this is such a clean and articulate way of putting it. The discussion around here the last few days about local and the role it is going to play has been phenomenal and really genuine

Local first is super interesting and absolutely needed - I think most of the bugs I run into with web apps have to do with sync, exacerbated by poor internet connectivity. The local properties don't interest me as much as request ordering and explicit transactions. You aren't guaranteed that requests resolve in order, and thus can result in a lot of inconsistencies. These local-first sync abstractions are a bit like bringing a bazooka to a water gun fight - it would be interesting to see some halfway approaches to this problem.

Automerge + Keyhive is the future https://www.inkandswitch.com/project/keyhive/

Local first is amazing. I have been building a local first application for Invoicing since 2020 called Upcount https://www.upcount.app/.

First I used PouchDB which is also awesome https://pouchdb.com/ but now switched to SQLite and Turso https://turso.tech/ which seems to fit my needs much better.

I’ve been working on a small browser app that is local first and have been trying to figure out how to pair it with static hosting. It feels like this should be possible but so far the tooling all seems stuck in the mindset of having a server somewhere.

My use case is scoring live events that may or may not have Internet connection. So normal usage is a single person but sometimes it would be nice to allow for multi person scoring without relying on centralized infrastructure.

I was in the same boat and I found Nostr is a perfect fit. You can write a 100% client side no-server app and persist your data to relays.

Here's the app I built if you want to try it out: https://github.com/chr15m/watch-later

Honestly, having used InstantDB (one of the providers listed in their post), I think it'd be a pretty nice fit.

I've been writing a budget app for my wife and me, and I've made it 100% free with 3rd-party hosting:

* InstantDB free tier allows 1 dev. That's the remote sync.

* Netlify for the static hosting

* Free private gitlab ci/cd for running some email notification polling, basically a poor man's hosted cron.

I may end up doing that, but I really wish there was a true p2p option that doesn’t have me relying on someone not rug pulling their free tier sync server.

Yeah... true p2p is pretty hard though, to the point that even stuff like WebRTC requires external servers to setup the data sync portion. It would be nice to develop something that worked at that layer though.

IIUC, InstantDB is open source with a docker container you can run yourself, but at this point it's designed to run in a more cloud-like environment than I'd like. Last time I checked there was at least one open PR to make it easier to run in a different environment, but I haven't checked in recently.

I spent some more time hunting due to this conversation and I’m hopeful that yjs + y-WebRTC + PeerJS will solve this. I also see that there are a few libraries that enable QR codes to replace PeerJS as a truly offline WebRTC initialization with true p2p connectivity. Looks quite promising
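For reference, the yjs + y-webrtc wiring is only a few lines. A minimal sketch follows; the room name and signaling URL are placeholders (y-webrtc ships defaults you can replace or self-host), and the PeerJS / QR-code initialization piece is left out.

    // Minimal sketch: a shared score sheet replicated peer-to-peer with yjs + y-webrtc.
    // Room name and signaling URL are placeholders.
    import * as Y from "yjs";
    import { WebrtcProvider } from "y-webrtc";

    const doc = new Y.Doc();

    // Peers that join the same room (and can reach each other or a signaling
    // server) converge on the same document, no central app server required.
    const provider = new WebrtcProvider("event-scores-room", doc, {
      signaling: ["wss://signaling.example.com"], // placeholder; defaults exist
    });

    const scores = doc.getMap<number>("scores");

    // Local edits merge automatically on every connected peer.
    scores.set("team-a", 12);

    scores.observe(() => {
      console.log("current scores:", Object.fromEntries(scores.entries()));
    });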

Unfortunately, p2p is *inhales* fxxxed! due to how modern internet networks are set up: NAT (potentially VPNed at the carrier level), lack of IPv6, firewalls blocking incoming traffic, malfunctioning UPnP, blocked UDP. Next-tier issues: legal ones that bind user identities to IPs, and showing your IP publicly is a privacy risk first and a security risk second (DoS).

It seems like it'll be impossible without an overlay network (like Yggdrasil, i2p), but these will be too heavy for mobile devices without a dedicated functioning relay... here we go again.

Check out "Distributed Quantum Computing across an optical network link" by D.Main. Et Al. 2024. As it seems that the goals of Linear, are closely aligned with distributed Quantum Computing. Having decided I need a distributed Website with quantum entanglement. For my Web shop to sell "Federation Crypto currency" I find this post very instructive, towards that goal. " It is possible to use so little javascript and css that it makes more sense to inline it. SSR enables this " Does this imply compile time goals of browser are potentially compatible with a Distributed Quantum Computing Website? a local-first approach

It's starting to feel to me that a lot of tech is just converging on other platforms solutions. This for example sounds incredibly similar to how a mobile app works (on the surface). Of course it goes the other way too, with mobile tech taking declarative UIs from the Web.

Some problem on the site. Too much traffic?

    Secure Connection Failed
    An error occurred during a connection to bytemash.net. PR_END_OF_FILE_ERROR
    Error code: PR_END_OF_FILE_ERROR

It looks like I was missing a www subdomain CNAME for the underlying github pages site. I think it's fixed now.

I still see the same error

Ok, it works, problem was probably on my end.

> For the uninitiated, Linear is a project management tool that feels impossibly fast. Click an issue, it opens instantly. Update a status and watch in a second browser, it updates almost as fast as the source. No loading states, no page refreshes - just instant interactions.

How garbage the web has become for a low-latency click action being qualified as "impossibly fast". This is ridiculous.

Hacker News comment sections are the only part of the internet that still feel "impossibly fast" to me. Even on Android, thousands of comments can scroll as fast as the OS permits, and the DOM is so simple that I've reopened day-old tabs to discover the page is still loaded. Even projects like Mastodon and Lemmy, which aren't beholden to modern web standards, have shifted to significant client-side scripting that lacks the finesse to appear performant.

The modern web browser trick of "you haven't looked at this tab in an hour, so we killed/unloaded it" is infuriating.

To be fair... Lots of people just never close their tabs. So there's very real resource limitations. I've seen my partner's phone with a few hundred tabs open.

I would like this feature if I had more control over it. The worst part is when clicking a tab that was unloaded which makes a new (fresh) web request when I don't want it to

In firefox it's possible to disable it: https://firefox-source-docs.mozilla.org/browser/tabunloader/ . Enabled is probably the reasonable default for it though.

Particularly if you have maybe 40 tabs open and 128GB of ram.

Back in 2018 I worked for a client that required we used Jira. It was so slow that the project manager set everything up in Excel during our planning meetings. After the meeting she would manually transfer it to Jira. She spent most of her time doing this. Each click in the interface took multiple seconds to respond, so it was impossible to get into a flow.

Hm. While I'm not even remotely excited by Jira (or any other PM software), I've never noticed it being that bad. Annoying? Absolutely! But not that painfully slow.

Were some extras installed? Or is this one of those tools that needs a highly performant network?

The problem with Jira (and other tools) is it inevitably gets too many customizations: extra fields, plugins, mandatory workflows, etc. Instead of becoming a tool to manage work, it starts getting in the way of real work and becomes work itself.

I've seen on-prem Jira at large companies get that slow. I'm not sure if it's the plugins or just the company being stingy with hardware.

Yeah it’s probably both. Underfunded IT department, probably one or two people who aren’t allowed to say no.

I can easily believe either, but I am still curious what the failure mode(s) is (/are).

Underconfigured hardware and old installations neglected are the ones I've encountered.

Large numbers of custom workflows and rules can do it, too, but most have been the first.

> I've never noticed it being that bad. Annoying? Absolutely! But not that painfully slow.

I have only seen a few self hosted jira, but all of those were mind numbingly slow.

Jira Cloud, on the other hand, is faster now than in 2018 from what I remember. I still call it painful any time I am trying to be quick about something; most of the time, though, it is only annoying.

It is faster than it was back then - I've been using it for 10+ years. Hating every moment of it. But it is definitely better than it was.

At this point I think I would try to automate this pointless time sink with a script and jira API.

100%. Their API isn't even bad. I made a script to pull lots of statistics and stuff from Jira.

Looking at software development today, it is as if the pioneers failed to pass the torch on to the next generation of developers.

While I see strict safety/reliability/maintainability concerns as a net positive for the ecosystem, I also find that we are dragged down by deprecated concepts at every step of our way.

There's an ever-growing disconnect. On one side we have what hardware offers for achieving top performance, be it specialized instruction sets or a completely different type of chip, such as TPUs and the like. On the other side live the denizens of the peak of software architecture, to whom all of it sounds like wizard talk. Time and time again, what is lauded as convention over configuration ironically becomes the very maintenance nightmare it tries to solve, as these conventions come with configurations for systems that do not actually exist. All the while, these conventions breed an incompetent generation of people who are not capable of understanding the underlying contracts and constraints within systems, myself included. It became clear that, for example, there isn't much sense in learning a SQL engine's specifics when your job forces you to use Hibernate, which puts a lot of intellectual strain into following OOP, a movement characterized by deliberately departing from performance in favor of being more intuitive, at least in theory.

As limited as my years of experience are, I can't help but feel complacent in the status quo unless I take deliberate action to continuously deepen my knowledge and work on my social skills, to gain whatever agency and proficiency I can get my hands on.

People forget how hostile and small the old Internet felt at times.

Developers of the past weren't afraid to tell a noob (remember that term?) to go read a few books before joining the adults at the table.

Nowadays it seems like devs have swung the other way and are much friendlier to newbs (remember that distinction marking a shift?).

Stockholm syndrome

A web request to a data center even with a very fast backend server will struggle to beat 8ms (120hz display) or even 16ms (60hz display), the budget for next frame painting a navigation. You need to have the data local to the device and ideally already in memory to hit 8ms navigation.

This is not the point, or other numbers matter more than yours.

In 2005 we wrote entire games for browsers without any frontend framework (jQuery wasn't invented yet) and managed to generate responses in under 80 ms in PHP. Most users had their first bytes in 200 ms and it felt instant to them, because browsers are incredibly fast, when treated right.

So the Internet was indeed much faster then, as opposed to now. Just look at GitHub. They used to be fast. Now they rewrite their frontend in react and it feels sluggish and slow.

> Now they rewrite their frontend in react and it feels sluggish and slow.

I find this is a common sentiment, but is there any evidence that React itself is actually the culprit of GH's supposed slowdown? GH has updated its architecture many times over and its scale has increased by orders of magnitude, quite literally serving up over a billion git repos.

Not to mention that the implementation details of any React application can make or break its performance.

Modern web tech often becomes a scapegoat, but the web today enables experiences that were simply impossible in the pre-framework era. Whatever frustrations we have with GitHub’s UI, they don’t automatically indict the tools it’s built with.

It's more of a "holding it wrong" situation with the datastores used with React, rather than directly with React itself, with updated data being accessed too high in the tree and causing large chunks of the page to be unnecessarily rerendered.

This was actually the recommended way to do it for years with the atom/molecule/organism/section/page style of organizing React components intentionally moving data access up the tree into organism and higher. Don't know what current recommendations are.

I don't see how GH's backend serving a billion repos would affect the speed of their frontend javascript. React is well known to be slow, but if you need numbers, you can look at the js-framework-benchmark and see how many React results are orange and red.

https://github.com/krausest/js-framework-benchmark

Sure, React has overhead. No one disputes that. But pointing to a few red squares on a synthetic benchmark doesn’t explain the actual user experience on GitHub today. Their entire stack has evolved, and any number of architectural choices along the way could impact perceived performance.

Used properly, React’s overhead isn’t significant enough on its own to cause noticeable latency.

> Now they rewrite their frontend in react and it feels sluggish and slow.

And decided to drop legacy features such as <a> tags and broke browser navigation in their new code viewer. Right click on a file to open in a new tab doesn’t work.

Unless you are running some really complicated globally distributed backend your roundtrip will always be higher than 80ms for all users outside your immediate geographical area. And the techniques to "fix" this usually only mitigate the problem in read-scenarios.

The techniques Linear uses are not so much about backend performance and can be applicable to any client-server setup, really. It's not a JS/web-specific problem.

My take is that a performant backend gets you so much runway that you can reduce a lot of complexity in the frontend. And yes, sometimes that means having globally distributed databases.

But the industry is going the other way: building frontends that try to hide slow backends, and in doing so handling so much state (and visual fluff) that they get fatter and slower every day.

This is an absolutely bonkers tradeoff to me. Globally distributed databases are either 1. a very complex infrastructure problem (especially if you need multiple writable databases), or 2. lock you into a vendor's proprietary solution (like Cloudflare D1).

All to avoid writing a bit of JavaScript.

> Unless you are running some really complicated globally distributed backend your roundtrip will always be higher than 80ms for all users outside your immediate geographical area.

The bottleneck is not the roundtrip time. It is the bloated and inefficient frontend frameworks, and the insane architectures built around them.

Here's the creator of Datastar demonstrating a WebGL app being updated at 144FPS from the server: https://www.youtube.com/watch?v=0K71AyAF6E4&t=848

This is not magic. It's using standard web technologies (SSE), and a fast and efficient event processing system (NATS), all in a fraction of the size and complexity of modern web frameworks and stacks.

Sure, we can say that this is an ideal scenario, that the server is geographically close and that we can't escape the rules of physics, but there's a world of difference between a web UI updating at even 200ms, and the abysmal state of most modern web apps. The UX can be vastly improved by addressing the source of the bottleneck, starting by rethinking how web apps are built and deployed from first principles, which is what Datastar does.
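This isn't Datastar's internals, just a hedged sketch of the underlying mechanism it builds on: the browser's built-in EventSource, with the server pushing ready-to-render fragments (the endpoint name and payload shape are made up):

    // Subscribe to server-sent events and patch the DOM with whatever the
    // server pushes; no client-side store, router or diffing layer involved.
    const stream = new EventSource("/updates");

    stream.addEventListener("patch", (event) => {
      const { target, html } = JSON.parse((event as MessageEvent<string>).data) as {
        target: string;
        html: string;
      };
      const el = document.getElementById(target);
      if (el) el.innerHTML = html;
    });

    stream.onerror = () => {
      // EventSource reconnects automatically; this is just for visibility.
      console.warn("SSE connection dropped, the browser will retry");
    };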

To see this first hand try this website if you're in Europe (maybe it's also fast in the US, not sure):

https://www.jpro.one/?

The entire thing is a JavaFX app (i.e. desktop app), streaming DOM diffs to the browser to render its UI. Every click is processed server side (scrolling is client side). Yet it's actually one of the faster websites out there, at least for me. It looks and feels like a really fast and modern website, and the only time you know it's not the same thing is if you go offline or have bad connectivity.

If you have enough knowledge to efficiently use your database, like by using pipelining and stored procedures with DB enforced security, you can even let users run the whole GUI locally if they want to, and just have it do the underlying queries over the internet. So you get the best of both worlds.

There was a discussion yesterday on HN about the DOM and how it'd be possible to do better, but the blog post didn't propose anything concrete beyond simplifying and splitting layout out from styling in CSS. The nice thing about JavaFX is it's basically that post-DOM vision. You get a "DOM" of scene graph nodes that correspond to real UI elements you care about instead of a pile of divs, it's reactive in the Vue sense (you can bind any attribute to a lazily computed reactive expression or collection), it has CSS but a simplified version that fixes a lot of the problems with web CSS and so on and so forth.

At least for me this site is completely broken on mobile. I'm not saying it's not possible to write sites for mobile using this tech... But it's not a great advert at all.

I haven't tried it on mobile. That could be, but my point was limited to latency and programming model.

You can use JavaFX to make mobile apps. So it's likely just that the authors haven't bothered to do a mobile friendly version.

Hardly a surprise, given that:

> The entire thing is a JavaFX app (i.e. desktop app)

Besides, this discussion is not about whether or not a site is mobile-friendly.

> Every click is processed server side

On this site, every mouse move and scroll is sent to the server. This is an incredibly chatty site--like, way more than it needs to be to accomplish this. Check the websocket messages in Dev Tools and wave the mouse around. I suspect that can be improved to avoid constantly transmitting data while the user is reading. If/when mobile is supported, this behavior will be murder for battery life.

>the only time you know it's not the same thing is if you go offline or have bad connectivity.

So, like most of the non-first world? Hell, I'm in a smaller town/village next to my capital city for a month and internet connection is unreliable.

Having said that, the website was usable for me - I wouldn't say it's noticeably fast, but it was not slow either.

I feel like it depends a lot on what kind of website you're using. Note taking app? Definitely should work offline. CRUD interface? You already need to be constantly online, since every operation needs to talk to the server.

I'm not impressed. On mobile, the docs are completely broken and unreadable. Visiting a different docs subpage breaks the back button.

Firefox mobile seems to think the entire page is a link. This means I can't highlight text for instance.

Clicking on things feels sluggish. The responses are fast, but still perceptible. Do we really need a delay for opening a hamburger menu?

> Unless you are running some really complicated globally distributed backend your roundtrip will always be higher than 80ms for all users outside your immediate geographical area.

Many of us don't have to worry about this. My entire country is within 25ms RTT of an in-country server. I can include a dozen more countries within an 80ms RTT. Lots of businesses focus just on their country and that's profitable enough, so for them they never have to think about higher RTTs.

If you put your server e.g. in Czechia you can provide ~20ms latency for the whole of Europe :)

Any providers in Czechia you'd recommend? It's not a market I know.

React isn't the problem. You can write a very fast interface in React. It's (usually) too many calls to the backend that slow everything to a crawl.

Actually, if you live near a city the edge network is a 6ms RTT ping away, that's 3ms each direction. So if, e.g., a virtual scroll frontend is windowing over a server array retained in memory, you can get there and back over a websocket, inclusive of the windowing, streaming records in and out of the DOM at the edges of the viewport, and paint the frame, all within the 8ms 120Hz frame budget, with the device idle and only the visible resultset in client memory. That's 120Hz network. Even if you don't live near a city, you can probably still hit 60Hz. It is not 2005 anymore. We have massively multiplayer video games, competitive multiplayer shooters, and can render them in the cloud now. Linear is office software, it is not e-sports; we're not running it on the subway or in Africa. And AI happens in the cloud; Linear's website lead text is about agents.
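For what it's worth, the windowing idea itself is simple; here's a rough sketch (message shapes and row height are invented, and this is not Linear's or anyone else's actual implementation):

    // Keep only the visible slice of a server-held array in the DOM, fetching
    // windows over a WebSocket as the user scrolls.
    type Row = { id: string; html: string };

    const ROW_HEIGHT = 32; // assumed fixed row height
    const ws = new WebSocket("wss://example.invalid/window");
    const viewport = document.getElementById("viewport")!; // scroll container
    const list = document.getElementById("list")!;

    function requestWindow(firstIndex: number, count: number) {
      ws.send(JSON.stringify({ type: "window", firstIndex, count }));
    }

    ws.onmessage = (event) => {
      const { rows } = JSON.parse(event.data) as { rows: Row[] };
      // Off-screen rows never reach the client; replace the visible slice.
      list.innerHTML = rows.map((r) => `<li data-id="${r.id}">${r.html}</li>`).join("");
    };

    viewport.addEventListener("scroll", () => {
      const first = Math.floor(viewport.scrollTop / ROW_HEIGHT);
      const visible = Math.ceil(viewport.clientHeight / ROW_HEIGHT);
      requestWindow(first, visible);
    });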

Those are theoretical numbers for a small elite. Real world numbers for most of the planet are orders of magnitude worse.

Those are my actual numbers from my house in the Philadelphia suburbs right now, 80 miles away from the EWR data center outside NYC. Feel free to double them; you're still inside the 60Hz frame budget with better-than-e-sports latency.

edit: I am 80 miles from EWR not 200

Like they said, for a small elite. If you don't see yourself as such, adjust your view.

what is your ping to fly.io right now?

90ms for me. My fiber connection is excellent and there is no jitter--fly.io's nearest POP is just far away. You mentioned game streaming so I'll mention that GeForce Now's nearest data center is 30ms away (which is actually fine). Who is getting 6ms RTT to a data center from their house, even in the USA?

More relevantly... who wants to architect a web app to have tight latency requirements like this, when you could simply not do that? GeForce Now does it because there's no other way. As a web developer you have options.

Who said anything about designing for tight latency requirements? My argument is that, for Linear's market - programmers and tech workers either in an office or working remotely near a city - the latency requirements are not tight at all relative to the baseline capacity. We live on Zoom! I have little patience for someone whose 400ms jitter is breaking up the Zoom call; I had better ping than that on AOL in 1999. If you want to have a tech career you need good internet, and AI has just cemented this. I have cross-Atlantic Zoom calls with my team in Europe every day without perceptible lag or latency. We laugh, we joke, we crosstalk, all with realtime body language. If SF utilities have decayed to the point where you can't get fast internet living 20 miles from the backbone, then the jobs are going overseas. Eastern Europe has lower ping to Philly than jitter guy has to the edge. And people in this thread are lecturing me about privilege!

[deleted]

Mine's 167-481ms (high jitter). It's the best internet I can get right now, a few suburbs south of San Francisco. Comcast was okayish, lower mean latency, but it had enough other problems that T-Mobile home internet was a small improvement.

update, Friday evening now and my RTT to EWR is now 8.5ms (4.2 each way), up from 6 point something this morning.

From Philadelphia suburbs to my actual Fly app in:

EWR 8.5ms (NYC)

SJC 75ms (California)

CDG 86ms (France, cross atlantic)

GRU 126.2ms (Brazil)

HKG 225.3ms (Hong Kong)

Now try from Idaho or Botswana

I can't help but feel this is missing the point. Ideally, next-refresh click latency is a fantastic goal; we're just not even close to that.

For me, on the web today, the click feedback for a large website like YouTube is 2 seconds for first change and 4 seconds for content display. 4000 milliseconds. I'm not even on some bad connection in Africa. This is a gigabit connection with 12ms of latency according to fast.com.

If you can bring that down to even 200ms, that'll feel comparatively instantaneous for me. When the whole internet feels like that, we can talk about taking it to 16ms.

What does web request latency have to do with it? News articles or simple forms take 5 seconds to load. Why? This is not bounded by latency.

I also winced at "impossibly fast" and realize that it must refer to some technical perspective that is lost on most users. I'm not a front end dev, I use linear, I'd say I didn't notice speed, it seems to work about the same as any other web app. I don't doubt it's got cool optimizations, but I think they're lost on most people that use it. (I don't mean to say optimization isn't cool)

> I'd say I didn't notice speed, it seems to work about the same as any other web app. I don't doubt it's got cool optimizations, but I think they're lost on most people that use it.

We almost forgot that's the point. Speed is good design: the absence of something being in the way. You notice a janky cross-platform app, a bad Electron implementation, or SharePoint, because of how much speed has been taken away instead of how much has been preserved.

It's not the whole of good design though, just a pretty fundamental part.

Sports cars can go fast even though they totally don't need to, their owners aren't necessarily taking them to the track, but if they step on it, they go, it's power.

Second this. I use Linear as well and I didn't notice anything close to "impossibly fast"; it's faster than Jira for sure, but nothing spectacular.

If you get used to Jira, especially Ubisoft's internally hosted Jira (which ran on an oversubscribed 10-year-old server that was constantly thrashing and was hosted half a world away)... well, it's easy for things to feel "impossibly fast".

In fact, at the Better Software Conference this year there were people discussing the fact that if you care about performance, people think your software didn't actually do the work, because they're not used to useful things being snappy.

Linear is actually so slow for me that I dread having to go into it and do stuff. I don’t care if the ticket takes 500ms to load, just give me the ticket and not a fake blinking cursor for 10 seconds or random refreshes while it (slowly) tries to re-sync.

Everything I read about Linear screams over-engineering to me. It is just a ticket tracker, and a rather painful one to use at that.

This seems to be endemic to the space though, e.g. Asana tried to invent their own language at one point.

Yeah, their startup times aren't great. They're making a trade-off by loading a ton of data up front, though to be fair a lot of the local-first web tooling didn't really exist when they were founded - the nascent Zero Sync framework's example project is literally a Linear clone that they use as their actual bug tracker; it loads way faster and has similarly snappy performance, so it seems clear that it can be done better.

That said at this point Linear has more strengths than just interaction speed, mainly around well thought out integrations.

Maybe it doesn't scale well then? I synced my Linear with GitHub. It has a few thousand issues. Lightning fast. Perhaps you guys have way more issues?

I hate to be a hacker news poster who responds to a positive post with negativity, but I was also surprised at the praise in the article.

I don't find Linear to be all that quick, but apparently macOS thinks it's a resource hog (or it has memory leaks). I leave Linear open and it perpetually has a banner that tells me it was killed and restarted because it was using too much memory. That likely colors my experience.

Trite remark. The author was referring to behaviour that has nothing to do with “how the web has become.”

It is specifically to do with behaviour that is enabled by using shared resources (like IndexedDB across multiple tabs), which is not simple HTML.

To do something similar over the network, you have until the next frame deadline. That's 8-16ms of RTT budget; in the 8ms case, 4ms out and 4ms back, with 0ms left for processing. Good luck!

Funny how reasonable performance is now treated as some impossible lost art on the web sometimes.

I posted a little clip [1] of development on a multiplayer IDE for tasks/notes (local-first+e2ee), and a lot of people asked if it was native, rust, GPU rendered or similar. But it's just web tech.

The only "secret ingredients" here are using plain ES6 (no frameworks/libs), having data local-first with background sync, and using a worker for off-UI-thread tasks. Fast web apps are totally doable on the modern web, and sync engines are a big part of it.

[1] https://x.com/wcools/status/1900188438755733857
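A hedged sketch of the worker part, using only platform APIs (file names and the sync function are hypothetical, not from the linked project):

    // main.ts - keep the UI thread free; heavy sync/indexing runs in a worker.
    const worker = new Worker(new URL("./sync-worker.ts", import.meta.url), { type: "module" });

    worker.postMessage({ type: "sync" });
    worker.onmessage = (event) => {
      if (event.data.type === "synced") {
        // Only a cheap DOM update happens on the UI thread.
        document.getElementById("status")!.textContent = `Synced ${event.data.count} items`;
      }
    };

    // sync-worker.ts - diffing, crypto, indexing etc. stay off the UI thread.
    async function pullAndMergeChanges(): Promise<number> {
      // hypothetical: fetch remote ops, merge into local storage, return a count
      return 0;
    }

    self.onmessage = async (event: MessageEvent) => {
      if (event.data.type === "sync") {
        postMessage({ type: "synced", count: await pullAndMergeChanges() });
      }
    };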

I was also surprised to read this, because Linear has always felt a little sluggish to me.

I just profiled it to double-check. On an M4 MacBook Pro, clicking between the "Inbox" and "My issues" tabs takes about 100ms to 150ms. Opening an issue, or navigating from an issue back to the list of issues, takes about 80ms. Each navigation includes one function call which blocks the main thread for 50ms - perhaps a React rendering function?

Linear has done very good work to optimise away network activity, but their performance bottleneck has now moved elsewhere. They've already made impressive improvements over the status quo (about 500ms to 1500ms for most dynamic content), so it would be great to see them close that last gap and achieve single-frame responsiveness.
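For anyone who wants to reproduce that kind of measurement, the Long Tasks API reports main-thread blocks over 50ms; a minimal sketch (Chromium-based browsers only, as far as I know):

    // Log every main-thread block longer than 50 ms, e.g. while clicking
    // between views, to see where the frame budget goes.
    const observer = new PerformanceObserver((entries) => {
      for (const entry of entries.getEntries()) {
        console.log(`long task: ${Math.round(entry.duration)} ms at t=${Math.round(entry.startTime)} ms`);
      }
    });
    observer.observe({ entryTypes: ["longtask"] });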

150ms is sluggish? 4000ms is normal?

The comments are absolutely wild in here with respect to expectations.

150 ms is definitely on the “not instantaneous” side: https://ux.stackexchange.com/a/42688

The stated 500 ms to 1500 ms are unfortunately quite frequent in practice.

Interesting fact: the 50ms to 100ms grace period only works at the very beginning of a user interaction. You get that grace period when the user clicks a button, but when they're typing in text, continually scrolling, clicking to interrupt an animation, or moving the mouse to trigger a hover event, it's better to provide a next-frame response.

This means that it's safe for background work to block a web browser's main thread for up to 50ms, as long as you use CSS for all of your animations and hover effects, and stop launching new background tasks while the user is interacting with the document. https://web.dev/articles/optimize-long-tasks
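A hedged sketch of that guidance: chunk background work, and yield whenever input is pending or the 50ms budget is spent (isInputPending is Chromium-only, hence the feature check):

    // Run queued background jobs without hogging the main thread while the
    // user is interacting.
    async function runInChunks(jobs: Array<() => void>, budgetMs = 50): Promise<void> {
      let deadline = performance.now() + budgetMs;
      for (const job of jobs) {
        job();
        const inputPending: boolean =
          (navigator as any).scheduling?.isInputPending?.() ?? false;
        if (inputPending || performance.now() > deadline) {
          // Yield so the browser can handle input and paint the next frame.
          await new Promise((resolve) => setTimeout(resolve, 0));
          deadline = performance.now() + budgetMs;
        }
      }
    }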

I think under 400ms is fast enough for loading a new page or dialog. For loading search suggestions or opening a date picker or similar, probably not.

Web applications have become too big and heavy. Corps want to control everything. A simple example would be a simple note-taking app which apparently also has to sync across devices. They are going to store every note you take on their servers, and who knows if they really delete your deleted notes. They'll also track how often you visited your notes, for whatever reason. It wouldn't surprise me if the app also required geolocation and stuff like that. Mix that with lots of users and you will have loading times unheard of with small-scale apps. Web apps should scale down, but like with everything we need more, more, more, bigger, better, faster.

> a simple note-taking app which apparently also has to sync across devices

that is the entire point of the app, surely! whether or not the actual implementation is bad, syncing across devices is what users want in a note taking app for the most part.

I only take notes on one device. If I can take them on my phone, I can pull out my phone and look at them while using other devices.

It is definitely ridiculous. And it's not just a nitpick, either: it's ludicrous how sloooow and laggy typing text in a monstrosity like Jira is, or just reading through an average news site. Makes everything feel like a slog.

Indeed. I have been using it for 5-6 months in a new job and I didn't notice it being faster than the typical web app.

If anything it is slow because it is a pain to navigate. I have browser bookmarks for my most frequented pages.

One of my day-to-day responsibilities involves using a portal tied to MSFT Dynamics on the back end, and it is the laggiest and most terrible experience ever. We used to have Java apps that ran locally, then moved to this in the name of cloud migration, and it feels like it was designed by someone whose product knowledge was limited to the first 2/5 lessons of a free Coursera (RIP) module.

Since it's so easy, I'm rooting for you to make some millions with performant replacements for other business tools. Should be a piece of cake.

I don't know if 'the web' in general is fair, here the obvious comparison is Jira, which is dog slow & clunky.

I'm a big fan of local-first. InstantDB has productized it – worth looking into if you're interested in taking a local-first approach.

Is this technical architecture so different from Meteor back in the day? Just curious for those who have a deeper understanding.

[deleted]

I don't get it. You still have to sync the state one way or another, network latency is still there.

Me neither. Considering we are talking about collaborative network applications, you are losing the single source of truth (the server database) with the local-first approach. And it just adds so much more complexity. Also, as your app grows, you probably end up implementing the business logic twice: on the server and locally. I really do not get it.

You can use the same business logic code on both the client and server.
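A minimal sketch of what sharing that logic can look like in a TypeScript codebase (module and function names are illustrative, not from Linear or any particular library):

    // shared/issue-rules.ts - imported by both the browser client and the server.
    export type IssueDraft = { title: string; estimate: number };

    export function validateIssue(draft: IssueDraft): string[] {
      const errors: string[] = [];
      if (draft.title.trim().length === 0) errors.push("title is required");
      if (draft.estimate < 0) errors.push("estimate must be non-negative");
      return errors;
    }

    // Client: call validateIssue() before applying the optimistic local write.
    // Server: call the exact same function again before persisting, since the
    // client may be on an older version or simply not be trusted.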

With the Linear approach, the server remains the source of truth.

It’s difficult to ensure that it’s always the same logic when client and server software versions can get out of sync. You can try to force the client to reload whenever the server redeploys, but realistically you’ll probably end up dealing with cases where the client and server logic diverge.

I mean, that's life with client computers: you make stuff backwards compatible and you avoid making it too brittle.

It's not an insurmountable problem, but it's a problem that you don't have if you execute your business logic only on the server.

The latency is off the critical path with local first. You sync changes over the network sure, but your local mutations are stored directly and immediately in a local DB.
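A rough sketch of that split, assuming IndexedDB via the idb helper library and a hypothetical /sync endpoint:

    import { openDB } from "idb";

    const db = await openDB("tasks-db", 1, {
      upgrade(d) {
        d.createObjectStore("tasks", { keyPath: "id" });
        d.createObjectStore("outbox", { keyPath: "id" });
      },
    });

    // Critical path: write locally and return; the UI can re-render immediately.
    export async function updateTask(task: { id: string; title: string }) {
      await db.put("tasks", task);
      await db.put("outbox", { ...task, queuedAt: Date.now() });
    }

    // Off the critical path: flush queued mutations whenever the network allows.
    async function flushOutbox() {
      const pending = await db.getAll("outbox");
      if (pending.length === 0) return;
      await fetch("/sync", { method: "POST", body: JSON.stringify(pending) }); // hypothetical endpoint
      await Promise.all(pending.map((p) => db.delete("outbox", p.id)));
    }

    setInterval(flushOutbox, 5_000);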

But the user gets instant results

If you want to work on Linear's sync infrastructure or product – we're hiring. The day-to-day DX is incredible.

You should put pay bands on job listings to save everyone time and sanity.

I'm curious what the WLB is like?

[flagged]

It's a developer writing about a tool they like. If you'd call word of mouth an "ad" then I guess it's one.

Why do you think marketing is not sophisticated?

Here I’ll offer my services. I’ll pretend to do a technical deep dive of your app for X amount. No one will know, I’ll just act super interested.

When the fuck did anyone ever go “omg this web app so impressive”, never, ever, never, ever.

It’s a choice to always see the worst in everything.

Many blog post submissions here are someone diving into something they like, hardware, software, tool etc. and it’s just because people like to share.

Yep, but LLMs being used as aid in writing blog posts is a relatively new phenomenon.

I'm on the fence. IMO ivape offers a hypothesis but presents it as fact. That isn't a good starting point for a discussion (though it's a common mistake), but it doesn't prove they are wrong either. Btw, I believe the HN guidelines encourage you to take the positive angle, at least for comments.

As for the topic at hand, local-first means you end up with a cache, either in memory or on disk. If you've got the RAM and NVMe, you might as well use it for performance. Back in the day not much could be cached, but your connection was often too lousy or not 24/7. So you ended up with software distribution via 3.5-inch floppies or CD-ROM. Larger distributions used a gigantic disk cache, either centralized (Usenet) or distributed (BitTorrent). But the 'you might as well use it' issue is that it introduces sloppiness. If you develop with huge constraints, you are disciplined into either being deterred from starting, failing, or succeeding efficiently. We hardly ever hear about all the deterrence and failing.

[flagged]

Well, perhaps it's an AI writing about a tool...

> No API routes. No request/response cycles. No DTOs. Just… objects that magically sync. It kind of feels like cheating.

That and the paragraph above:

> What makes this powerful is that these aren’t just type definitions - they’re live, reactive objects that sync automatically.

Is what twigged my AI radar too. LLMs seem to really love that summarisation pattern of `{X is/isn’t just Y. Pithy concluding remark}`

Fair enough, I thought what I'd originally written for that section was too wordy, so I asked Claude to rewrite it. I'll go a bit lighter on the AI editing next time. Here's most of the original with the code examples omitted:

Watching Tuomas' initial talk about Linear's realtime sync, one of the most appealing aspects of their design was the reactive object graph they developed. They've essentially made it possible for frontend development to be done as if it's just operating on local state, reading/writing objects in an almost Active Record style.

The reason this is appealing is that when prototyping a new system, you typically need to write an API route or RPC operation for every new interaction your UI performs. The flow often looks like:

- Think of the API operation you want to call

- Implement that handler/controller/whatever based on your architecture/framework

- Design the request/response objects and share them between the backend/frontend code

- Potentially, write the database migration that will unlock this new feature

Jazz has the same benefit of a sync + observable object graph. Write a schema in the client code, run the Jazz sync server or use Jazz Cloud, and then just work with what feel like plain JS objects.

Thanks, I found that version much more engaging and understandable.

An ad for what? I'm not associated with any of the projects mentioned.

An ad for Linear.

If it's an ad for Linear why is so much text spent on Electric SQL, Zero, and Jazz?

[flagged]

In that worldview, it's unthinkable that you'd come and warn us about ads out of the goodness of your own heart.

Who paid you for these comments? Atlassian?

That's an interesting question. I will open a Jira ticket to schedule a meeting with the Product Team, and they will assign you several stories with acceptance criteria written by AI.

Do elaborate, u made me curious

Best kind of ad to the best kind of service (I'm not affiliated, this is not an ad) - an organic one.

The enshitification that Jira went through is by itself an ad for Linear.

[deleted]

How is this approach better than using react-query with persisted storage, periodically syncing the local storage and the server storage? Perhaps I am missing something.

That approach is precisely what the new TanStack DB does, which, if you don't know already, has the same creator as React Query. The former extends the latter's principles to syncing via ElectricSQL; the two organizations have a partnership with each other.

Yes, you're missing a lot of things. Like how to update that data when you are offline, and how to have everything sync around when others have made updates to the same data in the meantime. Among many other concerns and use cases.
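To make that concern concrete, here's a hedged toy contrast between naive "whoever synced last wins" and a per-field merge against a common base version (this is nowhere near what a real sync engine or CRDT does, just an illustration of why the problem isn't trivial):

    type Note = { id: string; title: string; body: string; updatedAt: number };

    // Naive periodic sync: the later write wins wholesale, silently dropping
    // the other device's offline edits.
    function lastWriteWins(local: Note, remote: Note): Note {
      return local.updatedAt > remote.updatedAt ? local : remote;
    }

    // Slightly better: merge per field against the last common base version,
    // so an offline title edit and a concurrent remote body edit both survive.
    function mergeAgainstBase(base: Note, local: Note, remote: Note): Note {
      const pick = <K extends keyof Note>(key: K): Note[K] =>
        local[key] !== base[key] ? local[key] : remote[key];
      return { id: base.id, title: pick("title"), body: pick("body"), updatedAt: Date.now() };
    }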