I don't doubt that large amounts of JavaScript can often cause issues, but even when cached, NextCloud feels sluggish. Looking at just the network tab on a refresh of the calendar page, it makes 124 network calls, 31 of which aren't cached. It seems to be making a call per calendar, each of which takes over 30ms. So that stacks up the more calendars you have (and you have several by default, like contact birthdays).

The JavaScript performance trace shows over 50% of the work is in making asynchronous calls to pull those calendars and other data one by one, and then in all the refresh updates this triggers as the results land on the page.

Supporting all these N calendar calls are individual pulls for calendar rooms, calendar resources, and the user's "principals": all separate network calls, some of which must be gating the later per-calendar calls.

It's not just that: it also makes calls for notifications, groups, user status, and multiple heartbeats to complete the page, all before it even tries to get the calendar details.

This is why I think it feels slow: it pulls down the page, then the JavaScript pulls down all the bits of data for everything on the screen with individual calls, in many cases waiting for responses before it can progress to the further calls, of which there can be N many depending on what the user is doing.

So across the local network (2.5Gbps) that is a second, most of it spent waiting on the network. If I use the regular 4G level of throttling, it takes 33.10 seconds! Really goes to show how badly this design copes with extra latency.
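
The pattern being described is essentially sequential awaited fetches. A generic sketch of the difference (not Nextcloud's actual code): fetching N calendars one by one pays roughly N round trips, while firing the requests concurrently pays roughly one.

    // Sequential: each await blocks the next request, so N calendars
    // cost roughly N round trips
    async function loadSequential(urls: string[]): Promise<unknown[]> {
      const results: unknown[] = [];
      for (const url of urls) {
        results.push(await fetch(url).then((r) => r.json()));
      }
      return results;
    }

    // Concurrent: all requests go out at once, so the total wait is
    // roughly one round trip plus the slowest response
    async function loadConcurrent(urls: string[]): Promise<unknown[]> {
      return Promise.all(urls.map((url) => fetch(url).then((r) => r.json())));
    }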

I was going to say... The size of the JS only matters the first time you download it, unless there's a lot of tiny files instead of a bundle or two. What the article is complaining about doesn't seem like it's the root cause of the slowness.

When it comes to JS optimization in the browser there's usually a few great big smoking guns:

    1. Tons of tiny files: Bundle them! Big bundle > zillions of lazy-loaded files.
    2. Lots of AJAX requests: We have WebSockets for a reason!
    3. Race conditions: Fix your bugs :shrug:
    4. Too many JS-driven animations: Use CSS or JS that just manipulates CSS.
Nextcloud appears to be slow because of #2. Both #1 and #2 are dependent on round-trip times (HTTP request to server -> HTTP response to client) which are the biggest cause of slowness on mobile networks (e.g. 5G).

Modern mobile network connections have plenty of bandwidth to deliver great big files/streams but they're still super slow when it comes to round-trip times. Knowing this, it makes perfect sense that Nextcloud would be slow AF on mobile networks because it follows the REST philosophy.

My controversial take: GIVE REST A REST already! WebSockets are vastly superior and they've been around for FIFTEEN YEARS now. Do I understand why they're so much lower latency than REST calls on mobile networks? Not really: In theory, it's still a round-trip but for some reason an open connection can pass data through an order of magnitude (or more) lower latency on something like a 5G connection.

15MB of JavaScript is 15MB of code that your browser is trying to execute. It’s the same principle as “compiling a million lines of code takes a lot longer than compiling a thousand lines”.

It's a lot more complicated than that. If I have a 15MB .js file and it's just a collection of functions that get called on-demand (later), that's going to have a very, very low overhead because modern JS engines JIT compile on-the-fly (as functions get used) with optimization happening for "hot" stuff (even later).

If there's 15MB of JS that gets run immediately after page load, that's a different story. Especially if there's lots of nested calls. Ever drill down deep into a series of function calls inside the performance report for the JS on a web page? The more layers of nesting you have, the greater the overhead.

DRY as a concept is great from a code readability standpoint, but it's not ideal for performance when it comes to things like JS execution (haha). I'm actually disappointed that modern bundlers don't normally inline calls at the JS layer. IMHO, they rely too much on the JIT to optimize hot call sites when that could've been done by the bundler. Instead, bundlers tend to optimize for file size, which is becoming less and less of a concern as bandwidth has far outpaced JS bundle sizes.

The entire JS ecosystem is a giant mess of "tiny package does one thing well" that is dependent on n layers of "other tiny package does one thing well." This results in LOADS of unnecessary nesting when the "tiny package that does one thing well" could've just written their own implementation of that simple thing it relies on.

Don't think of it from the perspective of, "tree shaking is supposed to take care of that." Think of it from the perspective of, "tree shaking is only going to remove dead/duplicated code to save file size." It's not going to take that 10-line function that deals with <whatever> and put its logic right where it's used (in order to shorten the call tree).
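
To make that concrete, here's a hand-written illustration of the kind of inlining I mean (the clamp helper is made up):

    const value = 300;

    // Before: a tiny shared helper; every use adds a call frame
    function clamp(n: number, lo: number, hi: number): number {
      return Math.min(Math.max(n, lo), hi);
    }
    const red = clamp(value, 0, 255);

    // After hypothetical bundler-level inlining: the same logic pasted
    // at the call site, shortening the call tree the profiler sees
    const redInlined = Math.min(Math.max(value, 0), 255);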

That 15MB still needs to be parsed on every page load, even if it runs in interpreted mode. And on low-end devices there's very little cache, so the working set is likely to be far bigger than the available cache, which causes performance to crater.

Ah, that's the thing: "on page load". A one-time expense! If you're using modern page routing, "loading a new URL" isn't actually loading a new page... The client is just simulating it via your router/framework by updating the page URL and adding an entry to the history.
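
For anyone unfamiliar, a minimal sketch of what such a router does under the hood (renderRoute is a stand-in for whatever your framework does to swap page content):

    function renderRoute(path: string): void {
      // no full page load happens here, just a DOM update
      document.querySelector("#app")!.textContent = `Now showing ${path}`;
    }

    function navigate(path: string): void {
      history.pushState({}, "", path); // update URL + add a history entry
      renderRoute(path);
    }

    // Back/forward buttons fire popstate instead of reloading the page
    window.addEventListener("popstate", () => renderRoute(location.pathname));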

Also, 15MB of JS is nothing on modern "low end devices". Even an old, $5 Raspberry Pi 2 won't flinch at that and anything slower than that... isn't my problem! Haha =)

There comes a point where supporting 10yo devices isn't worth it when what you're offering/"selling" is the latest & greatest technology.

It shouldn't be, "this is why we can't have nice things!" It should be, "this is why YOU can't have nice things!"

When you write code with this mentality it makes my modern CPU with 16 cores at 4GHz and 64GB of RAM feel like a Pentium 3 running at 900MHz with 512MB of RAM.

Please don't.

THANK YOU

This really is a very wrong take. My iPhone 11 isn't that old, but it struggles to render some websites that are Chrome-optimised. Heck, even my M1 Air has a hard time sometimes. It's almost 2026; we can certainly stop blaming the client for our shitty web development practices.

>There comes a point where supporting 10yo devices isn't worth it

Ten years isn't what it used to be in terms of hardware performance. Hell, even back in 2015 you could probably still make do with a computer from 2005 (although it might have been on its last legs). If your software doesn't run properly (or at all) on ten-year-old hardware, it's likely people on five-year-old hardware, or with a lower budget, are getting a pretty shitty experience.

I'll agree that resources are finite and there's a point beyond which further optimizations are not worthwhile from a business sense, but where that point lies should be considered carefully, not picked arbitrarily and the consequences casually handwaved with an "eh, not my problem".

>Do I understand why they're so much lower latency than REST calls on mobile networks? Not really: In theory, it's still a round-trip but for some reason an open connection can pass data through an order of magnitude (or more) lower latency on something like a 5G connection.

It's because a TLS handshake takes more than one roundtrip to complete. To put rough numbers on it: at a 100ms mobile RTT, a fresh HTTPS request pays about one RTT for the TCP handshake plus one or two RTTs for TLS (1.3 vs 1.2) before the request itself even goes out, so 200-300ms of pure setup. Keeping the connection open means the handshake needs to be done only once, instead of over and over again.

doesn’t HTTP keep connections open?

It's up to the client to do that. I'm merely explaining why someone would see a latency improvement switching from HTTPS to websockets. If there's no latency improvement then yes, the client is keeping the connection alive between requests.

Yes and no: There's still a rather large latency improvement even when you're using plain HTTP (not that you should go without encryption).

I was very curious so I asked AI to explain why websockets would have such lower latency than regular HTTP and it gave some (uncited, but logical) reasons:

Once a WebSocket is open, each message avoids several sources of delay that an HTTP request can hit—especially on mobile. The big wins are skipping connection setup and radio wakeups, not shaving a few header bytes.

Why WebSocket “ping/pong” often beats HTTP GET /ping on mobile

    1. No connection setup on the hot path
        HTTP (worst case): DNS + TCP 3‑way handshake + TLS handshake (HTTPS) before you can send the request. On mobile RTTs (60–200+ ms), that’s 1–3 extra RTTs, i.e., 100–500+ ms just to get started.
        HTTP with keep‑alive/H2/H3: Better (no new TCP/TLS), but pools can be empty or closed by OS/radios/idle timers, so you still pay setup sometimes.
        WebSocket: You pay the TCP+TLS+Upgrade once. After that, a ping is just one round trip on an already‑open connection.


    2. Mobile radio state promotions
        Cellular modems drop to low‑power states when idle. A fresh HTTP request can force an RRC “promotion” from idle to connected, adding tens to hundreds of ms.
        A long‑lived WebSocket with periodic keepalives tends to keep the radio in a faster state or makes promotion more likely to already be done, so your message departs immediately.
        Trade‑off: keeping the radio “warm” costs battery; most realtime apps tune keepalive intervals to balance latency vs power.


    3. Fewer app/stack layers per message
        HTTP request path: request line + headers (often cookies, auth), routing/middleware, logging, etc. Even with HTTP/2 header compression, the server still parses and runs more machinery.
        WebSocket after upgrade: tiny frame parsing (client→server frames are 2‑byte header + 4‑byte mask + payload), often handled in a lightweight event loop. Much less per‑message work.
         

    4. No extra round trips from CORS preflight
        A simple GET usually avoids preflight, but if you add non‑safelisted headers (e.g., Authorization) the browser will first send an OPTIONS request. That’s an extra RTT before your GET.
        WebSocket doesn’t use CORS preflights; the Upgrade carries an Origin header that servers can validate.


    5. Warm path effects
        Persistent connections retain congestion window and NAT/firewall state, reducing first‑packet delays and occasional SYN drops that new HTTP connections can encounter on mobile networks.

What about encryption (HTTPS/WSS)?

    Handshake cost: TLS adds 1–2 RTTs (TLS 1.3 is 1‑RTT; 0‑RTT is possible but niche). If you open and close HTTP connections frequently, you keep paying this. A WebSocket pays it once, then amortizes it over many messages.
    After the connection is up, the per‑message crypto cost is small compared to network RTT; the latency advantage mainly comes from avoiding repeated handshakes.
     
How much do headers/bytes matter?

    For tiny messages, both HTTP and WS fit in one MTU. The few hundred extra bytes of HTTP headers rarely change latency meaningfully on mobile; the dominant factor is extra round trips (connection setup, preflight) and radio state.
     
When the gap narrows

    If your HTTP requests reuse an existing HTTP/2 or HTTP/3 connection, have no preflight, and the radio is already in a connected state, a minimal GET /ping and a WS ping/pong both take roughly one network RTT. In that best case, latencies can be similar.
    In real mobile conditions, the chances of hitting at least one of the slow paths above are high, so WebSocket usually looks faster and more consistent.

Wow. Talk about inefficiency. It just said the same thing I did, but using twenty times as many characters.

>Yes and no: There's still a rather large latency improvement even when you're using plain HTTP (not that you should go without encryption).

Of course. An unencrypted HTTP request takes a single roundtrip to complete. The client sends the request and receives the response. The only additional cost is to set up the connection, which is also saved when the connection is kept open with a websocket.

Yes and no. Have you considered that the problem is that a TLS handshake takes more than one round trip to complete?

/s

I've never seen anybody recommend WebSockets instead of REST. I take it this isn't a widely recommended solution? Do you mean specifically for mobile clients only?

WebSockets are the secret ingredient to amazing low- to medium-user-count software. If you practice using them enough and build a few abstractions over them, you can produce incredible “live” features that REST-designs struggle with.

Having used WebSockets a lot, I’ve realised that it’s not the simple fact that WebSockets are duplex or that it’s more efficient than using HTTP long-polling or SSEs or something else… No, the real benefit is that once you have a “socket” object in your hands, and this object lives beyond the normal “request->response” lifecycle, you realise that your users DESERVE a persistent presence on your server.

You start letting your route handlers run longer, so that you can send the result of an action, rather than telling the user to “refresh the page” with a 5-second refresh timer.

You start connecting events/pubsub messages to your users and forwarding relevant updates over the socket you already hold. (Trying to build a delta update system for polling is complicated enough that the developers of most bespoke business software I’ve seen do not go to the effort of building such things… But with WebSockets it’s easy, as you just subscribe before starting the initial DB query and send all broadcasted updates events for your set of objects on the fly.)
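
A sketch of that subscribe-before-query ordering (pubsub, db, and the rest are hypothetical stand-ins):

    // Hypothetical stand-ins for your pubsub bus and database
    declare const pubsub: { subscribe(topic: string, fn: (e: unknown) => void): void };
    declare const db: { query(sql: string): Promise<unknown[]> };

    // Subscribe first, then query: updates that arrive while the initial
    // query runs are buffered and replayed, so nothing falls in the gap
    async function watchOrders(socket: { send(msg: unknown): void }) {
      const buffered: unknown[] = [];
      let loading = true;

      pubsub.subscribe("orders", (event) => {
        if (loading) buffered.push(event); // hold until initial data is sent
        else socket.send(event);
      });

      const initial = await db.query("SELECT * FROM orders");
      socket.send({ type: "orders:init", rows: initial });

      loading = false;
      for (const event of buffered) socket.send(event); // replay the gap
    }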

You start wanting to output the progress of a route handler to the user as it happens (“Fetching payroll details…”, “Fetching timesheets…”, “Correlating timesheets and clock in/out data…”, “Making payments…”).

Suddenly, as a developer, you can get live debug log output IN THE UI as it happens. This is amazing.

AND THEN YOU WANT TO CANCEL SOMETHING because you realise you accidentally put in the actual payroll system API key. And that gets you thinking… can I add a cancel button in the UI?

Yes, you can! Just make a ‘ctx.progress()’ method. When called, if the user has cancelled the current RPC, then throw an RPCCancelled error that’s caught by the route-handling system. There’s an optional first argument for a progress message to the end user. Maybe add a “no-cancel” flag too for critical sections.
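
A rough sketch of the shape this could take (RPCCancelled and ctx.progress come from the description above; everything else is invented):

    // Thrown by ctx.progress() when the client has cancelled this RPC
    class RPCCancelled extends Error {}

    interface Ctx {
      progress(message?: string, opts?: { noCancel?: boolean }): void;
    }

    function makeCtx(
      isCancelled: () => boolean,
      sendProgress: (msg: string) => void
    ): Ctx {
      return {
        progress(message, opts) {
          if (message) sendProgress(message); // stream the status line to the UI
          if (isCancelled() && !opts?.noCancel) throw new RPCCancelled();
        },
      };
    }

    // Each progress() call doubles as a cancellation checkpoint
    async function runPayroll(ctx: Ctx): Promise<void> {
      ctx.progress("Fetching payroll details…");
      // ...fetch...
      ctx.progress("Making payments…", { noCancel: true }); // critical section
      // ...pay...
    }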

And then you think about live collaboration for a bit… that’s a fun rabbit hole to dive down. I usually just do “this is locked for editing” or check the per-document incrementing version number and say “someone else edited this before you started editing, your changes will be lost — please reload”. Figma cracked live collaboration, but it was very difficult based on what they’ve shared on their blog.

And then… one day… the big one hits… where you have a multistep process and you want Y/N confirmation from the user or some other kind of selection. The sockets are duplex! You can send a message BACK to the RPC client, and have it handled by the initiating code! You just need to make it so devs can add event listeners on the RPC call handle on the client! Then, your server-side route handler can just “await” a response! No need to break up the handler into multiple functions. No need to pack state into the DB for resumability. Just await (and make sure the Promise is rejected if the RPC is cancelled).
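
A sketch of that await-the-user idea, with a hypothetical ctx.ask() that sends a prompt over the socket and resolves with the client's reply (the route-handling system would reject the promise if the RPC gets cancelled in the meantime):

    async function deleteEverything(
      ctx: { ask(question: string): Promise<string> }
    ): Promise<string> {
      // The handler pauses mid-flow until the client answers
      const answer = await ctx.ask("Really run payroll against production? (y/n)");
      if (answer !== "y") return "Aborted";
      // ...proceed with the multistep process...
      return "Done";
    }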

If you have a very complex UI page with live-updating pieces, and you want parts of it to be filterable or searchable… This is when you add “nested RPCs”. And if the parent RPC is cancelled (because the user closes that tab, or navigates away, or such) then that RPC and all of its children RPCs are cancelled. The server-side route handler is a function closure that holds a bunch of state that can be used by any of the sub-RPC handlers (they can be added with ‘ctx.addSubMethod’ or such).

The end result is: while building out any feature of any “non-web-scale” app, you can easily add levels of polish that are simply too annoying to obtain when stuck in a REST point of view. Sure, it’s possible to do the same thing there, but you’ll get frustrated (and so development of such features will not be prioritised). Also, perf-wise, REST is good for “web scale” / high user counts, but you will hit weird latency issues if you try to use it for live, duplex comms.

WebSockets (and soon the WebTransport API over HTTP/3) are game-changing. I highly recommend trying some of these things.

Find someone to love you the way DecoPerson loves websockets.

After all my years of web development, my rules are thus:

    * If the browser has an optimal path for it, use HTTP (e.g. images where it caches them automatically or file uploads where you get a "free" progress API).
    * If I know my end users will be behind some shitty firewall that can't handle WebSockets (like we're still living in the early 2010s), use HTTP.
    * Requests will be rare (per client): Use HTTP.
    * For all else, use WebSockets.
WebSockets are just too awesome! You can use a simple event dispatcher for both the frontend and the backend to handle any given request/response and it makes the code sooooo much simpler than REST. Example:

    WSDispatcher.on("pong", pongFunc);
...and `WSDispatcher` would be the (singleton) object that holds the WebSocket connection and has `on()`, `off()`, and `dispatch()` functions. When the server sends a message like `{"type": "pong", "payload": "<some timestamp>"}`, the client calls `WSDispatcher.dispatch("pong", "<some timestamp>")` which results in `pongFunc("<some timestamp>")` being called.
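
A minimal sketch of a dispatcher along those lines (my fleshing-out of the description, not the actual code):

    type Handler = (payload: any) => void;

    class Dispatcher {
      private handlers = new Map<string, Set<Handler>>();

      constructor(private ws: WebSocket) {
        // Every incoming message is routed to its "type" listeners
        ws.onmessage = (ev) => {
          const { type, payload } = JSON.parse(ev.data);
          this.dispatch(type, payload);
        };
      }
      on(type: string, fn: Handler): void {
        if (!this.handlers.has(type)) this.handlers.set(type, new Set());
        this.handlers.get(type)!.add(fn);
      }
      off(type: string, fn: Handler): void {
        this.handlers.get(type)?.delete(fn);
      }
      dispatch(type: string, payload: any): void {
        this.handlers.get(type)?.forEach((fn) => fn(payload));
      }
      send(type: string, payload: any): void {
        this.ws.send(JSON.stringify({ type, payload }));
      }
    }

    // The singleton described above (the URL is a placeholder)
    const WSDispatcher = new Dispatcher(new WebSocket("wss://example.com/ws"));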

It makes reasoning about your API so simple and human-readable! It's also highly performant and fully async. With a bit of Promise wrapping, you can even make it behave like a synchronous call in your code which keeps the logic nice and concise.

In my latest pet project (collaborative editor) I've got the WebSocket API using a strict "call"/"call:ok" structure. Here's an example from my WEBSOCKET_API.md:

    ### Create Resource
    ```javascript
    // Create story
    send('resources:create', {
      resource_type: 'story',
      title: 'My New Story',
      content: '',
      tags: {},
      policy: {}
    });
    
    // Create chapter (child of story)
    send('resources:create', {
      resource_type: 'chapter',
      parent_id: 'story_abc123', // This would actually be a UUID
      title: 'Chapter 1'
    });
    
    // Response:
    {
      type: 'resources:create:ok', // <- Note the ":ok"
      resource: { id: '...', resource_type: '...', ... }
    }
    ```
I've got a `request()` helper that makes the async nature of the WebSocket feel more like a synchronous call. Here's what that looks like in action:

    const wsPromise = getWsService(); // Returns the WebSocket singleton
    
    // Create resource (story, chapter, or file)
    async function createResource(data: ResourcesCreateRequest) {
      loading.value = true;
      error.value = null;
      try {
        const ws = await wsPromise;
        const response = await ws.request<ResourcesCreateResponse>(
          "resources:create",
          data // <- The payload
        );
        // resources.value because it's a Vue 3 `ref()`:
        resources.value.push(response.resource); 
        return response.resource;
      } catch (err: any) {
        error.value = err?.message || "Failed to create resource";
        throw err;
      } finally {
        loading.value = false;
      }
    }
For reference, errors are returned in a different, more verbose format where "type" is "error" in the object, which the `request()` function knows how to deal with. It used to be ":err" instead of ":ok" but I made it different for a good reason I can't remember right now (LOL).
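
A guess at what a `request()` helper like that could look like on top of the dispatcher sketch above; a real one would also want per-request correlation IDs (so concurrent calls of the same type don't collide) and timeouts:

    function request<T>(type: string, payload: unknown): Promise<T> {
      return new Promise<T>((resolve, reject) => {
        const okType = `${type}:ok`;
        const onOk = (msg: T) => { cleanup(); resolve(msg); };
        const onError = (err: { message?: string }) => {
          cleanup();
          reject(new Error(err?.message ?? "request failed"));
        };
        const cleanup = () => {
          WSDispatcher.off(okType, onOk);
          WSDispatcher.off("error", onError);
        };
        WSDispatcher.on(okType, onOk);
        WSDispatcher.on("error", onError);
        WSDispatcher.send(type, payload);
      });
    }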

Aside: there are still THREE firewalls that suck so bad they can't handle WebSockets: Sophos XG Firewall, WatchGuard, and McAfee Web Gateway.

Why WebSockets? If you need to fetch 30 things, you can build an elaborate protocol to stream them in without them interfering with each other, or you can ask for all thirty at once. Plain HTTP(S) can do the latter just fine, although the API might not be quite RESTful.
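In code, "ask for all thirty at once" is just a batch endpoint (the URL and payload shape here are made up):

    // One round trip instead of 30
    async function loadCalendars(ids: string[]): Promise<unknown[]> {
      const res = await fetch("/api/calendars/batch", {
        method: "POST",
        headers: { "Content-Type": "application/json" },
        body: JSON.stringify({ ids }),
      });
      return res.json(); // e.g. an array of calendar objects
    }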

How do you feel about SSE then?

Sync Conf is next week, and this sort of issue is exactly the kind of thing I hope can just go away. https://syncconf.dev/

Efforts like Electric SQL to have APIs/protocols for bulk fetching all changes (to a "table") are where it's at. https://electric-sql.com/docs/api/http

It's so rare for teams to do data loading well, rarer still that we get effective caching, and often a product's footing here only degrades with time. The various sync ideas out there offer such alluring potential: a consistent way to get clients the updated live data they need.

Side note: I'm also hoping the JS / TC39 source phase imports proposal (aka import source) can let large apps like NextCloud defer loading more of their JS until needed too. But the waterfall you call out here seems like the real bad side (of NextCloud's architecture)! https://github.com/tc39/proposal-source-phase-imports
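
For comparison, dynamic import() already lets you defer both fetching and evaluating a module until it's actually used; as I understand the proposal, import source would go further by separating loading from evaluation. A sketch of the former ("./calendar.js" and openCalendar are hypothetical):

    document.querySelector("#open-calendar")!.addEventListener("click", async () => {
      // The module isn't fetched or evaluated until this first click
      const { openCalendar } = await import("./calendar.js");
      openCalendar();
    });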

The thing that kills me is that Nextcloud had an _amazing_ calendar a few years ago. It was way better than anything else I have used. (And I tried a lot, even the calendar add-on for Thunderbird. Which may or may not be built in these days, I can't keep track.)

Then at some point the Nextcloud calendar was "redesigned" and now it's completely terrible. Aesthetically, it looks like it was designed for toddlers. Functionally, adding and editing events is flat out painful. Trying to specify a time range for an event is weird and frustrating. It's better than not having a calendar, but only just.

There are plenty of open source calendar _servers_, but no good open source web-based calendars that I have been able to find.