I am not sure how you come to this conclusion. JS and the DOM are fast. Aside from arithmetic, JS is just as fast as Java now, and only 2-4x slower than C++. The two big limitations from a pure processing perspective are the garbage collector and massive repaints of large data on large layouts.

Could it be you forgot how fast native GUIs are?

Every WebGUI I used lacks the snappiness of old school native ones.

Native GUIs are fast, but they are not powerful. You have to specify a fixed resolution, fixed size, and more often than not calculate the layout yourself, or use very limited auto-layout features.

The web browser gives you a full package with standardized commands to control all those aspects, which is also portable between different implementations.

If your rendering needs are limited to a few buttons and a canvas at a fixed size, sure, go native GUI. But if you need to support multiple devices and resolutions, on-the-fly resizing, and layout of multiple complex screen sections, the browser is an unbeatable platform.

> You have to specify a fixed resolution, fixed size, and more often than not calculate the layout yourself, or use very limited auto-layout features.

Even Java UIs supported dynamic layouts 20 years ago.

C# has had a flexbox-like layout (called flow layout) since 2006.

Worse developer experience and worse style options though

A few years back I prototyped a b2b web app using winforms.

At that point I'd been writing react for 2 years non-stop and I hadn't used winforms for at least 5 years.

I started up visual studio, and used the GUI builder to make a fully featured data bound winforms app connected to my backend in less than a day.

2 months later I had the react version up and running.

Was the react version shinier?

Yes.

But the web development experience is literally orders of magnitude worse.

I use both svelte and WinForms at my day job. The designer works quite well initially for laying out the groundwork but after a while it becomes a burden:

- components randomly disappearing but still being there when running the program
- designer crashes
- small changes that require you to manually drag around 60% of the form, to add or remove one field

Svelte ain't perfect and it requires more scaffolding initially but you get:

- actually good data bindings and state management (in many places you would need event handlers for winforms)
- hot reload (very big win)
- the ability to do greater layout changes with a few css lines (in combination with hot reload quite pleasant to style)
- mass styling without selecting every component every time you want to change something
- native async/await integration in the ui framework

Plus the rest of the benefits (not DX oriented):

- gpu rendered instead of cpu rendered
- runs on any OS (including phones)
- advanced responsiveness via CSS

Svelte is my go to on the web as well.

For smaller apps (dozen input fields or so) the WinForms designer (which has been on life support for well over a decade now!) will get the job done better than anything else out there.

If you want GPU rendered, WPF has you covered. I strongly dislike XAML, even though I like JSX (while disliking data handling in react in general).

The thing about responsiveness is I can make 6 UIs in WinForms faster than I can fix cross platform CSS bugs.

The real issue is WinForms isn't cross platform. :(

I strongly disagree with that. WinForms is imho very easy to use.

Easy to use initially, because of the designer. But as the application scales, it becomes more and more painful: components randomly disappearing but still being there, designer crashes, small changes that require you to manually drag around 60% of the form just to add or remove one field.

Better user experience and many styling options are counterproductive for the UX.

Better user experience (UX) is counterproductive for the UX... wut

I think the poster is trying to say that too many styling options make for worse UX.

Users can get used to ugly and consistent. On the web and mobile, there is minimal consistency of what a button even looks like, or where site options are to be found, every site looks different and every company has its own style guide.

> Worse developer experience and worse style options though

But better user experience. Also many styling options are counterproductive for the UX.

Now it's more understandable what I tried to say.

Qt had what is essentially flexbox in the early 2000s.

Native apps have had to deal with dynamic sizes and layouts since the idea of resizing windows to arbitrary sizes became a thing. So, since the 1980s?

> on-the-fly resizing

Which web tech famously cannot do with any efficiency

> and layout of multiple complex screen sections

What complex screen sections? This is a complex screen section: https://x.com/dmitriid/status/1424052288205856773

The web does not (and cannot) have "complex screen sections". It doesn't have the capacity for it.

Fixed resolution and size? Manual layout calculations? What was the last native GUI toolkit you used, Win32?

Yeah... JS and the DOM are incredibly fast, but that does not mean applications written for that platform are fast. Many JS developers have absolutely no idea how these technologies work and are reliant upon several layers of abstractions that are each progressively slower than the next.

As an analogy, crypto in theory is a good idea, but it's rife with fraud because most people playing with crypto are speculators who have no idea what they are doing.

You're totally right about the layers of abstraction... still, JS and the DOM could be incredibly fast (and in some ways they are, as there are huge optimizations behind the scenes for that to work as well as it does), but they remain slower and more energy hungry than (almost) any native interface, even if you leave aside all the layers of abstraction.

> I am not sure how you come to this conclusion.

By knowing and working with more technologies than just web tech.

"The most the most powerful layouting system and rendering engine" struggles to render even a few dozen elements on screen without junk, tearing and consuming as many resources as a mid-sized game.

> JS and the DOM are fast.

DOM is slow as molasses. There's a reason why all frameworks are going to great lengths to touch DOM as little as possible. A few billion dollars of development and hundreds of thousands of man-hours have optimized it beyond any reasonable expectations, but it's still unbelievably slow because of many architectural decisions rooted back in the 90s.

It's a system designed, at its core, to render a page of text with a couple of images in one rendering pass, and no amount of hacks on top of it will make it a high-performance layout and rendering engine.
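
To illustrate the kind of thing frameworks go out of their way to avoid (a rough sketch; the selectors and sizing logic are made up): interleaving DOM reads and writes forces a synchronous reflow on every iteration, while batching them lets the browser lay the page out once.

    // Hypothetical example of layout thrashing vs. batched access.
    const rows = document.querySelectorAll('.row');   // made-up selector
    const ruler = document.querySelector('#ruler');   // made-up reference element

    // Thrashing: every style write dirties layout, and the following
    // offsetWidth read forces the browser to recalculate it synchronously.
    rows.forEach(el => {
      el.style.width = ruler.offsetWidth + 'px';
    });

    // Batched: one read, then all writes; layout is recalculated only once.
    const width = ruler.offsetWidth + 'px';
    rows.forEach(el => {
      el.style.width = width;
    });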

> Aside from arithmetic JS is just as fast as Java now

This has nothing to do with either layout or rendering

> The two big limitations from a pure processing perspective are the garbage collector

Has nothing to do with either layout or rendering

> massive repaints of large data on large layouts.

Where "large data" is a measly few thousand elements.

Massive repaints in the DOM happen basically on any and all layouts and layout changes. And these changes are triggered by almost anything that happens in the DOM.

There's a reason why the only animations you can reliably do with the DOM involve literally ripping the element out of the layout context and rendering it independently: "the most powerful layout and re-rendering engine" cannot cope with re-calculating and re-rendering the layout for the entire page when you move elements around.

Edit.

Slightly more on this:

- Rendering performance https://web.dev/articles/rendering-performance

- Linked article, Stick to Compositor-Only Properties and Manage Layer Count https://web.dev/articles/stick-to-compositor-only-properties...

From the last article: there are only two properties (transform and opacity) that can be handled by the compositor alone; all others trigger either a re-flow + re-paint or a re-paint.
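
As a rough illustration (the element and class name are hypothetical, and the element is assumed to be positioned): animating left has to go through layout and paint on every frame, while the equivalent transform animation can be handled by the compositor alone.

    const box = document.querySelector('.box'); // hypothetical element

    // Not compositor-only: 'left' dirties layout and paint on every frame.
    box.animate([{ left: '0px' }, { left: '200px' }],
                { duration: 300, fill: 'forwards' });

    // Compositor-only: transform (like opacity) can skip re-layout and
    // re-paint of the surrounding page entirely.
    box.animate([{ transform: 'translateX(0)' }, { transform: 'translateX(200px)' }],
                { duration: 300, fill: 'forwards' });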

Please, don't guess at performance.

My biggest learning about performance is that developers don't know how to measure it. The training is absent, the effort is too great, and the objectivity just isn't there. So they guess, which typically just means making things up to qualify an unfounded assumption. Guessing at performance is wrong more than 80% of the time, and when it is wrong there is a decent chance it is wrong by one or more orders of magnitude. This is one of the key things that separates developers from product stakeholders.

The DOM is an in-memory object accessed via a standard API. Let's not overthink this. The interesting thing about DOM access is that Firefox has held stable for at least the last 6 years, showing no significant performance loss or increase. Chrome, on the other hand, is less than 40% as fast as it used to be, but its access via string-parsing mechanisms, like query selectors, is several times faster than it used to be. To run your own tests see this micro-benchmark test:

https://jsbench.github.io/#b39045cacae8d8c4a3ec044e538533dc
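
If you'd rather not rely on the benchmark site, a rough way to time it yourself (the element id is made up; absolute numbers vary per browser):

    // Assumes an element with id="app" exists on the page.
    function bench(label, fn, iterations = 1e6) {
      const start = performance.now();
      for (let i = 0; i < iterations; i++) fn();
      console.log(label, (performance.now() - start).toFixed(1), 'ms');
    }

    bench('getElementById', () => document.getElementById('app'));
    bench('querySelector ', () => document.querySelector('#app'));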

> Please, don't guess at performance.

I don't.

> My biggest learning about performance is that developers don't know how to measure it.

So have you measured it and compared it to anything else? Judging by the fact that you think that "JS is fast" has something to do with rendering and layout, my guess is that you haven't.

> The DOM is an in-memory object accessed via a standard API.

This has nothing to do with rendering, layout, and doesn't make it fast (compared to other ways of doing UIs) in the general case.

> string parsing mechanisms, like query selectors, is several times faster than it used to be.

Again. This has literally nothing to do with either layout or rendering.

> To run your own tests see this micro-benchmark test:

I said it: "A few billion dollars of development and hundreds of thousands of man-hours have optimized it beyond any reasonable expectations, but it's still unbelievably slow because of many architectural decisions rooted back in the 90s."

> To run your own tests see this micro-benchmark test: https://jsbench.github.io/#b39045cacae8d8c4a3ec044e538533dc

Oh wow, you can select elements quickly. What does this have to do with the actual performance of things that matter? Or with the rest of your claim about rendering and layout?

It's funny that you claim something about people guessing, and then use and talk about things that are completely irrelevant to your claims.

Your comment shows that you have no practical knowledge of the web ecosystem, and everything you know about it comes from blog articles that contribute nothing useful for real-world use. The reality is that the web is fast enough (even with all the tweaks and different approaches of frameworks, libraries etc), and it is the first choice for building a new cross-platform product and for migrating legacy projects. It makes all the business sense as well. Your pedantic arguments are not going to reverse that trend.

> Your comment shows that you have no practical knowledge of the web ecosystem

You're talking to a person with 20 years of frontend development experience. But sure, do go on with your assumptions.

Also, no idea what "web ecosystem" has to do with the patently false claim of "the most powerful layouting system and rendering engine", but do go on.

> The reality is that web is fast enough

I never claimed it wasn't. But without clarification of what "fast" is, or what "enough" is, it's nebulous, and wrong for a very wide variety of use cases.

> it is the first choice for building a new cross-platform product and for migrating legacy projects.

I have no idea what this has to do with any of the things in this discussion.

> Your pedantic arguments are not going to reverse that trend.

Java applets and ActiveX were also the bee's knees and the best thing since sliced bread, and drove businesses and brought in billions of dollars in revenue.

All this has literally nothing to do with the technology and how bad or good it is.

I have also been writing for the web for over 20 years. This doesn't really mean anything though. That is why measures are all that matters. Bad measures are still monumentally better than no measures at all.

The sad reality is that most people writing for the web today cannot do so without a framework. They have no idea how the layers underneath actually work. If you want to understand performance you must measure for it in multiple different ways and have something meaningful to compare it to. All modern browsers provide fantastic performance measuring tools in their developer tools. It's how I got my OS GUI (in a browser) to finish executing within 60ms of page load.
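
For instance (the mark names and render function here are placeholders), the User Timing API puts your own numbers next to the browser's layout and paint events in the Performance panel:

    performance.mark('ui-start');
    renderMyUi();                      // placeholder for whatever you're measuring
    performance.mark('ui-end');
    performance.measure('ui', 'ui-start', 'ui-end');
    console.log(performance.getEntriesByName('ui')[0].duration, 'ms');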

I couldn't agree with you more.

Sadly enough, the supercomputers we have still let us get away with the worst of the worst performance with little penalty.

>The reality is that web is fast enough

No it's fuckin not. We have devices running literally billions of operations per second, orders of magnitude faster than what we had just a few years ago, yet they struggle with rendering websites, which comes down to presenting some good-looking text. It's insane how my PC can compute an entire 3D world with millions of triangles, 120 times a second, but it lags when I open a few websites because some front-end dev crammed some 'cool' parallax effect onto it, or because Facebook (who literally invented React) can't handle it well enough to not make memory leaks everywhere.

Has the usability of the web moved forward in the last few years? Sure. But compared to what computers can actually do, it's insane how bad things are nowadays.

With a modern CPU and DDR5 memory you should be capable of running no slower than 10 billion DOM operations per second in Firefox. Usability is not performance.

How much electricity is getting wasted into this, worldwide?

10 billion is a bit much, but yes.

> yet they struggle with rendering websites, which comes down to presenting some good-looking text.

Umm... you're putting the blame on the wrong thing here, dude.

> because some front-end dev crammed some 'cool' parallax effect onto it,

As I said.

You mean that the front-end devs aren't actually responsible for the rendering, but the browser devs are?

Would you apply the same logic to game optimization? That it's not the responsibility of game devs, and instead we can shift all the blame to the GPU SDK team?

> You mean that the front-end devs aren't actually responsible for the rendering, but the browser devs are?

Not at all. Quite the opposite, in fact. My position is that the browser is fast enough, and that any slowness is exactly the fault of the site devs. You said the browser wasn't fast enough.

Previous poster: The reality is that web is fast enough

You: No it's fuckin not.

> You said the browser wasn't fast enough.

It isn't. Not for what people are trying to make with it.

Case in point: https://krausest.github.io/js-framework-benchmark/2023/table...

The benchmark creates 1000 rows that look like this:

   <tr>
    <td><a onclick={select this row}>random short text</a></td>
    <td><a onclick={remove this row}><span /></a></td>
    <td></td>
   </tr>
So, fewer than 10k elements in total.

The fastest of the fastest attempts to do this takes 36 milliseconds to render. For what is essentially static markup with zero complex logic or complex interactions.

In comparison: 1000 actors with complex interactions and behaviour, complex animations and lighting take 4 milliseconds to render (in total, significantly more than the measly 5-6k static elements on the page): https://youtu.be/kXd0VDZDSks?si=SswSZLNFlRd7adsM&t=586 (at 9:46)

I'm not saying everything should be Unreal Engine. But the web is on the lowest of the lowest end of the spectrum when it comes to performance.

> 36 milliseconds to render.

36 ms is a very small amount of time (faster than the rod flicker fusion frequency, though not the cones), and 10K elements is far more elements than even a complex web page is likely to have.

Can you give me some examples of real-world web pages that have 10K DOM elements on them, or anything like it? Running document.querySelectorAll('*').length on my personal amazon.com home page gives 3163 (obviously this is going to vary somewhat for different people), and amazon.com's front page is pretty damned complex.

> I'm not saying everything should be Unreal Engine.

I'm saying that almost nothing needs to be Unreal Engine. You are confusing "fast" with "fast enough".

> 36 ms is a very small amount of time

To render less than 10k objects on a screen given the current state of hardware? It's an eternity.

The problem is, these things compound. That is why, even if "my page doesn't have 10k elements", Google gave up and now calls 2.4 seconds to render content "fast", actually: https://blog.chromium.org/2020/05/the-science-behind-web-vit... (this is, of course, about more than just the DOM being slow).

Given that it takes that much time to render a static page with a number of elements that shouldn't even register on a clock, you run into hundreds of other problems: layout shifts in the DOM are extremely expensive, avoid them; animations in the DOM are extremely expensive, avoid them; we can't re-render fast enough when the window is dynamically resized, so there's tearing; we can't update the DOM fast enough because updates are extremely slow, so we fight the DOM and come up with crazier and crazier solutions to touch it as little as possible; and so on and so forth.

On the same machine a game engine re-renders the entire world with thousands or millions of objects with complex computations and interactions from scratch in under 10ms.

> You are confusing "fast" with "fast enough".

I'm not. I'm tired of your "fast enoughs" that cannot reliably render a static web page without consuming more time and about as many resources as a modern video game.

And then I hear the idiocy of "it's the most advanced rendering and layout engine" or "string parsing is so much slower than DOM operations" and other claims by people who have no idea what they are talking about.

Edit: your amazing "fast enough" is known to consume 20% of CPU just to display an animated emoji: https://twitter.com/dmitriid/status/1486364312910368769

> To render less than 10k objects on a screen given the current state of hardware? It's an eternity.

When it's so fast that a human being doesn't even perceive it, it's not an "eternity". In fact, it doesn't matter. At all.

> I'm tired of your "fast enoughs" that cannot reliably render a static web page without consuming more time and about as many resources as a modern video game.

That's nice, but I'm not sure why I should care what you're "tired of".

I'm old enough to remember rants virtually identical to yours when people first started using C rather than hand-tuned assembly language.

> DOM is efficient
> No it's not, here is the data
> something something it doesn't matter because it's fast enough.

So you agree that the DOM is slow? Or, by this logic, can I call any terrible code 'efficient', because if I run it on modern hardware it will still be faster than 'good' code run on machines from 20 yrs ago?

But also, it's not like all this inefficiency is free; every millisecond spent running inefficient code requires power. Multiply that by the trillions of operations computers are doing every day, multiply that by billions of computers worldwide, and we end up with a waste of resources that literally changes the planet. Not to mention the e-waste of all the hardware we force out of use "because it's too slow".

Sure. Check out my personal website. https://prettydiff.com/

Just open a bunch of windows and you will get to 10k page elements. The primary window shows the page load time in a bold red font. The page load time includes all network calls, all initial script execution, state restoration, and graphical rendering. Total load time should always be around 1 second, of which most is visual render. The script execution typically takes about 60ms or so, but you can see the complete breakdown in the browser performance tab. The CSS could use a lot of clean up. I pulled all of this code from a highly distributed, browser-based OS I am working on.

Also, on that site you can easily check element count in the console using the following custom DOM methods:

    document.getNodesByType(0).length; // all nodes
    document.getNodesByType(1).length; // all elements
EDIT

I just got to 10,000 visible elements on the site and everything still loads in about 850ms, give or take 50ms, on my 7-year-old desktop. Base load time for a new user on the same machine is about 550ms, so the difference in load time is not significant. The real significance is page repaint on a fully loaded page. Drag and drop of any one window is noticeably slower.
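
If you want to check numbers like these on your own machine, one rough way (interpretation is up to you) is the Navigation Timing API:

    // Milliseconds from the start of navigation until the load event finished.
    const [nav] = performance.getEntriesByType('navigation');
    console.log(nav.loadEventEnd.toFixed(0), 'ms');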

To reset state execute the following and refresh the page:

    delete localStorage["gui-state"];

Fair.

I took 'web is fast enough' as 'the current state of the web is fast enough'. But if we are sticking to the actual internals of web browsers, I don't doubt they are quite state of the art. It's just that the outcome for the end user sucks.

It's almost like 3D rendering, vertices and shading, is an embarrassingly parallel problem which is quite trivial to make faster by throwing more hardware at it.

General layout, text rendering, etc. are not like that, and there is not even a "free" improvement anymore with single-threaded CPU speeds plateauing.

Yes, modern computers are just too slow to handle layouting a facebook page, nothing can be done.

Do you have any form of proof that layouting is a bottleneck anywhere?

You were the one who said that layouting is a much harder problem than 3D rendering...

It is a much harder problem, but it is working just fine on even low-end mobile phones with the majority of websites.

> The reality is that web is fast enough

This. So much this.

Yes, there are contrived examples where DOM rendering speed makes a difference, and also a fair amount of real-world crapware (much of it written by companies that should know better) where shitty code continues to hit the DOM unnecessarily hundreds or thousands of times even after the page is allegedly "loaded", but that is not the fault of the DOM.

> hit the DOM unnecessarily hundreds or thousands of times even after the page is allegedly "loaded", but that is not the fault of the DOM.

If DOM was actually performant, this wouldn't be a problem.

If people could actually write code, it also wouldn't be a problem.

I think the misunderstanding is about how you hit the DOM. If you use static methods, the performance cost is a memory cycle, so there can be many wasted steps and it's still negligible. If access is via a query selector, then there is a string-parse operation, which is a few orders of magnitude slower. That does not mean the DOM is slow. Even Lamborghinis are incredibly slow if you never move out of first gear.
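
For example (ids made up): the "first gear" version re-parses and re-matches a selector on every access, while the static version looks the node up once and then only pays for a property write.

    // Selector access inside a loop: parse + match the selector every time.
    for (let i = 0; i < 1000; i++) {
      document.querySelector('#status').textContent = String(i);
    }

    // Static access: one lookup, then plain property writes on a cached reference.
    const status = document.getElementById('status');
    for (let i = 0; i < 1000; i++) {
      status.textContent = String(i);
    }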

String parsing is negligible compared to the DOM itself, which is glacially slow: https://news.ycombinator.com/item?id=38835920

Come on, people, make an effort to learn how insanely fast computers are, and how insanely inefficient our software is.

String parsing can be done at gigabytes per second: https://github.com/simdjson/simdjson

If you think that string parsing of selectors is the slowest operation in the browser, please find some resources that talk about what is actually happening in the browser.

That is not at all correct. That is why we measure things.

https://jsbench.github.io/#b39045cacae8d8c4a3ec044e538533dc

For those of us who understand how these things work, the incredibly slow performance of query selectors is not surprising. What is interesting, though, is comparing the numbers between different browsers.

As fast as Java?!? Better tell Google to dump Android and focus on ChromeOS then.

Yes. This has been the case for at least the last 5 years.

https://benchmarksgame-team.pages.debian.net/benchmarksgame/...

I only see a random Java implementation in synthetic benchmarks, without testing all the variations of JIT and AOT compilers available throughout the ecosystem.

OpenJDK, GraalVM, OpenJ9, Azul, PTC, Aicas, ART, JikesRVM,....

Not to mention the optimizations that a dynamic language like JavaScript will never be able to take advantage of.

?

openjdk, GraalVM, Corretto

https://benchmarksgame-team.pages.debian.net/benchmarksgame/...

Corretto is OpenJDK with Amazon branding; I left it out on purpose.

Still, a dynamic language winning out over a strongly typed one is only plausible to those who don't have a clue about how compilers work, or how to write winning micro-benchmarks.

Synthetic benchmarks for some algorithms have little to do with the actual performance of real-life applications.

What method do you propose to compare the performance of different programming languages?

While still far from perfect, benchmarks comparing actual, practical usage, like the number of requests served by web servers for example, are much better indicators imo.

https://web-frameworks-benchmark.netlify.app/result

Yeah, everyone says that and stops there, which is absolutely useless. Benchmarks are at least objective ways to measure something. And there are no "correct" benchmarks. Unless you have better metrics or another way to prove things, please stop repeating these meaningless words.

I've literally pasted benchmarks measuring an actual job (web server requests per second) in a comment below.

But besides, the critique isn't meaningless even without providing a better benchmark: if your benchmark measures things that are trivial no matter the language (like stack-based operations), but ignores things that actually differ meaningfully (like handling of heap objects), then criticizing such an approach is a perfectly fair and valid objection.

These benchmarks include startup time _and_ processing time when comparing languages. I don’t believe that is telling a very compelling story given JS is still slightly slower than most of the Java metrics unless you are looking for a new language to write your lambdas.
