Your comment shows that you have no practical knowledge of the web ecosystem, and that everything you know about it comes from blog articles that contribute nothing useful for real-world use. The reality is that the web is fast enough (even with all the tweaks and different approaches of frameworks, libraries, etc.), and it is the first choice for building a new cross-platform product and for migrating legacy projects. It makes business sense as well. Your pedantic arguments are not going to reverse that trend.
> Your comment shows that you have no practical knowledge of the web ecosystem
You're talking to a person with 20 years of frontend development experience. But sure, do go on with your assumptions.
Also, I have no idea what the "web ecosystem" has to do with the patently false claim of "the most powerful layouting system and rendering engine", but do go on.
> The reality is that the web is fast enough
I never claimed it wasn't. But without clarification of what "fast" is, or what "enough" is, it's again nebulous, and wrong for a very wide variety of use cases.
> it is the first choice for building a new cross-platform product and for migrating legacy projects.
I have no idea what this has to do with any of the things in this discussion.
> Your pedantic arguments are not going to reverse that trend.
Java applets and ActiveX were also the bee's knees and the best thing since sliced bread; they drove businesses and brought in billions of dollars in revenue.
All this has literally nothing to do with the technology and how bad or good it is.
I have also been writing for the web for over 20 years. That doesn't really mean anything, though, which is why measurements are all that matter. Bad measurements are still monumentally better than no measurements at all.
The sad reality is that most people writing for the web today cannot do so without a framework. They have no idea how the layers underneath actually work. If you want to understand performance, you must measure for it in multiple different ways and have something meaningful to compare it to. All modern browsers provide fantastic performance-measuring tools in their developer tools. It's how I got my OS GUI (in a browser) to execute within 60ms of page load.
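You don't even need to open DevTools to get started; here is a minimal sketch using the standard Navigation Timing and User Timing APIs, runnable in any console (the "work" being timed below is just an example):

    // Log the main navigation timings for the current page (run after load).
    const [nav] = performance.getEntriesByType('navigation');
    console.log('DOMContentLoaded:', nav.domContentLoadedEventEnd, 'ms');
    console.log('Full load:', nav.loadEventEnd, 'ms');

    // Time a specific block of work with user-timing marks.
    performance.mark('work-start');
    document.body.offsetHeight; // placeholder work: a forced layout read
    performance.mark('work-end');
    performance.measure('work', 'work-start', 'work-end');
    console.log(performance.getEntriesByName('work')[0].duration, 'ms');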
I couldn't agree with you more.
Sadly, the supercomputers we have still let us get away with the worst of the worst performance with little penalty.
> The reality is that the web is fast enough
No, it's fuckin not. We have devices running literally billions of operations per second, orders of magnitude faster than what we had just a few years ago, yet they struggle with rendering websites, which comes down to presenting some good-looking text. It's insane that my PC can compute an entire 3D world with millions of triangles, 120 times a second, but it lags when I open a few websites because some front-end dev crammed some 'cool' parallax effect onto it, or because Facebook (who literally invented React) can't handle it well enough to not make memory leaks everywhere.
Has the usability of the web moved forward over the last few years? Sure. But compared to what computers can actually do, it's insane how bad things are nowadays.
With a modern CPU and DDR5 memory you should be capable of running no slower than 10 billion DOM operations per second in Firefox. Usability is not performance.
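A rough way to sanity-check that figure on your own machine (a sketch only: results vary wildly by browser and machine, and what counts as a "DOM operation" matters a lot):

    // Rough estimate of raw DOM property reads per second.
    // The node is detached, so no layout or paint is involved.
    const el = document.createElement('div');
    el.id = 'x';
    const iterations = 10_000_000;
    let sink = 0;
    const start = performance.now();
    for (let i = 0; i < iterations; i++) {
      sink += el.id.length; // a trivial DOM attribute read
    }
    const seconds = (performance.now() - start) / 1000;
    console.log((iterations / seconds / 1e9).toFixed(2), 'billion ops/sec', sink);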
How much electricity is getting wasted into this, worldwide?
10 billion is a bit much, but yes.
> yet they struggle with rendering websites, which comes down to presenting some good-looking text.
Umm... you're putting the blame on the wrong thing here, dude.
> because some front-end dev crammed some 'cool' parallax effect onto it,
As I said.
You mean that the front-end devs aren't actually responsible for the rendering, but the browser devs are?
Would you apply the same logic to game optimization? That it's not the responsibility of game devs, and instead we can shift all the blame to the GPU SDK team?
> You mean that the front-end devs aren't actually responsible for the rendering, but the browser devs are?
Not at all. Quite the opposite, in fact. My position is that the browser is fast enough, and that any slowness is exactly the fault of the site devs. You said the browser wasn't fast enough.
Previous poster: The reality is that the web is fast enough
You: No, it's fuckin not.
> You said the browser wasn't fast enough.
It isn't. Not for what people are trying to make with it.
Case in point: https://krausest.github.io/js-framework-benchmark/2023/table...
The benchmark creates 1000 rows that look roughly like this (reconstructed from the benchmark's repository; the exact labels and attributes are approximate):
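    <!-- approximate shape of one benchmark row -->
    <tr>
      <td class="col-md-1">4211</td>
      <td class="col-md-4"><a class="lbl">pretty green keyboard</a></td>
      <td class="col-md-1">
        <a class="remove"><span class="glyphicon glyphicon-remove"></span></a>
      </td>
      <td class="col-md-6"></td>
    </tr>

That's roughly 8 elements per row, so about 8k elements for the 1000 rows.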
So, less than 10k elements in total. The fastest of the fastest attempts takes 36 milliseconds to render this. For what is essentially static markup with zero complex logic or complex interactions.
In comparison: 1000 actors with complex interaction and behaviour, complex animations and lighting take 4 milliseconds to render (and that's significantly more objects in total than the measly 5-6k static elements on the page): https://youtu.be/kXd0VDZDSks?si=SswSZLNFlRd7adsM&t=586 (at 9:46)
I'm not saying everything should be Unreal Engine. But the web is on the lowest of the lowest end of the spectrum when it comes to performance.
> 36 milliseconds to render.
36 ms is a very small amount of time (faster than the rod flicker fusion frequency, though not the cones), and 10K elements is far more elements than even a complex web page is likely to have.
Can you give me some examples of real-world web pages that have 10K DOM elements on them, or anything like it? Running document.querySelectorAll('*').length on my personal amazon.com home page gives 3163 (obviously this is going to vary somewhat for different people), and amazon.com's front page is pretty damned complex.
> I'm not saying everything should be Unreal Engine.
I'm saying that almost nothing needs to be Unreal Engine. You are confusing "fast" with "fast enough".
> 36 ms is a very small amount of time
To render less than 10k objects on a screen given the current state of hardware? It's an eternity.
The problem is that these things compound. That is why, even though "my page doesn't have 10k elements", Google has given up and now says "2.4 seconds to render content is fast, actually": https://blog.chromium.org/2020/05/the-science-behind-web-vit... (this is, of course, about more than just the DOM being slow).
Given that it takes that much time to render a static page with a number of elements that shouldn't even register on the clock, you run into hundreds of other problems: layout shifts in the DOM are extremely expensive, so avoid them; animations in the DOM are extremely expensive, so avoid them; we can't re-render fast enough when the window is dynamically resized, so there's tearing; we can't update the DOM fast enough because updates are extremely slow, so we fight the DOM and come up with crazier and crazier solutions to touch it as little as possible; and so on and so forth.
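To make the "layout is expensive" point concrete, here's the classic interleaved read/write pattern that forces a synchronous reflow on every iteration, versus batching (a sketch; '.item' is a made-up selector):

    const items = document.querySelectorAll('.item');

    // Slow: each read forces layout, each write invalidates it again
    // ("layout thrashing"), so layout is recalculated N times.
    items.forEach((el) => {
      const h = el.offsetHeight;        // read: forces layout
      el.style.height = h + 10 + 'px';  // write: invalidates layout
    });

    // Faster: batch all reads, then all writes; layout runs once.
    const heights = [...items].map((el) => el.offsetHeight);
    items.forEach((el, i) => {
      el.style.height = heights[i] + 10 + 'px';
    });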
On the same machine a game engine re-renders the entire world with thousands or millions of objects with complex computations and interactions from scratch in under 10ms.
> You are confusing "fast" with "fast enough".
I'm not. I'm tired of your "fast enoughs" that cannot reliably render a static web page without consuming more time and about as many resources as a modern video game.
And then I have to hear the idiocy of "it's the most advanced rendering and layout engine" or "string parsing is so much slower than DOM operations" and other claims by people who have no idea what they are talking about.
Edit: your amazing "fast enough" is known to consume 20% of CPU just to display an animated emoji: https://twitter.com/dmitriid/status/1486364312910368769
> To render less than 10k objects on a screen given the current state of hardware? It's an eternity.
When it's so fast that a human being doesn't even perceive it, it's not an "eternity". In fact, it doesn't matter. At all.
> I'm tired of your "fast enoughs" that cannot reliably render a static web page without consuming more time and about as many resources as a modern video game.
That's nice, but I'm not sure why I should care what you're "tired of".
I'm old enough to remember rants virtually identical to yours when people first started using C rather than hand-tuned assembly language.
> DOM is efficient
> No it's not, here is the data
> something something it doesn't matter because it's fast enough.
So you agree that the DOM is slow? Or, by this logic, can I call any terrible code 'efficient', because if I run it on modern hardware it will still be faster than 'good' code run on machines from 20 yrs ago?
But also, all this inefficiency isn't free; every millisecond spent running inefficient code requires power. Multiply that by the trillions of operations computers perform every day, multiply that by billions of computers worldwide, and we end up with a waste of resources that literally changes the planet. Not to mention the e-waste of all the hardware we force out of use "because it's too slow".
Sure. Check out my personal website. https://prettydiff.com/
Just open a bunch of windows and you will get to 10k page elements. The primary window shows the page load time in a bold red font. The page load time includes all network calls, all initial script execution, state restoration, and graphical rendering. Total load time should always be around 1 second, of which most is visual render. The script execution typically takes about 60ms or so, but you can see the complete breakdown in the browser's performance tab. The CSS could use a lot of cleanup. I pulled all of this code from a browser-based, highly distributed OS I am working on.
Also, on that site you can easily check the element count in the console using its custom DOM methods.
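If you don't have those helpers handy, the standard built-in equivalent works on any page:

    // count every element currently in the document
    document.querySelectorAll('*').length;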
EDIT: I just got to 10,000 visible elements on the site and everything still loads in about 850ms, give or take 50ms, on my 7-year-old desktop. Base load time for a new user on the same machine is about 550ms, so the difference in load time is not significant. The real significance is page repaint on a fully loaded page: drag and drop of any one window is noticeably slower.
To reset state execute the following and refresh the page:
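(Assuming the app persists its state in localStorage, the generic form of that reset would be:)

    // clears everything the app persisted for this origin
    localStorage.clear();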
Fair.
I took 'web is fast enough' as 'the current state of the web is fast enough'. But if we are sticking to the actual internals of web browsers, I don't doubt they are quite 'state of the art'. It's just that the outcome for the end user sucks.
It’s almost like 3D rendering (vertices and shading) is an embarrassingly parallel problem, which is quite trivial to make faster by throwing more hardware at it.
General layouting/text rendering, etc. are not like that, and there isn't even a "free" improvement anymore now that single-threaded CPU speeds are plateauing.
Yes, modern computers are just too slow to handle layouting a facebook page, nothing can be done.
Do you have any form of proof that layouting is a bottleneck anywhere?
You were the one who said that layouting is a much harder problem than 3D rendering...
It is a much harder problem, but it is working just fine on even low-end mobile phones with the majority of websites.
> The reality is that the web is fast enough
This. So much this.
Yes, there are contrived examples where DOM rendering speed makes a difference, and also a fair amount of real-world crapware (much of it written by companies that should know better) where shitty code continues to hit the DOM unnecessarily hundreds or thousands of times even after the page is allegedly "loaded", but that is not the fault of the DOM.
> hit the DOM unnecessarily hundreds or thousands of times even after the page is allegedly "loaded", but that is not the fault of the DOM.
If DOM was actually performant, this wouldn't be a problem.
If people could actually write code, it also wouldn't be a problem.
I think the misunderstanding is about how you hit the DOM. If you use static methods, the performance cost is a memory cycle, so there can be many wasted steps and it's still negligible. If access is via a query selector, then there is a string-parse operation, which is a few orders of magnitude slower. That does not mean the DOM is slow. Even Lamborghinis are incredibly slow if you never move out of first gear.
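Roughly the difference between these two access patterns, as a sketch (the 'app' id is made up):

    // First gear: the selector string is parsed and matched on every call.
    for (let i = 0; i < 100000; i++) {
      document.querySelector('#app').textContent;
    }

    // Direct access: resolve the reference once, then plain property reads.
    const app = document.getElementById('app');
    for (let i = 0; i < 100000; i++) {
      app.textContent;
    }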
String parsing is negligible compared to the speed of the DOM, which is glacially slow: https://news.ycombinator.com/item?id=38835920
Come on, people, make an effort to learn how insanely fast computers are, and how insanely inefficient our software is.
String parsing can be done at gigabytes per second: https://github.com/simdjson/simdjson If you think that string parsing of selectors is the slowest operation in the browser, please find some resources that explain what is actually happening in the browser.
That is not at all correct. That is why we measure things.
https://jsbench.github.io/#b39045cacae8d8c4a3ec044e538533dc
For those of us who understand how these things work, the incredibly slow performance of query selections is not surprising. What is interesting, though, is comparing the numbers between different browsers.
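If that link ever rots, the comparison is easy to reproduce in any console (a rough sketch; absolute numbers are machine-dependent, the ratio between the two is what matters):

    // Tiny harness: run fn n times and report the elapsed time.
    function time(label, fn, n = 1_000_000) {
      const start = performance.now();
      for (let i = 0; i < n; i++) fn();
      console.log(label, (performance.now() - start).toFixed(1), 'ms');
    }

    const target = document.body.appendChild(document.createElement('div'));
    target.id = 'bench-target';

    time('getElementById', () => document.getElementById('bench-target'));
    time('querySelector ', () => document.querySelector('#bench-target'));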