While the author says that much of it can be attributed to the layers of software added in between to make computers more accessible to people, in my experience most cases come down to developers being lazy when building applications.
For example, there was the case of Claude Code using React to figure out what to render in the terminal, which in itself causes latency, and its devs lamenting how they have "only" 16.7 ms to hit 60 FPS. On a terminal. Terminals have been able to do far better than that since their inception. Primeagen shows an example [0] of how even terminal applications with heavy screen churn run so fast that there is no need to diff anything; just write out the new frame!
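For a sense of scale, here's a rough sketch (purely illustrative, not how Claude Code or the app in the video actually works) of the brute-force approach: rebuild the whole frame and write it to the terminal in one syscall, sixty times a second. Even at 50x200 cells that's well under a megabyte per second of output, which a local terminal handles easily.

    /* Sketch: redraw the whole terminal every frame, no diffing.
       Assumes a VT100-compatible terminal; sizes are made up for illustration. */
    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>

    #define ROWS 50
    #define COLS 200

    int main(void) {
        static char frame[ROWS * (COLS + 16)];

        for (int f = 0; f < 600; f++) {                /* ~10 seconds at 60 FPS */
            char *p = frame;
            for (int r = 0; r < ROWS; r++) {
                p += sprintf(p, "\x1b[%d;1H", r + 1);  /* move cursor to start of row r+1 */
                memset(p, 'a' + (r + f) % 26, COLS);   /* fake content that changes every frame */
                p += COLS;
            }
            write(STDOUT_FILENO, frame, (size_t)(p - frame));  /* one write per frame */
            usleep(16700);                             /* the rest of the 16.7 ms budget, unused */
        }
        return 0;
    }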
It makes me wish more graphics programmers would jump over to application development - 16.7ms is a huge amount of time for them, and 60 frames per second is such a low target. 144 or bust.
I don't think graphics devs changing over would change much. They would probably not lament over 16ms, but they would quickly learn that performance does not matter much in application development, and start building their own abstraction layer cake.
It's not even that performance is unimportant in absolute terms, but rather that the general state of software is so abysmal that performance is the least of your problems as a user, so you're not going to get excited over it.
No need for graphics programmers; anyone who has been around coding since the old days still remembers how to make use of data structures and algorithms, and how to do a lot with little hardware.
Maybe the RAM prices will help bring those skills back.
Those practices make things hard to modify and update. As one example, I used to use binary formats for game data. The data is loaded into memory and it's ready to use; at most a few offsets need to be converted to pointers, in place (rough sketch after the list below).
Good: (*) no extra memory needed (*) can read directly from CD/DVD/HD to memory, not through buffered I/O (*) no parsing time
Bad: (*) change or add a single field anywhere in the saved data and all previous files are now invalid.
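A minimal sketch of that pattern, with a hypothetical format (not from any particular engine): the file is one blob, and the only "parsing" is turning a stored offset into a pointer.

    /* Sketch: load a binary blob and patch one offset into a pointer, in place.
       Hypothetical format; error handling trimmed for brevity. */
    #include <stdio.h>
    #include <stdint.h>
    #include <stdlib.h>

    typedef struct {
        uint32_t vertex_count;
        union {                   /* stored as a file offset, used as a pointer */
            uint64_t vertices_offset;
            float   *vertices;
        };
    } MeshHeader;

    MeshHeader *load_mesh(const char *path) {
        FILE *f = fopen(path, "rb");
        if (!f) return NULL;
        fseek(f, 0, SEEK_END);
        long size = ftell(f);
        fseek(f, 0, SEEK_SET);

        char *blob = malloc((size_t)size);     /* the blob IS the runtime data */
        fread(blob, 1, (size_t)size, f);
        fclose(f);

        MeshHeader *mesh = (MeshHeader *)blob;
        /* The only "parse" step: offset -> pointer, patched in place.
           The downside from the "Bad" item above: add or reorder a field in
           MeshHeader and every previously saved file is silently wrong. */
        mesh->vertices = (float *)(blob + mesh->vertices_offset);
        return mesh;                           /* free(mesh) releases everything */
    }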
I'd solve this with a version number and basically print a message "old version in file, rebuild your data"
Modern apps use JSON/Protobuf, etc. They are way slower to load and take way more memory, since there are usually two copies of the data: the bytes in the file and the parsed structures your app is actually using. And they take time to parse.
But they keep working even as the format changes, and they can be used across languages and apps much more easily.
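For contrast, a toy sketch of the text-format side (a made-up key=value format standing in for JSON/Protobuf, just to show the parse step and the second copy of the data; names are invented):

    /* Sketch: text formats keep two copies alive while loading: the raw file
       bytes and the parsed structures the app actually uses. Toy format, not
       real JSON. */
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    typedef struct {
        int   vertex_count;
        float scale;
    } MeshParams;                  /* second, app-side copy of the data */

    static int parse_mesh(const char *text, MeshParams *out) {
        /* Every field is searched for and converted: that's the parse cost. */
        const char *p;
        if ((p = strstr(text, "vertex_count=")) == NULL) return -1;
        out->vertex_count = atoi(p + strlen("vertex_count="));
        if ((p = strstr(text, "scale=")) == NULL) return -1;
        out->scale = (float)atof(p + strlen("scale="));
        return 0;                  /* unknown or extra fields are simply ignored,
                                      which is the flexibility the binary blob lacks */
    }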
That's just one of hundreds of examples where the abstraction is slower and uses more memory but provides flexibility the old approach didn't.
No need for such extremes; it would already be enough to stop using Electron crap, or React on the terminal.
Followed by actually reading a data structures and algorithms book.
Finally, using compiled languages instead of interpreted ones.
No need to count bytes to fit into 48 KB.
I disagree. I use all of those because they let me get work done faster. I've shipped Electron apps to clients that are effectively just a wrapper around a command line app because (1) I knew the client would much prefer a GUI over a command line, (2) I was on a Mac and they were on Windows and I knew I could get a cross-platform app more easily, (3) I could make the app look good more easily with HTML/CSS than I could with native toolkits, and (4) iteration time is instant (change something, refresh). The clients don't care. Interpreted languages are fine.
And that is how we get the mess we are in today.
Exactly. It turns out there are strong business incentives that favour tall towers of abstractions. Then everyone's using Electron, so it doesn't really matter if it's slow. People just come to perceive computers as slow. I hate it.
And embedded developers too. But then again, they do what they do precisely because those skills are appreciated in that environment, and elsewhere they are not.
It's mostly on the business side. If the business doesn't care, then developers have no choice. Of course the customers need to care too, and it looks like we don't care either... in general.
One tradeoff graphics programmers make is security. They typically work with raw pointers and custom memory allocation strategies; memory safety comes after performance. There is not much in terms of sandboxing, bounds checking, etc. These things are costly in terms of performance, so they don't do them if they don't have to.
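A minimal sketch of what that looks like in practice (a bump/arena allocator of the kind common in game and graphics code; hypothetical, for illustration):

    /* Sketch: a bump allocator. Allocation is a pointer increment, but there is
       no bounds checking, no per-allocation free, and no use-after-reset
       protection: safety is traded for speed. */
    #include <stddef.h>
    #include <stdint.h>

    typedef struct {
        uint8_t *base;
        size_t   capacity;
        size_t   used;
    } Arena;

    static void *arena_alloc(Arena *a, size_t size, size_t align) {
        size_t offset = (a->used + (align - 1)) & ~(align - 1);  /* align up (power-of-two align) */
        /* A "safe" allocator would check offset + size <= capacity here;
           hot-path code often skips it and relies on sizing the arena correctly. */
        void *ptr = a->base + offset;
        a->used = offset + size;
        return ptr;
    }

    static void arena_reset(Arena *a) {
        a->used = 0;   /* "frees" everything at once, e.g. at the end of a frame */
    }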
That's because performance is critical to games (where the graphics programmers usually are), and if the game crashes, it's no big deal as long as it doesn't happen often enough to seriously impact the normal gameplay experience. Exploits are to be expected and are sometimes deliberately kept in if they lead to interesting gameplay; they are a staple of speedruns. Infinite money is fun in a game, but not in serious banking software...
I am all for performance, and I think the current situation is a shame, but there are tradeoffs, we need people who care about both performance and security, maybe embedded software developers who work on critical systems, but expect a 10x increase in costs.
That wouldn't make any difference. Graphics programmers spend a lot of effort on performance because spending a lot of $$$$ (time) can make an improvement that people care about. For most applications nobody cares enough about speed to pay the $$$ needed to make it fast.
Many application programmers could make things faster - but their boss says good enough, ship it, move to a new feature that is worth far more to me.
This sounds like how curses did things, a 1980 technology.
On the other hand, if the guy in the video ran his app over a remote connection with limited bandwidth, diffing would probably perform better. I have a one Gbps Google Fiber connection to my job, but at times my VPN bandwidth can choke down to a couple hundred kbps, and sometimes worse.
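A rough sketch of what that diffing amounts to (the curses idea from the comment above): compare the previous and next frame cell by cell and send only cursor moves plus the changed cells, which is exactly what you want when every byte has to squeeze through a slow link.

    /* Sketch: send only changed cells instead of the whole frame.
       Assumes a VT100-style terminal; fixed sizes for illustration. */
    #include <stdio.h>

    #define ROWS 50
    #define COLS 200

    /* Emits cursor-move + character only for cells that differ from last frame. */
    static void emit_diff(const char prev[ROWS][COLS], const char next[ROWS][COLS]) {
        for (int r = 0; r < ROWS; r++) {
            for (int c = 0; c < COLS; c++) {
                if (prev[r][c] != next[r][c]) {
                    /* \x1b[row;colH moves the cursor (1-indexed), then we print the cell */
                    printf("\x1b[%d;%dH%c", r + 1, c + 1, next[r][c]);
                }
            }
        }
        fflush(stdout);   /* worst case this exceeds a full redraw; typical case it's tiny */
    }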
Yeah, I think a lot of this can be attributed to institutional and infrastructural inertia, abstraction debt, second- (and higher-) order ignorance, and narrowing of specialty. The people now building these things are probably good enough at React etc. to do whatever needs to be done with it almost anywhere, but their focus needs to be ML.
The people who could make terminal stuff super fast at a low level are retired on an island, dead, or don't have the other specialties required by companies like this. And users don't care that much about 16.7 ms on a terminal when the thing is building their app 10x faster, so the trade-off is obvious.
Interestingly (or possibly not), since my very first computers had ~4K of RAM, I became adept at optimizations of all kinds, which came in handy for my first job - coding 360 mainframe assembly. There, we wouldn't be able to implement our changes if our terminal applications (accessing DB2/IMS) responded in anything greater than 1s. Then, the entire system was replaced with a cloud solution where ~30s of delay was acceptable.
I think the Internet made 'waiting' for a response completely normalized for many applications. Before then, users flew through screens using muscle memory. Now, when I see how much mouse clicking goes on at service counters, I always think back to those ultra-fast response time standards. I still see a few AS/400 or mainframe terminal windows running 'in the wild' and wonder what new employees think about those systems.
It's getting ridiculous. I know SPAs aren't specifically to blame, but it feels like whenever a 2003-era page-based web interface is replaced with a modern SPA, each action takes forever to load or process. I was just noticing this on FedEx's site today.