Sure, if you don’t count safety features like memory management, crash handling, automatic bounds checks and encryption cyphers as anything useful.

I do completely agree that there is a lot of waste in modern software. But equally there is also a lot more that has to be included in modern software that wasn’t ever a concern in the 80s.

Networking stacks, safety checks, encryption stacks, etc all contribute massively to software “bloat”.

You can see how this quickly adds up if you write a “hello world” CLI in assembly and compare that to the equivalent in any modern language that imports all these features into its runtime.

And this is all before you take into account that modern graphics and audio are bitmap / PCM and run at resolutions literally orders of magnitude greater than anything supported by 80s microcomputers.

Yes, but this doesn't prevent you from being mindful and selecting the right tools with smaller memory footprint while providing the features you need.

Go's "GC disadvantage" is turned on its head by developing "Zero Allocation" libraries which run blazingly fast with fixed memory footprints. Similarly, rolling your own high performance/efficient code where it matters can save tremendous amounts of memory where it matters.

Of course more features and safety nets will consume memory, but we don't have to waste it like there are no other things running on the system, no?

> And this is all before you take into account that modern graphics and audio are bitmap / PCM and run at resolutions literally orders of magnitude greater than anything supported by 80s microcomputers.

This demo [0] is a 4kB executable. 4096 bytes. A single file containing all the assets: graphics, music and whatnot. And it can run at high resolutions with real-time rendering.

This one [1] is 64kB and this [2] is 177kB. This game from the same group, with full 3D graphics, is 96kB [3].

[0]: https://www.pouet.net/prod.php?which=52938

[1]: https://www.pouet.net/prod.php?which=1221

[2]: https://www.pouet.net/prod.php?which=30244

[3]: https://en.wikipedia.org/wiki/.kkrieger

Programming these days, in some realms, is a lot like shopping for food - some people just take the box off the shelf, don't bother with reading the ingredients, throw it in with some heat and fluid and serve it up as a 3-star meal.

Others carefully select the ingredients, construct the parts they don't already have, spend the time to get the temperatures and oxygenation aligned, and then sit down to a humble meal for one.

Not many programmers, these days, do code-reading like baddies, as they should.

However, kids, the more you do it the better you get at it, so there is simply no excuse for shipping someone else's bloat.

Do you know how many blunt pointers are lined up underneath your BigFatFancyFeature, holding it up?

You’re not wrong, but I just can’t bring myself to agree fully with someone just dribbling with condescension as they speak, like you are here.

Christ. Drop the greybeard act, man. You’re not getting any trophies for being the most annoying one to chime in.


> Go's "GC disadvantage" is turned on its head by developing "Zero Allocation" libraries which run blazingly fast with fixed memory footprints. Similarly, rolling your own high performance/efficient code where it matters can save tremendous amounts of memory where it matters.

The savings there would be negligible (in modern terms) but the development cost would be significantly increased.

> Of course more features and safety nets will consume memory, but we don't have to waste it like there are no other things running on the system, no?

Safety nets are not a waste. They’re a necessary cost of working with modern requirements. For example, if your personal details were stolen via a MITM attack, I’m sure you’d be asking why that piece of software wasn’t encrypting that data.

The real waste in modern software is:

1. Electron: but we are back to the cost of hiring developers

2. Application theming. But few actual users would want to go back to plain Windows 95 style widgets (many, like myself, on HN wouldn’t mind, but we are a niche and not the norm).

> This demo [0] is a 4kB executable. 4096 bytes. A single file containing all the assets: graphics, music and whatnot. And it can run at high resolutions with real-time rendering.

You quoted where I said that modern resolutions are literally orders of magnitude greater and assets are stored as bitmaps / PCM, then totally ignored that point.

When you wrote audio data in the 80s, you effectively wrote midi files in machine code. Obviously it wasn’t literally midi, but you’d describe notes, envelopes, etc. You’d very, very rarely store that audio as a waveform, because audio chips then simply didn’t support a high enough bitrate to make that audio sound good (nor was there the storage space to save it). Whereas these days, PCM (e.g. WAV, MP3, FLAC, etc.) sounds waaaay better than midi and is much easier for programmers to work with. But even a 2-second-long 16-bit mono PCM waveform is going to be more than 4KB.
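Back-of-the-envelope numbers for that claim (the sample rates below are my own illustrative picks): raw PCM size is just sample rate × seconds × channels × bytes per sample.

```go
package main

import "fmt"

func main() {
	// 2 seconds of 16-bit (2 bytes per sample) mono audio
	const seconds, channels, bytesPerSample = 2, 1, 2
	for _, rate := range []int{8000, 22050, 44100} {
		bytes := rate * seconds * channels * bytesPerSample
		fmt.Printf("%5d Hz: %6d bytes (%.1f KB)\n", rate, bytes, float64(bytes)/1024)
	}
}
```

Even at a lo-fi 8 kHz that is 32,000 bytes, already nearly eight times the size of the entire 4kB demo.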

And modern graphics aren’t limited to 2-colour sprites (more colours were achieved via palette swapping) at 8x8 pixels. Scale that up to 32 bits (not colours, bits) and you’re increasing the colour depth by literally 32 times. And that’s before you scale again from 64 pixels to millions of pixels.

You’re then talking about memory growth that multiplies across every dimension at once.
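To put rough numbers on that multiplication (the resolutions are my own illustrative picks):

```go
package main

import "fmt"

func main() {
	// An 8x8 sprite at 1 bit per pixel:
	spriteBytes := 8 * 8 / 8 // = 8 bytes
	// Full framebuffers at 32 bits (4 bytes) per pixel:
	fullHD := 1920 * 1080 * 4
	uhd := 3840 * 2160 * 4
	fmt.Printf("8x8 1-bit sprite:        %d bytes\n", spriteBytes)
	fmt.Printf("1080p 32bpp framebuffer: %.1f MB\n", float64(fullHD)/(1<<20))
	fmt.Printf("4K 32bpp framebuffer:    %.1f MB\n", float64(uhd)/(1<<20))
	fmt.Printf("sprite to 4K growth:     %dx\n", uhd/spriteBytes)
}
```

The jump from an 8-byte sprite to a ~32 MB 4K framebuffer is a factor of about four million, which is the multiplication across colour depth and pixel count compounding.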

I’ve written software for those 80s systems and modern systems too. And it’s simply ridiculous to compare graphics and audio of those systems to modern systems without taking into account the differences in resolution, colour depth, and audio bitrates.

> Application theming

Software 30 years ago was more amenable to theming: the more system widgets an application used, the more effectively a theme worked, because swapping the widgets restyled everything.

Now, we have grudging dark-mode toggles that aren't consistent or universal, not even rising to the level of configurability you got with Windows 3.1 themes, let alone things like libXaw3d or libneXtaw where the fundamental widget-drawing code could be swapped out silently.

I get the impression that since about 2005, theming has been on the downturn. Windows XP and OSX both were very close to having first class, user-facing theming systems, but both sort of chickened out at the last minute, and ever since, we've seen less and less control every release.

I think what you're describing as "theming" is more "custom UI". It used to be reserved for games, where stock Windows widgets broke immersion in a medieval fantasy strategy simulator and you were legally obliged to make the cursor a gauntlet or sword. But Electron said to the entire world "go to town, burn the system Human Interface Guidelines and make a branded nightmare!" when your application is a smart-bulb controller or a text editor that could perfectly well fit with native widgets.

We are talking about software development not user configuration. So “theming” here clearly refers specifically to the applications shipping non-standard UIs.

This also isn’t a trend that Electron started. Software has been shipping with bespoke UIs for nearly as long as UI toolkits have been a thing.

>But Electron said to the entire world "go to town, burn the system Human Interface Guidelines and make a branded nightmare!"

TBH this sounds pretty medieval too.

> The savings there would be negligible (in modern terms)

A word of praise for Go: it is pretty performant while using very little memory. I inherited a few Django apps, and each thread just grows to 1GB. Running something like Celery quickly eats up all memory and starts thrashing. My Go replacements idle at around 20MB, and are a lot faster. It really works.

I’ve written a $SHELL and a terminal emulator in Go. It has its haters on HN but I personally rather like the language.

> The savings there would be negligible (in modern terms) but the development cost would be significantly increased.

...and this effort, and small savings here and there, is what brings the massive savings at the end of the day. Electron is what "4KB here and there won't hurt", "JS is a very dynamic language so we can move fast", and "time to market is king, software is cheap, the network is reliable, YOLO!" look like banged together. It's a big "Leeroy Jenkins!" move in the worst possible sense, making users pay every day with resources and lost productivity to save a developer a couple of hours at most.

Users are not cattle to milk, they and their time/resources also deserve respect. Electron is doing none of that.

> You quoted where I said that modern resolutions are literally orders of magnitude greater and assets are stored as bitmaps / PCM, then totally ignored that point.

Did you watch or run any of these demos? Some (if not all) of them scale to 4K, and all of them have more than two colors. All are hardware accelerated, too.

> And modern graphics aren’t limited to 2-colour sprites (more colours were achieved via palette swapping) at 8x8 pixels. Scale that up to 32 bits (not colours, bits) and you’re increasing the colour depth by literally 32 times. And that’s before you scale again from 64 pixels to millions of pixels.

Sorry to say, but I know what graphics and high-performance programming entail. Two friends of mine developed their own engines, and I manage HPC systems. I know how much memory matrices need, because everything is matrices after some point.

> Safety nets are not a waste.

I didn't say they are waste. That quote is out of context. Quoting my comment's first paragraph, which directly supports the part you quoted: "Yes, but this doesn't prevent you from being mindful and selecting the right tools with smaller memory footprint while providing the features you need."

So, what I argue is, you don't have to bring in everything and the kitchen sink if all you need is a knife and a cutting board. Bring in the countertop and some steel gloves to prevent cutting yourself.

> I’ve written software for those 80s systems and modern systems too. And it’s simply ridiculous to Compare graphics and audio of those systems to modern systems without taking into account the differences in resolution, colour depth, and audio bitrates.

Me too. I also record music and work on high performance code. While they are not moving much, I take photos and work on them too, so I know what happens under the hood.

Just watch the demos. It's worth your time.

> Electron is doing none of that.

I agree. I even said in my comment that Electron was one piece of bloat I didn’t agree with. So it wasn’t factored into the calculations I was presenting to you.

> Did you watch or ran any of these demos? Some (if not all) of them scale to 4K and all of them have more than two colors.

You mean the ones you added after I replied?

> I didn't say they are waste. That quote is out of context.

Every part of your comment was quoted in my comment, bar the stuff you added after I commented.

> Had two friends develop their own engines

I have friends who are doctors but that doesn’t mean I should be giving out medical advice ;)

> Just watch the demos. It's worth your time.

I’m familiar with the demo scene. I know what’s possible with a lot of effort. But writing cool effects for the demo scene is very different to writing software for a business which has to offset developer costs against software sales and delivery deadlines.

I’m also not advocating that software should be written in Electron. My point was modern software, even without Electron, is still going to be orders of magnitude larger in size and for the reasons I outlined.

I made no edits after your comment appeared. Yes, I made edits, but your reply was not visible to me while I made them. Sometimes HN delays replies, and you're accusing me of something I didn't do. That's not nice.

> writing cool effects for the demo scene is very different to writing software for a business which has to offset developer costs against software sales and delivery deadlines.

The point is not "cool effects" and "infinite time" though. If we continue about talking farbrausch, they are not bunch of nerds which pump out raw assembly for effects. They have their own framework, libraries and whatnot. Not dissimilar to business software development. So, their code is not that different from a business software package.

For the size: while you can't fit a whole business software package into 64kB, you don't need to choose the biggest and most inefficient library "just because". By spending a couple of hours more, you might find a better library/tool that lets you build a much better software package.

Again, for the third time: while safety nets and other doodads make software packages bigger, cargo-culting and worshipping deadlines and ROI more than the product itself contribute more to software bloat. That's my point.

Oh I overlooked this gem:

> I have friends who are doctors but that doesn’t mean I should be giving out medical advice ;)

Yet we designed some parts of that thing together, and I had the pleasure of fighting GPU drivers with them, trying to understand what the hardware was doing while it ignored what we asked of it.

IOW, yep, I didn't write one, but I was neck deep in both of them, for years.

> I made no edits after your comment appeared. Yes, I made edits, but your reply was not visible to me while I made them.

Which isn’t the same thing as what I said.

I’m not suggesting you did it maliciously, but the fact remains they were added afterwards so it’s understandable I missed them.

> Yet we designed some parts of that thing together, and I had the pleasure of fighting GPU drivers with them, trying to understand what the hardware was doing while it ignored what we asked of it.

That is quite a bit different from your original comment though. This would imply you also worked on game engines and it wasn’t just your friends.

That first one was discussed on HN before, as its source code was also released: https://news.ycombinator.com/item?id=11848097

I was sure, once I saw the descriptions, that what you were posting were Farbrausch prods! Do you know if anyone has come close to this level since?

I haven't followed the scene for the last couple of years, but I doubt it. On the other hand, there are other very capable people doing very interesting things.

That C64 demo doing sprite wizardry comes to mind, and 8088MPH. The latter, as you most probably know, can't be emulated since it (ab)uses the hardware directly. :D

As a bit of trivia: after watching .the .product, I declared "if a computer can do this with a 64kB binary, and people can make a computer do this, I can do this", and high-performance/efficient programming became my passion.

From any mundane utility to something performance-sensitive, that demo is my north star. The code I write shall be as small, performant and efficient as possible while cutting no corners. This doesn't mean everything is written in assembly, but utmost care is given to how something I wrote works and feels while it's running.

Your third example seems to generate 2GB of data at runtime, so it's misleadingly minimalistic.

All of them generate tons of data (up to tens of gigabytes or more) at runtime, but they output it directly and don't store it on disk or in RAM.

They are highly dynamic programs, and not very different from game engines in that regard.

> misleadingly minimalistic.

That's the magic of these programs, and of the demoscene in general. No misleading. That's the goal.

I’m on my phone so I cannot run it, but you cannot generate data and not store it somewhere. It’s going to consume either system resources (RAM/storage) or video resources (VRAM).

If your point is that it uses gigabytes of VRAM instead of system memory, then I think that is an extremely weak argument for how modern software doesn’t need much memory, because all you’re doing is shifting that cost from one stack of silicon to a different stack of silicon. But the cost is still the same.

The only way around that is to dynamically generate those assets on the fly and stream them to the video card. But then you’re sacrificing CPU efficiency for memory efficiency. So the cost is still there.

And I’ve already discussed how data compresses better as vectors than as bitmaps and PCM, but is significantly harder to work with. Using vectors / trackers is another big trick for demos that isn’t really practical for a lot of day-to-day development, because they take more effort and the savings in file sizes are negligible for people whose disks are measured in TB, not even GB.

As the saying goes: there’s no such thing as a free lunch.

All the demos I have shared with you are designed to run on resource-constrained systems. Using all the resources available on the system is a big no-no from the start.

Instead, as you guessed, these demos generate assets on the fly and stream them to the respective devices. You cite inefficiencies. I say they run at more than 60 FPS on these constrained systems. Remember, these are early-2000s systems, not that powerful by today’s standards, yet these small binaries use them efficiently and generate real-time rendered CG on the fly.

Nothing about them is inefficient or poor. Instead they are marvels.

> You cite inefficiencies.

That’s not what I said. I said you’re trading memory footprint for CPU footprint.

This is the correct way to design a demo but absolutely the wrong way to design a desktop application.

They are marvels, I agree. But, as I said before, there’s no such thing as a free lunch. At the risk of stating the obvious: if there wasn’t a trade-off to be made, all software would already be written that way.

I would also add internationalization. There were multi-language games back in the day, but the overhead of producing different versions for different markets was extremely high. Unicode has .. not quite trivialized this, but certainly made a lot of things possible that weren't.

Much respect to the people who've managed to retrofit it: there are guerrilla-translated versions of some Japanese-only games.

> this is all before you take into account that modern graphics and audio are bitmap / PCM and run at resolutions literally orders of magnitude greater

Yes, people underestimate how much this contributes, especially to runtime memory usage.

The framebuffer size for a single 320x200 image with 16 colours is 32k, so nearly the same amount of memory as this entire game.

320x200 being an area of screen not much larger than a postage stamp on my 4k monitor.

The technical leap from 40 years ago never fails to astound me.

The 48k Spectrum had a 1-bit "framebuffer" with colours allocated to 8x8 character tiles. Most consoles of the time were entirely tile/sprite based, so you never had a framebuffer in RAM at all.

I think it's a valid view that (a) we have way more resources and (b) they are sometimes badly used, in ways that result in systems being perceptibly slower than the C64 when measured in raw latency between user input and interaction response. Usually because of some crippling system bottleneck that everything is forced through.

> all contribute massively to software “bloat”.

Could you point to an example where those gigs were really "massively" due to crash handling, bounds checks, etc.?

Most software doesn’t consume multiple gigabytes of memory outside of games and web browsers.

And it should be obvious why games and web browsers do.

Unfortunately "most software" might be a web browser these days.

Not “most”, but definitely a depressing and increasing number.

And as I said elsewhere, I do consider Electron to be bloat.

But it’s also worth discussing Electron as an entirely separate topic because it’s a huge jump in memory requirements from even “bloated” native apps.

This I think is a core part of the problem when discussing sizes from C64 era to modern applications:

1. You have modern native apps vs Electron

2. Encryption vs plain text

3. High resolution media vs low resolution graphics and audio

4. Assembly vs high level runtimes

5. Static vs dynamically linked libraries

6. Safety harnesses vs unsafe code

7. Expected features like network connectivity vs an era when that wouldn’t be a requirement

8. Code that needs to be supported for years of updates by a team of developers vs a one man code base that never gets looked at again after the cassettes get shipped to retail stores.

…and so on.

Each of these individually can contribute massively to differences in file sizes and memory footprints. And yet we are not defining those parameters in this discussion so we are each imagining a different context in our argument.

And then you have other variables like:

1. What counts as large? 5 GB is big by today’s standards, but even 5 MB would have been unimaginable by C64 standards, and that is three orders of magnitude smaller. One commenter even discussed 250 GB as “big”, which is unimaginable to the average user today.

2. Are we talking about disk space or RAM? One commenter discussed using GBs of GPU memory as a way to save system memory, but that feels like a cop-out to me because it’s still GBs of system resources, far beyond anything the C64 used.

3. Software complexity: it takes a lot more effort to release software these days because you work as a team and need to adhere to security best practices. And we still see plenty of occasions where people get that wrong. So it makes sense that people use general-purpose libraries instead of building everything from scratch to reduce the footprint, particularly when developers are expensive and projects have (and always have had) deadlines that need to be met. So do we factor developer efficiency into our equation or not?

In short, this is such a fuzzy topic that I bet everyone is arguing a similar point but from a different context.

I recently implemented a drop-in replacement for a component of ours: the old one used 250GB of memory, the new one uses 6GB, and it's exactly the same from the outside.

Bad code is bad code and poor choices are poor choices, but I think it’s often pretty fair to judge things harshly on resource usage.

Sure, but if you’re talking about 250GB of memory then you’re clearly discussing edge cases vs normal software running on an average person's computer. ;)

Back in the day people had BASIC, and some machines had Forth, and it was like

        print "Hello world" 
or

        ." Hello world " / .( Hello world )
for Forth.

By comparison, given how they optimized games for 8- and 16-bit machines, I should be able to compile Cataclysm DDA:BN on my potato netbook, and yet it needs GIGABYTES of RAM to compile. It's crazy that you need damn swap for something that required far less RAM 15 years ago with the same features.

If the game were reimplemented in Golang it wouldn't feel many times slower. But no, we are suffering the worst from both sides of the coin: C++ should have been replaced by Inferno (from the Plan 9 people, the C and Unix creators) and now by Golang, their cousin. Instead we get horrible compile times, horrible and incompatible ABIs, featuritis, crazy template syntax and, if you are lucky, memory safety.

Meanwhile, I wish the Inferno fork Purgatorio got a seamless mode (no virtual desktops) so you could fire up an application in a VM integrated with the guest window manager, a la Java, and that's it. Limbo+Tk+SQLite would have been incredible for CRUD/RAD software once the GUI was polished up a little, with sticky menus like TCL/Tk and the like. In the end, if you know Golang, you could learn Limbo's syntax (same channels, too) with ease.

BASIC was slow in the 80s. Games for the C64 (and similar machines) were written in machine code.

> By comparison, giving how they optimized the games for 8 and 16 bit machines I should have been able to compile Cataclysm DDA:BN under my potato netbook and yet it needs GIGABYTES of RAM to compile, it crazy that you need damn swap for something it required far less RAM 15 years ago for the same features.

That’s not crazy. You’re comparing an interpreted, line-delimited ASCII listing with a compiler that converts structured ASCII into machine code.

The two processes are as different from one another as driving a bus is from being a passenger on it.

I don’t understand what your point is in the next two paragraphs, or what Go, TCL, UNIX or Inferno have to do with the C64 or modern software. So you’ll have to help me out there.

Compare Limbo+Tk under Inferno with current C#/Java. Or C++ against Plan 9 C.

We have impressive CPUs running really crappy software.

Remember Claude Code asking for 66GB for a damn CLI AI agent, for something NetBSD on a VAX (real or emulated) from 1978 could do with NCurses in milliseconds every time you spawn Nethack or any other NCurses tool/game.

On speed: Forth on the ACE was faster than BASIC running on the ZX80. So it wasn't about using a text-parsed language. Forth was fast, but people were ready for neither RPN nor managing the stack; people thought in an algebraic way.

But that was an 'obsolete' mindset, because once you hit high school you were supposed to split big problems into smaller tasks (equations). To implement a 2nd-degree equation solver in Forth you wouldn't juggle the stack; you'd create discrete functions (words) for the discriminant and so on.

In the end you just managed two stack items per step.

If Forth had won instead of BASIC, then instead of treating spaghetti code as normal procedure we would have been taught from the start that decomposing code into small functions is the right thing to do.

Most dialects of BASIC actually had functions too. They just weren’t popularised because line numbers were still essential for line editing on home micros.

> On speed, Forth for the ACE was faster than Basic running under the ZX80. So, it wasn't about using a text-parsed language.

Forth and BASIC are completely different languages and you’re arguing a different point to the one I made too.

Also I don’t see much value in hypothetical arguments like “if Forth won instead of BASIC” because it didn’t and thus we are talking about actual systems people owned.

I mean, I could list a plethora of technologies I’d have preferred to dominate: Pascal and LISP being two big examples. But the C64 wasn’t a lisp machine and people aren’t writing modern software in Pascal. So they’re completely moot to the conversation.

They were different but both came in-ROM and with similar storage options (cassette/floppy).

On Pascal: Delphi was used for tons of RAD software in the 90s, both for the enterprise and for home users, with zillions of shareware (and shovelware) titles. And Lazarus/FPC + SQLite3 today is not bad at all.

On Lisp: it was used in niche places such as game engines, Emacs (Org Mode today is a beast), a whole GNU-supported distro (Scheme) and Maxima, among others.

Still, so-called low-level C++ is an example of picking the wrong route. C++ and Qt5/6 can be performant enough. But for a roguelike the compile-time performance is atrocious, and by design Go with its GC would fix 90% of the problems and even gain more portability.

I’m very aware of Lazarus, Delphi and Emacs. But they’re exceptions rather than industry norms.

And thus pointing them out misses the point I was making when, ironically, I was pointing out how you’re missing the original point of this discussion.

My point was about performance. Yes, BASIC over Forth was the worst choice back in the day, and you could say low-level stuff was done in assembler.

Fine. But the current choice for 'low level' stuff is C++, and I maintain that most C++ compilers have huge compile times (GCC), or do much better but still eat RAM like crazy (Clang); and except for a few kinds of software, the performance boost compared to Go doesn't look that huge for most tasks, Chromium/Electron and Qt aside.

For what software does 90% of the time, Go plus a nice UI toolkit would be enough to cover most tasks while giving you a safe language. Even for bloated proprietary IM clones such as Discord and Slack.

Because, ironically, a lot of optimized C++ exists to run bloated runtimes like Electron, tossing out everything C++ gives you, since most Electron software implements half an OS in every application.

With KDE and Qt you are at least sharing code, even when using Flatpak, which deduplicates things a little. With Electron you are running separate, isolated silos with no awareness of each other. You are basically running several 'desktop environments' at once.

You could say: hey, Go statically builds everything, so there's no gain from shared libraries... until you find that the Go compiler still does a better job, using less RAM on average than tons of other toolchains.

With Electron you are often shipping the whole debugging environment along with your app, loaded and running, delivering graphical software with far less performance than the 'bloated' KDE 3 software of the day, which managed bells and whistles in a Kopete chat window on an AMD Athlon. Qt3 tools felt snappy. Electron-based software everywhere has the appeal of running every GUI under TCL/Tk on a Pentium, modulo video decoders and the like. It would crawl next to pure Win32/Xlib on a Pentium 90 if everything were a Tk window with debugging options enabled.

So these are our current times: you've got an i7 with 16GB of RAM and you barely see any improvement with modern 'apps' over an i3 with 2GB of RAM running native software.

You’re talking about compiler footprint and runtime footprint in the same breath, but they’re entirely different processes (obviously) and I don’t think it makes any sense to compare the two.

C++ is vastly more performant than Go. I love Go as a language, but let’s not get carried away about Go’s performance.

It also makes no sense to talk about Electron as C++. The problem with Electron isn’t that it was written in C++; it’s that it’s ostensibly an entire operating system running inside a virtual machine executing JIT-compiled code.

You talked about using Go for UI work, but have you actually tried it? I’ve written a terminal emulator in Go, and UI performance was a big problem. Almost everything requires either CGO (causing portability problems) or tricks like WASM or dynamic calls that introduce huge performance overheads. This was something I benchmarked with SDL, so I have first-hand experience.

Then you have the issue that GUI operations need to run on a single OS thread, which causes problems writing idiomatic Go that calls GUI widgets.
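For readers who haven't hit this: the usual workaround is to pin the main goroutine to its OS thread and funnel all UI work to it through a channel. A hypothetical sketch (`runUI` is a placeholder for a real toolkit call; actual bindings differ):

```go
package main

import (
	"fmt"
	"runtime"
)

func init() {
	// Pin the main goroutine to the main OS thread before the scheduler
	// can move it; many C toolkits require all UI calls on this thread.
	runtime.LockOSThread()
}

// runUI stands in for a real toolkit dispatch (hypothetical placeholder).
func runUI(task func()) { task() }

func main() {
	uiTasks := make(chan func())

	// Worker goroutines never touch the UI directly; they send closures.
	go func() {
		uiTasks <- func() { fmt.Println("draw widget") }
		close(uiTasks) // done submitting work
	}()

	// The event loop drains tasks on the locked main thread.
	for task := range uiTasks {
		runUI(task)
	}
}
```

This keeps toolkit calls thread-safe, but it is exactly the kind of ceremony that makes "idiomatic Go calling GUI widgets" awkward.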

And then you have a crap-load of edge cases for memory leaks, where Go’s GC will collect the Go-side pointers but any allocations made outside of Go need to be manually freed.

In the end I threw out all the SDL code. It was slow to develop, hard to make pretty, and hard to maintain. It worked well, but it was just far too limiting. So I switched to Wails, which basically displays a WebKit window (on macOS), so it has a lower footprint than Electron, lets you write native Go code, and is super easy to build UIs with. I hate myself for doing this, but it was by far the best option available, depressingly.

I know C++ is far more performant than Go, but for some games and software C++ isn't needed at all; nchat with tdlib, for example (that library should be Go-native by itself, it's not rocket science). These could run on low-end machines with barely any performance loss. In those cases there's nothing to gain from C++, because even compared to C, most C++ software (save for Dillo and niche cases) won't run as snappily as the C equivalents. Running them under Golang won't make them unusable, for sure.

On the GUI, there's Fyne; but what Go truly needs is a default UI promoted by the Golang developers, written in the spirit of Tk. Tk itself would be good enough. Even Limbo for Inferno (Go's inspiration) borrowed it from TCL. Nothing fancy, but fast and usable enough for most entry-level tasks.

Python ships Tkinter by default because it weighs next to nothing, and most platforms share a similar syntax for packing the widgets. It's not fancy, and on mobile you need to write dedicated code and set up theming, but again, if people managed to get AndroWish working as a proof of concept, Go could do it better...

Another good use case for Go would be Mosh. C++ and Protobuf? Golang would have been fine for this. The C++ Mosh would be snappier (you can feel the difference with some software, like Bombadillo and Anfora vs Telescope), but on 'basic' modern machines (the first 64-bit machines with Core Duo or AMD64 processors) there would be almost no perceptible delay for the user.

Yes, 32-bit machines, sorry, but by 2030 and beyond I expect using these to be like using 16-bit DOS machines in 1999. Everyone moved on, and 32-bit machines were cheap enough. Nowadays it's the same: I own an Atom N270 and I love it, but I don't expect to reuse it as a client or for Go programming (modulo eForth) in 4 years; I'd expect to compute everything on the low-end 64-bit machines I own.

But it will be a good Go test case, for sure. If it runs fast on the Atom, it will shine under amd64. With the current crisis, everyone should expect to refurbish and keep 'older' machines just in case. And be sure that long compile times will have to be cut in half, even if you use ccache. RAM and storage will be expensive, and current practices will be pretty much discarded. Yes, C++ will be used in those times, but Golang too. Forget Electron/Chromium being used as a standalone toolkit outside of being the engine of a browser.

And if oil/gas usage is throttled for the common folk, EVs and electric heating will reach crazy numbers. Again, telecoms and data centres will have their prices skyrocket so the rise in power draw doesn't black out a whole country/state. Expect computing power caps, throttled resolutions for internet media/video/RDP content, even bandwidth caps (unless you pay a premium price, that is), and tons of changes. React developers using 66 GB of RAM for Claude Code... forget it. Either they rebase their software onto Go... or they've already lost.

>Sure, if you don’t count safety features like memory management, crash handling, automatic bounds checks and encryption cyphers; as anything useful.

>Networking stacks, safety checks, encryption stacks, etc all contribute massively to software “bloat”.

They had most of this stuff in the 1980s, and even earlier really. Not on the little 8-bit microcomputer that cost $299 that you might have had as a kid, but it certainly did exist on large time-sharing systems used in universities, industry, and government. And those systems had only a tiny fraction of the memory that a typical x86-64 laptop has now.

> They had most of this stuff in the 1980s, and even earlier really. Not on the little 8-bit microcomputer that cost $299 that you might have had as a kid

Those are the systems we are talking about though.

> but they certainly did exist on large time-sharing systems used in universities and industry and government. And those systems had only a tiny fraction of the memory that a typical x86-64 laptop has now.

Actually, these systems didn’t. In the early 80s most protocols were still plain ASCII. Even remote shell connections weren’t encrypted; remember that SSH wasn’t released until 1995. Likewise for SSL.

Time-sharing systems were notoriously bad at sandboxing users too. Smart pointers, while available since the 60s, weren’t popularised in C++ until the 90s. Memory overflow bugs were rife (and still are) in C-based languages.

If you were using Fortran or ALGOL, it was a different story. But by the time the 80s came around, mainframe OSes weren’t being written in Fortran or ALGOL any longer. Software running on top of them might have been, but you were still at the mercy of all that insecure C code running beneath it.

> Actually, these systems didn’t. In the early 80s most protocols were still plain ASCII.

DES was standardised in '77, and in use before that. SSL was not the first time the world adopted encrypted protocols.

The NSA wouldn't have bothered weakening the standard if it were something nobody used.

DES wasn’t commonplace though (or at least not on the mainframes I worked on). But maybe that says more about the places I worked early in my career?

Also DES is trivial to crack because it has a short key length.

Longer keys require more compute power, and thus the system requirements for handling encryption increase as the hardware for cracking it becomes more powerful.

The key size in IBM's original design was larger before standardisation. DES is trivial to break because of NSA involvement in weakening it at every corner. [0]

> In the development of the DES, NSA convinced IBM that a reduced key size was sufficient;

Minitel used DES, among other security layers, and was in use for credit cards, hospitals, and a bunch of other places. The "French web" very nearly succeeded, and it did have these things in '85. It wasn't just mainframes: France gave away Minitel terminals to the average household.

[0] https://www.intelligence.senate.gov/wp-content/uploads/2024/...

Yeah, I’d written about Minitel in a tech journal several years back. It’s a fascinating piece of technology, but sadly I never got to see one in real life.

I worked on one payroll mainframe in the 80s that didn’t have DES. So it wasn’t quite as ubiquitous as you might think. But it does still sound like it was vastly more widespread than I realised.

This. An old netbook can emulate a PDP-10 running ITS, Maclisp, and some DECnet/TCP-IP clients and barely suffer any lag...

Also, the Amigas have AmiSSL, and it will run on a 68040 or an FPGA with the same constraints. IRC over TLS, Gemini, JS-less web, Usenet, email... none of it requiring tons of GB.

Nowadays even the Artemis crew can't properly launch Outlook. If I were the IT manager I'd just set up Claws Mail/Thunderbird with file attachments, with msmtp + isync as backends (caching and batch sending/receiving of email, you know, high-end technology inspired by the 80s), and NNCP to relay packets whenever connectivity cuts happen in space, so NNCP can just push packets on demand.

The cost? My Atom N270 junk can run NNCP, and it's written in damn Golang. Any user can understand Thunderbird/Claws Mail. They don't need to set up anything; the IT manager would set it all up and the mail client would run seamlessly, you know, with a fancy GUI for everything.
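
For the curious, a minimal sketch of what the isync side of such a setup might look like (hostnames, user, and paths are placeholders, and directive names are from isync 1.4-era mbsync; check your version's man page):

```
# hypothetical ~/.mbsyncrc sketch; host/user/paths are placeholders
IMAPAccount station
Host imap.example.org
User crew@example.org
PassCmd "pass show station-imap"
TLSType IMAPS

IMAPStore station-remote
Account station

MaildirStore station-local
Path ~/Mail/station/
Inbox ~/Mail/station/INBOX

Channel station
Far :station-remote:
Near :station-local:
Patterns *
SyncState *
```

Run `mbsync station` from cron or whenever a link window opens; the local Maildir is what Claws Mail/Thunderbird reads, so the client never blocks on the network.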

Yet we are suffering the 'wonders' of vibe coding and Electron programmers pushing fancy technology where the old stuff would just work, as it's been tested like crazy.

> Also the Amiga's have AmiSSL and it will run on a 68040 or some FPGA with same constraints. IRC over TLS, Gemini, JS-less web, Usenet, EMail... not requiring tons of GB.

AmiSSL came out long after the C64 was a relic, and it required hardware an order of magnitude more powerful than the C64 ;)