Early programming languages had to work with the limited hardware capabilities of the time in order to be efficient. Nowadays, we have so much processing power available that the compiler can optimize the code for you, so the language doesn't have to follow hardware capabilities anymore. So it's only logical that current languages should work within the limitations of the compilers. Perhaps one day those limitations will be gone as well for practical purposes, and it would be interesting to see what programming languages could be made then.

> Nowadays, we have so much processing power available that the compiler can optimize the code for you, so the language doesn't have to follow hardware capabilities anymore.

That must be why builds today take just as long as in the 1990s, to produce a program that makes people wait just as long as in the 1990s, despite the hardware being thousands of times faster ;)

In reality, people just throw more work at the compiler until build times become "unbearable", and optimize their code only until it feels "fast enough". These limits of "unbearable" and "fast enough" are built into humans and don't change in a couple of decades.

Or as the ancient saying goes: "Software is a gas; it expands to fill its container."

At least we can build software systems that are a few orders of magnitude more complex than in the 90s for approximately the same price. The question is whether the extra complexity also offers extra value.

True, but a lot of that complexity is also just pointless boilerplate / busywork disguised as 'best practices'.

I'd be curious to see an example of how one of these "best practices" makes software unbearable or slow.

Some C++ related 'best practices' off the top of my head:

- put each class into its own header/source file pair (a great way to explode your build times!)

- generally replace all raw pointers with shared_ptr or unique_ptr

- general software patterns like model-view-controller, a great way to turn a handful of lines of code into dozens of files with hundreds of lines each

- use exceptions for error handling (although these days this is widely considered a bad idea, but it wasn't always)

- always prefer the C++ stdlib over self-rolled solutions

- etc etc etc...

It's been a while since I closely followed modern C++ development, so I'm sure there are a couple of new ones, and some which have fallen out of fashion.

> - put each class into its own header/source file pair (a great way to explode your build times!)

Only if you fail to use binary libraries in the process.

Apparently folks like to explode build times with header only libraries nowadays, as if C and C++ were scripting languages.

> - generally replace all raw pointers with shared_ptr or unique_ptr

Some folks care about safety.

I have written C applications with handles, doing two-way conversions between pointers and handles, and I am not talking about the Windows 16-bit memory model.

> - general software patterns like model-view-controller, a great way to turn a handful lines of code into dozens of files with hundreds of lines each

I am old enough to have used the Yourdon Structured Method in C applications.

> - use exceptions for error handling (although these days this is widely considered a bad idea, but it wasn't always)

Forced return code checks with automatic stack unwinding are still exceptions, even if they look different.

Also what about setjmp()/longjmp() all over the place?

> - always prefer the C++ stdlib over self-rolled solutions

Overconfidence that everyone knows better than the people paid to write compilers usually turns out badly, unless they are actually top developers.

There are plenty of modern best practices for C as well; that is how we try to avoid making a mess of what people think is a portable assembler, and industries rely on MISRA, ISO 26262, and similar standards for that matter.

> put each class into its own header/source file pair (a great way to explode your build times!)

Is that really sufficient to explode build times on its own? Especially if you're just using the more basic C++ features (no template (ab)use in particular).

Not at all, you can write in the C subset that C++ supports and anti-C++ folks will still complain.

Meanwhile, the C builds done on UNIX workstations (AIX, Solaris, HP-UX) for our applications back in 2000 took about an hour per deployment target, hardly blazing fast.

The problem is: "the platform" is never defined.

When you decouple the language from the hardware and you don't specify an abstraction model (like the Java VM does), "the platform" is just whatever the implementer feels like at that moment.

Isn't that the tail wagging the dog? If you build the language to fit current compilers then it will be impossible to ever redesign those compilers.

Maybe, but if you don't consider the existing compilers you run the risk of making something that is unimplementable in one of the existing compilers, or perhaps at all. (C++ has had some issues with this in the past, which I think is why implementability is explicitly a consideration in the process now.)

Why would that be impossible? Most programming languages are still Turing complete, so you can build whatever you want in them.

You said this was an efficiency issue, and Church-Turing says nothing about efficiency.

"Beware of the Turing tar-pit in which everything is possible but nothing of interest is easy."

- Alan Perlis, Epigrams on Programming

It's not really about "limitations" of the hardware, so much as it is about the fact that things have crystallized a lot since the 90s. There are no longer any mainstream architectures using big-endian integers for example, and there are zero architectures using anything but two's complement. All mainstream computers are Von Neumann machines too (programs are stored; functions are data). All bytes are 8 bits wide, and native word sizes are a clean multiple of that.

Endianness will be with us for a while, but modern languages don't really need to consider the other factors, so they can take significant liberties in their design that match the developer's intuition more precisely.

I was thinking more about higher-order things, like a compiler being able to see that your for-loop is just counting the number of bits set in an integer, and replacing it with a popcount instruction, or being able to replace recursion with tail calls, or doing complex things at compile-time rather than run-time.

At least the popcount example (along with some other 'bit twiddling hacks' inspired optimizations) is just a magic pattern-matching trick that happens fairly late in the compilation process (AFAIK at least). The alternative of simply offering an optional popcount builtin is a completely viable low-tech solution that was already possible in the olden days, and it has the advantage of being entirely predictable instead of depending on magic compiler tricks.

Basic compile-time constant folding also isn't anything modern; even the most primitive 8-bit assemblers of the 1980s let you write macros and expressions that were evaluated at compile time - and that gets you maybe 80% of the way to the much more impressive constant folding over deep call stacks that modern compilers are capable of (e.g. what's commonly known as 'zero cost abstraction').

Nope. Performance really matters. Even today. And even for web applications! Just remember how you feel using a slow sluggish website vs. a snappy fast one. It's night and day.