It's not really about "limitations" of the hardware, so much as it is about the fact that things have crystallized a lot since the 90s. There are no longer any mainstream architectures using big-endian integers for example, and there are zero architectures using anything but two's complement. All mainstream computers are von Neumann machines too (stored-program: code lives in the same memory as data). All bytes are 8 bits wide, and native word sizes are a clean multiple of that.
Endianness will be with us for a while, but modern languages don't really need to consider the other factors, so they can make design choices that match the developer's intuition much more closely.
I was thinking more about higher-order things, like a compiler being able to see that your for-loop is just counting the number of bits set in an integer, and replacing it with a popcount instruction, or being able to replace recursion with tail calls, or doing complex things at compile-time rather than run-time.
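To make the popcount case concrete, here's roughly the kind of loop an optimizer can recognize (a sketch in C; the function name is made up for the example). LLVM-based compilers, for instance, can pattern-match this "clear the lowest set bit" idiom and lower the whole loop to a single popcount instruction when the target supports one (e.g. x86-64 built with -mpopcnt):

    #include <stdint.h>

    /* Kernighan-style bit count: clears the lowest set bit each
       iteration. An optimizer can recognize this idiom and emit a
       hardware popcount instruction instead of the loop. */
    int count_set_bits(uint64_t x)
    {
        int count = 0;
        while (x) {
            x &= x - 1;  /* drop the lowest set bit */
            count++;
        }
        return count;
    }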
At least the popcount example (along with some other 'bit twiddling hacks' inspired optimizations) is just a magic pattern-matching trick that happens fairly late in the compilation process though (AFAIK at least). The alternative of simply offering an optional popcount builtin is a completely viable low-tech solution that was already possible in the olden days, and it has the advantage of being entirely predictable instead of depending on magic compiler tricks.
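For comparison, the builtin route looks like this (a sketch assuming GCC or Clang, whose __builtin_popcountll does exist; C++20 similarly added std::popcount). The intent is stated directly, so you don't have to hope the optimizer spots the loop:

    #include <stdint.h>

    /* Explicit builtin: no pattern matching needed. Compiles to a
       popcnt instruction where the target has one, and to a
       software fallback otherwise. */
    static inline int count_set_bits_builtin(uint64_t x)
    {
        return __builtin_popcountll(x);
    }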
Basic compile-time constant folding also isn't anything modern; even the most primitive 8-bit assemblers of the 1980s let you write macros and expressions that were evaluated at assembly time - and that gets you maybe 80% of the way to the much more impressive constant folding over deep call stacks that modern compilers are capable of (e.g. what's commonly known as 'zero cost abstraction').
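A rough illustration of the gap (made-up function names; a sketch, not a claim about any specific compiler version): an old assembler could fold an expression like WIDTH*HEIGHT+1 inside a macro, while a modern C compiler will inline and fold through a whole call chain. At -O2, GCC and Clang typically reduce the call below to the constant 111 with no function calls left at run time:

    #include <stdio.h>

    /* Small 'abstraction layers' over a trivial computation. */
    static int square(int x)       { return x * x; }
    static int scale(int x, int k) { return square(x) * k; }
    static int pipeline(int x)     { return scale(x, 3) + square(x + 1); }

    int main(void)
    {
        /* All inputs are compile-time constants, so an optimizing
           compiler can inline the whole chain and fold it down to
           printf("%d\n", 111); -- constant folding across a (small)
           call stack. */
        printf("%d\n", pipeline(5));
        return 0;
    }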