What I found interesting is that all the digits in ASCII start with 0x3, e.g. 0x30 for 0, 0x31 for 1, ..., 0x39 for 9. I thought it was accidental, but it was actually intentional: it made it possible to build simple counting/accounting machines with minimal circuit logic using BCD (Binary-Coded Decimal). That was a wow moment for me ;)
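A minimal sketch of that property (my own illustration, not anything from a period machine): because the digits sit at 0x30..0x39, the numeric value of a digit character is just its low nibble, so masking with 0x0F (or subtracting '0') is all the "decoding" a parser needs.

```c
#include <stdio.h>

int main(void) {
    /* The low nibble of '0'..'9' (0x30..0x39) is the digit's value,
     * so ASCII-to-number conversion is just a mask and an accumulate. */
    const char *input = "1963";
    int value = 0;

    for (const char *p = input; *p != '\0'; p++) {
        int digit = *p & 0x0F;       /* 0x31 & 0x0F == 1, etc. */
        value = value * 10 + digit;
    }

    printf("parsed %s as %d\n", input, value);   /* parsed 1963 as 1963 */
    return 0;
}
```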

Work on ASCII started in 1960. A terminal back then would have been a mostly mechanical teletype (keyboard and printer, possibly with a paper tape reader/punch), without much in the way of "circuit logic". Think of it more as: a bit caused a physical shift of a linkage, doing something like hitting the upper or lower part of a hammer, or selecting a separate set of hammers for the same remaining bits.

Look at the Teletype ASR-33, introduced in 1963.

Yes, it's true the ASR-33 was the first application, but IBM had influence on the ANSI/ASA committee and on ASCII standardisation. In 1963 the IBM System/360 was using BCD for quick "parsing" of digits, and so were its peripherals. I remember it from some interview with an old IBM tech employee ;)

And this is exactly why I find the usual 16x8 layout of the ASCII table at least as insightful as the proposed 32x4 (well, 4x32, but that's just a rotation).
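For anyone who hasn't stared at it that way, here's a quick sketch (my own, not from the article) that prints the table 16 codes per row: in that arrangement the digits 0x30..0x39 fill most of a single row, and uppercase and lowercase letters land exactly two rows (0x20) apart.

```c
#include <stdio.h>
#include <ctype.h>

int main(void) {
    /* Print the 128 ASCII codes 16 per row (the "usual 16x8" layout). */
    for (int row = 0; row < 8; row++) {
        for (int col = 0; col < 16; col++) {
            int c = row * 16 + col;
            if (isprint(c))
                printf(" %c ", c);
            else
                printf(" . ");     /* control characters and DEL shown as '.' */
        }
        putchar('\n');
    }
    return 0;
}
```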


I still wonder if it wouldn't have been better to let each digit be represented by its exact value, and then use the high end of the scale rather than the low end for the control characters. I suppose by 1970 they were already dealing with the legacy of backwards-compatibility, and people were already accustomed to 0x0 meaning something akin to null?

Either way you would still need some check to ensure your digits are digits and not some other type of character. Having zeroed-out memory read as a bunch of NUL characters instead of something like "00000000" would probably be useful, as "000000" is sometimes legitimate user input.
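A small sketch of that distinction, under the assumption we're talking about C-style byte buffers: a zeroed field holds NUL bytes (0x00), not '0' characters (0x30), so an untouched field is easy to tell apart from a user who really typed "000000".

```c
#include <stdio.h>
#include <ctype.h>
#include <string.h>

int main(void) {
    char field[8];
    memset(field, 0, sizeof field);   /* zeroed memory: eight NUL bytes */

    /* Reads as an empty string, not as a run of zero digits. */
    printf("zeroed field: \"%s\" (length %zu)\n", field, strlen(field));

    /* The "check that digits are digits" part. */
    const char *input = "000000";
    int all_digits = 1;
    for (const char *p = input; *p; p++)
        if (!isdigit((unsigned char)*p))
            all_digits = 0;
    printf("\"%s\" is %sa valid digit string\n", input, all_digits ? "" : "not ");
    return 0;
}
```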

NUL was often sent as padding to slow (printing) terminals. Although that was just before my time.