The D language default initializes floating point values to NaN. AFAIK, D is the only language that does that.

The rationale is that if the programmer forgets to initialize a float, and it defaults to 0.0, he may never realize that the result of his calculation is in error. But with NaN initialization, the result will be NaN and he'll know to look at the inputs to see what was not initialized.
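Roughly what that looks like in practice (a minimal sketch; the variable names are made up and a recent DMD/Phobos is assumed):

```d
import std.stdio;
import std.math : isNaN;

void main()
{
    double scale;                      // forgotten initialization: defaults to double.nan, not 0.0
    double total = scale * 100.0 + 5.0;

    writeln(total);                    // prints "nan"; the NaN propagates through the arithmetic
    assert(isNaN(total));              // so the uninitialized input is hard to miss
}
```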

It causes some spirited discussion now and then.

In the same spirit, the `char` type default initializes to 0xFF, which is an invalid Unicode value.

It's the same idea for pointers, which default initialize to null.
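Both of those defaults are easy to see (another small sketch, same assumptions as above):

```d
import std.stdio;

void main()
{
    char c;      // defaults to 0xFF, a byte that never appears in well-formed UTF-8
    int* p;      // defaults to null

    writefln("0x%02X", cast(ubyte) c);   // 0xFF
    writeln(p is null);                  // true
}
```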

Not too familiar with D, but isn't 0xFF ÿ (Latin Small Letter Y with Diaeresis) in Unicode? It's not valid UTF-8 or ASCII, but it's still a valid code point in Unicode.

I'm a fan of the idea in general, and don't think there's a better byte to use as an obviously-wrong default.

It's an invalid 8-bit code unit, which is what matters. It's a valid code point, but code points are just abstract numbers, not byte patterns.
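To make the distinction concrete, here's a sketch using Phobos's std.utf.validate (the one-byte array is just for illustration):

```d
import std.stdio;
import std.utf : validate, UTFException;

void main()
{
    // U+00FF ('ÿ') is a perfectly good code point...
    dchar cp = 0x00FF;
    writeln(cp);                          // prints ÿ

    // ...but the single byte 0xFF can never appear in well-formed UTF-8.
    char[] bytes = [cast(char) 0xFF];
    try
    {
        validate(bytes);                  // throws on malformed UTF-8
    }
    catch (UTFException)
    {
        writeln("not valid UTF-8");
    }
}
```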

The only byte representation that's 1:1 with Unicode code points is UTF-32, which I imagine the D char type can't store.
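For reference, D has a separate type for each encoding's code units, and only the 32-bit one can hold an arbitrary code point directly; a quick sketch, assuming a recent DMD:

```d
import std.stdio;

void main()
{
    // char, wchar, dchar are UTF-8, UTF-16, and UTF-32 code units respectively.
    writeln(char.sizeof, " ", wchar.sizeof, " ", dchar.sizeof);   // 1 2 4

    dchar y = '\u00FF';   // any single code point fits in a dchar
    writeln(y);           // ÿ
}
```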