It’s both hard to attack and a heavily audited system with a lot of attention paid to it.

That being said, see [1] from 2012. The challenge with security is that structural weaknesses can take a long time to be discovered, but once they are, the result is catastrophic. Modern Linux finally switched to a proper CSPRNG construction and relies far less on the numerology of entropy estimation it had been using (i.e., real security instead of theater). RDRAND has also been available on the x86 side for a long time, which is useful: even if it’s insecure, it gets mixed with other entropy sources, such as instruction execution time and scheduling jitter, which protects standalone servers and IoT devices.
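A minimal C sketch of that mixing idea, assuming an x86-64 compiler with RDRAND support. The real kernel hashes contributions into its pool with a cryptographic hash rather than XORing them; this only illustrates why a backdoored RDRAND can’t make the output worse:

    /* Sketch only: mix an RDRAND sample with timing jitter.
     * Compile with: gcc -mrdrnd mix.c (x86-64 only). */
    #include <immintrin.h>
    #include <stdint.h>
    #include <stdio.h>
    #include <time.h>

    static uint64_t jitter_sample(void)
    {
        /* Cheap jitter source: low bits of a high-resolution clock. */
        struct timespec ts;
        clock_gettime(CLOCK_MONOTONIC, &ts);
        return (uint64_t)ts.tv_nsec;
    }

    int main(void)
    {
        unsigned long long hw = 0;
        if (!_rdrand64_step(&hw)) /* returns 0 on hardware failure */
            hw = 0;               /* fall back to jitter alone */

        /* XOR-mixing means even a fully backdoored RDRAND cannot
         * cancel the entropy contributed by the other source, as
         * long as it cannot observe that source. */
        uint64_t mixed = (uint64_t)hw ^ jitter_sample();
        printf("mixed sample: %016llx\n", (unsigned long long)mixed);
        return 0;
    }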

Of course, you hit the nail on the head about the difficulty of distinguishing security theater: you won’t know whether the hardening was useful until there’s a problem. But there are enough knowledgeable people on it that it’s less security theater than it might seem, if you know what’s going on.

[1] https://www.usenix.org/system/files/conference/usenixsecurit...

Thirty years ago the BSDs already had a non-blocking /dev/random (there was no difference from /dev/urandom). OpenBSD especially wouldn’t have shipped something known to be insecure. Blocking /dev/random probably caused more issues (DoS, random hangs, etc.) than a non-blocking CSPRNG would have. The modern interface reflects that lesson, as the sketch below shows.
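As a contrast with the old blocking behavior, here is a small Linux-specific sketch using the modern getrandom() interface, which blocks only until the pool has been seeded once at boot; the GRND_NONBLOCK flag turns even that initial wait into an immediate EAGAIN instead of a hang (the call and flag are the real Linux API, the program itself is just an illustration):

    /* Sketch only: non-blocking randomness on modern Linux
     * (glibc 2.25+ provides getrandom() in <sys/random.h>). */
    #include <stdint.h>
    #include <stdio.h>
    #include <sys/random.h>

    int main(void)
    {
        uint8_t buf[32];
        /* flags=0 would block only until the CSPRNG is first seeded;
         * GRND_NONBLOCK fails fast instead of hanging early at boot. */
        ssize_t n = getrandom(buf, sizeof buf, GRND_NONBLOCK);
        if (n < 0) {
            perror("getrandom");
            return 1;
        }
        printf("got %zd random bytes\n", n);
        return 0;
    }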

Linux did /dev/random first, so naturally it had the oldest design for a few years, without the security-expert scrutiny and experience that the other OSes had for their implementations.

OpenBSD didn't exist yet when /dev/random and /dev/urandom were created for Linux.