Image decoding is fast enough in software that no one uses hardware decoders for still images. The extra bits are very cheap on both CPU and GPU, and by using them internally you keep intermediate calculations from accumulating rounding error, so you end up with a much cleaner size/quality trade-off. (Note that 10-bit output is still valuable on an 8-bit display, because it lets the display manager dither the image.)
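
A minimal sketch (not from the comment) of what that dithering step buys you: truncating a shallow 10-bit gradient straight to 8 bits collapses it into visible bands, while adding sub-LSB noise before rounding keeps the average level correct across neighbouring pixels. The function names and the simple random dither are illustrative assumptions; real display managers typically use ordered or temporal dithering.

```python
# Illustrative sketch: 10-bit -> 8-bit with and without dithering.
import random

def to_8bit_truncate(sample_10bit):
    # Drop the two extra bits: 0..1023 -> 0..255 (introduces banding).
    return sample_10bit >> 2

def to_8bit_dithered(sample_10bit, rng=random.random):
    # Add sub-LSB noise before rounding so neighbouring pixels average
    # out to the original 10-bit level.
    value = sample_10bit / 4.0 + (rng() - 0.5)
    return max(0, min(255, round(value)))

if __name__ == "__main__":
    # A shallow 10-bit gradient that spans only a few 8-bit codes.
    gradient = [512 + i for i in range(16)]
    print("truncated:", [to_8bit_truncate(s) for s in gradient])
    print("dithered: ", [to_8bit_dithered(s) for s in gradient])
```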
That is true! But AVIF is based on AV1. As a video codec, AV1 often does need to be implemented in dedicated hardware for cost and power efficiency reasons. I think the article is misleading in this regard: "This limitation comes from early digital video systems". No, it is very much a limitation for video systems in the current age too.