I take it you could choose to encode a code point using a larger number of bytes than are actually needed? E.g., you could encode "A" using 1, 2, 3 or 4 bytes?
Because if so, I don't really like that: it would mean that "equal sequence of code points" would not imply "equal sequence of encoded bytes" (the converse would still hold, of course), while offering no advantage that I can see.
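For concreteness, here's a minimal sketch (Python, not from the original question) of what such an "overlong" form of "A" would look like, and how a strict decoder such as Python's built-in UTF-8 codec treats it:

```python
# "A" is U+0041; its canonical UTF-8 encoding is a single byte.
canonical = "A".encode("utf-8")
assert canonical == b"\x41"

# A hypothetical 2-byte "overlong" form of U+0041 would be
# 0b110_00001 0b10_000001 == b"\xC1\x81": the payload bits still
# spell out 0x41, just spread across two bytes.
overlong = b"\xC1\x81"

# Python's strict UTF-8 decoder refuses this form, so the canonical
# single byte stays the only byte sequence that decodes to "A".
try:
    overlong.decode("utf-8")
except UnicodeDecodeError as e:
    print("rejected:", e)
```

If overlong forms like this were accepted, the two byte strings above would decode to the same code point sequence while differing as bytes, which is exactly the property I'm objecting to.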