Another dum dum Unicode idea is having multiple code points with identical glyphs.
Rule of thumb: two Unicode sequences that look identical when printed should consist of the same code points.
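A minimal Python sketch of the kind of homoglyph that rule targets: three capital letters that print identically in most fonts but are three distinct code points (escapes used here so the difference is visible in source):

```python
import unicodedata

# Latin A, Greek Alpha, Cyrillic A: near-identical glyphs,
# three distinct code points.
for ch in ["\u0041", "\u0391", "\u0410"]:
    print(f"U+{ord(ch):04X} {unicodedata.name(ch)}")
# U+0041 LATIN CAPITAL LETTER A
# U+0391 GREEK CAPITAL LETTER ALPHA
# U+0410 CYRILLIC CAPITAL LETTER A
```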
If anything, Unicode should have had more disambiguated characters. Han unification was a mistake, and a lower case dotted Turkish i and an upper case dotless Turkish I should exist so that toUpper and toLower wouldn't need to know or guess a locale to work correctly.
Characters should not have invisible semantics.
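The Turkish casing problem can be shown in a few lines of Python, whose default `str.upper`/`str.lower` apply Unicode's locale-independent mappings (a sketch; locale-correct casing would need something like ICU):

```python
# Unicode's default case mapping takes no locale input, so it cannot
# produce the Turkish results: under Turkish rules, upper("i") should
# be dotted İ (U+0130) and lower("I") should be dotless ı (U+0131).
print("i".upper())  # "I" -- correct for English, wrong for Turkish
print("I".lower())  # "i" -- correct for English, wrong for Turkish
```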
So you think that the letters in the Greek and Cyrillic alphabets which are printed identically to the Latin A should not exist?
And, for example, Greek words containing this letter should be encoded with a mix of Latin and Greek characters?
> So you think that the letters in the Greek and Cyrillic alphabets which are printed identically to the Latin A should not exist?
Yes. Unicode should not be about semantic meaning, it should be about the visual. Like text in a book.
> And, for example, Greek words containing this letter should be encoded with a mix of Latin and Greek characters?
Yup. Consider a printed book. How can you tell if a letter is a Greek letter or a Latin letter?
Those Unicode homoglyphs are a solution looking for a problem.
> Yes. Unicode should not be about semantic meaning, it should be about the visual. Like text in a book.
Do you think 1, l and I should be encoded as the same character, or does this logic only extend to characters pesky foreigners use?
They are visually distinct to the reader.
Unicode is about semantics not appearance. If you don't need semantics then use something different.
> Unicode is about semantics not appearance.
And that's where it went off the rails into lala land. 'a' can have all kinds of distinct meanings. How are you going to make that work? It's hopeless.
>Yup. Consider a printed book. How can you tell if a letter is a Greek letter or a Latin letter?
I can absolutely tell the Cyrillic к from the Latin k, and the Latin u from the Cyrillic и.
>should not be about semantic meaning,
It's always better to be able to preserve more information in a text and not less.
> I can absolutely tell the Cyrillic к from the Latin k, and the Latin u from the Cyrillic и.
They look visually distinct to me. I don't get your point.
> It's always better to be able to preserve more information in a text and not less.
Text should not lose information by printing it and then OCR'ing it.
What about numbers? Would they be assigned to the Arabic numerals only? I guess someone would be offended by that.
While at it we could also unify I, | and l. It's too confusing sometimes.
> While at it we could also unify I, | and l. It's too confusing sometimes.
They render differently, so it's not a problem.
As far as I know, glyphs are determined by the font and rendering engine. They're not in the Unicode standard.
Fraktur (font) and italic (rendering) are in the Unicode standard, although Hacker News will not render them. (I suspect that the Hacker News software filters out the nuttier Unicode stuff.)
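For what it's worth, those Fraktur and italic letters live in the Mathematical Alphanumeric Symbols block, and NFKC compatibility normalization folds them back to plain ASCII (a quick Python sketch):

```python
import unicodedata

# U+1D509 is MATHEMATICAL FRAKTUR CAPITAL F; NFKC compatibility
# normalization maps it back to the plain Latin "F".
fraktur_f = "\U0001D509"
print(unicodedata.name(fraktur_f))                # MATHEMATICAL FRAKTUR CAPITAL F
print(unicodedata.normalize("NFKC", fraktur_f))   # F
```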
I don't think that would help much. There are also characters which are similar but not identical, and I don't think humans can spot the differences unless they are actively looking for them, which most of the time people are not. If only one of two similar glyphs appears in the text, nobody would likely notice; expectation bias will fuck you over.
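That expectation-bias point is exactly how homoglyph spoofing works in practice; a small hypothetical example in Python:

```python
# "pаypаl" below substitutes U+0430 (CYRILLIC SMALL LETTER A) for the
# Latin "a"; the two strings render near-identically in most fonts but
# compare unequal, so a reader who isn't checking code points is fooled.
spoofed = "p\u0430yp\u0430l"
print(spoofed == "paypal")                       # False
print([f"U+{ord(c):04X}" for c in spoofed])
```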
I wonder how anybody got by with printed books.