As a Turkish speaker who used a Turkish-locale setup in my teenage years, I was infinitely frustrated by these kinds of bugs. Half of the Java or Python apps I installed never ran. My PHP web servers always had problems with random software. Ultimately, I had to change my system's language to English. However, the US has godawful standards for everything: dates, measurement units, paper sizes.

When I shared computers with my parents, I had to switch languages back and forth all the time. This helped me learn English rather quickly, but I find it a huge accessibility and software-design issue.

If your program depends on letter cases, that is a badly designed program, period. If a language ships a toUpper or toLower function without a mandatory language field, it is badly designed too. The only slightly better option is making toUpper and toLower ASCII-only and throwing an error for any other character set.

While half of the language design of C is questionable and outright dangerous, every popular OS making its functions locale-sensitive was an avoidable mistake. Yet everybody did it. The mere existence of this behavior is a reason I would like to get rid of anything GNU-based in the systems I develop today.

I don't care if Unicode releases a conversion map. Natural-language behavior should always require natural-language metadata too. Even modern languages like Rust did a crappy job of enforcing it: https://doc.rust-lang.org/std/primitive.char.html#method.to_... . Yes, it is significantly safer, but converting 'ß' to 'SS' in German definitely has gotchas too.

> While half of the language design of C is questionable and outright dangerous, every popular OS making its functions locale-sensitive was an avoidable mistake. Yet everybody did it. The mere existence of this behavior is a reason I would like to get rid of anything GNU-based in the systems I develop today.

POSIX requires that many functions account for the current locale. I'm not sure why you are blaming GNU for this.

C wasn't designed to run Facebook; it was designed so you didn't have to write assembly.

At a time when many machines did not have as many bytes of memory as there are Unicode code points.

I'm not sure why you are blaming POSIX! The role of POSIX is to write down what is already common practice in almost all POSIX-like systems. It doesn't usually specify new behaviour.

I always assumed it was the other way around: a system follows POSIX to be POSIX-compliant.

>Even modern languages like Rust did a crappy job of enforcing it

Rust did the only sensible thing here. String handling algorithms SHOULD NOT depend on locale, and reusing LATIN CAPITAL LETTER I was arguably a terrible decision on the Unicode side (I know there were reasons for it, but I believe they should've bitten the bullet here), same as Han unification.

> However, the US has godawful standards for everything: dates, measurement units, paper sizes.

Isn't the choice of language and date and unit formats normally independent?

There are OS-level settings for date and unit formats, but not all software obeys them, instead falling back to the default date/unit formats for the selected locale.

They’re about as independent as system language defaults causing software not to work properly. It’s that whole realm of “well we assumed that…” design error.

> > However, the US has godawful standards for everything: dates, measurement units, paper sizes.

> Isn't the choice of language and date and unit formats normally independent?

You would hope so, but no. Quite a bit of software ties the language setting to the locale setting. If you are lucky, they will provide an "English (UK)" option (which still uses miles; and FFS WTF is a stone!).

On Windows you can kinda select the units easily. On Linux, let me introduce you to the journey of the LC_* environment variables: https://www.baeldung.com/linux/locale-environment-variables . That doesn't mean websites or apps will obey them, though. Quite a few of them don't and just use LANGUAGE, LANG or LC_CTYPE as their setting.
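
For what it's worth, the "right" way for a program to pick these up is to let setlocale do the resolution (LC_ALL beats the per-category LC_* variables, which beat LANG) rather than reading the environment by hand. A minimal C sketch, assuming a glibc-style system with the requested locales generated:

    #include <locale.h>
    #include <stdio.h>

    int main(void)
    {
        /* "" means: resolve each category from the environment using the
           POSIX precedence LC_ALL > LC_<category> > LANG. */
        setlocale(LC_ALL, "");

        /* Passing NULL only queries what each category resolved to. */
        printf("LC_CTYPE    -> %s\n", setlocale(LC_CTYPE, NULL));
        printf("LC_TIME     -> %s\n", setlocale(LC_TIME, NULL));
        printf("LC_NUMERIC  -> %s\n", setlocale(LC_NUMERIC, NULL));
        printf("LC_MONETARY -> %s\n", setlocale(LC_MONETARY, NULL));
        return 0;
    }

Run it as LANG=en_US.UTF-8 LC_TIME=en_DK.UTF-8 ./a.out and only LC_TIME should come back as en_DK; a program that just reads getenv("LANG") misses exactly that kind of mixing.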

My company switched to Notion this year (I still miss Confluence). It was hell until last month since they only had "English (US)" and used M/D/Y everywhere with no option to change!

Mac OS actually lets you do English (Afghanistan) or English (Somalia) or whatever.

It's just English (I don't know when it's US and when it's UK, it's UK for Poland), but with the date / temperature / currency / unit preferences of whatever locale you actually live in.

At least for any country in continental Europe, "English" is usually "English International", meaning English (UK).

Maybe there are some exceptions if we speak globally, hence limiting myself to Europe. But I assume it is the same deal.

Certain desktop environments like KDE provide a nice GUI for changing the locale environment variables. It has worked quite well for me, to use euro instead of my country's small currency :')

> FFS WTF is a stone!

It's actually a pretty good unit for weighing humans (14 lb). Your weight in pounds varies from day to day, but your weight in (half-)stones is much more stable.

The real travesty is the fact that the sub-unit of a stone is a pound and not a pebble. I have no idea what stones and pounds are, but if it were stones and pebbles, at least it'd be funnier.

There's a full metric system hidden there: rock - stone - pebble - grain.

I propose 614 stones to the rock, 131 pebbles to the stone, and 14707 grains to the pebble. Of course.

Let's introduce the commonly used unit of crumble which is 3/4 of a grain!

The commonly used unit should be 23/17ths.

> FFS WTF is a stone

An English imperial unit. The measurements were originally based on actual stones and were mainly used for weighing agricultural items such as animal meat and potatoes. We also used tons and pounds before we incorporated the European metric system.

A stone is 1/8th of a long hundredweight. Easy!

My car gets 40 rods to the hogshead and that's the way I likes it!

If it's offered, choose EN-Australian or EN-international. Then you get sensible dates and measurement units.

I usually set the Ireland locale; they use English but with civilized units. Sometimes there's also an "English (Europe)" or "English (Germany)" locale that works too.

I also use Ireland sometimes for user accounts. For example, Hotels.com only offers the local languages when you select which country to use. The Irish version is one of the few that allows you to buy in euros in English.

Nowadays this works for many applications. Not for the "legacy" ARM compiler, though, which was definitely invented after Windows NT adopted Unicode. It crashes with "English (Germany)". Just whyy.

And if you want it to be more sensible but still not sensible, pick EN-ca.

> While half of the language design of C is questionable and outright dangerous, every popular OS making its functions locale-sensitive was an avoidable mistake.

It wasn’t a mistake for local software that is supposed to automatically use the user’s locale. It’s what made a lot of local software usefully locale-sensitive without the developer having to put much effort into it, or even necessarily be aware of it. It’s the reason why setting the LC_* environment variables on Linux has any effect on most software.
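
Concretely, this is roughly all the "effort" a local C program had to put in. A minimal sketch, assuming the relevant locales are generated on the system:

    #include <locale.h>
    #include <stdio.h>
    #include <time.h>

    int main(void)
    {
        /* The one locale-aware line: adopt whatever the user's environment says. */
        setlocale(LC_ALL, "");

        char buf[128];
        time_t now = time(NULL);
        /* %x and %X are the locale's preferred date and time formats. */
        strftime(buf, sizeof buf, "%x %X", localtime(&now));
        puts(buf);
        return 0;
    }

Under LC_ALL=de_DE.UTF-8 this prints something like 01.02.2025 13:37:00, under en_US.UTF-8 something like 02/01/2025 01:37:00 PM, and the program itself never names either format.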

The age of server software, and software talking to other systems, is what made that default less convenient.

On the contrary, the locale APIs are problematic for many reasons. If C had just been like "well C only supports the C locale, write your own support if that's what you want", much more software would have been less subtly broken.

There are a few fundamental problems with it:

1. The locale APIs weren't designed very well, and things were added over the years that don't play nicely with them.

So, as an example, what should `int toupper(int c)` return? (By the way, the parameter `c` really has to fit in an unsigned char; if you try to pass anything but a single byte here, you get undefined behavior.) And what if you're using a multibyte encoding? You only get one byte back, so that doesn't really help there either.

Many of the functions were clearly designed for the "1 character = 1 byte" world, which is a key assumption of all of these APIs. Which is fine if you're working with ASCII, but blows up as soon as you change locales.

And even so, it creates problems when you try to use it. Say I have a "shell" where all of the commands are internally stored as uppercase, and you want to stay compatible. If you try to use anything outside of ASCII with locales, you can't just store the command list in uppercase form, because then they won't match when doing a string comparison using the obvious function for it (strcmp). You have to use strcoll instead, and sometimes you just might not have a match for multibyte encodings.
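
Here's a minimal sketch of that uppercased-command-table failure, using wide characters and assuming glibc with tr_TR.UTF-8 generated (the exact mappings come from locale data, so they can vary):

    #include <locale.h>
    #include <stdio.h>
    #include <wchar.h>
    #include <wctype.h>

    int main(void)
    {
        setlocale(LC_ALL, "tr_TR.UTF-8");   /* pretend the user is Turkish */

        wchar_t cmd[] = L"list";            /* user typed "list" */
        for (wchar_t *p = cmd; *p; p++)     /* naive uppercasing */
            *p = (wchar_t)towupper((wint_t)*p);

        /* In a Turkish locale, 'i' is expected to uppercase to U+0130 ('İ'),
           not 'I', so the command no longer matches the ASCII table entry. */
        if (wcscmp(cmd, L"LIST") == 0)
            wprintf(L"known command\n");
        else
            wprintf(L"unknown command: %ls\n", cmd);
        return 0;
    }

Change the setlocale argument to "en_US.UTF-8" (or "") and it prints "known command" again; nothing changed except the process-global locale state.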

2. The locale is global state.

The worst part is that it's actually global state (not even faux-global state like errno). This means it's wildly thread-unsafe: you can have thread 1 running toupper(x) while another thread, possibly in a completely different library, calls setlocale (as many library functions do, to guard against the semantics of a lot of standard library functions changing unexpectedly). And boom, instant undefined behavior, with basically nothing you can reasonably do about it. You'll probably get something out of it, but the pieces are probably going to display weirdly unless your users are from the US, where the C locale is pretty close to the US locale.

This means any of the functions in this list[1] is potentially a bomb:

> fprintf, isprint, iswdigit, localeconv, tolower, fscanf, ispunct, iswgraph, mblen, toupper, isalnum, isspace, iswlower, mbstowcs, towlower, isalpha, isupper, iswprint, mbtowc, towupper, isblank, iswalnum, iswpunct, setlocale, wcscoll, iscntrl, iswalpha, iswspace, strcoll, wcstod, isdigit, iswblank, iswupper, strerror, wcstombs, isgraph, iswcntrl, iswxdigit, strtod, wcsxfrm, islower, iswctype, isxdigit.

And there are some important ones in there too, like strerror. Searching through GitHub as a random sample, it's not uncommon to see these functions used[2], and really, would you expect `isdigit` to be thread-unsafe?

It's a little better with POSIX, which defines a bunch of "_r" variants of functions like strerror that at least give some thread safety (and uselocale is at least a per-thread variant of setlocale, which lets you safely do the whole "use the C locale around library calls" guard). But Windows doesn't support uselocale, so you have to use _configthreadlocale instead.
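
The per-thread POSIX.1-2008 API looks roughly like this; a sketch of the guard pattern (the helper name is just for illustration, and this assumes a POSIX system, not Windows):

    #include <locale.h>
    #include <stdio.h>
    #include <stdlib.h>

    /* Parse a value that must use '.' as the decimal separator, no matter
       what locale the rest of the process (or other threads) is using. */
    static double parse_config_double(const char *s)
    {
        locale_t c_loc = newlocale(LC_ALL_MASK, "C", (locale_t)0);
        if (c_loc == (locale_t)0)
            return strtod(s, NULL);          /* best-effort fallback */

        locale_t old = uselocale(c_loc);     /* affects only this thread */
        double v = strtod(s, NULL);          /* strtod is LC_NUMERIC-sensitive */
        uselocale(old);                      /* restore the caller's locale */
        freelocale(c_loc);
        return v;
    }

    int main(void)
    {
        setlocale(LC_ALL, "");  /* whatever the user has, e.g. de_DE with ',' as decimal separator */
        printf("%f\n", parse_config_double("3.14"));
        return 0;
    }

Another thread calling setlocale no longer changes what the strtod in the helper sees, which is exactly what the plain setlocale juggling can't guarantee.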

It also creates hard-to-trace bug reports. Saying you only support ASCII or whatever is, well, not great today, but it's at least somewhat understandable, and ASCII is commonly seen as the lowest common denominator for software. Sure, ideally we'd all use byte strings where we don't care and UTF-8 where we actually want to work with text (and maybe UTF-16 on Windows for certain things), but that's just a feature that doesn't exist here. Whereas memory corruption when you do something with a string, but only for people in a certain part of the world in certain circumstances, is not really a great user experience, or a great developer experience for that matter.

The thing is, I actually like C in a lot of ways. It's a very useful programming language and has incredible importance even today and probably far into the future, but I don't really think the locale API was all that well designed.

[1]: Source: https://en.cppreference.com/w/c/locale/setlocale.html

[2]: https://github.com/search?q=strerror%28+language%3AC&type=co...

I think it's important to point out the distinction between what POSIX mandates and what actual libc implementations, notably glibc, do. Nearly all non-reentrant POSIX functions are only actually non-reentrant if you are using a 1980s computer that for some reason has threads but doesn't have thread-local storage. Things like strerror are implemented using TLS in glibc nowadays, so while it is technically true that you need to use the _r versions if you want to be portable to computers nobody has used in 30 years, in practice you usually don't need to worry about it, especially if you're on Linux, since the results are stored in static thread-local memory rather than static global memory.
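
For completeness, the portable spelling is still short; a sketch using the XSI strerror_r (note the classic gotcha: with _GNU_SOURCE defined, glibc instead exposes a char*-returning GNU variant):

    #define _POSIX_C_SOURCE 200112L  /* ask for the XSI strerror_r on glibc */
    #include <errno.h>
    #include <stdio.h>
    #include <string.h>

    int main(void)
    {
        char buf[256];
        /* XSI strerror_r returns 0 on success and fills the caller's buffer,
           so nothing is shared between threads. */
        if (strerror_r(EINVAL, buf, sizeof buf) == 0)
            printf("EINVAL: %s\n", buf);
        return 0;
    }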

As for the string.h stuff, while it is all terrible it's at least well documented that everything is broken unless you use wchar_t, and nobody uses wchar_t because it's the worst possible localization solution. No one is seriously trying to do real localization in C (and if they were they'd be using libicu).

Use Australian English: English, but with the same settings for everything else, including keyboard layout.

I live in Germany now, so I generally set it to Irish these days. Since I like the ISO-style Enter key, I use the UK keyboard layout (it's also easier to switch to Turkish from that than from an ANSI layout). However, many OSes now have an English (Europe) locale too.

Many Linux distributions provide en_DK specifically for this purpose. English as it is used in Denmark. :-)

This uses a comma decimal separator, which might or might not be desired.

Irish English locale uses a dot.

Denmark doesn't have Euros as currency, unfortunately.

Tying currency to locale seems insane. I have bank accounts in multiple currencies and use both several times per week. Why does all software on my system need to have a default currency? Most software does not care about money, those that do usually give you a quote in a currency fixed by someone else.

It's about how easy it is to reach the € sign. Ideally, it should be as easy to type as the $ sign is in the en_US layout.

For what it's worth, I think almost all European keyboard layouts have key combos defined for € and $ (many have £ as well), while on en_US you can only type $ (without messing with settings). Europe of course has more currencies than just €, but they use two-letter abbreviations instead of a special symbol.

zł has entered the chat. ;-)

(The Polish Ł is typically not easily typable on non-Polish keyboards.)

Huh, do typical Linux keyboards not have it on AltGr-L?

en_IE does.

> If a language ships a toUpper or toLower function without a mandatory language field, it is badly designed too. The only slightly better option is making toUpper and toLower ASCII-only and throwing an error for any other character set.

There is a deeper bug within Unicode.

The Turkish letter TURKISH CAPITAL LETTER DOTLESS I is represented as the code point U+0049, which is named LATIN CAPITAL LETTER I.

The Greek letter GREEK CAPITAL LETTER IOTA is represented as the code point U+0399, named... GREEK CAPITAL LETTER IOTA.

The relationship between the Greek letter I and the Roman letter I is identical in every way to the relationship between the Turkish letter dotless I and the Roman letter I. (Heck, the lowercase form is also dotless.) But lowercasing works on GREEK CAPITAL LETTER IOTA because it has a code point to call its own.
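
The asymmetry is easy to see from C (a sketch, assuming glibc with en_US.UTF-8 and tr_TR.UTF-8 generated; the Turkish mapping comes from locale data, so results can vary by system):

    #include <locale.h>
    #include <stdio.h>
    #include <wctype.h>

    static void show(const char *loc)
    {
        setlocale(LC_ALL, loc);
        /* GREEK CAPITAL LETTER IOTA owns its code point, so it lowercases the
           same way everywhere; LATIN CAPITAL LETTER I lowercases according to
           whatever locale happens to be active. */
        printf("%-12s  U+0399 -> U+%04X,  U+0049 -> U+%04X\n", loc,
               (unsigned)towlower(0x0399), (unsigned)towlower(0x0049));
    }

    int main(void)
    {
        show("en_US.UTF-8");  /* expected: U+03B9 (ι) and U+0069 (i) */
        show("tr_TR.UTF-8");  /* expected: U+03B9 (ι) and U+0131 (ı) */
        return 0;
    }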

Should iota have its own code point? The answer to that question is "no": it is, by definition, drawn identically to the ASCII I. But Unicode has never followed its principles. This crops up again and again and again, everywhere you look. (And, in "defense" of Unicode, it has several principles that directly contradict each other.)

Then people come to rely on behavior that only applies to certain buggy parts of Unicode, and get messed up by parts that don't share those particular bugs.

It’s not a bug, it’s a feature. The reason is that ISO 8859-7 [0], used for Greek, has a separate character code for iota (for all Greek letters, really), while ISO 8859-3 [1] and -9 [2], used for Turkish, do not have one for the usual dotless uppercase I.

One important goal of Unicode is to be able to convert from existing character sets to Unicode (and back) without having to know the language of the text that is being converted. If they had invented a separate code point for I in Turkish, then when converting text from those existing ISO character encodings, you’d have to know whether the text is Turkish or English or something else, to know which Unicode code point to map the source “I” into. That’s exactly what Unicode was designed to avoid.

[0] https://en.wikipedia.org/wiki/ISO/IEC_8859-7

[1] https://en.wikipedia.org/wiki/ISO/IEC_8859-3

[2] https://en.wikipedia.org/wiki/ISO/IEC_8859-9
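
To spell out the round-trip argument: converting ISO-8859-9 to Unicode is a plain byte-to-code-point lookup with no language input, and that only works because the Latin "I" shares a code point. A sketch showing just the Turkish-specific slots of the published Latin-5 table (the helper is only for illustration):

    #include <stdio.h>

    /* ISO-8859-9 (Latin-5) is ISO-8859-1 with six Turkish replacements;
       every other byte, including 0x49 'I', maps to the same code point
       whether the text is Turkish, English, or anything else. */
    static unsigned latin5_to_ucs(unsigned char b)
    {
        switch (b) {
        case 0xD0: return 0x011E;  /* Ğ */
        case 0xDD: return 0x0130;  /* İ */
        case 0xDE: return 0x015E;  /* Ş */
        case 0xF0: return 0x011F;  /* ğ */
        case 0xFD: return 0x0131;  /* ı */
        case 0xFE: return 0x015F;  /* ş */
        default:   return b;       /* identical to ISO-8859-1 / U+0000..U+00FF */
        }
    }

    int main(void)
    {
        printf("0x49 -> U+%04X\n", latin5_to_ucs(0x49));  /* always U+0049 */
        printf("0xDD -> U+%04X\n", latin5_to_ucs(0xDD));  /* U+0130, 'İ' */
        return 0;
    }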

I know that. That's why I mentioned

> in "defense" of Unicode, it has several principles that directly contradict each other

Unicode wants to do several things, and they aren't mutually compatible. It is premised on the idea that you can be all things to all people.

> It’s not a bug, it’s a feature.

It is a bug. It directly violates Unicode's stated principles. It's also a feature, but that won't make it not a bug.

>If they had invented a separate code point for I in Turkish, then when converting text from those existing ISO character encodings, you’d have to know whether the text is Turkish or English or something else, to know which Unicode code point to map the source “I” into. That’s exactly what Unicode was designed to avoid.

Great. So now we have to know the locale when handling case conversion for probably centuries to come, but it was totally worth it to save a bit of effort in the relatively short transition phase. /s

You always have to know the locale to handle case conversion; it is not actually defined the same way in different human languages, and it is a mistake to pretend it is.

In most cases the locale is encoded in the character itself, i.e. Latin "a" and Cyrillic "а" are two different characters, despite being visually indistinguishable in most fonts.

The "language-sensitive" section of the special casing document [0] is extremely small and contains only the cases of stupid reuse of Latin I.

[0]: https://www.unicode.org/Public/UCD/latest/ucd/SpecialCasing....

Without it, there would not have been a transition phase.

I call BS. Without a series of MAJOR blunders, Unicode was destined to succeed. Once the rest of the world had migrated to Unicode, I am more than certain that Turks would have migrated as well. Yes, they may have complained for several years and would have spent a minuscule amount of resources to adopt the conversion software, but that's it; a decade or two later everyone would have forgotten about it.

I believe that even the addition of emoji was completely unnecessary, despite the pressure from Japanese telecoms. Today's messenger landscape only confirms that.

I thought the locale was mostly controlled by the environment, so you can run your system and each program with its own separate locale settings if you like.

I wish there were a single-letter universal locale with sane values, maybe called U or E, with:

ISO (or RFC....) date and time, UTF-8 by default (maybe also an alternative with ISO 8859-1), a decimal point in numbers and _ for thousands, metric paper / A4, ..., neutral Unicode collation,

but that keeps US English as the language.

Just use English. If you want to program you need to learn it anyway to make sense of anything.

I'm not a native English speaker btw. I learned it as I was learning programming as a kid 20 years ago

Yes and no. This will only work if you don't create software that is used internationally.

If you only work in English, you will test in English and never exercise use cases like the one described in the article.

Did you know that many towns and streets in Canada have a ' in their name? And that many websites reject any ' in their text fields because they think it's SQL injection?

Ms O’Reilly would like a word about surname fields.

My EU country does the same. Of course software should work for the locales you're targeting but that is different from the language used by developer tooling. The GP is talking about changing the locale of their development machine so I assume that's what they're referring to.