I've told this story before, but it's super relevant here. When I set up all the servers for reddit, I set them all to Arizona time. Arizona does not have DST, and is only one hour off of California (where we all were) for five months, and the same as California the rest of the year.

I didn't use UTC because subtracting eight when reading raw logs is a lot harder than subtracting one.

They use UTC now, which makes sense since they have a global workforce and log reading interfaces that can automatically change timezones on display.

I've had to try to clean up after a similar early decision and it was very painful.

Please stick with UTC across the board, people; someone someday may have to clean up your mess.

This can also be expressed as general advice.

Whenever you are unsure whether to use a clever solution or follow the globally accepted standard in your work as a DevOps or software engineer, always choose the standard.

The Principle of Least Surprise.

Among other things, in an incident, people’s brains aren’t working - the more they have to remember about your particular system, the more likely they are to forget something.

While I agree on this particular instance, there are two types of things future engineers have to clean up after: Their predecessors thinking too small (and picking the easy route) or too big (and adding needless complexity).

One is not necessarily, in all instances, less of a mess to clean up than the other.

Nowadays I'd agree with you, UTC is probably the best bet. But back then, it wasn't.

> But back then, it wasn't.

UTC was standardized in 1963 [0]

it was already a 40-year-old standard at the time you're talking about.

awareness of UTC being the correct choice has definitely increased over time, but UTC being the correct choice has not changed.

you say reddit servers use UTC now, which implies there was a cutover at some point. were you still at reddit when that happened? were you still hands-on with server maintenance? any anecdotes or war stories from that switchover you want to share?

because I can easily imagine parts of the system taking a subtle dependency on Arizona being Reddit Standard Time, and the transition to UTC causing headaches when that assumption was broken. your memory of this "clever" trick might be different if you had to clean up the eventual mess as well.

0: https://en.wikipedia.org/wiki/Coordinated_Universal_Time

using UTC on servers was very common in 2005

I’d say it was common enough but not universal, given the number of arguments I had from 2005 to 2015 about this exact issue.

Hold on, I'm not a sysadmin guy. Are you folks saying the server should not know what part of the world it's in, that basically it should think it's in Greenwich?

I would have thought you configure the server to know where it is and have its clock set correctly for the local time zone, and the software running on the server should operate on UTC.

From a logging perspective, there is a time when an event happens. The timestamp for that should be absolute. Then there's the interaction with the viewer of the event, the person looking at the log, and where they are. If the timestamp is absolute, the event can be translated into the viewer's local time. If the event happens in a different TZ, for example a sysadmin sitting in PST looking at a box in EST, it's easier to translate via the sysadmin's TZ env (and any other sysadmin's TZ anywhere in the world) than to fiddle with the timestamp of the original event.

It's a minor irritation if you run your server in UTC and have to add or subtract the offset, e.g. if you want your cron to run at 6PM EDT, you have to write the cron for 0 22 * * *. You also have to do this mental arithmetic when you look at your local system logs: activity at 22:00:00 seems suspicious, but is it really? Avoid the headaches and set all your systems to UTC, and throw the logs into a tool that does the time translation for you.
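To make the translation concrete, here's a minimal Python sketch (the zone names and timestamp are purely illustrative) of rendering one absolute UTC timestamp for viewers in different zones:

    from datetime import datetime, timezone
    from zoneinfo import ZoneInfo  # Python 3.9+

    # An absolute event time, as it would appear in a UTC log line.
    event = datetime(2023, 6, 1, 22, 0, 0, tzinfo=timezone.utc)

    # Translate for whoever is looking at it, wherever they are.
    print(event.astimezone(ZoneInfo("America/New_York")))     # 2023-06-01 18:00:00-04:00 (6PM EDT)
    print(event.astimezone(ZoneInfo("America/Los_Angeles")))  # 2023-06-01 15:00:00-07:00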

The server does not "know" anything about the time; it's really about the sysadmin knowing what happened and when.

1) Most software gets its timestamps from the system clock. 2) If you have a mismatch between the system time and the application time, then you just have log timestamps that don't match up; it's a nightmare - even more so around DST/ST transitions.

you've got it backwards - the server clock should be in UTC, and if an individual piece of software needs to know the location, that should be provided to it separately.

for example, I've got a server in my garage that runs Home Assistant. the overall server timezone is set to UTC, but I've configured Home Assistant with my "real" timezone so that I can define automation rules based on my local time.

Home Assistant also knows my GPS coordinates so that it can fetch weather, fire automation rules based on sunrise/sunset, etc. that wouldn't be possible with only the timezone.

I kind of assumed all computer clocks were UTC, but that you also specified a location, and when asked what time it is, it did the math for you.

Windows assumes the hardware clock is set to local time. It can be configured to assume UTC. Other operating systems assume the hardware clock is UTC. Many log tools are not time zone aware.

A computer clock is just a counter: if you set the starting point to UTC, then it's UTC; if you set it to local time, then it's local time.
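The hardware clock question aside, the same idea holds in software: the counter is just seconds since an epoch, and a zone only gets applied when you display it. A minimal Python sketch, purely for illustration:

    import time
    from datetime import datetime, timezone
    from zoneinfo import ZoneInfo

    now = time.time()  # just a counter: seconds since the Unix epoch

    # The same counter value, rendered with different zones applied.
    print(datetime.fromtimestamp(now, tz=timezone.utc))
    print(datetime.fromtimestamp(now, tz=ZoneInfo("America/Phoenix")))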

that's the difference between "aware" and "naive" timestamps. Python has a section explaining it in their docs (though the concept applies to any language):

https://docs.python.org/3/library/datetime.html#aware-and-na...
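A minimal sketch of the distinction in Python (not taken from the linked docs, just the same idea):

    from datetime import datetime, timezone

    naive = datetime(2005, 7, 1, 22, 0, 0)                       # no tzinfo: "naive"
    aware = datetime(2005, 7, 1, 22, 0, 0, tzinfo=timezone.utc)  # carries its offset: "aware"

    print(naive.tzinfo)  # None - the reader has to guess which zone was meant
    print(aware.tzinfo)  # UTC
    # naive < aware      # mixing the two raises a TypeError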

AKA time zones

A server doesn't need to "know" where in the world it is (unless it needs to know the position of the sun in the sky for some reason).

A server doesn't "think" and the timezone has no relevance to where it is located physically.

I'm not sure why you're getting downvoted.

Yes, that's exactly what I'm saying :). In fact, I've run servers where I didn't even physically know where it was located. It wouldn't have been hard to find out given some digging with traceroute, but it didn't matter. It was something I could SSH into and do everything I needed to without caring where it was.

Everyone else down-thread has clarified the why of it. Keep all of your globally distributed assets running on a common clock (UTC) so that you can readily correlate things that have happened between them (and the rest of the world) without having to do a bunch of timezone math all the time.

Common, but not universal - from 2005 to as late as 2014 I worked for companies that used Pacific time on their servers.

the standard for service providers was UTC in 1995

I have photos showing that my dad (born 1949, never in the military) kept his watch on UTC in the early 70s.

Would he by any chance refer to it as Zulu or Zebra time? The Z-suffix shorthand for UTC/GMT standardisation has nautical roots IIRC and the nomenclature was adopted in civil aviation also. I sometimes say Zulu time and my own dad, whose naval aspirations were crushed by poor eyesight, is amongst the few that don’t double-take.

[deleted]

I can’t quantify how much time my team wasted diagnosing production glitches because of checking the wrong time offsets, but it was substantial. One of our systems wasn’t using UTC, and given enough time, the fact that Slack wasn’t using it either does become an issue. When an outage transitions to all hands on deck, everyone needs to get caught up on what’s going on, preferably under their own power, so you don’t suffer the Adding Resources to a Late Project problem.

So you need to align that first alert that came in ?? minutes ago with the telemetry and logs in order to see what the servers were doing right before everything went to shit.

What if it's your personal machine? I'm thinking about jobs I've set up... thing is, I actually do want those to align to DST in most cases. For example, ZFS scrub should start after I leave for work so that it has the greatest chance of being done by the time I get home. (It's too loud to run overnight.)

This shouldn’t be hard to deal with if the timestamp is always serialized with the offset: I’m much more picky about always persisting the offset than about always persisting UTC
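For example, a minimal Python sketch (the zone name is just an example) of serializing with the offset and still being able to recover the absolute instant:

    from datetime import datetime, timezone
    from zoneinfo import ZoneInfo

    stamp = datetime.now(ZoneInfo("America/Phoenix")).isoformat()
    print(stamp)  # e.g. 2023-06-01T15:00:00.123456-07:00 - the offset travels with the value

    # Anyone can recover the absolute instant later, regardless of where it was written.
    print(datetime.fromisoformat(stamp).astimezone(timezone.utc))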

Closely related pet peeve: log display web UIs that allow selecting display timezones (including UTC) yet absolutely insist on deriving my preferred time format (12/24 hours) from my browser language preference.

If nothing else, selecting UTC should be a very strong hint to any UI that I am capable of parsing YYYY-MM-DD and 24 hour times.

If you use <input type=time>, the browser uses locale or user regional preferences… even if the app is in an application domain (e.g. medicine, military, science) where 24h is preferred even in 12h-predominant locales. There is no way for the app to tell the browser “this time field should be 24h in all locales”; the only option is to build a custom time field.

I asked the HTML spec editors to fix this, but thus far they haven’t: https://github.com/whatwg/html/issues/6698

It gets worse.

Some browser APIs respect `LC_TIME`. Others say "fuck standards, I'm using `LC_MESSAGES`".

This means that if those locales differ about say, y/m/d ordering, it is quite possible for the way the browser formats a date to be different than the way it parses the date.

My issue is more with outputs than inputs (although the latter are also annoying to me in 12-hour format).

Operating systems generally allow user override of locale settings, and browsers generally respect that; I use a locale which officially has 12 hour time as its standard (Australian English), and I always override it to 24 hour time in user preferences (although Australia is rather inconsistent, e.g. in Sydney, train services use 24 hour time; in Melbourne, the metro trains use 12 hour time but the regional services use 24 hour time)

Interesting, so you're saying that that OS-level time preference is available via JavaScript? I wasn't able to figure out how to query that in a little bit of trying, so I assumed there was no API for it.

If there actually is, I'm now even more upset at that log web UI.

Well, if you run this Javascript:

    Intl.DateTimeFormat([],{"timeStyle":"medium"}).format(new Date())
It should return the current time, formatted according to your locale and user preferences.

By contrast, this should return it formatted with the defaults for your locale, ignoring your user preferences:

    Intl.DateTimeFormat(navigator.language,{"timeStyle":"medium"}).format(new Date())
For me, the first returns 24 hour time and the second returns 12 hour time. Because 12 hour time is default for my primary locale (en-AU), but I override it to 24 hour time in my macOS settings.

I know the same works on Windows, and I’m sure it works on Linux too, just the way you configure it is different for each OS.

> It should return the current time, formatted according to your locale and user preferences.

It does not for me! (I get 12 hour based time in Safari, Firefox, and Chrome, despite having 24 hours configured at the macOS system level.)

I have no idea what is going on then… works for me

maybe has something to do with OS or browser versions?

or maybe (for some reason) this works for some locales but not others?

That's interesting, because I strongly prefer that timestamps are stored as UTC and converted to my timezone on the fly for easier debugging. I dunno about using the browser language choice (do sites really do this?). Usually a simple conversion with JavaScript is fine (JavaScript knows the local timezone).

A plus would be showing both.

I would especially like to call out the Scandic Hotels chain for this behaviour as well. Booking a hotel room should not involve me wondering if I booked it for the wrong day when I'm not in a European time zone.

My favorite example of that so far is the company that managed to shift my birthday by one day between their mobile and web apps.

That is the correct and only sane way to determine date format; timezone is not the same as formatting preference. But yeah, it sucks. Assuming you're using the en-US locale, have you tried using the en-CA locale? It has ISO 8601 formatted dates, and is otherwise pretty similar to the en-US locale.

In Firefox you can try it out in about:preferences by setting en-CA as the topmost option in "Choose your preferred language for displaying pages".

And I just checked: Firefox is capable of understanding system-wide split locale settings, if you only want LC_TIME:

    localectl set-locale LANG=en_US.UTF-8
    localectl set-locale LC_TIME=en_CA.UTF-8
and dates will be formatted the Canadian way, but currencies, addresses, etc. will not.

> That is the correct and only sane way to determine date format, timezone is not the same as formatting preference.

That's unfortunate, but then wouldn't the sane way for localization-aware software be to ask the user and not make any such assumptions?

> Assuming you're using the en-US locale, have you tried using the en-CA locale? It has ISO8601 formatted dates, and is otherwise pretty similar to the en-US locale.

Thanks, I'll give that a try tomorrow! If my log exploring UI of, well, not quite choice actually respects that, I'll be very grateful :)

> That's unfortunate, but then wouldn't the sane way for localization-aware software be to ask the user and not make any such assumptions?

The sane way for any localization-aware software is to use the standard knobs that the user already has for setting such preferences. Which would be the appropriate LC_* in Unix and the corresponding user settings on Windows/macOS.

In a way it is asking the user. Date formatting rules are traditionally tied to locale, so the user picks their locale and their expected date formatting rules are derived from that.

On Linux you can mix and match via LC_* environment variables, but that appears to be complexity the browser vendors didn't expose in their UI.

You can blame the browsers on that one. There is now an API to determine hour cycles, but it's not even supported in all browsers yet:

https://developer.mozilla.org/en-US/docs/Web/JavaScript/Refe...

Just a strong hint that your fellow techdorks have been at work. Oh so clever, aren't they, always?

That's still a bit risky as Arizona might just change its time zone definition on a whim. I'm an engineer on one of the big calendaring applications, and it's mind-boggling how often stuff like this happens world-wide, sometimes short-notice (a few weeks in advance). It regularly gives us headaches.

Agreed, watching the rate of changes to the timezone databases will rapidly disabuse you of the notion that any of it is constant. It's rare that a day goes by without an update to some definition somewhere, which is astounding.

It might be instructive for PHBs to have a 'hasatimezonechanged.com' website counting the days since the last timezone DB change...

Java allows setting the default timezone at the JVM level, and everyone in our org was setting their favourite TZ, which was mostly PST, somewhere in the code.

We had application logs and system-level logs with different timestamps, and someone decided a certain DB column had to be a string of the format 'yyyy-mm-dd hh:mm:ss'. You can imagine the confusion and the "fun" time we had while debugging logs from multiple systems at specific times.

Finally, one lead put their foot down and mandated that setting the timezone to PST should be the first thing all applications do, and that DB columns should be considered PST unless otherwise required.

> Finally, one lead put their foot down and mandated that setting the timezone to PST should be the first thing all applications do, and that DB columns should be considered PST unless otherwise required.

That seems to me like a really bad mandate. If you are going to mandate something, mandate UTC. If someone forced me to change a system from UTC to something else, I’d not be very happy. It’s the kind of decision which makes you seriously question if you want to work there any more.

Does that mean you ran PST during daylight savings time and half the year your logs were “off” by an hour? Or did you actually run Pacific Time and deal with the clock changes twice a year?

Doesn't the JVM handle this when you set the tz? Otherwise...how is it different than just setting a clock?

They're two different things. PST is a fixed UTC offset (UTC-8), while Pacific Time is a dynamic timezone that observes DST.
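The difference is easy to see in code; a minimal Python sketch (zone names just as examples, and the same distinction exists in Java as ZoneOffset vs. ZoneId):

    from datetime import datetime, timezone, timedelta
    from zoneinfo import ZoneInfo

    pst = timezone(timedelta(hours=-8))        # fixed offset, never changes
    pacific = ZoneInfo("America/Los_Angeles")  # dynamic zone, observes DST

    july = datetime(2023, 7, 1, 12, 0)
    print(july.replace(tzinfo=pst).utcoffset())      # -1 day, 16:00:00  (-08:00)
    print(july.replace(tzinfo=pacific).utcoffset())  # -1 day, 17:00:00  (-07:00, because of DST)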

Computers can do things. Build a UX to read the logs and have it automatically parse/convert the logs to show whatever the local time is.

https://unix.stackexchange.com/questions/274816/how-to-conve...

There are many solutions, but when you're actually running a site with millions of daily visitors and one person focused on ops, you do the easy thing, not the right thing, unless those happen to coincide.

Sure... "I'm too busy fighting fires to come up with a solution" is always a valid answer, but that's not what I was replying to.

Even with infinite resources (love to see that in real companies!), any such solution adds complexity, which comes at a cost.

If this is a complexity battle, I'd rather that sysadmins have to use a tool to read logs vs. figuring out why the server is melting down because all the jobs are running 2x or answering questions about why jobs didn't run at all.

In my old gig, jobs ran for many many HOURS and consumed most of the IO on the server (processing reams of data), which meant that when you got them running 2x because of overlap, it was a trainwreck.

In my experience, it's really not pleasant having to work with logs that require a dedicated viewer, compared to regular old text files I can use the Unix commands available everywhere (`tail`, `head`, `wc`, etc.) with - i.e. not just on the server producing the logs, but also on the device I'm viewing them from.

That said, I absolutely prefer UTC times in logfiles. I'd rather do some math every time than make wrong assumptions about the local timezone of the file even once. (Even correctly indicated local times are more annoying, since I have the math for the offset of my timezone to UTC memorized, but not that of every possible server time to mine.)

alias tail="my fancy log viewer tail wrapper"

https://thecasualcoder.github.io/tztail/

That's exactly the type of thing I like to avoid having to do on some remote server, inside a container's minimal shell etc.

If you're at that level, I'd be exporting the logs anyway.

Yup. There's always some kind of tool anyway so why not. I mean, even if you read the logs as they come out of a screaming line printer, the tool is the thing that takes log messages and spits them out the printer port.

How hard can it be to write a log cat utility in awk?

Harder than not having to, in any case.

Naively storing timezones also adds complexity, just later, and probably as someone else's problem.

Timezones are naturally complex. They just are. If you're not handling it, then your stuff is broken.

This is a conversation about what timezone to configure on a server, no?

Of course all stored timestamps should include a timezone (assuming there's a local context to the events they refer to) if at all possible. I hope that part is not controversial.

I mean, if this doesn't depict modern devops, I don't know what does. Unsung heroes, honestly.

The problem is that a lot of the time when you are looking at logs, you are already very far off the happy path of things you look at often and want to invest resources into displaying well.

Nooo... we have a bunch of metrics and logs reporting systems at my company. Some of them are in UTC, some of them display my local time, some of them display China Time, and I'm trying to collaborate with colleagues in London and Australia who get data displayed in their local times as well. When I'm working to address an incident and combing through multiple systems to try to correlate events, it's a pain in the ass having to constantly double-check which time zone this data is in.

In my experience, the hard part is getting everybody else to do that. And then also getting them to actually include the timezone in their communication with you.

I'd rather the logs be consistent, and not rely on every single person who ever looks at the system logs making sure to use the correct tool the correct way.

If you'd rather the logs be consistent, then log to UTC.

> Computers can do things. Build a UX [...]

That sounds like a job for a human (at least the review part), not only a computer.

> I didn't use UTC because subtracting eight when reading raw logs is a lot harder than subtracting one.

Also important here is that the date stays the same! Even though I've gotten used to instinctively decoding UTC, that part is still frustrating sometimes.

What about that one hour in the day when the date isn't the same?

You probably aren't working that hour. (also way better than 8).

And the computer isn't doing anything interesting during that time? You're reading logs, not using the computer as a clock.

This is not smart; this is just surprising to the next person who is going to maintain your "smart" tricks. Thank god they switched to UTC, which is what everyone expects.

Actually, back then, 18 years ago, most people expected your servers to be in Pacific or Eastern time, depending on where your company was headquartered, because none of us really had global technical workforces back then. We all pretty much worked in one office and used the local time zone, because often our servers were in the building with us or in a datacenter nearby.

Case in point: before reddit I was at eBay, and we kept all those servers in Pacific time, since the entire technical workforce, as well as all of the servers, was in the Pacific time zone.

Making blanket statements like that without considering the context of the time is usually not a good idea. ;)

> most people expected your servers to be in Pacific or Eastern time

I was there back then, working for shops people have heard of, and I honestly don't know where you're getting this idea from. Some places did wild and wacky things when they were wee small, but most of us quickly learned that such shenanigans (like fun server naming conventions) start to fall apart, and maybe we should do things differently.

Using UTC for servers was standard when I entered the field in 2005.

I was setting them to UTC in 1995.

Ah you think UTC is your ally? You merely adopted UTC. I was born in it, molded by it. I didn't see DST until I was already a man, by then it was nothing to me but blinding!

In the 1980s, PT and ET were common. I was working at Bell Labs then, and one of my jobs was to change the time zone (back then it was two words) on the testing machines, as needed. This is stuck in my memory since to change the timezone, you needed to edit the Unix kernel source code and recompile it!

2000 for me. That was the first time I had users from outside my own time zone, so I figured it was better to just use UTC for everything and just convert depending on the user's settings. I think I just applied the thinking to the whole server.

This is the way.

Yelp servers were set to Pacific time when I started in 2009, probably a decision from 2004

I run into this a lot when working with legacy code. The first reaction most teams have is to mock it, not understand it.

Everybody's dunking on you here but yeah, circa 18 years ago I remember that setting servers to local time was still pretty common.

[deleted]

It only matters if the servers were running cron jobs where it mattered if they ran "not at all or 2x."

Logs with weird dates on high demand production servers... less important.

Can you please make your substantive points without swipes? This is in the site guidelines: https://news.ycombinator.com/newsguidelines.html.

I believe it was an anecdote and not a recommendation.

It was probably the smartest option at the time, given the context. As long as it was documented properly, no one on such a small team would have been surprised. Spend more than the 30 minutes you've spent here on HN so far, and you may learn a thing or two about what is "smart", and how that is inextricably linked to the given situation.

Sounds like it was smart at the time, and then eventually outgrew the solution.

If you're like 5 people, each with a list of TODOs that doesn't fit on one screen, it's pretty smart to just do something quick and good enough, then move on, revisit it in the future.

"revisit it in the future" is exactly the reason your TODO list keeps growing rather than shrinking

Nice when the company lives long enough for that to be an issue

Yeah, as we all know the startup is the only type of software company in existence and the best way to ensure the startup's success is to act as if it won't exist next week.