If this is a complexity battle, I'd rather that sysadmins have to use a tool to read logs than have to figure out why the server is melting down because all the jobs are running twice concurrently, or answer questions about why jobs didn't run at all.

In my old gig, jobs ran for many, many HOURS and consumed most of the IO on the server (processing reams of data), which meant that when overlap got two instances running at once, it was a trainwreck.

In my experience, it's really not pleasant working with logs that require a dedicated viewer compared to regular old text files I can handle with the Unix commands available everywhere (`tail`, `head`, `wc`, etc.), and not just on the server producing the logs, but also on the device I'm viewing them on.
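That portability is the whole appeal: with plain text, the same pipeline works on the server, inside a container, or on my laptop. A trivial sketch (the file name is just an example):

```shell
# count recent errors in a plain-text log -- no special viewer needed
tail -n 100 app.log | grep -c 'ERROR'
```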

That said, I absolutely prefer UTC times in logfiles. I'd rather do some math every time than make wrong assumptions about the file's local timezone even once. (Even correctly indicated local times are more annoying, since I have the offset from my timezone to UTC memorized, but not the offset from every possible server timezone to mine.)
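For the one-off math, I can make the shell do it for me. A sketch assuming GNU `date` and tzdata are available; the zone and timestamp are just examples:

```shell
# render a UTC log timestamp in a chosen local timezone
TZ="America/New_York" date -d "2024-05-01 14:30:00 UTC" +'%Y-%m-%d %H:%M:%S %Z'
# → 2024-05-01 10:30:00 EDT
```

This handles DST correctly, which is exactly the part my memorized offsets get wrong twice a year.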

alias tail="my fancy log viewer tail wrapper"

https://thecasualcoder.github.io/tztail/

That's exactly the type of thing I like to avoid having to do on some remote server, inside a container's minimal shell etc.

If you're at that level, I'd be exporting the logs anyway.

Yup. There's always some kind of tool involved anyway, so why not? I mean, even if you read the logs as they come off a screaming line printer, the tool is the thing that takes log messages and spits them out the printer port.

How hard can it be to write a log cat utility in awk?

Harder than not having to, in any case.
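A sketch of the easy 90% in plain POSIX awk shows where the hard part lives: a fixed-offset shift is trivial, but DST and date rollover (the part tools like tztail actually handle) are not:

```shell
# shift HH:MM:SS timestamps by a fixed UTC offset (here -5 hours);
# deliberately naive: ignores DST and doesn't roll the date over
awk -v off=-5 '{
  if (match($0, /[0-2][0-9]:[0-5][0-9]:[0-5][0-9]/)) {
    h = substr($0, RSTART, 2) + off
    h = (h % 24 + 24) % 24                 # wrap into 0..23
    printf "%s%02d%s\n", substr($0, 1, RSTART - 1), h, substr($0, RSTART + 2)
  } else print
}' app.log
```

The timestamp format and file name are assumptions; real logs vary, which is one more reason "just write it in awk" grows teeth quickly.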