For one, it's extremely costly in vCPU, storage, and transfer rates. And if you're paying a third-party logging provider, multiply each of those by 10x.

Axiom wants $60/month if you send them a terabyte of logs, which is basically nothing compared to the cost of developers trying to debug issues without detailed logs.

Not to mention the performance impact of synchronous logging. Write a trivial benchmark, add logging to the hot path, and you'll see the cost per operation blow up by orders of magnitude.
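A minimal sketch of what I mean, assuming Python's stdlib logging writing synchronously to a file handler (the loop body and filename are made up for illustration; actual numbers will vary a lot by machine and handler):

```python
import logging
import time

# Hypothetical micro-benchmark: time a tight loop with and without a
# synchronous file-backed logger on the hot path.
logging.basicConfig(filename="bench.log", level=logging.INFO)
log = logging.getLogger("bench")

N = 100_000

start = time.perf_counter()
total = 0
for i in range(N):
    total += i * i  # the "real" work: a trivial arithmetic op
no_log = time.perf_counter() - start

start = time.perf_counter()
total = 0
for i in range(N):
    total += i * i
    log.info("step %d total %d", i, total)  # synchronous write per op
with_log = time.perf_counter() - start

print(f"no logging:   {no_log / N * 1e9:.0f} ns/op")
print(f"with logging: {with_log / N * 1e9:.0f} ns/op")
print(f"slowdown:     {with_log / no_log:.0f}x")
```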

I think you're being naive about the costs, but that's just me. That's the intro price, and on top of it you still have transfer fees and vCPU.

I've never used Axiom, but all the logging platforms I have used (Splunk, Datadog, Loggly) are a major op-ex line item.

And telling your developers their time is priceless means they will produce the lowest quality product.

If you're in a testing environment, though, where your SIT and UAT teams are actively trying to break stuff, don't you usually want to be able to look at a log of everything?

I could see a couple of reasons against. For one, it's expensive to serialize/encode your objects into the logger, even if you reduce the logging level in prod.
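Here's a small sketch of that point, assuming Python's stdlib logging and json (the `order` dict is a hypothetical payload): the serialization cost is paid on every call even when the record is filtered out by level.

```python
import json
import logging

log = logging.getLogger(__name__)
log.setLevel(logging.WARNING)  # debug records are dropped in "prod"

order = {"id": 42, "items": list(range(1000))}  # hypothetical object

# Eager: json.dumps runs on every call even though the record is discarded,
# so you pay the encoding cost regardless of log level.
log.debug(f"processing order {json.dumps(order)}")

# %-style args defer string formatting until a handler emits the record,
# but the json.dumps argument is still evaluated eagerly here.
log.debug("processing order %s", json.dumps(order))

# Only an explicit level check (or deferring serialization to a formatter)
# actually skips the work when the level is off.
if log.isEnabledFor(logging.DEBUG):
    log.debug("processing order %s", json.dumps(order))
```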

Secondly, you can't represent the heap & stack well as strings. Concurrent threads and object trees are better debugged with a debugger (e.g. gdb).

That makes it foolish, but I'm not sure if it's lazy.

The lazy part comes from the fact that it's easier to be foolish here than to be selective about what gets logged. So: lazy & foolish.

It's not lazy; it's a good use of time. You don't have to go back and forth when you realize you forgot to log something important.