Logging "everything" could include stack traces and parameter values at every function call. Take the information you can get from a debugger and imagine you log all of it. Would that be necessary to determine why a defect is triggered?
Second, "lazy":
Logging has many useful aspects, but it is also only a step or two above adding print statements to the code, which again leads to the "lazy." If you have the inputs, you should be able to reproduce the execution. The exceptions include "poorly" modularized code, side effects, etc.
Alternatives.
I've found it helpful for complex failures to make sure that I include information about the system. For example, the program couldn't allocate memory: Was it continuous chunks of memory or a memory leak? How much free memory is there, versus the shapes of the free memory (Linux memory slabs)? What can I do to reset this state? (reboot was the only option)
Finally, a quote a colleague shared with me when I once expressed my love of logging. In the context of testing online games:
"Developers seem drawn to Event Recorders like moths to a flame. Recording all game/ mouse/ network/ whatever events while playing the game and playing them back is a bad idea. The problem is that you have an entire team modifying your game's logic and the meaning or structure of internal events on a day-to-day basis. For The Sims Online and other projects, we found that you could only reliably replay an event recording on the same build on which it was recorded. However, the keystone requirement for a testing system is regression: the ability to run the same test across differing builds. Internal Event Recorders just don't cut it as a general-purpose testing system. UI Event Recorders share a similar problem: when the GUI of the game shifts, the recording instantly becomes invalid."
Page 181, "Section 2.1 Automated Testing for Online Games by
Larry Mellon of Electronic Arts", in Massively multiplayer game development 2, edited by Thor Alexander, 2005
Axiom wants $60/m if you send them a terabyte of logs, which is basically nothing compared to the cost of developers trying to debug issues without detailed logs.
If you're in a testing environment, where your SIT and UAT are looking to break stuff though, don't you usually want to be able to look to a log of everything?
I could see a couple reasons against. For one, it's expensive to seralize/encode your objects into the logger , even if you reduce logging level on prod.
Secondly, you can't represent the heap & stack well as strings. Concurrent threads and object trees are better debugged with a debugger (e.g. gdb).
First "everything":
Logging "everything" could include stack traces and parameter values at every function call. Take the information you can get from a debugger and imagine you log all of it. Would that be necessary to determine why a defect is triggered?
Second, "lazy":
Logging has many useful aspects, but it is also only a step or two above adding print statements to the code, which is where the "lazy" comes in. If you have the inputs, you should be able to reproduce the execution; the exceptions are poorly modularized code, side effects, and the like.
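A minimal sketch of that idea (hypothetical names throughout): rather than logging every intermediate value, capture the inputs at the boundary and replay them locally, under a debugger if needed.

```python
import json

def handle_request(payload: dict) -> dict:
    """The unit under test: deterministic given its inputs."""
    return {"total": sum(payload["items"])}

# In production: persist only the boundary input, not every intermediate step.
def capture(payload: dict, path: str = "failing_input.json") -> None:
    with open(path, "w") as f:
        json.dump(payload, f)

# Later: reproduce the failure from the captured input instead of reading logs.
def replay(path: str = "failing_input.json") -> dict:
    with open(path) as f:
        return handle_request(json.load(f))
```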
Alternatives:
I've found it helpful for complex failures to make sure I include information about the system. For example, if the program couldn't allocate memory: was it a lack of contiguous chunks or a memory leak? How much free memory is there, and what shape is it in (e.g. Linux memory slabs, fragmentation)? What can I do to reset this state? (In one case, a reboot was the only option.)
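As a sketch of the kind of system snapshot that helps here (assuming Linux and the standard /proc interface; the function name is made up), you might dump free memory and fragmentation alongside the failure:

```python
def snapshot_memory_state() -> str:
    """Collect free-memory and fragmentation info to log next to an allocation failure (Linux only)."""
    lines = []
    with open("/proc/meminfo") as f:
        # Overall free/available memory: distinguishes "no memory" from "no contiguous memory".
        lines += [l.strip() for l in f if l.startswith(("MemFree", "MemAvailable"))]
    with open("/proc/buddyinfo") as f:
        # Free pages per allocation order: shows whether large contiguous chunks are exhausted.
        lines += [l.strip() for l in f]
    return "\n".join(lines)

try:
    buf = bytearray(64 * 1024**3)  # deliberately huge; may or may not fail depending on overcommit
except MemoryError:
    print("allocation failed; system state:\n" + snapshot_memory_state())
```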
Finally, a quote a colleague shared with me when I once expressed my love of logging. In the context of testing online games:
"Developers seem drawn to Event Recorders like moths to a flame. Recording all game/ mouse/ network/ whatever events while playing the game and playing them back is a bad idea. The problem is that you have an entire team modifying your game's logic and the meaning or structure of internal events on a day-to-day basis. For The Sims Online and other projects, we found that you could only reliably replay an event recording on the same build on which it was recorded. However, the keystone requirement for a testing system is regression: the ability to run the same test across differing builds. Internal Event Recorders just don't cut it as a general-purpose testing system. UI Event Recorders share a similar problem: when the GUI of the game shifts, the recording instantly becomes invalid."
Page 181, "Section 2.1 Automated Testing for Online Games by Larry Mellon of Electronic Arts", in Massively multiplayer game development 2, edited by Thor Alexander, 2005
For one, it's extremely costly in vCPU, storage, and transfer rates. And if you're paying a third-party logging vendor, multiply each by 10x.
Axiom wants $60/m if you send them a terabyte of logs, which is basically nothing compared to the cost of developers trying to debug issues without detailed logs.
Not to mention the performance impact of synchronous logging. Write a trivial benchmark, add logging, and you will see the cost per operation go up 1000x.
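A minimal sketch of such a benchmark (numbers will vary by machine and handler; the 1000x figure is the commenter's claim, not a guarantee):

```python
import logging
import time

logging.basicConfig(filename="bench.log", level=logging.INFO)
log = logging.getLogger("bench")

N = 100_000
total = 0

def bench(label, fn):
    start = time.perf_counter()
    for i in range(N):
        fn(i)
    elapsed = time.perf_counter() - start
    print(f"{label}: {elapsed / N * 1e9:.0f} ns/op")

def work(i):
    global total
    total += i * i  # the trivial operation by itself

def work_logged(i):
    global total
    total += i * i
    log.info("step %d total %d", i, total)  # synchronous write on every op

bench("no logging", work)
bench("sync logging", work_logged)
```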
I think you're being naive on the costs, but that's just me. That's the intro price; on top of it you have transfer fees and vCPU.
I've never used Axiom, but all the logging platforms I've used (Splunk, Datadog, Loggly) are a major op-ex line item.
And telling your developers their time is priceless means they will produce the lowest-quality product.
If you're in a testing environment though, where your SIT and UAT teams are looking to break stuff, don't you usually want to be able to look at a log of everything?
I could see a couple of reasons against. For one, it's expensive to serialize/encode your objects into the logger, even if you reduce the logging level on prod.
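A small sketch of why that cost bites even at reduced levels (standard-library logging; expensive_snapshot is a hypothetical stand-in): with an f-string the serialization happens before the level check, whereas %-style arguments are only formatted if the record is actually emitted.

```python
import json
import logging

logging.basicConfig(level=logging.WARNING)  # DEBUG is filtered out on prod
log = logging.getLogger(__name__)

state = {"items": list(range(10_000))}

def expensive_snapshot(obj) -> str:
    return json.dumps(obj)  # stands in for any costly serialization

# Eager: the snapshot is computed even though DEBUG records are discarded.
log.debug(f"state={expensive_snapshot(state)}")

# Lazy: formatting is deferred until the logger knows the record will be emitted.
log.debug("state=%s", state)
```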
Secondly, you can't represent the heap & stack well as strings. Concurrent threads and object trees are better debugged with a debugger (e.g. gdb).
That makes it foolish, but I'm not sure if it's lazy.
The lazy part comes from the fact that it's easier to be foolish in this case than to be selective about what gets logged. So: lazy & foolish.
It's not lazy; it's a good use of time. You don't have to go back and forth when you realize you forgot to log something important.