First "everything":
Logging "everything" could include stack traces and parameter values at every function call. Take the information you can get from a debugger and imagine you log all of it. Would that be necessary to determine why a defect is triggered?
Second, "lazy":
Logging has many useful aspects, but it is also only a step or two above adding print statements to the code, which is where the "lazy" label comes from. If you have the inputs, you should be able to reproduce the execution; the exceptions are poorly modularized code, hidden side effects, and the like.
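A toy contrast (both functions are hypothetical examples of mine): the first can be replayed from its logged inputs alone, while the second depends on hidden state, which is exactly where reproduction from inputs breaks down and extra logging starts to earn its keep.

```python
import time

# Reproducible: the output is fully determined by the inputs, so a
# failing call can be replayed in a test from the logged arguments.
def tax(amount, rate):
    return amount * rate

# Not reproducible from inputs alone: the result depends on hidden
# state (the wall clock), so knowing `amount` is not enough to
# replay the failure.
def late_fee(amount):
    overdue_days = int(time.time() // 86400) % 30  # hidden dependency
    return amount * 0.01 * overdue_days
```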
Alternatives:
For complex failures, I've found it helpful to make sure I include information about the state of the system. For example, when a program couldn't allocate memory: was it a lack of contiguous chunks or a memory leak? How much memory was free, and what shape was that free memory in (e.g., Linux memory slabs)? What could I do to reset this state? (In that case, a reboot was the only option.)
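As a sketch of capturing that kind of system state at the point of failure (the helper names are mine, and the `/proc` files assume a Linux host):

```python
import logging

log = logging.getLogger(__name__)

def log_memory_state():
    """On Linux, record how much memory is free and how fragmented it is."""
    # /proc/meminfo: total/free/available memory.
    # /proc/buddyinfo: free pages grouped by contiguous block size, which
    # can reveal fragmentation (plenty of memory, but no large chunks).
    for path in ("/proc/meminfo", "/proc/buddyinfo"):
        try:
            with open(path) as f:
                log.error("%s:\n%s", path, f.read())
        except OSError:
            log.error("could not read %s", path)

def allocate_big_buffer(n):
    try:
        return bytearray(n)
    except MemoryError:
        log_memory_state()  # capture system state alongside the failure
        raise
```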
Finally, here is a quote a colleague shared with me when I once expressed my love of logging, in the context of testing online games:
"Developers seem drawn to Event Recorders like moths to a flame. Recording all game/ mouse/ network/ whatever events while playing the game and playing them back is a bad idea. The problem is that you have an entire team modifying your game's logic and the meaning or structure of internal events on a day-to-day basis. For The Sims Online and other projects, we found that you could only reliably replay an event recording on the same build on which it was recorded. However, the keystone requirement for a testing system is regression: the ability to run the same test across differing builds. Internal Event Recorders just don't cut it as a general-purpose testing system. UI Event Recorders share a similar problem: when the GUI of the game shifts, the recording instantly becomes invalid."
Larry Mellon (Electronic Arts), "Automated Testing for Online Games," Section 2.1 in Massively Multiplayer Game Development 2, ed. Thor Alexander, 2005, p. 181.