There is a parallel universe where the convention is that 0 is used for when something went wrong, and 1-127 are reserved for all the myriad and beautiful ways in which things went right.
Indeed you're supposed to, but the problem is that if someone calls exit(0), it looks like the program worked fine, when in fact they committed some debug code and made the program no longer run to completion. "Yay, done" was put in for the scripts to flag this sort of thing, presumably based on experience.
"People deliberately misuse the very mechanism that was designed to indicate successful completion, so we added another, flawed detection mechanism based on IO, because there's no way they'll do the thing that they should have learned in the first year of school, and just call exit with any other argument than 0 on irregular completion".
Gosh I thought the engineering culture was bad where I work.
Yea, whatever you do, don't solve the actual problem! This reminds me of solving a buffer overflow by just blindly increasing the size of the buffer until it no longer crashes.
The number of problems I have handled with liberal use of void* is greater than I care to admit.
Brother, have I got bad news for you about all the places outside your door.
Next thing you know, you'll have to skip a major version number in your product because a significant fraction of your user base string parse its name to determine what version it is.
I can say that up until 2005 or so I was a real believer in printf() debugging, but I deliberately switched to using a debugger as much as possible around that time. I found that no matter how hard people "try", if they modify the code to do debugging there is some chance those changes get checked in -- whereas you can investigate many things with the debugger without checking anything in.
Some applications have more trouble with setup and teardown than others. For instance, I knew a professor who kept sending me C programs that would crash before main(), and some systems have a lot of trouble with "crash on shutdown", which might be a real problem (corrupted files) or a non-problem.
Just make it fprintf(stderr, ...) debugging instead.
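If the concern is debug output leaking into a release, one mitigation is to route it through a macro that only exists in debug builds. A rough sketch, assuming a DEBUG compile flag and a DBG name that are purely illustrative:

    #include <stdio.h>

    #ifdef DEBUG
    #define DBG(...) fprintf(stderr, __VA_ARGS__)   /* only in -DDEBUG builds */
    #else
    #define DBG(...) ((void)0)                      /* compiled out otherwise */
    #endif

    int main(void)
    {
        int x = 42;
        DBG("x = %d\n", x);   /* goes to stderr, never stdout */
        return 0;
    }

Even if a stray DBG does get committed, it never touches stdout, so it can't fake a "Yay, done"-style success signal, and it disappears entirely from a normal build.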
> I was a real believer in printf() debugging but I deliberately switched to using a debugger
This really does not need to be an either/or. They have different uses. You can stick in 20 printfs and get a quick feel for where the bug is far quicker than stepping through the code - especially if you set a breakpoint and hit run, only to realise that you've overshot. You can run the program 10 times with different parameters and compare the results with printf much more easily than you could with a debugger. But once you've found the rough area, a debugger is much better for fine-grained inspection, and especially for interrogating state with carefully written watches.
I do get your point about the risk of leaving in some trace by accident. But it feels like overkill to throw away such a valuable tool just because of that.
We still seem to have fairly bad tooling for advanced debugging use cases.
There's no good reason you shouldn't be able to have an IDE maintain a text overlay of debugging points that is supplied to the debugger purely as breakpoint scripts, instead of being edited into the source.
IDEs seem to conk out at click-to-set-breakpoint.
That has literally been table stakes for Windows development since the 90s.
Not having the debugger fully integrated into your integrated development environment is strictly a problem of the commercial Unix and open source crowd and their "Real Programmers are fine with stone knives and bearskins" machismo.
Then that's utterly and horribly wrong? And in my decades of experience, I've fortunately only relatively rarely had to deal with that particular kind of atrocity.