Just read the following post on the performance of try/catch and throwing exceptions. In the author’s small (and completely unrepresentative of real-world scenarios) performance tests, throwing exceptions turned out to be quite a costly operation.
Ah… if only it were that simple. Throwing an exception is indeed a costly operation, compared to most other statements. But let’s get real… the performance ‘hit’ of throwing exceptions in real-world scenarios is completely negligible unless you’re doing it in a tight loop or a piece of code that is being executed pretty much all of the time under heavy user load. In practically every other case, you really won’t feel the difference performance-wise.
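If you want to see the raw difference for yourself, a minimal sketch like the one below will do. Every name and the iteration count are made up for illustration, the absolute numbers will vary wildly per machine and runtime, and a tight loop like this is exactly the kind of unrepresentative scenario where the cost actually shows up:

```csharp
using System;
using System.Diagnostics;

class ExceptionCostSketch
{
    static void Main()
    {
        const int iterations = 1_000_000;

        // Variant 1: signal failure through a return code.
        var sw = Stopwatch.StartNew();
        for (int i = 0; i < iterations; i++)
        {
            if (FailWithCode() != 0)
            {
                // handle the 'failure'
            }
        }
        sw.Stop();
        Console.WriteLine($"error codes: {sw.ElapsedMilliseconds} ms");

        // Variant 2: signal failure by throwing an exception.
        sw.Restart();
        for (int i = 0; i < iterations; i++)
        {
            try
            {
                FailWithThrow();
            }
            catch (InvalidOperationException)
            {
                // handle the 'failure'
            }
        }
        sw.Stop();
        Console.WriteLine($"exceptions:  {sw.ElapsedMilliseconds} ms");
    }

    static int FailWithCode() => -1;

    static void FailWithThrow() =>
        throw new InvalidOperationException("something went wrong");
}
```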
Anyways, because of the perceived performance hit, the author concludes the following:
Code defensively rather than catching exceptions wherever possible.
I’m not entirely sure what the author really means by this; after all, there are probably plenty of developers who consider throwing exceptions when something is wrong, instead of letting things blow up, to be ‘coding defensively’. But if he’s talking about the ‘defensive coding’ variant which often entails returning error codes or null references to avoid exceptions, and then checking for those return values all over your consuming code, then I could not disagree more.
I’m probably biased because I once had to maintain a codebase where the authors (to this day I believe they were relics from a C++ museum… and no, I don’t dislike C++, but ‘typical’ C++ programming in .NET is just not ideal) tried to hide exceptions from everyone who used their code because ‘application developers don’t understand exceptions anyway’ (sidenote: can you imagine that? not only did their code suck tremendously, they were arrogant enough to think they were so much better than ‘typical’ developers). So they had a few try/catch blocks in low-level pieces of their code, and when they caught an exception, they would just return a null reference or some kind of old-school error code.
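For illustration, here’s a hypothetical sketch of that pattern (all names are invented here): the low-level code swallows the real exception and hands back null, and the stacktrace dies with it.

```csharp
using System;

public class Customer
{
    public string Name { get; set; }
}

public class CustomerService
{
    // The low-level code catches everything and returns null,
    // so the original exception and its stacktrace are lost forever.
    public Customer GetCustomer(int customerId)
    {
        try
        {
            return LoadFromDatabase(customerId);
        }
        catch (Exception)
        {
            return null; // was it a timeout? a bad id? nobody will ever know
        }
    }

    private Customer LoadFromDatabase(int customerId)
    {
        // stand-in for the real data access code
        throw new TimeoutException("database did not respond");
    }
}

public static class Program
{
    public static void Main()
    {
        var service = new CustomerService();
        var customer = service.GetCustomer(42);

        // Every caller is now forced into checks like this; forget one
        // and you get a NullReferenceException far from the real problem.
        if (customer == null)
        {
            Console.WriteLine("something went wrong... but what?");
        }
    }
}
```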
My first problem with this is that it leads to code which is littered with null checks and the like. It hurts readability, and I’d even go as far as to say it hurts maintainability, because it really is easy to forget a null check when you’re writing a piece of code where a certain reference really should never be null. And you just know that the null checks will be missing in some places, which actually makes debugging harder than it needs to be. Had the original exception not been caught and replaced with a faulty return value, we would’ve gotten a valuable exception object with an informative stacktrace. What does a missing null check give you? A NullReferenceException in a piece of code that makes you go: wtf? this reference should never be null!
So then you need to start figuring out how the code was called, and why the reference could possibly be null. This can take a lot of time and really hurts the efficiency of your debugging efforts. Eventually you’ll figure out the real problem, but odds are that it was hidden pretty deeply and you spent a lot more time searching for it than you should have.
Another problem with null checks all over the place is that some developers don’t really take the time to properly think about the real problem when they get a NullReferenceException. I’ve actually seen quite a few developers just add a null check above the code where the NullReferenceException occurred. Great, the exception no longer occurs! Hurray! We can go home now! Umm… how bout ‘no you can’t and you deserve an elbow to the face’? The real bug might not cause an exception anymore, but now you probably have missing or faulty behavior… and no exception! Which is even more painful and takes even more time to debug. That original exception with the stacktrace sure sounds appealing now, doesn’t it?
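A hypothetical sketch of that kind of ‘fix’ (names invented for illustration):

```csharp
using System;

public class Customer
{
    public void SendInvoice() => Console.WriteLine("invoice sent");
}

public class Order
{
    // never supposed to be null... in theory
    public Customer Customer { get; set; }
}

public static class InvoiceSender
{
    public static void Send(Order order)
    {
        // This used to be 'order.Customer.SendInvoice();' and it blew up
        // with a NullReferenceException, so someone 'fixed' it like this:
        if (order.Customer != null)
        {
            order.Customer.SendInvoice();
        }
        // The exception is gone, but so is the invoice. The real bug
        // (why was Customer ever null here?) is still in the code,
        // now without any symptom that points at it.
    }
}
```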
Btw, why is it that the kind of developers who avoid certain language features because of ‘performance problems’ (usually based on faulty information) are often the ones most likely to make mistakes such as remote calls in loops, memory leaks due to lousy event handling schemes, or my personal favorite: writing a HighPerformanceTimer class which uses Windows APIs because ‘it times much more accurately’, and then, in the same class, walking down the stack frames at runtime to figure out where the HighPerformanceTimer instance was called from, for ‘easy logging purposes, dude’.
Well, like I said, maybe I’m just biased…
“Hurray! We can go home now! Umm… how bout ‘no you can’t and you deserve an elbow to the face’?”
LOL. I’ll have to remember that one.
This is a craft issue and one of balance. Code your classes/methods to behave the way you, as the caller, would want them to behave, IMO.
TDD helps with this tremendously, but it most commonly covers only the happy path. I think this is a great argument for testing exception cases in unit tests and simply being deliberate about the behavior of your SUT.
“I’m not entirely sure what the author really means by this; after all, there are probably plenty of developers who consider throwing exceptions when something is wrong, instead of letting things blow up, to be ‘coding defensively’. But if he’s talking about the ‘defensive coding’ variant which often entails returning error codes or null references to avoid exceptions, and then checking for those return values all over your consuming code, then I could not disagree more.”
I think you’re arguing against a straw man here, as that’s not what I’d take ‘code defensively’ to mean. Exceptions are high overhead because you’re generating a stack trace, and this can easily tie up resources in production if, for some unforeseen reason, that exception-raising code starts getting pounded. But anyway, that’s a whole other topic.
Coding defensively, I would take to mean checking for nulls where having a null passed in would indicate an error state, and then throwing an exception immediately; not returning error codes or nulls which need to be checked for. This is in contrast to not doing said checking, and instead having try/catch blocks around every place where you call the method.
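For example, a minimal sketch of that guard-clause style (the class and method names are made up):

```csharp
using System;

public class Order { }

public class OrderProcessor
{
    // A null order indicates a programming error in the caller,
    // so we fail fast at the boundary instead of returning an
    // error code for every consumer to check.
    public void Process(Order order)
    {
        if (order == null)
            throw new ArgumentNullException(nameof(order));

        // from here on, the rest of the method can safely
        // assume that order is not null
    }
}
```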
Just keep in mind that exceptions are just that: the exception. They should not be the norm, so any performance concerns around generating stack traces etc. should be moot. Exceptions should not be used as an indicator of success or failure; that’s what the return value is for. (Which goes into all kinds of fiery territory about when to use return values vs. out parameters, but I’m sure that’s been discussed here before.) Simply put, for tracking down intermittent issues, a useful tool is Break on All Errors. As soon as exceptions become the commonplace way to indicate “failure”, that tool is basically useless. But that digresses from null checks…
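The BCL’s own Try pattern shows what that looks like in practice; a small sketch using int.TryParse versus int.Parse:

```csharp
using System;

public static class TryPatternSketch
{
    public static void Main()
    {
        // int.TryParse reports success/failure through its return value
        // and delivers the result via an out parameter; bad input is an
        // expected case, so no exception is thrown for it.
        if (int.TryParse("not a number", out int value))
            Console.WriteLine($"parsed: {value}");
        else
            Console.WriteLine("invalid input, and no exception needed");

        // int.Parse, by contrast, throws a FormatException on bad input.
        // That is fine when a malformed string really is exceptional, but
        // catching it as your routine failure check turns exceptions into
        // control flow (and trips up Break on All Errors).
    }
}
```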
Having worked on projects where nearly 1/3 of all “bugs” revolved around digging through call stacks to track down the root cause of an “Object reference not set to an instance of an object” error, I’ve come to the conclusion that null is a twisted, evil time-waster.
Too many developers use exceptions as part of the normal program flow. They are not designed for that; they should be used when there is actually something wrong.
Some methods should return null; null can be a valid answer depending on what you are doing. But I have also seen code where the only way to find out whether an object was valid was to call a method and see whether it worked or threw an exception. I had to code a try/catch around the call just to trap what was effectively a return value.
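A hypothetical sketch of that situation (the legacy class is invented for illustration), along with the kind of API that would have avoided the try/catch:

```csharp
using System;

// Stand-in for the legacy class; invented for this sketch.
public class LegacyConnection
{
    public bool Broken { get; set; }

    public void Ping()
    {
        if (Broken)
            throw new InvalidOperationException("connection is not usable");
    }
}

public static class ValidityCheckSketch
{
    // The only way the legacy API lets us ask 'is this object usable?'
    // is to call a method and see whether it throws.
    public static bool IsUsable(LegacyConnection connection)
    {
        try
        {
            connection.Ping();
            return true;
        }
        catch (InvalidOperationException)
        {
            return false; // an exception abused as a yes/no answer
        }
    }

    // A friendlier API would expose the answer directly, e.g. a bool
    // IsOpen property or a bool TryPing() method, so callers wouldn't
    // need a try/catch just to ask a question.
}
```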