9 Sep

An Observation about TDD


To me, developers who are not applying TDD practices in their day-to-day job always seem more in a hurry than developers who do apply red-green-refactor. In their hurry, they’re the first to cut corners and start making messes while they rush toward their goal. A while ago it dawned on me why that is: they subconsciously want feedback about the code they’re writing as soon as possible. They cut corners and generally mess up their code in order to avoid spending those extra hours and days keeping things clean. Constantly refactoring and cleaning up their code feels like it is restraining them from getting the feedback they so desperately want.

Humans are in fact feedback junkies. We constantly want to know how we’re doing at whatever we’re doing. I actually wrote a blog post about this a couple of years ago.

Since I adopted TDD as a discipline, I tend to feel less pressured, which results in me taking the time to continuously refactor the code I’m working on, trying to keep everything clean. Why? Because I know that the code I wrote a minute ago works. The tests I write constantly give me a shot of feedback, so I’m constantly hooked.
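
That red-green-refactor loop can be sketched in a few lines. This is a hypothetical example, not from the post: the price-formatting function and its test are invented for illustration, and Python stands in for whatever language you work in.

```python
# Red: write the failing test first (format_price does not exist yet).
def test_formats_cents_as_currency():
    assert format_price(1050) == "$10.50"

# Green: the simplest implementation that makes the test pass.
def format_price(cents):
    return "${0}.{1:02d}".format(cents // 100, cents % 100)

# Refactor: with the test green after every tiny step, cleaning up is
# safe, and rerunning the suite is the shot of feedback described above.
test_formats_cents_as_currency()
```

The point is the rhythm, not the example: every small change is followed by a test run, so you always know whether the code you wrote a minute ago still works.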

Just an observation …

21 thoughts on “An Observation about TDD”

  1. Nice observation. Serious question: how long do your test suites take to run in user time? Most people writing Ruby are writing Rails. Their test suites typically take at least 30 seconds to run these days.

    Seems like even more justification to focus on doing anything possible to make that feedback loop tighter. Although I’m biased: I already hold that perspective.

  2. I would disagree with you to some degree. Although, it may be that the developers you are exposed to exhibit the behavior you describe. I personally do not practice TDD (that’s another topic on its own), yet I don’t cut corners or rush to my goals, nor do I exhibit the behaviors you have described. Regardless of whether you are using TDD or not, you should still refactor and clean up your code. TDD does not prevent you from writing refactor free code. The code you write may work and pass a test, but that doesn’t mean it shouldn’t be refactored. I find the major influence on “rushing” is the release date. TDD or no TDD, you must still meet deadlines, and if those are tight you will certainly be working extra hours to deliver either way. It is even possible to be more prone to writing “messy” code as long as it passes a test. There is some fallacy in the thought of “My test passed, so my code is clean”.

  3. @Brian: I think that’s a bit of a miss. Unit tests are no indication that you have written clean code. I think Jan’s point is that developers can get that desired feedback buzz from unit tests: having a means of knowing whether their refactoring efforts are true to intent, or have unexpected consequences.

    Focusing on writing clean code, unit tests or no, can provide the discipline to give customers & managers proper feedback on the true cost of change. Developers who get into the habit of cutting corners start to justify short-sighted change from stakeholders as just another shortcut to add to the stack. My estimates always include unit tests and keep in mind any refactoring I think would be useful. People may not agree with them, but it’s what I insist is necessary to deliver the desired feature without risking the rest of the application. Unit tests serve as good feedback on whether or not my estimations are accurate, and can be used to explain to managers and clients, if necessary, why and where there are complications with the new functionality. (I.e. unit tests related to feature X were broken when I changed this behaviour to suit feature Y.) From there they can re-evaluate feature Y (or X), and I can explain what additional work might be needed to get them both working correctly together. Granted, my model for unit tests follows an approach similar to BDD, where unit tests reflect requirements, not merely implementation.

  4. Although I will agree that TDD’d code doesn’t mean clean code, one observation that I’ve made (after seeing the code of a lot of various projects in different stages of development) is that every single one *without* tests was in a much worse state (poor design, code coupling, massive classes, long methods, feature envy, and the smells go on). That’s not to say that the TDD’d ones were pristine, but there was still a world of difference.

    I agree about the feedback loop that TDD provides; when I didn’t test, I would write code until I fixed a bug, many times without knowing what I actually did to fix it. Outside-in development keeps me in line and helps me write as little code as possible – because we all know every line of code (test or otherwise) is a liability.

  5. From my interpretation of this post, it was not a miss at all. Originally the main point I took from this post was that people who don’t follow TDD are always in a hurry making mistakes, and then spend all their time refactoring and cleaning up their code.

    That said, I just realized I misinterpreted the point of the post. I should really learn not to read a blog post, write a blog post, tweet, and code at the same time. So you are right, that was a total miss! Given that, I am retracting my original statement.

    Now to address the only thing I do disagree with:

    I don’t practice TDD, but I am not in a hurry, I don’t cut corners, and I don’t make a mess of the application while implementing features. Now I may be an exception, I don’t know. You can be the judge of that.

    @joshuaclayton: I was wondering if you could clarify your statement. Do you mean every application you have seen that does not follow TDD, or every application you have seen that doesn’t have tests? Either way it doesn’t really matter: TDD, or just having tests, will not eliminate the possibility of poor design, code coupling, massive classes, long methods, feature envy, and other common code smells. If you would like to see a well-architected, decoupled, and feature-rich application that does not follow TDD, then shoot me an email and I will show you a few.

    Just so everyone knows; I do believe in tests, just not TDD.

  6. @BrianLagunas: “TDD or just having tests will not eliminate the possibility of poor design, code coupling, massive classes, long methods, feature envy, and other common code smells.”
    Actually, unit tests will add a pretty good dose of prevention, or at least visibility, to most of these code smells, because unit testing such code can be considerably more difficult than testing well-structured, single-purpose, loosely coupled code.

    Regarding TDD, I’m certainly not a purist. To me, TDD is the concept of “test driven development”, where testing is a driver for development. From there I distinguish TFD vs. TSD as equally valuable subsets (Test First Development, and Test Soon(Second) Development). In both cases the production code is written with testability given equal weight to the customer’s requirements. Even so, TDD is not a replacement for good coding principles (à la SOLID), but it is a very valuable complement to them, and it also acts as a bit of a litmus test for those principles.

  7. “Actually, unit tests will add a pretty good dose of prevention …”

    My point was that it doesn’t eliminate the possibility, and that you can have a poorly written application that follows TDD. Josh Clayton made a statement similar to “all the TDD applications were better than the horrible mess of non-TDD apps”. The problem with that statement is that not one thing he mentioned is specific to TDD; each can also exist in a TDD application.

  8. @Steve – while I agree with your comment in general, I have to disagree with your view that TFD and TSD are equally valuable.

    In my opinion, TSD is significantly less valuable and can even be dangerously unreliable, as it is all too easy to write a TSD test that starts off green (as all TSD tests do) but is passing for the wrong reason. TFD, on the other hand, forces you to go red first and then go green by making the exact change (and only the exact change) required to satisfy the test.

    TSD tests are better than no tests, I suppose, but a) they don’t encourage the mindset that TFD does, and b) they are surprisingly often broken, providing false confidence or outright misdirection.
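
    That “green for the wrong reason” trap can be sketched as follows. This is a hypothetical Python example with invented names: the assertion never executes because the fixture is empty, so the test is green from its very first run.

```python
# Intended behaviour (invented rule): 10% off any order over 100.
def discount(price):
    return price * 0.9 if price > 100 else price

# A test written after the code that starts green - but only because
# the fixture is empty, so the assertion never actually executes.
def test_discount_applied():
    orders = []                     # oops: nothing to iterate over
    for price in orders:
        assert discount(price) == price * 0.9   # never runs

test_discount_applied()             # passes vacuously
```

    A TFD workflow would have demanded a red run first, which immediately exposes that this test exercises nothing at all.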

  9. It’s just an observation, not a rule of thumb. You, and perhaps others, might be an exception. That being said, I’m a TDD addict. While I recognize that practicing TDD doesn’t automatically lead to clean code (I’ve seen messed-up code that was developed using TDD), the resulting code usually was more loosely coupled, which enabled me to go in and make changes far more easily because of the tight feedback loop.

  10. I’ve seen that, and while it might not be universally true, I am guessing you are describing a large percentage of programmers who do not practice TDD. I’ve seen a focus on getting as many complexity points done per sprint as a culprit, as well as trying to get as many user stories done per sprint. When the focus becomes producing more code faster, rather than the correct code so the users are happy, quality will suffer. QA cannot make up for TDD.

  11. TSD makes it easier to be caught out being a bit slack, but often TFD unit tests are only red first because the stub methods created for the test threw NotImplementedExceptions before they were filled in. Writing tests first or second doesn’t make an ounce of difference if the tests don’t significantly exercise the code under test. Regardless of whether you write tests first or second, if you are solely fixated on details like test coverage percentage (without considering the number of touches) you are still going to be caught in a false sense of security. I alternate between TFD & TSD depending on what I’m working on and my mood. The same rules apply regarding CI etc., such as nothing gets checked in without unit tests. I refer to TSD as “Amber-Green-Re-factor” (http://py-sty.blogspot.com/2010/09/amber-green-re-factor.html), where my attitude towards the testing is to actively break the code. If tests are passing, I’ll often introduce a bug to ensure the test is actually guarding against potential changes.

    A perfect example came up on Friday when I was demonstrating unit tests and the CI status screen to the BA, who was asking how to document the way our code meets customer requirements. The unit tests exercising the code caught me making a mistake in how I was treating an IEnumerable. It also demonstrated that in exercising my code I had thought up at least 4 scenarios, but that after implementing only 2 of them, NCover was reporting 100% coverage. A lot of 1-touch counts, and I still hadn’t exercised some of the expected behaviour. After I had the 4 scenarios it was still 100%, with a minimum touch count of 3 (most lines were 6 or more). But that’s another story I plan to put up on my blog pretty soon. This is probably going a bit off-topic, but testing like this does give very good feedback on progress. 🙂
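
    The bug-introduction check described above is essentially hand-rolled mutation testing. A minimal sketch, with invented names and Python standing in for the commenter’s C#:

```python
# The real code and its (currently green) test.
def total(items):
    return sum(items)

def test_total():
    assert total([1, 2, 3]) == 6

# Deliberately introduced bug: drop the last item. If test_total were
# still green against this variant, it would not be guarding anything.
def mutated_total(items):
    return sum(items[:-1])

test_total()                        # green against the real code
try:
    assert mutated_total([1, 2, 3]) == 6
    print("mutant survived - the test is too weak")
except AssertionError:
    print("mutant killed - the test guards the behaviour")
```

    Mutation-testing tools automate exactly this loop; doing it by hand now and then is a cheap way to check that a passing suite is actually asserting something.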

  12. “If tests are passing I’ll often introduce a bug to ensure the test is actually guarding against potential changes.” – that’s the key to TSD, as far as I’m concerned, and a step that too few people seem to take. I’m religious about it when I do use TSD, and it sounds like you are too.

    As a small note regarding coverage percentages, I don’t really pay any attention to them, and haven’t in years. The only thing I really pay attention to is that the coverage is not falling over time.

  13. “TDD does not prevent you from writing refactor free code”
    I don’t think I have EVER heard that claim. In fact, in this very post the author states: “Since I adopted TDD as a discipline, I tend to feel less pressured which results in me taking the time to continuously refactor the code”

    The claim I’ve most often heard is the opposite: that code without tests is hard or impossible to refactor. While this is not strictly true, it is definitely more DANGEROUS to refactor.

  14. Please read a little further down. I originally misunderstood the point of the post.

  15. I think it is difficult to distinguish cause and consequence. When reading your post, one might think that doing TDD helps you not be in a hurry.

    But my understanding is the contrary: because you are in a hurry, you have to cut corners. And so you don’t write unit tests.

    You can’t avoid running the damn code and checking that it really works; it’s simply too important. Unit tests are a luxury you can bypass when in a hurry.

    Being in a hurry might really be the cause, not the consequence!

  16. I definitely don’t agree! Unit tests are not a luxury. For me, doing TDD is all about having a steady pace. Doing TDD is all about going as fast as you can. Bypassing TDD and unit tests might give you the perception that you’re going faster in the very short run, but this is definitely a false sense, as you’re rushing towards your goal while spending lots and lots of time in the debugger.

    In my observations, being in a hurry is the direct consequence of not holding on to the disciplines that we need. Consequence, not cause.

  17. Unit tests are certainly not something to treat as a luxury. Skipping unit tests does not save you time/cost, it only defers that cost until later. (In terms of cleaning stuff up later, and hunting/squishing bugs.) The problem most people don’t understand is that when you defer that cost, the technical debt you are introducing starts accumulating interest. Sure, that code you wrote might work, but are you doing a full regression test? What about the code that you’re going to need to add on top of it next week? You are playing software “Jenga”. Even if you’re a “Jenga” whiz there is only so high that tower can go before it topples, and a tower becomes increasingly unstable and time consuming the higher you go.

    Writing unit tests takes no longer than firing up the application and thoroughly ensuring the code works. If anything it SAVES you time, because with every change you make, effective unit testing gives you the benefit of an automatic regression test on demand. (And as part of your integration.)

  18. It is a luxury in the sense that it is optional. You can’t bypass real testing (that is, checking that the damn application still really works), even if you have unit tests. But you can bypass unit tests and deliver as soon as you have checked that the real application does work.

    In fact most forms of testing before a production release only catch the obvious. Well, you ensure that the nominal case is working… on your testing environment.

    Most complex bugs are only found by using the application a lot, by many different users with different goals.

    Unit tests are really not even the best kind of tests because, in particular, they fail to see whether the new or modified code integrates well with other classes/modules/features.

    They also tend to test very specific things. After a refactoring, the application might still work perfectly, the new code might be perfect… but the old unit tests fail, because they tested a specific code implementation and not a feature.

    They do have value, but they have a high cost too. You need to write them, and it’s true you gain some automatic regression testing. But you need to maintain them too.

    The final productivity boost doesn’t appear to be that good, even in the long run.

    Show me studies on big projects (>1 million lines of code), with and without extensive unit testing, and let’s compare the cost of adding new features to both.

    If unit testing is good enough, adding new features to the project with the extensive unit test suite should cost less than adding the same kind of feature, without any unit tests, to the project that doesn’t have a unit test suite.

  19. No offence intended, but if you believe unit tests fail to see whether new code integrates well with existing features, then you need to take a closer look at what unit tests are. This is *precisely* what unit tests pick up! For example, you write, and unit test, the behaviour of two classes, Class A and Class B. Feature A is managed by Class A, which consumes an instance of Class B. Later you start a new feature and add a Class C which also consumes Class B; however, to reuse an existing method in B you need to make some changes. This constitutes what I’ll call a “touch”. Everything compiles, the application runs. You write tests for Class C and they all pass. However, when you run the full suite, a test for Class A fails because of the change you applied in Class B. So you re-evaluate the change needed in B until the A, B, and C tests are all satisfied.

    Sure, that sounds like a simple enough example to have possibly been picked up by running the application and trying out Feature A; however, enhancements to large projects will involve a great many “touches” on existing code, and some are bound to be non-obvious. The whole point of unit tests isn’t to pick up bugs affecting the code you’ve just written (though sometimes they point those out). The point is that existing tests pick up unexpected behaviour your changes have introduced.

    My point about the cost of unit tests is to ask yourself: how many times do you need to hit F5, and how long do you need to spend in the application setting up the scenario to test a given feature? To test it thoroughly enough that you’re reasonably certain it is working properly, plus test the other features you figure were affected by that change? Chances are it’s around the same amount of time a test-driven developer would spend writing unit tests. If you’re saving time by not being thorough with that kind of checking, you’re just asking for big problems in the support phase.

    If you find that unit tests are a maintenance burden (“you have to maintain them too”), then this is an indication that you are making broad, sweeping changes to behaviour, or writing overly specific, fragile tests. Fixing the latter scenario takes practice. If the cause is the need for broad changes, though, then you are greatly multiplying the risk and cost of change if you don’t have unit tests to show you the real scope of your changes.

    The result is that bugs get discovered by customers, and while it sounds like you’re personally OK with that, I for one am definitely *not* OK with it. Sure, they may spot things during pre-release builds when they’re giving feedback on features, but they are not a regression-testing resource. That’s not to say TDD/unit-tested projects don’t have bugs; of course they have unintended or incorrect behaviour, but the first thing you do when a bug is found is reproduce it with a unit test, then fix the bugger. Why? Because one thing customers absolutely hate, even more than new bugs, is re-introduced bugs. Get a couple of those under your belt and you’ll soon have no customer.

    As for raw figures, you’ll have to ask around, and provided you keep an open mind you might find the answers surprising. I’ve personally worked on two 1M+ line projects with unit tests, and in many cases started the raw task of introducing unit tests on brownfield apps. In all cases the number of bugs reported by testers, and particularly by the customer, dropped dramatically, changes were far easier to estimate, and the projects were delivered on time or closer to estimates than they were pre-unit tests. Problems with new features were also detected much sooner. Ask around.
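
    The Class A/B/C scenario above can be sketched roughly like this (all names hypothetical, Python standing in for the original’s C#):

```python
# ClassB is consumed by ClassA (and, later, by a new ClassC).
class ClassB:
    def rate(self, amount):
        return amount * 0.1         # behaviour Feature A relies on

class ClassA:
    def __init__(self, b):
        self.b = b

    def fee(self, amount):
        return amount + self.b.rate(amount)

# The existing test for Feature A. A "touch" on ClassB.rate, made for
# ClassC's benefit, would turn this test red on the next full suite
# run - without anyone having to click through Feature A by hand.
def test_class_a_fee():
    assert ClassA(ClassB()).fee(100) == 110

test_class_a_fee()
```

    The value here is not in the test for the code you just wrote, but in the old test that reports a change in behaviour it was never told about.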

  20. In addition: “They also tend to test very specific things. After a refactoring, the application might still work perfectly, the new code might be perfect… but the old unit tests fail, because they tested a specific code implementation and not a feature.”

    This is a sign of overly fragile tests, or of kitchen-sink code (implementing more functionality than you need to satisfy a requirement). Have a read up on a practice called BDD (behaviour-driven development). There, the role of unit tests is to test behaviour, not implementation specifics. It’s a whole discipline in itself, but taking away just the shift in the responsibility of the unit tests will do you a world of good towards making robust and effective tests.
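
    The behaviour-vs-implementation distinction might be sketched like this (a hypothetical example; the function and test names are invented):

```python
# Code under test: return each distinct value once, in ascending order.
def unique_sorted(values):
    return sorted(set(values))

# Behavioural test: it states the requirement only, so it survives any
# refactoring (a different algorithm, no set) that preserves behaviour.
def test_each_value_once_in_ascending_order():
    assert unique_sorted([3, 1, 3, 2]) == [1, 2, 3]

# A fragile alternative would instead pin an implementation detail,
# e.g. asserting that a set is built internally - and would break
# under a behaviour-preserving refactoring.

test_each_value_once_in_ascending_order()
```

    A test phrased against the requirement stays green across refactorings; a test phrased against the implementation fails exactly when nothing the user cares about has changed.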

Comments are closed.