Making Good Software

A blog by Alberto G (Alberto Gutierrez)


December 15th, 2011 at 4:43 pm

How to write efficient unit tests. 5 principles for unit testing.


In recent years, since the first unit testing frameworks became available and methodologies like TDD went mainstream, unit testing has become an increasingly popular practice in software development.

The main advantages that unit testing can bring to a software development project can be summarised in two main purposes:

  1. Design purpose. Help programmers create new code.
  2. Correctness purpose. Make sure that the behaviour of the different independent small parts of the newly created code is correct.

But what usually gets overlooked is that unit testing also carries risks, especially when applied with dogmatic approaches like “every public method should be unit tested” or “everything needs to be designed so that it is easily unit tested”.

The main risk with unit testing appears when too many unnecessary tests are created. This risk becomes obvious when, every time the code is changed, the time spent fixing tests is way too high, hurting the developers' productivity.

The key to using unit tests effectively is to find a balance between your tests and the amount of time you need to maintain them. When looking for this balance, it is important to remember a few principles to ensure that you write efficient unit tests.

1. Behaviour is the key element to test.

Focusing on testing behaviour is the key to producing a good unit test. This way, if you refactor the logic inside a method without breaking its behaviour, all the related tests should keep passing.
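As a minimal sketch of what this means in practice (the `apply_discount` function is invented for illustration, it is not from the post), a behaviour-focused test asserts only on inputs and outputs:

```python
# A hypothetical function used to illustrate behaviour-focused testing:
# the tests below know nothing about how it is implemented internally.
def apply_discount(price, percent):
    """Return the price after applying a percentage discount."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)

# These assertions survive any internal refactoring of apply_discount
# as long as its behaviour (input -> output) stays the same.
assert apply_discount(100.0, 20) == 80.0
assert apply_discount(19.99, 0) == 19.99
try:
    apply_discount(100.0, 150)
    assert False, "expected ValueError"
except ValueError:
    pass
```

Swap the multiplication for a loop, a lookup table, whatever: the tests above keep passing, because only the behaviour is pinned down.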

2. Not every method is applicable for unit testing.

There are many dogmas in agile, and having a unit test for each public method seems to be one. While it is true that a method should be unit tested if possible, if you cannot test its behaviour, it is better to leave it alone (see point 1). Unnecessary unit tests are going to lock you down from changing your code for no good reason (see point 5). The areas of your code that can't be unit tested should be covered by integration or manual testing (see point 4).

Some clear examples of these types of methods are those that encapsulate calls to a framework, methods that loop through a list of items and then delegate to a different method, methods that do nothing but log…
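A minimal sketch of such a method (the class and collaborator names are invented for illustration): a thin wrapper that only loops and delegates has no behaviour of its own to assert on.

```python
# Hypothetical example of a method with no unit-testable behaviour of
# its own: it only loops and delegates to its collaborators. A unit
# test could do little more than mirror these lines back with mocks;
# an integration test exercising the real collaborators is a better fit.
class OrderProcessor:
    def __init__(self, repository, mailer):
        self.repository = repository  # persistence layer (framework call)
        self.mailer = mailer          # external service (framework call)

    def process_all(self, orders):
        for order in orders:
            self.repository.save(order)  # pure delegation
            self.mailer.notify(order)    # pure delegation
```

There is no input-to-output relationship here to test: the real questions ("did the order reach the database?", "was the mail sent?") live at the boundaries, which is integration-test territory.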

3. Unit tests which are not useful anymore should be deleted.

Tests are created mainly in two fashions:

  • After the code is completed: these tests are created to build in an automated check for the correctness of the associated methods. Useless tests (see points 1 and 2) may be created by mistake; if so, you shouldn't feel any shame in deleting them or, where possible, refactoring them to focus on the behaviour.
  • Before the code is completed (especially in TDD): in these cases, tests have a second goal, to help programmers come up with cleaner code. After the code is complete, it is usually a good idea to review the tests created and delete/refactor them where applicable. This unfortunately never made it into the list of steps to follow in TDD…

4. Unit tests will never substitute for manual and integration testing.

Unit tests, once your code is completed, help you diagnose whether the individual parts of your application are working as you expect. This is important, but it is very far from proving that your application is robust and works according to your customers' expectations, which is your main goal.

Unit tests are only a small part of the complete picture: you are going to need integration tests for areas of your code where unit tests can't prove the behaviour, and you are going to need manual testing in areas where you can't create an automated test, or for more abstract concerns like usability and UI testing.

5. Unit tests that lock down your code from changes are evil.

If there is one particular type of test to be avoided at any cost, it is the test that locks your code down from changes without adding any value. Let me illustrate this with some pseudo-code:

// The method under test: it only wires two collaborators together.
MyClass.MyMethod (magicParam1, magicParam2)
      magicReturnValue = someOtherClass.doSomething (magicParam1, magicParam2)
      veryRemoteClass.stuff (magicReturnValue)

// The test: a line-by-line mirror of the implementation.
      when (someOtherClass.doSomething (magicParam1, magicParam2)).
         thenReturn (magicReturnValue)
      myClassToTest.MyMethod (magicParam1, magicParam2)
      verifyICalledThis (someOtherClass.doSomething (magicParam1, magicParam2))
      verifyICalledThis (veryRemoteClass.stuff (magicReturnValue))

What is the previous test achieving? Well, it is actually achieving a lot… of pain… This is the one thing that sets me off when I see other people's code: not only does it prove nothing about the expected behaviour of the code, but if someone refactors the main class while maintaining the same logic, they will find that the test fails miserably, only because the code changed, not because there is any unexpected change of behaviour in the code…
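To make the failure mode concrete, here is a sketch of the same pattern in Python with `unittest.mock` (the class and collaborator names are invented, mirroring the pseudo-code above). The test pins the exact calls the method makes, so any refactoring of the wiring breaks it even when the observable behaviour is unchanged.

```python
from unittest.mock import Mock

# Hypothetical class mirroring the pseudo-code: the method just wires
# two collaborators together and has no input -> output behaviour.
class MyClass:
    def __init__(self, some_other, very_remote):
        self.some_other = some_other
        self.very_remote = very_remote

    def my_method(self, a, b):
        result = self.some_other.do_something(a, b)
        self.very_remote.stuff(result)

# The interaction-based test: a mirror image of the implementation.
some_other, very_remote = Mock(), Mock()
some_other.do_something.return_value = "magic"
MyClass(some_other, very_remote).my_method(1, 2)
some_other.do_something.assert_called_once_with(1, 2)
very_remote.stuff.assert_called_once_with("magic")
# If my_method is refactored to route the same work through a helper,
# these verifications start failing even though nothing observable
# changed: the test locks the code down without proving anything.
```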

The funny thing about this type of test, at least in my experience, is that it is usually most defended by the extremist agilists: “everything must be unit tested”. Have they perhaps forgotten about their beloved agile process motto?

11 Responses to 'How to write efficient unit tests. 5 principles for unit testing.'


  1. Hi Alberto,
    I disagree with your third point. Obviously, I agree that useless tests should be deleted but, to go a bit further, I would ask why you wrote such tests in the first place. When somebody systematically writes extra useless tests, I would think this person should try to figure out what is wrong in his coding habits and try to solve it.
    As a TDD believer, I think you should write a few tests explaining what each method is supposed to do. Once each test has a title, you can start implementing the test and assert the correctness of the outputs of your method. I assume each developer asks himself the following question before coding a method: “What is this method supposed to do, which params is it gonna take (what are the boundaries of these params) and what will be the outputs?” These should be reflected in your different tests. Then you can start coding your method. Finally, you will refactor your tests to make them DRY (group setups together and so on), but I would hope you never actually have to delete a whole test…

    Also, tests are not only used to code a feature and help debug it; they are there to help you maintain and refactor it in the future. As long as your tests keep passing, you can be confident that any changes you made to the method, class or any other code that has influence on that method will not break it and introduce bugs. This is a very important concept.

    Thanks for the post.

    Simon Le Parc

    16 Dec 11 at 7:35 am

  2. Hi Simon!

    Great comment!

    I think you are making a very good point there…

    I agree that useless tests should be deleted but, to go a bit further, I would ask why you wrote such tests in the first place. When somebody systematically writes extra useless tests, I would think this person should try to figure out what is wrong in his coding habits and try to solve it.

    I also heartily agree with the rest of your observations…

    I should probably have made it clearer that I wasn't trying to make a case against TDD; if properly applied, TDD is a great method to develop with. Just as you explain, there are people that write bad code… It is those tests that I am talking about when I say that if you find them, don't feel ashamed of deleting or refactoring them… But I think you are going even a step further, which I think is completely right: confront the person who wrote the test if he usually does so, and try to get him to think about the value of his tests and why he is writing them…


    Alberto Gutierrez

    16 Dec 11 at 7:45 am

  3. I also do not agree with deleting test cases. I really do not think that there is any test which is useless.

    If you have written a test then you must execute it to check the result. It may uncover hidden issues and errors which may not be caught by other tests.

    It may just be a scenario which is not normally executed. So there is no need to remove any test case; the focus should be on writing efficient test cases.

    John David

    20 Dec 11 at 7:52 am

  4. Please explain what you mean by “behavior”. It is a very vague word when used with computer code. Please give several examples of a class or method, what you think its behavior is, and then explain what is and is not a test of its behavior.

    Thank you!


    20 Dec 11 at 8:02 am

  5. @devdanke.

    By behaviour I mean what a test expresses when it only cares about the output for a given input, where ideally your mocking code would be minimal and only there to support an expected condition.

    Some very simple and valid examples:
    AssertEquals (Maths.add(2,3), 5)

    AssertError (Maths.divide(2,0))

    For invalid examples, any test code where you just check that you called such and such method with such and such parameters…

    Sorry for not giving you a better list of examples… Do you think this makes the idea of behavior clearer?


    Hi John!

    Thanks for your comment! I see your point but, to be honest, I think it is a bit dogmatic… Just because you have a test, it doesn't mean that it is proving anything, right? I am just saying that it is better to get rid of tests that don't prove anything but are going to require maintenance.

    Alberto Gutierrez

    20 Dec 11 at 8:17 am

  6. imho integration tests should be used only at domain boundaries, against 3rd-party APIs. The correctness of your code should be tested only with unit tests. Integration tests have higher maintenance costs, and they are less reliable and slow. So using them inside your domain is a waste of time.


    20 Dec 11 at 3:12 pm

  7. Hi Alberto,
    Very nice article. I am not an extreme agile follower but a guy that tries to write tests first. I have a question about point 5. What would be your solution for a method that returns nothing (so the return type is void) but still has a certain behavior that should be tested (otherwise it would probably be delegated to a manual test)?
    Apart from that, what do you think about testing well-encapsulated classes (where mostly we are forced to use reflection rather than breaking encapsulation with unnecessary accessors)? So basically a case which is less resilient to change/refactoring?


    20 Dec 11 at 3:25 pm

  8. @tichyagainstichy

    Good question! My opinion here is probably going to be highly controversial… If you have a method that doesn't return anything, and inside you plainly delegate to x other methods, I would simply recommend not writing any unit test at all…

    Checking whether you are calling such and such method with such and such parameters, in my opinion, is a waste of time… If you are thinking that you need to call A, B and C, you are going to write a test that checks that you called A, B and C… So your test is not going to prove anything to you; you are always going to call A, B and C… But then there is the question that always pops up… What if someone changes the code?! How is my test going to prevent that someone from breaking my code? Well, the problem with this question, to start off, is that it is badly formulated, because your test is not guaranteeing any correctness; it is only showing a red light whenever anyone changes the source code, even if they haven't broken any logic…

    Let’s think about this, if I find a test like this, that fails, because I have changed the code to call D instead of C, what am I gonna do? Change the test to reflect that change? If I do so, I am only perpetuating this waste of time…

    So what to do? Well, I would suggest that for such methods, if possible (sometimes it is not), you write an integration test, because they usually end up performing operations outside of the boundaries of your application; that way you can protect the behaviour of that method.


    Completely agree with you. Ideally, all the methods that don't return anything or that interact with obscure third-party frameworks etc. are the ones where we are going to have this kind of untestable code. If you can, you should always aim to structure your code so that you isolate these areas and then cover them with integration tests; so they are, indeed, areas of the code where I would also suggest avoiding unit testing.

    Alberto Gutierrez

    20 Dec 11 at 4:04 pm

  9. Hi Alberto,

    You mentioned in the 3rd principle that there are two fashions in which to write test cases:

    1) Write them after the code is completed.
    2) Write them before the code.

    I prefer to write them after the code is completed, as I have a specification document before getting started with the code development.

    But which fashion do you prefer the most for writing unit test cases?



    21 Dec 11 at 2:09 am

  10. Hi Joe,

    I actually alternate; depending on the circumstances, I would do a TDD-like programming style, or I would just write the code and then the tests… I have always believed that the important thing is to get there, not how you get there, and that everyone has their own style/way, which should be respected, as long as they respect your style of course :)

    Thanks for the comment Joe!

    Alberto Gutierrez

    21 Dec 11 at 2:22 am

  11. Hi Alberto, thank you for the post!

    I’m quite interested in the topic of testing and would like to support some of your points.

    First of all, I believe that experienced developers shouldn't be afraid of deleting tests that have stopped serving their purpose. The key ability here is being able to distinguish which tests deserve deletion and which should be preserved – we don't want lazy developers going around and deleting tests just because they're too lazy to fix them. As you said, an important factor is what I could call the "economics of software", i.e. the benefit of a particular test vs. its cost. I actually started to distinguish "private" (helper) tests that might be more coupled to the implementation and should be deleted when not needed anymore or when the implementation changes too much, and "public" tests that should be as decoupled as possible, testing more "what" the class/unit does instead of "how", and that are thus more change resistant – described in detail at

    I’m very much concerned with creating tests that enhance instead of limit the evolvability of the code base. As you said, tests that lock you down are evil. I’ve tried to collect some ideas of Kent Beck and others regarding this subject in One of the key ideas there is that tests should read as stories – which, I believe, corresponds to your behavior-centric view.

    *end of shameless plug :-)*

    Jakub Holy

    29 Jan 12 at 8:41 am
