The debate over the merits of unit testing has raged for the better part of two decades. But it’s no stalemate. The tide has moved steadily toward an industry consensus that unit testing is a good thing to do.
By and large, this is good news for the state of the industry and the state of the art in software. More developers writing more unit tests means catching regression defects earlier and feeling less fear when changing existing code. But it also means more developers diving into unit testing before they’re fully comfortable with how to do it.
And that can lead to mistakes and headaches.
So today, I’m going to talk about mistakes that developers often make when writing unit tests. You’ll typically see these so-called sins of unit testing when people first start unit testing. But if they’re not careful, it’s more than just a temporary growing pain. These problematic tests become baked into the codebase and stick around for the long haul.
Let’s look at what to avoid.
1. Slow Running Tests
I’ll start with something that may seem like a bit of a nitpick. But it’s actually foundational. You absolutely do not want to tolerate slow-running tests in your unit test suite.
What’s so bad about slow-running tests?
It’s not some nebulous notion of performance, nor is it the wasted developer time, per se. Oh, those things matter. But the foundational problem is that long-running tests bore developers. And bored developers, looking to write code and be productive, remove the boring obstacle. In this case, they remove it by not running the test suite.
If you have slow unit tests, you have a test suite that will languish, unused. You might as well not bother.
You can certainly have long running tests in your test portfolio. But keep them separate from your developers’ unit test suite, which the team should run constantly.
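The article’s examples are C#, but the fast/slow split works the same way in any language. Here’s a minimal Python sketch using the standard library’s unittest module and a hypothetical RUN_SLOW_TESTS environment flag (the flag name and test bodies are illustrative assumptions, not from the article):

```python
import os
import unittest

# Hypothetical flag: developers leave it unset; a nightly build exports it.
RUN_SLOW = os.environ.get("RUN_SLOW_TESTS") == "1"

class FastMathTests(unittest.TestCase):
    """Belongs in the unit test suite: runs constantly, finishes instantly."""
    def test_add(self):
        self.assertEqual(2 + 2, 4)

@unittest.skipUnless(RUN_SLOW, "slow: only runs when RUN_SLOW_TESTS=1")
class SlowIntegrationTests(unittest.TestCase):
    """Belongs in a separate, longer-running portfolio."""
    def test_expensive_computation(self):
        # Stand-in for a genuinely slow check (database, disk, network).
        self.assertGreater(sum(range(10_000_000)), 0)
```

With the flag unset, `python -m unittest` runs only the fast tests and reports the slow class as skipped, so developers keep running the suite on every change.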
2. Writing Tests with Lots of Assertions
I made this mistake myself, years ago, when new to unit testing. I wrote tests with lots of assertions.
public void Test_All_The_Things()
{
    var customerProcessor = new CustomerProcessor();
    Assert.IsTrue(customerProcessor.IsInitialized);
    Assert.IsNotNull(customerProcessor.GetFirstCustomer());
    Assert.AreEqual(12, customerProcessor.CustomerCount);
    // ...and a dozen more assertions (illustrative only).
}
If one assertion is good, more are better, right? You want to make sure the customer processor behaves correctly. Right?
Well, yes, you do. But not like this. To understand why, ask yourself this: if you saw on a unit test report that “Test_All_The_Things” had failed, would you have any idea what the problem was, at a glance? Which assertion failed? Why?
When a unit test breaks, it provides you with a warning — the equivalent of your car giving you a warning on your dashboard. Do you want a warning that says, “low tire pressure” or “low battery” or would you rather have one that says, “something is wrong somewhere?” Probably the former.
The same logic applies in your test suite. Each test should tell you something specific and detailed about what’s going on with your codebase.
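To make that concrete in Python (the article’s own snippets are C#), the fix is to split the catch-all test into narrowly named tests that each assert one behavior. The CustomerProcessor here is a hypothetical stand-in for the article’s class:

```python
import unittest

class CustomerProcessor:
    """Hypothetical stand-in for the article's C# CustomerProcessor."""
    def __init__(self):
        self._customers = []

    def add(self, name):
        self._customers.append(name)

    def count(self):
        return len(self._customers)

class CustomerProcessorTests(unittest.TestCase):
    # Each test name reads like a specific dashboard warning light.
    def test_new_processor_has_zero_customers(self):
        self.assertEqual(CustomerProcessor().count(), 0)

    def test_add_increments_count(self):
        processor = CustomerProcessor()
        processor.add("Ada")
        self.assertEqual(processor.count(), 1)
```

If `test_add_increments_count` fails, you know exactly which behavior broke without even opening the test, which is the “low tire pressure” warning rather than “something is wrong somewhere.”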
3. Peering into Private Methods
One of the most common questions I hear from newcomers to unit testing is something like “how do I test private methods?” They’re usually dumbfounded by my initial response: “You don’t.”
Unit tests are meant to serve as a way to exercise your codebase’s public API. If you want to test the functionality of a class’s private methods, then you do so indirectly by testing the public methods that call them. If this is really hard, it’s a good sign that your code is too monolithic (e.g. lots of iceberg classes) and that you should extract some of this functionality to a separate class.
In many languages, you can use constructs like reflection to “cheat” and access methods labeled as private. Don’t do this. You’ll break encapsulation and create really brittle unit tests. Instead, extract the private implementation to a new class with a public interface that the existing class uses privately. It’s the best of all worlds — you retain encapsulation and have an easier time testing.
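Here’s a small Python sketch of that extraction (the class and method names are hypothetical, invented for illustration). Logic that used to hide behind a private method moves into a collaborator with a public interface, which the original class still uses privately:

```python
class RiskScorer:
    """Extracted collaborator: formerly a private helper method,
    now a tiny class with a public, directly testable API."""
    def score(self, order_total):
        # Illustrative rule: large orders are flagged as higher risk.
        return 10 if order_total > 1000 else 1

class OrderProcessor:
    """Still uses the scoring logic privately; encapsulation is intact."""
    def __init__(self, scorer=None):
        self._scorer = scorer or RiskScorer()

    def process(self, order_total):
        return {"total": order_total, "risk": self._scorer.score(order_total)}
```

Tests can now exercise `RiskScorer.score` directly through its public interface, while `OrderProcessor`’s tests stay focused on its own public behavior.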
4. Testing Externalities
Remember that sin 1 involved writing long-running unit tests? One surefire way to write unit tests that take forever to run is to write unit tests that do things like writing files to disk or pulling information out of databases.
So avoid expensive use of externalities because it slows down your test. But also avoid it because, when you do this, you’re not actually writing unit tests.
Unit tests are focused, fast running checks that isolate your code and assert how it should behave. You’re checking things like “if I feed the add(int, int) method 2 and 2, does it return 4?” That’s the scope of a unit test.
When you’re executing code that calls web services, writes things to disk, or pulls things from a database, you’re actually writing integration or end-to-end tests that, by definition, are not testing things in isolation. You’re involving external systems, which means that your ‘unit’ tests can fail for environmental reasons that have nothing to do with your code.
You can avoid this particular sin by learning more about the unit testing technique of mocking.
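As a taste of what mocking looks like, here’s a Python sketch using the standard library’s `unittest.mock` (the article’s examples are C#; the CustomerGreeter and repository here are hypothetical). A mock stands in for the database-backed dependency, so the test stays fast and isolated:

```python
from unittest.mock import Mock

class CustomerGreeter:
    """Depends on a repository abstraction rather than a real database."""
    def __init__(self, repository):
        self._repository = repository

    def greeting_for(self, customer_id):
        customer = self._repository.find(customer_id)
        return f"Hello, {customer['first_name']}!"

# The mock replaces the database: no connection, no I/O, no environment.
fake_repository = Mock()
fake_repository.find.return_value = {"first_name": "Ada"}

greeter = CustomerGreeter(fake_repository)
assert greeter.greeting_for(42) == "Hello, Ada!"
# We can also verify the interaction: the lookup used the right id.
fake_repository.find.assert_called_once_with(42)
```

The test exercises `CustomerGreeter`’s logic in complete isolation; a real database-backed repository would only appear in a separate integration test.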
5. Excessive Setup
If you find yourself writing a unit test that seems particularly laborious, you should probably stop and do a quick sanity check. Do you have dozens of lines of code instantiating things, passing them to other things, mocking all kinds of objects, and just generally doing a lot of busy work? If so, recognize that this setup is excessive.
Any number of things can create a situation with excessive setup. It might be a lack of familiarity with test writing and mocking, creating inefficiency in the tests. Or it can be a simple case of highly coupled design. If you have to set six different global variables in a specific sequence in order to test the code you want to test, you should revisit how your production code works.
But whatever the case, try to avoid excessive setup. Make the setup more efficient and/or improve the production code. Because tests with lots of setup are extremely brittle, and they make maintaining the unit test suite an onerous chore. They make it the kind of onerous chore that the team will simply stop doing.
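One common remedy on the test side is a small builder or factory helper, so each test states only the detail it cares about. A Python sketch, with entirely hypothetical names and wiring:

```python
import unittest

def make_processor(tax_rate=0.0, currency="USD"):
    """Test helper: centralizes the tedious wiring with sensible defaults.
    (Here it builds a plain dict; in real code this is where the six
    dependencies and their configuration would be assembled.)"""
    return {"tax_rate": tax_rate, "currency": currency, "orders": []}

class OrderProcessorTests(unittest.TestCase):
    def test_default_currency_is_usd(self):
        # One line of setup instead of dozens.
        self.assertEqual(make_processor()["currency"], "USD")

    def test_tax_rate_is_configurable(self):
        # Only the detail under test is spelled out.
        self.assertEqual(make_processor(tax_rate=0.2)["tax_rate"], 0.2)
```

If even the helper stays sprawling, that’s the signal the article describes: the problem is coupling in the production code, not the tests.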
6. Daisy Chaining Tests
This sin is more subtle, but also crucial to understand. Your unit tests should each be self-contained and possible to run in isolation. If each one were the only test in the entire suite, it should work just as faithfully as it does mixed in with the others. Never make it necessary to run your unit tests in a certain order.
public void Test1()
{
    GlobalVariables.IsProcessorInitialized = CustomerProcessor.Instance.Initialize();
    Assert.IsTrue(GlobalVariables.IsProcessorInitialized);
}

public void Test2()
{
    // Implicitly assumes Test1 has already run and initialized the processor.
    var firstName = CustomerProcessor.Instance.GetFirstCustomer().FirstName;
    Assert.AreEqual("Ada", firstName); // expected value is illustrative
}
Here we have global state in the form of a singleton, and that singleton apparently requires initializing beyond just instantiation. Test1 initializes the processor, and then Test2 assumes that the processor is initialized and goes on to test other concerns.
Do not do this!
Many test runners make no guarantees about the order in which your tests run, and will even run them in parallel. And, even if you can force ordering with your runner, this may be a configurable setting that others do not enable or that does not run on the build.
This type of test leads to test suite nightmares — scenarios in which your tests will fail intermittently and seemingly randomly. You’ll then hear developers say, “oh, the test failed on the build machine? Just try it again or ignore it.” And this defeats the whole purpose of having a test suite to warn you of trouble.
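The self-contained alternative looks like this in Python (the article’s snippets are C#; the CustomerProcessor here is a hypothetical stand-in): every test builds its own fresh, fully initialized processor, so no ordering ever matters.

```python
import unittest

class CustomerProcessor:
    """Hypothetical stand-in: no singleton, no global initialization flag."""
    def __init__(self):
        self.initialized = True
        self._customers = ["Ada"]

    def get_first_customer(self):
        return self._customers[0]

class SelfContainedTests(unittest.TestCase):
    def setUp(self):
        # Runs before every test: each one gets its own freshly
        # initialized processor, so the suite passes in any order
        # (or in parallel), with no shared state between tests.
        self.processor = CustomerProcessor()

    def test_processor_initializes(self):
        self.assertTrue(self.processor.initialized)

    def test_first_customer_name(self):
        self.assertEqual(self.processor.get_first_customer(), "Ada")
```

Because the shared state is gone, there is nothing for a test runner’s ordering or parallelism to break.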
7. Test Code as Second Class Citizen
I’ll close with a 7th sin that’s perhaps more philosophical than tangible. This sin involves treating your test code as a second-class citizen. You’ll know this is happening when team members say things like, “whatever, it’s just test code, so who cares if we copy and paste?”
Treat your test code with just as much care as your production code. You’re going to need to maintain both sets of code over the long haul, and both are critical to ensuring that your code behaves properly in production. So you don’t want to skimp on either. For more on unit testing, check out this post on different unit testing techniques.