4 Outside-in Signs That You Don’t Have Sufficient Unit Testing

This is a guest post by Erik Dietrich

Getting started with unit testing can be tough. Many questions, misconceptions, and points of confusion torment the beginner. One of the most common doubts revolves around quantity: What's the right amount of testing? How much is enough? Should I aim for 100% coverage?

In today's post, we'll help you answer some of these questions, but not by telling you how much is enough. Instead, we're doing sort of the opposite: we'll give you tools to recognize when you have too few tests.


Test Coverage Is Low


The first and most visible sign of a lack of unit testing is, unsurprisingly, low test coverage. Test coverage is a topic that can spark fierce debates. Some developers will fight religiously to defend their point of view on the issue. The issue being: is test coverage really that useful of a metric?

I'm not settling that matter in this post, nor do I intend to. One thing is obvious to me, though: while we can't seem to agree on whether 100% coverage is a good thing, we can agree that shallow coverage is a bad sign.

How can developers gain confidence in the test suite if it only covers a narrow portion of the code base? The answer is that they can't. When a developer sees the green bar on their screen, they want to be able to trust that their code is correct. If the number of unit tests isn't high enough, that degree of confidence is just wishful thinking.
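
To make "shallow coverage" concrete, here's a minimal sketch (the module, functions, and test are all hypothetical) of the kind of code base that produces a low number. Only one happy path of one function is exercised, so a coverage tool such as coverage.py would report that most of the lines never run:

```python
# billing.py -- a hypothetical module with three code paths
def apply_discount(price, percent):
    if percent < 0 or percent > 100:
        raise ValueError("percent must be between 0 and 100")
    return price * (1 - percent / 100)

def add_tax(price, rate):
    return price * (1 + rate)

def total(price, percent, rate):
    return add_tax(apply_discount(price, percent), rate)

# test_billing.py -- only one happy path is tested, so add_tax, total,
# and the validation branch in apply_discount are never executed
def test_apply_discount_happy_path():
    assert apply_discount(100, 50) == 50.0
```

Running the tests under coverage.py (`coverage run -m pytest` followed by `coverage report`) would make the untested lines visible at a glance.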

Test Coverage Isn’t Increasing


Picture this: you work at a small software shop. There are about six or seven developers currently on the team, plus a QA department made up of a test analyst and two interns.

Some unspecified time ago, management heard something about unit testing and brought in an external consultant to provide training for the team. Since then, the developers have been adding unit tests to the code base with varying degrees of dedication and success. Some of the developers on the team are really into it, others less so, and a few are openly skeptical about the whole thing.

You, being an advocate for unit testing, know you and your colleagues are lucky to at least have management on board with this (since it was their initiative). Some developers in other companies aren't so fortunate. But here's the catch: even though management officially supports the unit testing initiative and has put money into it, in practice they only pay lip service to the importance and benefits of testing. When project deadlines start looming, managers pressure developers into skipping unit tests in favor of writing production code.

And what about the QA department? Well, they work around the clock, finding and reporting bugs every single day. And that's excellent work, because those bugs won't reach customers. But you know what? Writing a new unit test every time someone files a bug is a widely accepted best practice, yet it doesn't seem to be happening here.

If bugs are continuously being found and reported, test coverage should steadily increase. When that doesn’t happen, it’s a powerful indicator that your codebase needs more tests.
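
To make that practice concrete, here's a minimal sketch of a regression test written alongside a bug fix; the function, the bug, and the ticket number are all made up for illustration:

```python
# A hypothetical fix for bug #1234: parse_quantity crashed on input
# with surrounding whitespace ("  3 "). The fix strips the input first.
def parse_quantity(text):
    return int(text.strip())

# The regression test filed together with the fix. If the stripping is
# ever removed, this test starts failing again, and coverage has grown
# by exactly the code path that the bug exposed.
def test_parse_quantity_handles_surrounding_whitespace():
    assert parse_quantity("  3 ") == 3
```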


Existing Tests Rarely Fail


A telltale sign of insufficient unit testing is when developers seldom experience a test failing.

Don't get me wrong: your tests shouldn't fail all the time. If they do, you might have problems with the test suite itself. Maybe the tests depend on implementation details of the production code. Perhaps they make unnecessary assertions; for example, a test expects a specific exception message, and when someone fixes a typo in that message, the test fails.
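
Here's a minimal sketch of that exception-message scenario (the function and messages are hypothetical), contrasting a brittle assertion with one that checks only the behavior callers actually rely on:

```python
import pytest

def withdraw(balance, amount):
    if amount > balance:
        # Fixing a typo in this message should not break the test suite
        raise ValueError("Insufficient funds")
    return balance - amount

# Brittle: pinned to the exact message, so a harmless wording fix fails it.
def test_withdraw_rejects_overdraft_brittle():
    with pytest.raises(ValueError, match="Insufficient funds"):
        withdraw(100, 200)

# More robust: asserts only that an overdraft is rejected.
def test_withdraw_rejects_overdraft():
    with pytest.raises(ValueError):
        withdraw(100, 200)
```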

Or maybe the tests are rightly failing, which means the developers are introducing errors at an alarming rate (which should scare you, but you should also be relieved that you have unit tests in the first place).

But let's get back on topic. Now that it's clear I'm not advocating for an enormous number of test failures, let's address the opposite extreme. A test suite that never fails can be just as bad as a test suite that always fails, only in a different way. Unit tests are supposed to help developers gain confidence in their work by catching their errors. But if your test suite never actually catches any errors, then what good is it?

Reasons Why Your Tests May Be Failing by Not Failing

OK, we've just established that an error-catching, confidence-boosting mechanism that fails to catch errors (and therefore to boost confidence) is pretty much useless. The question then becomes: why? Why doesn't the test suite catch bugs more often?

One possible answer is that the tests just aren't right. After all, test code is still code, and all code is prone to bugs, so it shouldn't come as a surprise that unit tests can contain bugs themselves. Fortunately, there are ways to counter that. You can have a second pair of eyes review every piece of code that makes it to production, either through pair programming or a more traditional code review practice. You can also adopt a workflow in which you watch a test fail (in an expected way) before it passes; test-driven development (TDD) is an example of such a workflow.
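
Here's a rough sketch of that "watch it fail first" step, using a made-up slugify function; the point is that a test you've never seen fail hasn't yet proven it can catch anything:

```python
# Step 1: write the test before the production code and run the suite.
# It fails, because slugify doesn't exist yet -- which is exactly the
# evidence that the test is capable of failing.
def test_slugify_replaces_spaces_with_hyphens():
    assert slugify("Unit Testing Rocks") == "unit-testing-rocks"

# Step 2: write just enough production code to make the test pass.
def slugify(title):
    return title.lower().replace(" ", "-")

# Step 3: run the suite again and watch it go green before moving on.
```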

Finally, you can also employ a technique called mutation testing. Mutation testing refers to a process in which an automated tool deliberately introduces small defects throughout a codebase and then runs the test suite. Each defect introduced is called a mutation. If at least one unit test fails after the introduction of a mutation, we say the mutation was killed. If not, the mutation survives, and every surviving mutation points at code your tests don't adequately exercise.
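
Here's a hand-made illustration of the idea; in practice a mutation testing tool (mutmut is one example for Python) generates mutations like this automatically:

```python
def is_adult(age):
    return age >= 18          # original code

def is_adult_mutated(age):
    return age > 18           # the mutation: ">=" replaced with ">"

# This test kills the mutation: it passes against the original function
# but would fail against the mutated one, because 18 is the boundary case.
def test_is_adult_boundary():
    assert is_adult(18) is True

# A test like this one would let the mutation survive, since both
# versions return True for age 30 -- a hint that the suite is too weak.
def test_is_adult_obvious_case():
    assert is_adult(30) is True
```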

But if you’re already doing all of the above and you’re sure that your tests are as good as they can be, and yet, they rarely fail…then something must be wrong. (I mean, maybe, just maybe, your tests aren’t failing because your developers are so darn good and write perfect code pretty much every time. I find this highly unlikely though.)

That being said, I can only think of one explanation left for this puzzle. If you're fairly confident that your tests are of good quality, and you have evidence that new bugs continue to be introduced into the codebase, yet your tests refuse to fail, then the only logical conclusion is that you don't have enough tests.

It's just a matter of probability. The smaller the portion of the codebase protected by tests, the less likely it is that the error you've just introduced lands in the covered area. If only a fifth of the code is exercised by tests, a bug introduced in a random spot has roughly a one-in-five chance of ever being caught by the suite.

When Tests Fail, It Isn’t Due to a Bug


Here's another telling sign that you don't have enough unit testing: besides rarely failing, when your tests do fail, it's more often than not due to something other than a bug in the production code.

Ideally, a unit test fails because of a bug in the production code; that's the very reason unit tests exist, after all. Sadly, in the real world, several other things can cause a test to fail. To cite a few:

  • A bug in the test itself. In the previous section, we talked about some techniques and tools to prevent this from happening, but there is no silver bullet.
  • Changes in the public API of the system under test. How big a deal a breaking change to a public interface is depends on the type of software you’re building, but stability is generally a good and desirable thing.
  • Changes in the implementation of the system under test. Tests failing due to purely internal changes, on the other hand? That’s a bad sign (see the sketch after this list).
  • Miscellaneous reasons, such as some problem with the CI system or server.
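
Here's a minimal sketch of that distinction (the class and its internals are invented for illustration): the first test is glued to an implementation detail, while the second goes through the public API and only fails when observable behavior changes:

```python
class ShoppingCart:
    def __init__(self):
        self._items = []      # internal storage: an implementation detail

    def add(self, price):
        self._items.append(price)

    def total(self):
        return sum(self._items)

# Coupled to internals: breaks if _items is renamed or becomes a dict,
# even though nothing users care about has changed.
def test_add_appends_to_internal_list():
    cart = ShoppingCart()
    cart.add(10)
    assert cart._items == [10]

# Tests the public API: only fails if the behavior callers rely on changes.
def test_total_reflects_added_items():
    cart = ShoppingCart()
    cart.add(10)
    cart.add(5)
    assert cart.total() == 15
```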

Here's the thing: a test failing for any of the reasons above should be the exception, not the rule. Usually (and ideally), a test should fail because of an error in the production code.

If you have tests that rarely fail, and that fail for the wrong reasons when they do, that's not good at all.

Where There’s Smoke, There’s Fire


In today's post, we've given you four signs you can use to identify when you don't have enough testing for your application. Don't jump to conclusions, though. Use these signs as a starting point, the way doctors use a patient's symptoms to identify and treat the underlying cause. Then, if it looks like you're on the right track, go write more tests.

And remember: the most crucial sign—both a symptom and cause—of a low number of tests is the lack of enthusiasm among developers. If you fail to create a strong unit testing culture in the development team, no amount of techniques or tools will perform miracles.

This is a guest post by Erik Dietrich, founder of DaedTech LLC, programmer, architect, IT management consultant, author, and technologist.
