What to Test When You Can’t Test Everything

This is a guest post by Peter G. Walen.

It happens to everyone doing software testing at some point. You have mountains of work to do and volumes of scenarios and tests to work through, and there is no way they can all be tested, even if you and the entire team work 24 hours a day.

I see this challenge in nearly every organization and team I work with. There are mountains of things that need to get done and no possible way they can be finished in time to make the delivery date promised to the customer. People working in various “Agile” environments face similar, perhaps identical, challenges. The pressure boils down to a single demand: “Test Everything.”

Except, in reality, you simply can’t test everything.

How do you go about addressing this? How do you do the best possible work, meet the expectations of the team and do your best to please the customer?

Planning and thinking

When I get the sense that the testing work will likely exceed the time available, I start with some basic planning, looking at these things:

  • What can be tested?
  • How can those things be tested?
  • What is the amount of effort needed for each of those things?
  • Are there features similar enough to be tested together or in tandem?
  • What features are most vulnerable to failure?
  • What features are most important?

The first several items are things the tester or test team can work through on their own to build a meaningful plan. The goal is to think through possible work options and identify the best way to maximize the use of time for each of the features.

This gives us a list of items we can plan to execute, a rough idea of dependencies and how features work together, and the outline of a schedule: a sequence of specific features to be tested and a rough estimate of the time needed for that testing.

The last point requires something many people find hard: a conversation with the customer or their representative. In a Scrum environment, that might be the Product Owner. In other environments, that might be the Product Manager.

When we have identified the items most important to the customers, we can return to the draft test schedule. Using that information, we can move the customer’s most important features to the front of the line for testing.

Talking with the development team, we can identify the areas which they believe to be the most vulnerable and prone to failure. Combine these with the customer’s important features and we have a prioritized list to work from.
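To make that concrete, here is a minimal sketch, in Python, of one way to fold the two inputs into a single ordering. The feature names, the 1-to-5 scales, and the choice to sort on the (importance, risk) pair are illustrative assumptions, not a prescription; a team might just as reasonably sum or weight the scores.

```python
from dataclasses import dataclass

@dataclass
class Feature:
    name: str
    customer_importance: int  # 1 (low) to 5 (critical), from the product owner
    failure_risk: int         # 1 (stable) to 5 (fragile), from the development team

def prioritize(features: list[Feature]) -> list[Feature]:
    """Order features so the most important, most fragile work is tested first."""
    return sorted(
        features,
        key=lambda f: (f.customer_importance, f.failure_risk),
        reverse=True,
    )

# Hypothetical features for illustration only.
plan = prioritize([
    Feature("checkout", customer_importance=5, failure_risk=4),
    Feature("reporting", customer_importance=3, failure_risk=5),
    Feature("profile page", customer_importance=2, failure_risk=1),
])
for f in plan:
    print(f.name)
```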

The highest priority tasks are done first and the lowest priority tasks are at the end. Simple, right?

It can be. Often it gets very complicated, very quickly.

First complication – customer priorities

Conflicting customer needs will absolutely make these conversations complicated. For many organizations, the conversations about priority happen before any work begins. This, at least, is a starting point for talking about what should be tested first.

Not every need and expectation carries the same weight. When there are multiple customers, particularly if they are external customers paying for specific enhancements, the test team will need support and guidance from organizational leadership to balance the competing priorities.

Leadership in a mature organization will step up and wrangle the collection of needs into something resembling a prioritized list. When that doesn’t happen, things will get messy. The demand to test everything tends to grow along with demands to test everyone’s top priority items.

Of course, not everything can be tested at the same time. There can be only one “top priority” item. Left to their own devices, most test organizations will make a reasonable, reasoned effort to figure out what is most important and what needs to be done to test those things.

The simple truth is, test teams should not be doing this coordination in a vacuum. There needs to be someone in a leadership position supporting both development and testing work by directing priorities. When the leadership fails to do this, they are setting the team up for failure.

Internal customer and organization priorities are often easier for teams to navigate. Even when demands are coming from multiple parts of the organization, these can be weighed against the overall business value and the effort to deliver them. Test and development teams might be able to play a larger role here; however, organizational leadership still needs to make these determinations.

When priorities are determined, things get easier, right?


Second complication – effort and time to test

When it comes to testing, the underlying question for many organizations is, “When will the testing be done?” Pundits might suggest it is never done. Even if planned software testing is complete, customers will test it simply by using it. Others might respond “It depends.” Neither of these is helpful.

The challenge of estimating how long testing will take and how much work it will involve is the subject of numerous papers, talks, books, and blog posts. This never-ending cycle is often held up as evidence that software testing is somehow separate from software development.

For us, the question becomes “Can we test everything we need to test that is expected to be delivered in this release?”

The answer is usually “No.” So let’s clarify that a bit.

How long does it take to test a given feature? Has the test team worked with it before? Are they familiar with how it should work? Have they been involved and active participants in the discussions around the feature? All of these questions play into “How long will it take?”

One approach I often use is to examine each deliverable in isolation. If things work really well, how long do I think it would take me to exercise the function and then verify my results with the development team and the product owner or manager? I then ask the same question assuming some problems are found: how long do I expect exercising the feature and fixing any problems to take? This requires the development team to be involved.

Of course, their answer will typically be “We have no idea, it depends on what problems get found.” This is fair, and frankly, quite honest. They are doing their best to deliver a product with as close to zero problems as they can.

We can come up with a fancy equation to prove we are giving a scientific estimate. Most of those don’t actually work in my experience. Instead, I look at something like how long it took to get bugs corrected in previous releases. If there are other projects of a similar scale the team has done, what was the time needed for bug fixes to be coded, returned, and tested to success? Find an average of the overall time spent and that is a reasonable estimate for most projects.
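To illustrate, here is the back-of-the-envelope arithmetic I have in mind, sketched in Python. The historical numbers are invented for the example; in practice they would come from your own bug tracker or release records.

```python
# Hypothetical history: total days from bug report to verified fix,
# pulled from past releases of a similar scale.
past_fix_cycle_days = [3, 5, 2, 8, 4, 6]

average_fix_days = sum(past_fix_cycle_days) / len(past_fix_cycle_days)

# Rough per-feature range: the clean-run time if everything works,
# plus one average fix-and-retest cycle if problems are expected.
clean_run_days = 2  # hypothetical time to exercise and verify the feature
estimate_with_problems = clean_run_days + average_fix_days

print(f"Clean run: {clean_run_days} days")
print(f"Average fix-and-retest cycle: {average_fix_days:.1f} days")
print(f"Estimate if problems are found: {estimate_with_problems:.1f} days")
```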

This gives an estimated range for testing each feature with and without problems. We now have the next piece we need to consider.

What to test

With enough time and resources, we could test everything. But we rarely, if ever, have the time we need, and we are usually limited to the tools already available. What can we do?

So far we have defined a prioritized list of features to test. We have the most important and most risky features identified and can put them, and all their prerequisites, at the top of the list to test.

We have made reasonable estimates for how long each feature will take. We can shift these a little, perhaps, if something else changes. However, we have some estimates to work from.

Using the prioritized list and the estimates, we can draw up a plan for what can be tested, roughly when it can be tested, how long we think each component will take, and the overall time needed to test everything.

Now comes the hard part. How much time do we have, from the completion of development to when our testing MUST end? Does our timeline fit within that? Once in a great while, it does. Sometimes it is really close.

Most of the time we will go over, if not well over, the total available time. This sets up the next set of conversations with the project team and the product owners or managers.

When I’ve been in these conversations, they generally go something like this. I describe the process used to determine the testing estimates. I remind them of their participation in the discussions around priorities and risk, and of what was most important and what was less so.

Then I show everyone where the cut-off line is. Anything above that line can be tested with a high degree of certainty. Anything below it might not be tested at all.
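Here is a minimal sketch, in Python, of that cut-off calculation, assuming a prioritized list with per-feature estimates; every name and number is hypothetical.

```python
# Features in priority order, with estimated test days for each.
prioritized_estimates = [
    ("checkout", 4.0),
    ("reporting", 3.5),
    ("search", 2.0),
    ("profile page", 1.5),
]

available_days = 8.0  # time from end of development to the testing deadline

# Walk the list in priority order; the first feature that does not fit
# marks the cut-off line, and everything after it falls below the line.
cutoff = len(prioritized_estimates)
spent = 0.0
for i, (name, days) in enumerate(prioritized_estimates):
    if spent + days > available_days:
        cutoff = i
        break
    spent += days

above_the_line = [name for name, _ in prioritized_estimates[:cutoff]]
below_the_line = [name for name, _ in prioritized_estimates[cutoff:]]

print("Can be tested with confidence:", above_the_line)
print("Might not be tested at all:", below_the_line)
```

Toy as it is, this produces exactly the picture the conversation needs: a line, with names on each side of it.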

The important part of this discussion comes now: What does this mean to the project and the customers? Is it OK if some things are not tested? What about reducing the depth of testing for some features to get time to test other features?

A very, very few times I’ve had people look at the list of items and say something like, “If we push the date 3 weeks, how much more can be tested? How much more time would be needed to test everything?”

You then shift into the role of expert information provider while they weigh the benefits and risks. When this happens, you have done your job as well as possible to this point: you have provided professional advice based on your expertise, and you have consulted with them along the way on what information was needed.

Now they are making the decision of what to do.

Sometimes, there is pushback on the estimated time to test. This is fairly normal. If you have examples of prior projects you can point to, preferably ones they are familiar with, you can remind them of the actual time to test, as compared to the test time originally estimated.

In the end, we testers are in the business of providing information. Others need to act on that information. You have given them the information as to why “test everything” cannot be done within the constraints of the project. Now they need to act on that information and make a decision.

Peter G. Walen has over 25 years of experience in software development, testing, and agile practices. He works hard to help teams understand how their software works and interacts with other software and the people using it. He is a member of the Agile Alliance, the Scrum Alliance, and the American Society for Quality (ASQ), and is an active participant in software meetups and a frequent conference speaker.
