Successfully Managing Test Cases: Finding the Right Test Case Tool

This is a guest post by Peter G. Walen.

I have worked with a lot of tools to help me with test management and test case management. I don’t really have a favorite. I’m not sure I can name the “best” tool. What worked really well in one situation and project often didn’t work well in another.

Doing what I do, people ask me what the best tool for test management is. I ask what they mean by “test management.” Do they mean “managing the people and processes in testing” or do they mean “test case management?” 

It seems many people don’t see a meaningful distinction, but to me it is an important one. The idea of people as “resources” might be the central difference: I do not see people as resources at all. The tools we use are the expendable, disposable resources. Here, I will look at the question of test case management.

The root of an answer lies in the question “What problems are you trying to fix?” Without knowing what goals you have, what problems you want to address, and what needs to be done to fix them, no tool or methodology will help you. Understanding the need you have is often the first step in defining the method for managing test cases that will work for you and your team.

These are things I focus on when discussing how to manage test cases.

The Fundamentals of Test Case Management

There are some things you simply need before you can do anything around test case management. In the mid-1990s, I used spreadsheets to track tests planned, executed, passed, and failed. I also tracked bugs found and fixed, the time between discovering a bug and when it was fixed, and the impact of each bug.

It was helpful at the time and made it easy to show progress to anyone interested in a quick summary. I could work with the leadership team and find areas that were having more problems than others. We could review all bugs (open, closed, deferred, and rejected) and look for areas that needed improvement in development and testing.
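As a rough illustration, here is a minimal Python sketch of that spreadsheet-style tracking. It assumes two hypothetical CSV files (tests.csv with a status column, bugs.csv with area, found_on, and fixed_on columns); a real tool keeps this data for you, but the metrics being computed are the same.

```python
import csv
from collections import Counter
from datetime import date

# Minimal sketch of the spreadsheet-style tracking described above.
# Assumes two hypothetical CSV files:
#   tests.csv -- one row per test case, with a "status" column
#                (planned / executed / passed / failed)
#   bugs.csv  -- one row per bug, with "area", "found_on", "fixed_on"
#                (ISO dates; "fixed_on" blank while the bug is open)

def summarize(tests_path="tests.csv", bugs_path="bugs.csv"):
    with open(tests_path, newline="") as f:
        test_counts = Counter(row["status"] for row in csv.DictReader(f))

    with open(bugs_path, newline="") as f:
        bugs = list(csv.DictReader(f))

    open_bugs = [b for b in bugs if not b["fixed_on"]]
    days_to_fix = [
        (date.fromisoformat(b["fixed_on"]) - date.fromisoformat(b["found_on"])).days
        for b in bugs
        if b["fixed_on"]
    ]

    print("tests:", dict(test_counts))
    print("bugs by area:", dict(Counter(b["area"] for b in bugs)))
    print("still open:", len(open_bugs))
    if days_to_fix:
        print("average days to fix:", round(sum(days_to_fix) / len(days_to_fix), 1))
```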

I could rearrange the test cases from specific functional processes to mimic the workflows of “real” users and examine the differences. This made it easier to refine testing. It also helped present the software as a complete system and not a collection of disparate components.

I could put it on a shared network location where everyone who needed to see it could, while limiting who had access to update or modify it.

Those fundamental needs are still present and most tools handle them with varying degrees of grace and simplicity. When it comes to things that differentiate one tool from another, here are some of the things I look for.

A Tool to Support Your Workflow

Your software might have very specific sequences in how things work. It might also be completely open-ended, with no clear-cut “usual” way customers use it. Many organizations try to replicate that same process in their workflows for development and testing.

This can make sense sometimes. I like to visually represent the possible paths people may follow in the software I’m testing. That gives me a good idea of the relationship between modules and how that translates to possible, logical workstreams.

Whatever I end up with, I want my tool to support those paths, not make me use a pry bar to get my test cases loaded into its preferred approach. When tests need to run in a specific sequence to exercise specific conditions, the tool needs to let that happen.

When tests can run in an apparently random order, I want the tool I’m using to support that. The LAST thing I want to deal with is having the tool decide a test “failed” because I did not run it when I ran the others around it.
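To make the sequencing point concrete, here is a small sketch (not any particular tool’s behavior) of how ordered and order-independent tests can coexist. The test names and dependency structure are made up for illustration.

```python
from graphlib import TopologicalSorter

# Minimal sketch of supporting both ordered and order-independent tests.
# A test declares dependencies only when a specific sequence actually matters.

dependencies = {
    "create_account": [],
    "login": ["create_account"],    # must run after the account exists
    "update_profile": ["login"],
    "view_help": [],                # no dependencies: can run at any point
}

# One valid execution order that respects the declared dependencies.
ordered = list(TopologicalSorter(dependencies).static_order())
print(ordered)

# Anything not executed yet should be reported as "not run", never "failed".
results = {name: "not run" for name in dependencies}
print(results)
```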

If my team does good testing work, I want a tool that will support and encourage them, not frustrate them with what appear to be arbitrary rules.

Test Cases, Development Tasks, and Work Items

Looking at tests and test cases in isolation is what many teams and organizations do. It has been a long time since I’ve done that, simply because doing so makes the task of understanding the software that much more challenging. 

Test case management tools that are not integrated with design and development tasks drive the idea of “test at the end.” To be successful, reduce rework, and help drive delivery, testing effort must be closely integrated with the work of designing and building the software. Breaking this bond builds a barrier within teams and slows the overall effort.

Any tool that does not handle the integration of testing work with design, development, and implementation tasks will hinder what you are trying to achieve. In the mid-1990s, when I was using spreadsheets to manage test efforts, this was common. It is no longer acceptable in most instances.
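As an illustration of that integration, here is a minimal sketch of linking test cases to development work items. The record shapes and IDs are hypothetical; in practice, the link usually lives in the tool’s database or in an issue-tracker integration.

```python
from dataclasses import dataclass, field

# Minimal sketch: test cases reference the development work items they cover,
# so "what has no test yet?" becomes an easy question to answer.

@dataclass
class WorkItem:
    id: str
    title: str

@dataclass
class TestCase:
    id: str
    title: str
    work_item_ids: list = field(default_factory=list)  # dev tasks this test covers

def uncovered_work_items(work_items, test_cases):
    """Return work items that no test case references yet."""
    covered = {wi_id for tc in test_cases for wi_id in tc.work_item_ids}
    return [wi for wi in work_items if wi.id not in covered]

items = [WorkItem("DEV-101", "Add login form"), WorkItem("DEV-102", "Password reset")]
tests = [TestCase("TC-1", "Valid login", ["DEV-101"])]
print([wi.id for wi in uncovered_work_items(items, tests)])  # ['DEV-102']
```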

Problem Tracking

Bugs. Defects. Anomalous behaviors. You are going to find them. You will need to keep track of them. For many teams, the ideal situation is the tester reaching out to the developer(s) who worked on an item and checking with them. They may be able to fix the issue in the next build and move on.

When testing work is being done long after the development is done, this is not possible or realistic. When teams are in different time zones or on different continents, this often won’t be possible. You will need to have a mechanism to record issues, associate them with a piece of development work, and assign them to appropriate people for further investigation. 

You will also need to be able to track them like any other work item. When are they being investigated? When are they getting worked on? When does the developer believe it is fixed? Additionally, you may want to know how many times it got bounced between test and development.

If it went more than one round trip, either the scenario or requirements are unclear, or the code is too complex and needs to be reviewed or refactored.  
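Counting those round trips is straightforward if the tool records a bug’s status history. Here is a small sketch, assuming the history is available as an ordered list of status names (the names themselves are hypothetical; use whatever your tracker records).

```python
# Minimal sketch: count how many times a bug bounced back to development
# after reaching test, using an ordered list of status changes.

def round_trips(status_history):
    """Count transitions from "in test" back to "in development"."""
    trips = 0
    for prev, curr in zip(status_history, status_history[1:]):
        if prev == "in test" and curr == "in development":
            trips += 1
    return trips

history = ["in development", "in test", "in development", "in test", "closed"]
print(round_trips(history))  # 1 bounce; more than one suggests unclear
                             # requirements or code that needs review/refactoring
```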

Tracking Workload

If there are one or two testers in a team working on a project, it might be pretty obvious who is doing what. Still, it is one thing to “know” who is doing what testing. It is another thing to see what is waiting to be done and what the queue looks like.

I’m less worried about what the “estimated” work might be for a project. Usually, though not always, the amount of testing needed can be surmised from the amount of development work for the given feature or function. I prefer not to worry about development hours and estimates and instead look at the impact. The broader the impact of a given feature, the more effort I expect to be expended evaluating it.

The size and scope of tasks need to be readily visible to everyone interested in the project, not just the immediate participants. I want any tool my teams are using to be able to accurately reflect the amount of work in progress and the amount waiting to be done, at any time.
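Here is a minimal sketch of that kind of visibility, assuming each test task carries a status and an optional assignee (field names are hypothetical); a real tool would render this as a board or report rather than a script.

```python
from collections import Counter

# Minimal sketch: make work in progress and the waiting queue visible.

tasks = [
    {"id": "T-1", "status": "in progress", "assignee": "dana"},
    {"id": "T-2", "status": "waiting", "assignee": None},
    {"id": "T-3", "status": "in progress", "assignee": "lee"},
    {"id": "T-4", "status": "waiting", "assignee": None},
]

by_status = Counter(t["status"] for t in tasks)
per_tester = Counter(t["assignee"] for t in tasks if t["status"] == "in progress")

print("in progress:", by_status["in progress"], "| waiting:", by_status["waiting"])
print("per tester:", dict(per_tester))
```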

Agile

No tool is an “agile” tool. Every tool I have seen and worked with can be used to support an Agile team or environment. They are not Agile in themselves. 

Every tool I have worked with can be more flexible or less flexible, depending on how it is set up and implemented. That flexibility, not the tool itself, is what makes an “Agile tool.” Make sure the people configuring the selected tool fully understand the needs you have and what you want to achieve.

This will take conversations and possibly experimentation. A “generic” or “standard” installation may make the task easier for the IT or Support people doing the install. It may not help the people working with it on a daily basis.

Implementation/Installation 

When you have a tool selected, look at what it takes to install it in your environment. I have been burned so many times by using “standard installation” options that I am tempted to keep an aloe plant at my desk. 

I have seen no tool where the “standard” installation will actually do precisely what the team needs. Planning the configuration and use cases for the tool, and letting that plan guide the installation, must happen before any commands to start the process are run.

Installing a tool is usually not treated as a “software project.” That is often a problem. It is a software project, and the customers of that project will find the tool either amazing and helpful or a burden, depending on how much thought goes into the implementation.

This is crucial when the “IT” or “services” teams are the people actually doing the installation. We would not want to develop and implement a solution without finding out the needs of our customers. Teams must take the same approach with the tools they will use.

Finally

Every tool has good and bad qualities associated with it. The biggest, most used tools on the market have huge detractors commenting and denouncing them. Use your favorite search engine and look for your favorite tool with the word “stinks” in the search bar.

Most of the horror stories I’ve seen are configuration problems or using the tool for things it was never intended to be used for. Even allowing for the “sour grapes” effect, some of the stories will likely strike close to home. No tool is perfect. Every tool stinks to some degree.

Finding one that works for your team is usually worth the effort, even if it means not going with the most well-known tool on the market. The best-known option may still be the right choice; just make sure you are choosing it because it really will meet your needs, not because the management/leadership team has heard of it. Take a tour of the TestRail tool or get started with your free 14-day TestRail trial today!

Peter G. Walen has over 25 years of experience in software development, testing, and agile practices. He works hard to help teams understand how their software works and interacts with other software and the people using it. He is a member of the Agile Alliance, the Scrum Alliance, and the American Society for Quality (ASQ), an active participant in software meetups, and a frequent conference speaker.
