This is a guest posting by Justin Rohrman.
Nearly all the testers I know react negatively to metrics programs. They argue that metrics rarely measure what we think they measure, that people will change their behavior to push the numbers in the direction management wants to see, and that a number with no supporting story makes it easy to get the wrong idea about what is actually happening.
And yet, development leadership still has a very reasonable need to understand what is happening on a project at a high level. There are metrics that can be beneficial for both management and the testing team. Here are a couple of measurements I have found useful.
Test Coverage
Understanding test coverage is useful because it gives you ideas about both what is tested and what isn't yet covered. This metric can help answer questions like: How much longer until testing is done? Why didn't we catch this bug? Why is testing taking so long? Are we testing the right things?
My preference is to take a holistic approach, starting with the easiest place to measure first: the code base. Adding a code coverage tool to your continuous integration system will give an idea of how much of your code base is covered by unit tests. This may not be useful at a glance, but over time you can see trends. If unit test coverage is dropping, then you might also see an increase in time spent by your testers on bugs that could have been designed out of the product earlier on.
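One lightweight way to watch that trend is to keep a history of coverage readings and flag a sustained decline. The sketch below assumes each CI run exports a single coverage percentage; the dates, numbers, and window size are illustrative, not prescriptive:

```python
# Track unit-test coverage over time and flag a downward trend.
# Hypothetical input: one (date, percent) reading per CI run,
# e.g. taken from a coverage tool's summary line.

def coverage_trend(readings, window=4):
    """Return 'falling' if the last `window` readings strictly decline,
    else 'steady'. readings: list of (iso_date, percent), oldest first."""
    recent = [pct for _, pct in readings][-window:]
    if len(recent) >= 2 and all(a > b for a, b in zip(recent, recent[1:])):
        return "falling"
    return "steady"

history = [
    ("2019-06-01", 78.4),
    ("2019-06-08", 77.9),
    ("2019-06-15", 76.5),
    ("2019-06-22", 75.1),
]
print(coverage_trend(history))  # falling
```

A steadily falling number is the conversation starter; the readings themselves say nothing about why coverage is slipping.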
You also want to understand testing performed by people with less tooling. I like to do this by making product inventories. We talk about software through abstractions all the time: pages, features, scenarios, configurations. Start keeping an inventory of them in some lightweight format that is easy to change and share. That coverage can then be reviewed much like code-level coverage is.
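Such an inventory can be as simple as a map from product areas to the testing that has touched them. The items and test types below are hypothetical, a minimal sketch of the idea:

```python
# A product inventory: hypothetical pages/features, each marked
# with whatever kinds of testing have touched them so far.
inventory = {
    "login page":     ["functional", "security"],
    "search results": ["functional"],
    "user settings":  [],          # nothing yet -- a visible gap
    "checkout flow":  ["functional", "performance"],
}

untested = [item for item, notes in inventory.items() if not notes]
covered = len(inventory) - len(untested)
print(f"{covered}/{len(inventory)} inventory items have some testing")
print("not yet covered:", ", ".join(untested))
```

The value is less in the counts than in the review: the empty entries show at a glance where no one has looked yet.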
Of course, test coverage doesn’t tell you about the quality of your testing and whether you’re designing tests that are likely to find an important problem. But it does give you a place to start the conversation. The goal of talking about test coverage is to show what is in the plan, to review what is already covered, and to see what is missing.
Rework
Rework is what happens when something isn't done right the first time around. A rework metric looks beyond testing itself to the surrounding activities that directly affect it.
In testing, we usually see evidence of this from bugs. A new change might get merged into the test branch, and when we start looking closely we find that submitting the page fails under a few different conditions. We then have to spend time investigating and collecting the errors, reporting bugs, and then waiting for the fix to be merged back and built. Hopefully, things work the second time around, but sometimes they don’t.
This can also happen before development starts. One company I worked with did a Three Amigos meeting before any work began on a change. This was to make sure we all understood what change was being requested and had one last opportunity to decide whether we were building the right thing. A few times a month, we would get a new card and start talking about it. Eventually, we would discover that it somehow conflicted with a feature we were already working on, or that some aspect of the change wasn't clear enough to start on. That card had to go back into the product management queue to be reviewed and clarified. It was a good thing this happened, because otherwise the development team would have spent time on the wrong thing. But it still represents rework.
Rework is usually found by testers, but it tells a story about the development environment. When rework happens frequently, it usually means that people with the right skill set are missing from the team, or that time constraints are so short that other aspects of the product are being sacrificed to get the product built “faster.”
Each bit of rework affects the schedule and has a cascading impact on everything that was supposed to come next. It also affects the budget, because each instance means more people working longer than was initially planned.
The easiest way to begin tracking rework is to measure bugs and cards that move backward in the flow. This is the stuff nightmares are made of for most testers, so let me clarify. The point isn’t to count bugs or to see how long it takes you to test something. We want to learn about the environment that causes large numbers of bugs or lots of false starts so that the situation can be improved by skill development or removing project constraints.
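As a sketch of what "moving backward" means, assuming a simple linear workflow (the stage names here are hypothetical), rework can be counted as any transition of a card to an earlier stage:

```python
# Count rework as backward moves through a (hypothetical) card workflow.
STAGES = ["backlog", "in progress", "in test", "done"]

def count_rework(history):
    """history: ordered list of stage names one card has passed through.
    Returns how many times the card moved to an earlier stage."""
    order = {stage: i for i, stage in enumerate(STAGES)}
    return sum(1 for a, b in zip(history, history[1:]) if order[b] < order[a])

# This card failed testing once and went back to development.
card = ["backlog", "in progress", "in test", "in progress", "in test", "done"]
print(count_rework(card))  # 1
```

The count itself is only a prompt; the interesting part is asking what in the environment sent those cards backward.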
Measurement Drives Behavior
People change their behavior — sometimes on purpose, and sometimes not — to make a metric move in the right direction. Pick your measurements to understand specific problems, and communicate them clearly. If you start with a problem, measuring it might help drive good behavior.
Justin Rohrman has been a professional software tester in various capacities since 2005. In his current role, he is a consulting software tester and writer working with Excelon Development. Outside of work, he serves as President on the Association for Software Testing Board of Directors, helping to facilitate and develop various projects.