Tester’s Diary: Getting Ahead With Post-Release Testing


This is a guest post by Carol Brands.

Back in January, I wrote about how we determined the minimal amount of testing required to meet our release date for a struggling project. But what could we do to make sure we don’t fall into the same trap for our next release?

This is the story of how we’re using post-release testing to practice a different approach that might help us avoid getting trapped by a deadline in the future.

Identifying the Issues

Our first order of business was recognizing what processes led to the project running behind in the first place. We identified the following three issues:

1. Backlog items did not go through enough review between their initial creation and when they were handed off to testers.

For the majority of the project, backlog items were written by the product manager in collaboration with the development manager. They worked to develop acceptance criteria that were understandable and testable without dictating implementation. Even so, when these backlog items were passed on to the test team, we often had questions.

We identified acceptance criteria that were fine in isolation but needed to be adjusted to reflect other work or features that they interacted with. Most of our backlog items were defined by their relationship to our legacy product, and we often found workflows available in our legacy product that hadn’t been considered.

Finding these problems during the testing phase of development meant that we experienced a lot of churn. We were writing defects constantly, and we often needed to send entire backlog items back into development.

2. Only testers were responsible for testing.

In the early stages of development, it became clear that the development capacity on the team was outpacing the test capacity. We had five developers working on the project and only two testers. Development began well before the test team got involved, so we began work on the project with over 50 stories already marked “Ready for Test.”

At that time, we thought that all the stories would have the same testing priority, so we started with the earlier stories and tried to work our way through the pile. Over time, it became clear that we wouldn’t be able to catch up to development, so we switched strategies to testing the most recently developed stories. This was better, but nothing could overcome the fact that we had more than twice as many developers as testers. It also compounded the problems caused by backlog items being insufficiently reviewed before entering the testing phase.

3. Defects were being triaged infrequently via defect reports, not discussion.

By the time we reached the end of the testing cycle, we needed to be careful of what changes we chose to include in our release. Any unnecessary changes would take time to fix and test, and we didn’t have time.

We decided that we needed to document all potential defects so that the product manager could review them and choose which would be fixed and which would be deferred. Since the backlog items hadn’t been reviewed by testers, we were finding lots of defects. Every one of them needed a detailed report for the product manager’s review, and his time was limited by the approaching release date and other competing priorities.


Cleaning Up Our Mess

In our particular market, it’s not uncommon to have some lead time between a release and the first sale. Throughout our stakeholder conversations leading to the release date, we discussed using that time to complete additional testing. We decided to use this as a practice run for a different way of working.

First, we enlisted all the developers to assist with testing. We gave some basic directions:

  1. Explore thoroughly using a test environment, as opposed to testing the “happy path” on a local environment.
  2. Write down what you test. You should be able to answer the question, “Did you test this specific scenario?”
  3. If you find a defect, first talk to the product manager about it:
    • If the defect is going to be fixed, write a very basic defect report — just enough to give the product manager something to write release notes from.
    • If the defect is going to be deferred, write a full defect report so that when we make a decision about the defect in the future, there’s enough information to base the decision on.

Using this way of working, we’ve improved developers’ understanding of testing, they’ve practiced being responsible for testing backlog items, and we’ve increased the level of communication happening during development and testing. This feels like a good first step in getting developers more involved in the full development process, rather than treating the testing phase as something that belongs only to the testers.

We hope to iterate on these practices on our next project by including testers in backlog reviews before those items go into development, and by asking developers to participate in the testing phase as soon as we see more “Ready for Test” backlog stories than available testers. As we bring this project to a close, I am grateful for the lessons we learned from our mistakes.



Carol Brands is a Software Tester at DNV GL Software. Originally from New Orleans, she is now based in Oregon and has lived there for about 13 years. Carol is also a volunteer at the Association for Software Testing.
