This is a guest posting by Carol Brands.
This has been one of the hardest months I’ve ever had at this job. A team member left the team. We missed a deadline — by a lot. And there is still what feels like a mountain of work to do. Maybe it’s worth taking a look at how I got here, and how I hope to avoid it in the future.
Earlier in the year, the test and development teams started work on a new project. We began testing after development had already started, so we had a large backlog of stories to test. We had three testers working alongside six developers, and for a variety of reasons, we were unable to keep up with development. The backlog of stories to test was growing.
Trying to Keep Up
We tried to shore up testing by keeping pace with new development rather than attempting to work through the backlog. We stopped working on older completed stories and tested only the most recently completed ones, so that we would at least have fast feedback on the latest work. We thought we could come back around to the backlog later, but we soon learned that was not going to happen. We knew there was a problem, but we assumed we would have time at the end of the project cycle to catch up.
Toward the end of the year, the stakeholders decided — without input from the test team — that a release date needed to be set. They chose a stop-development date just before Christmas and a release date in mid-January. This meant that all testing needed to be completed within three weeks of stopping development. It was impossible. We were months behind in our testing, and now we were being asked to finish it within three weeks, right around the holidays.
It was about this time that I was promoted to team lead. I had been participating in developing our test strategy throughout the project, and now I used my new role to suggest a change in our approach.
A Minimum Set of Testing
The biggest risk factors in the new product were data-related. We needed to be able to import data from our flagship product, and we needed changes in the flagship product to be synced to the new product. I suggested that instead of worrying about individual stories, we define a minimum set of testing to be completed before release: importing a known data set across three databases, then verifying that a set of common workflow scenarios in the flagship product were brought across correctly.
While the three testers worked on the minimum set of testing for each of the three databases, our developers would, with a little guidance, test the stories specifically related to syncing the flagship product with the new product. Each story would be tested on only one database, but the developers would be spread across the three database platforms. That, alongside the scenario testing happening in all three databases, should provide sufficient coverage in the little time we had. The hope was that we might be able to finish by the deadline and give our product manager an idea of whether the most important part of the new product was fit to release.
Then the unexpected happened. With the new test plan in place, in the final week of December, one of our testers quit. The reason he gave had nothing to do with the current stressful situation; it pointed to longer-term dissatisfaction, possibly with being part of such a small team. We all took a long lunch on his last day, and then it was time to scramble.
We reworked the plan by bringing on one of the developers to act as a tester. This didn’t alter the strategy too much, but with our extremely tight deadline, the pressure increased significantly. This was when my new role as team lead really sank in. I was designing and redesigning test strategies to meet changing deadlines. I was figuring out who on the team would be best to help with our minimum viable testing, which required an understanding of “happy path” testing vs. “deep exploration.” And I was guiding the developers through testing stories and writing defects. The pressure was intense.
Learning From Our New Strategy
When the release date came, we hadn’t finished. However, thanks to the new strategy and the testing we had completed, I was at least able to give the stakeholders an estimate of how much longer testing would take, and that estimate was used to make a better guess at the actual release date.
More importantly, I now have a good idea of why this project failed to meet its initial deadline, as well as what I want to do in the future to make sure we never have to go through a process this stressful again.
Next month, a new intern will be joining the team, and we’ll start on the second phase of development on the new project. I’m hopeful for a fresh start for our test team, for our project and for my new role as team lead. Wish us luck!
Carol Brands is a Software Tester at DNV GL Software. Originally from New Orleans, she is now based in Oregon and has lived there for about 13 years. Carol is also a volunteer at the Association for Software Testing.