I’m a tester who works in a development team room. On our team, developers write unit and integration tests, and all other test activities fall to the tester—or at least, that was true until this week. From now until the next release, management has decided that developers can test, too.
For us, asking developers to test was the solution to an emergent problem. There was sudden pressure to release our product before the end of the year in order to satisfy our company’s business needs. The problem was, as I talked about in This Test Strategy Has a Defect, I was the only tester on the team, and I had fallen significantly behind in my testing. There was no way I could meet the aggressive new release date.
We had meetings to try to figure out how we could get the testing done to a level that was “good enough” and still meet our deadline. We evaluated risk and found that we could cut scope by changing some of the assumptions we’d made about our users. Because the product would initially be used internally, users could be trained in workarounds, and defects with workarounds could be deferred. We were able to disable some features, which reduced the testing for those features from making sure they worked to making sure disabling them didn’t cause major problems.
But even after cutting scope significantly, there was still no way I could meet the deadline by testing on my own.
Then, during a meeting, we talked about bringing on some help. There was no way to bring any of the other testers on board without losing too much time training them in a product they had never seen. Then our director said, “Developers can test, too.” I almost couldn’t believe my ears. I knew that having developers test the product is common in other companies, and even in other parts of our own company, but I had never heard my coworkers suggest it as something we might do until that moment.
A New Approach for the Development Team
The development team offered to allow two developers to help me test. The developers and I looked at the list of defects and backlog items remaining to be tested. Defects seemed easier to start with because the boundaries of testing seemed a little clearer, but there were still questions about how much testing was good enough.
I tried to explain what I knew intuitively: First, test the reproduction steps to confirm the obvious problem is no longer present. Then, closely read the defect for clues about other ways the defect might have presented itself. For example, if the defect occurred with a set of input types, and the reproduction steps only mentioned one of them, try the same steps with the other input types. Once it looks like the problem is fixed, it's time to start thinking in the opposite direction: Try to show the problem is not fixed, and think about how the change that was made may have caused other failures.
I encouraged the developers to take their time when stopping to think about potential failures, because I know their instinct is to move quickly to get things done. Armed with a better understanding of what was expected, the developers started testing the subset of defects I had set aside for them.
The first day of working with the developers, I ended up spending a lot of time with them as they tested their defects. They were surprised to find that often, as they set up a test for one defect, they ran across other unexpected behavior. I showed them how to do a quick check in our tracking tool to see whether the behavior had already been written up, or told them they could just send me an IM, because I know most of the known defects off the top of my head. When they found something that wasn't written up yet, I helped them write up the new defect using the method I described here.
A New Way of Working?
It was tempting for them to try to fix defects on the fly, but fixing low-priority defects at this point meant more risk, and more testing, that could keep us from meeting our deadline. Instead, we wrote up the defects, sometimes with details on how they could be fixed, and reviewed the list with our product manager so he could prioritize them and decide which fixes were worth the risk of missing the deadline.
We’ve been working through defects for more than a week now, and the developers are doing a great job of testing. They’ve even found some defects that I may not have found, due to their development tooling and deeper insight into the backend. It is exciting to watch them learn how challenging testing can be.
My hope for this experiment is that it will show us that having developers test shouldn't be a last-ditch effort. If we can work developer testing into our normal product cycle, we might be able to keep testing in sync with development, increasing speed and quality at the same time. I hope this experiment makes the suggestion one that management would be ready to hear.
This is a guest posting by Carol Brands. Carol is a Software Tester at DNV GL Software. Originally from New Orleans, she is now based in Oregon and has lived there for about 13 years. Carol is also a volunteer at the Association for Software Testing.