Tester’s Diary: Five Experiments for Reducing Rework

This is a guest post by Rachel Kibler.

I like testing. I don’t think anyone is surprised by this. However, I don’t like feeling like a gatekeeper, and I don’t relish the inevitable conversations with developers about improving their code. 

One day during my second week at this company, each of my five developers had something ready for me to test. A normal day had me releasing between two and five things, but this day, I released nothing. Each of the five stories had more than one bug, and when those were fixed, each had another round of bugs. It was a demoralizing day for everyone on the team, and I resolved to change things for the better.

The first step was figuring out how to measure rework. We decided, as a simple metric, to measure every time something went backward in our kanban flow. This meant that if work started on something, and then people realized we needed more information or more discussion and the work item moved back to “To do,” it would count as rework. Mostly, it measured when things went from “In testing” to “In dev,” but we chose the broader sense to encompass more issues than just those specific to code.
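The metric above is simple enough to sketch in a few lines of code. This is a minimal illustration, not the team's actual tooling: the stage names, the log format, and the `count_rework` helper are all hypothetical, and the idea is just to count every transition that moves a work item to an earlier column on the board.

```python
# Hypothetical kanban stage order; a move to an earlier stage counts as rework.
KANBAN_STAGES = ["To do", "In dev", "In testing", "Done"]
STAGE_INDEX = {stage: i for i, stage in enumerate(KANBAN_STAGES)}

def count_rework(transitions):
    """Count transitions where a work item moved backward in the flow.

    `transitions` is a list of (item_id, from_stage, to_stage) tuples,
    e.g. exported from a board's activity log.
    """
    return sum(
        1
        for _item, src, dst in transitions
        if STAGE_INDEX[dst] < STAGE_INDEX[src]
    )

log = [
    ("STORY-1", "To do", "In dev"),
    ("STORY-1", "In dev", "In testing"),
    ("STORY-1", "In testing", "In dev"),  # bug found in testing: rework
    ("STORY-2", "In dev", "To do"),       # needed more discussion: rework
    ("STORY-2", "To do", "In dev"),
]

print(count_rework(log))  # → 2
```

Counting any backward move, rather than only "In testing" to "In dev," is what lets the metric capture problems with requirements and discussion, not just bugs in code.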

I started sharing those metrics with the team and getting them to talk about why things were moving backward.


Then, armed with information from our discussions, I came up with a series of experiments to try to reduce rework:

  1. Speak up more in refinement meetings
  2. Review designs and get the team to review them too
  3. Pair with developers before they create their pull request
  4. Get the team writing more (and better) unit tests and integration tests
  5. Get the team more comfortable testing each other’s code

The first one, and part of the second, required only my nerves.

I like being liked, and I don’t like conflict. Great qualities for a tester, right? Conflict is inevitable. But I start from the assumption that everyone wants good code that serves a purpose to go out the door, and that helps with a lot of conversations. It cuts through the “test versus dev” mentality so that we’re all on the same side.

Speaking up—a lot—in refinement helped quite a bit. Along with that, I started asking to go through designs during refinement.

In one particularly memorable refinement, the product owner was really excited about a new feature. I had questions about every sentence and every design element, which got the team asking more questions and ended up sending the feature back for a lot more thought, investigation, and collaboration with other teams on integration. When the product owner had done all the additional research, he wrote it up and specifically asked me to poke holes in it. He saw that we had the same goal, just different roles in getting there.

We started asking our designer to send out designs as they were ready for review, so we could review them individually and as a team rather than leaving review to the product owner alone. I had gotten tired of questioning word choice on screens, button sizes, and usability issues once they were already coded in. Following my lead, the team quickly realized that catching these things in design reduced rework. Typos were corrected, awkward copy was changed, and, just as after we started speaking up in refinement, we felt more confident in the implementation when it was time.

Pairing with developers was hard to get off the ground. The idea behind this was to talk with developers while they were coding, so we could try to catch bugs before they were implemented. I would ask questions about what we were doing, give suggestions about what I would try, and bring up a lot of negative test cases. It’s slower than if they were coding alone or with another developer, and I don’t have solid metrics to prove this experiment was a winner. Still, I’m continuing on with this in the hopes that I can amass either metrics or enough anecdotal evidence that it’s a good thing.

As for the fourth idea, some of my developers don’t like writing unit tests. They don’t see any value in them. I’m fairly certain this is because they have not had strong training in how to write good unit tests. Developers teaching developers how to write unit tests and integration tests is still in progress. The reluctant ones are starting to come around, though.
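Part of that training is showing what a good unit test looks like: small, fast, and covering edge cases, not just the happy path. Here is a hedged sketch of the kind of test we encourage; the `apply_discount` function and its rules are invented for illustration and are not from our codebase.

```python
import unittest

def apply_discount(price_cents: int, percent: int) -> int:
    """Return the discounted price in cents, rounded down.

    Hypothetical function under test, used only to illustrate
    testing the happy path plus boundaries and invalid input.
    """
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return price_cents * (100 - percent) // 100

class ApplyDiscountTest(unittest.TestCase):
    def test_typical_discount(self):
        self.assertEqual(apply_discount(1000, 20), 800)

    def test_zero_discount_is_identity(self):
        self.assertEqual(apply_discount(999, 0), 999)

    def test_full_discount_is_free(self):
        self.assertEqual(apply_discount(999, 100), 0)

    def test_negative_percent_rejected(self):
        with self.assertRaises(ValueError):
            apply_discount(1000, -5)

if __name__ == "__main__":
    unittest.main()
```

Tests like the last three, covering boundaries and invalid input, are where developers usually start to see the value: they are exactly the negative cases a tester would otherwise find later, after the code has already moved to "In testing."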

The last experiment, getting the team more comfortable testing each other’s code, is also still in progress. Informally, I’ve talked with a lot of the developers about how I test and how they can make sure each other’s code is good, and I’m putting together a training lesson. They seem to do OK without me when I go on vacation, which is my goal.

In a recent week I still sent back plenty of work items, but it was generally only once, and it wasn’t everything. We’re making progress!


Rachel Kibler is a tester at 1-800 Contacts. She can be found online at racheljoi.com and on Twitter @racheljoi.
