The Bug Reporting Spectrum

This is a guest post by Justin Rohrman. Justin has been a professional software tester in various capacities since 2005. In his current role, Justin is a consulting software tester and writer working with Excelon Development. Outside of work, he serves as President of the Association for Software Testing Board of Directors, helping to facilitate and develop various projects.

On my first day working as a software tester, my lead pointed to a bug tracking system. He gave me step-by-step instructions on how to use this system. I produced a bug, entered the product and browser information, and clicked submit. I did this on my first day because it is a critical skill, essential to being a software tester.


I had little training beyond my introduction to the bug tracker. Sadly, this is how most people are introduced to bug reporting. The result is often boring triage meetings where managers decide what is or is not a bug, reclassify bugs to make reports look good, and send reports of bugs that cannot be reproduced back where they came from. It also leaves bug tracking systems cluttered with data that no one will ever use.

Bug reporting is a skilled activity that can either enable faster delivery or jam up a development group and slow it down. I will explain why reporting is hard, what skilled reporting looks like, and why it doesn't always have to be done through a tracking system.

Improved Bug Reporting


My first couple of jobs were like a mad game of musical bug reports. I'd discover what I thought was a problem and write it up, and inevitably the bug would be sent back to me. Sometimes the developer would say it wasn't a bug at all, or that they couldn't reproduce it, or that my 'bug' was actually a feature.

The problem I was having, and the problem I see most people have, is that the report hides the bug.

Before I developed good reporting techniques, I’d write up a bug with a complicated title that tried to capture every possible detail. In the description I’d add an introduction paragraph to set the context, and then literally every step needed to reproduce the bug. After that, I would write about the ‘expected result’ and ‘actual result’. The title was a mess, so programmers had to open the report to figure out what was going on. The description was also a mess. Programmers would stash the report away to be reviewed during a triage meeting, when they might have some help interpreting the problem.

To counteract this, I improved my bug reporting skills by writing useful titles. If I make a report now, I like to use the title format 'X fails when Y' when possible. This gives the programmer a decent idea of what went wrong, and where the problem might be, before they even open the report.

I also improved the descriptions by cutting them down. I removed the introduction paragraph, and when steps to reproduce were needed, I focused them on the parts that were critical. Some bugs are hard to describe: they are difficult to trigger on purpose, or have long, hard-to-follow workflows. In those situations I use supplemental material such as recordings of me triggering the bug, screenshots, data files, and log captures. Sometimes a recording is more accurate and easier to follow than a set of written instructions.

My improved reporting style had a positive effect; the amount of data in our bug tracking system shrank. We had fewer reports to contest in triage meetings. Furthermore, we had fewer bugs that would sit in the tracking system release after release.

Pairing and Agile


Whilst working in a different testing job, I reduced the number of written bug reports by at least 50%.

I was working with a development team that consisted of two back-end programmers and three who worked on the user interface. All of us sat at desk pods in the same room, with three or four people in each pod. We were agile-ish: our team delivered software to production every two weeks, had daily status meetings, and generally tried to work together. We weren't to the point of having feature teams and single-flow development, but we were trying.

We would work together before checking a feature fix into the source code repository and building it to test. We were working on a product that helped marketers create small advertisements that would be viewed through social media channels such as Twitter or Facebook. One project was to build a new type of advertisement based around video content. The finished result would be a YouTube video embedded in a frame with some text and a few fields that would collect user data.

The programmer working on this product told me it was mostly done and asked if I could take a look on his machine before he checked in. We started with the process of building the advertisement in our tool. I began by testing the usual suspects: what happened if I entered too many characters, non-numeric characters, or a bad date format. We found a few bugs, and he worked on fixing those while I continued testing and taking notes.

We found some more interesting problems and questions once I started looking at the advertisement our tool produced. The video didn't auto-play, so to view the content a person would have to click play. Was that correct? All of our advertisement types had some analytics attached; this one was supposed to record views, average view length, and a few other metrics. But how do you define a view? Does the person have to watch the entire video for it to count? What if they start halfway through and watch the last 30 seconds?

We didn't have answers to these important questions, and our product person was at a customer site that day. We logged the questions in the bug tracker so we didn't lose them. When the product person got back, we had a brief meeting to talk through the issues, updated the ticket to reflect those decisions, and then the developer fixed them.

Bug reports were mostly done through demonstration and conversation. We were able to discover new problems, demonstrate exactly how they were triggered, and get them fixed without ever touching a bug tracking system. We went to the bug tracker only when we had questions that couldn’t be answered immediately, or bugs that were complicated and needed some research before they were fixed.


A Note on Zero Defects


Occasionally I will see people advocating ‘zero defects’. This is the idea that every single bug found should be fixed immediately. In this scenario, there is no bug tracking system, and there are no written bug reports.

A zero defects flow might look something like this:

A developer makes a change to add a discount field to a purchase page. The tester goes to work once the change is in a build. They may find a few superficial problems. For example, an error is thrown when non-numerical strings are entered, the user can enter discounts larger than 100%, and there is no limit on the number of decimal places a person could enter. These are pretty simple problems, and the programmer begins to fix them with an input mask that restricts what can be typed into the field.
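A minimal sketch of the kind of input restriction described above, in Python. The function name and exact rules are hypothetical, standing in for whatever mask the programmer actually built:

```python
import re
from typing import Optional

def sanitize_discount(raw: str) -> Optional[float]:
    """Accept a discount percentage: digits only, 0-100, at most two decimal places."""
    # Reject non-numeric strings and excess decimal places up front.
    if not re.fullmatch(r"\d{1,3}(\.\d{1,2})?", raw.strip()):
        return None
    value = float(raw)
    # Reject discounts larger than 100%.
    if value > 100:
        return None
    return value
```

An input mask in the UI would enforce the same rules as the user types; a server-side check like this guards against anything that slips past it.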

The tester then moves on to more complex scenarios: do the discounts apply correctly, can someone apply multiple discounts, and how is tax calculated? After some investigation, the tester finds that the discount is calculated incorrectly when the purchase total is greater than $100. At this point, the developer isn't finished with the input mask change, and once that change makes it to an environment there will be some retesting to do.
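A few boundary checks around the $100 threshold would have caught that bug early. This sketch assumes a hypothetical `apply_discount` function; the correct behavior is that the discount rate does not depend on the purchase total:

```python
def apply_discount(total: float, percent: float) -> float:
    # Correct behavior: the discount applies uniformly regardless of the total.
    return round(total * (1 - percent / 100), 2)

# Probe values just below, at, and above the $100 boundary where the bug appeared.
for total in (99.99, 100.00, 100.01, 250.00):
    discounted = apply_discount(total, 10)
    assert abs(discounted - total * 0.9) < 0.01, f"wrong discount at {total}"
```

The buggy version would fail the assertions for totals above $100 while passing the ones below, which is exactly why tests should straddle the boundary rather than pick a single convenient value.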

There is a new dilemma. Should our tester interrupt the programmer, who is still working on the previous fix, to talk about the new problem? Should they wait until the other issue is fixed and retested? Should they move on and test some other aspect of the feature? Not talking about the bug now introduces the risk that the tester might forget something important about it, making it harder to fix. The solution is usually some lightweight documentation: a post-it note or an email instead of a bug tracker entry.

The idea of "zero defects" is a lie. As my colleague Matt Heusser points out, it might hold for a project in a specific browser that only does Create, Read, Update, and Delete against a database, or for back-end batch applications with no user interface, and a few other limited cases. I'll step out on a ledge and say it again: it's a lie. If you think you have zero defects, let's bet a consulting assignment on it.

At some point during feature development, a tester, programmer, or product person will stumble across a problem that can't be fixed immediately. That issue might be complicated, it might require research, or the programmer may be busy working on something else. Either way, the bug can't be fixed now, and not documenting it is risky business.

Only When a Necessity


My general rule now is to make a bug report only when it's an absolute necessity: when there is a question no one can answer within the next day, or a bug that can't be fixed yet. Most of the time, I find that a conversation can solve the problems a bug report introduces. Some people say the best tester is the one who finds the most bugs. I'd change that: the best tester is the one who can get the most bugs fixed. That means reporting them in a way people care about and understand.
