Continuous Regression Testing

Despite every effort to move quality upstream, most organizations I work with still have a regression testing process. That process is expensive and slow, so they do it less often, 'batching' a large number of changes into a sprint, a release, or a project. With more time between releases, uncertainty increases, which makes regression testing even more expensive, leading to a vicious cycle.

The primary alternative to batching releases is to release each change that has meaning to a customer separately. To enable that, we need to reduce the defects our teams create in the first place instead of waiting to remove them at the end. That means better up-front design, better requirements, professional developers, more tooling, and many other techniques.

That might be enough for a simple project. Elisabeth Hendrickson once said she prefers to work with skilled developers; when someone asked about regression testing, she said she didn't worry too much about it. Instead, she tries to test throughout the whole process, even though that can be more complicated.

Testing throughout the process may include rechecking features that are important, that have recently been touched, or that gain new use cases because of new features. That technique has a lot in common with regression testing; only, instead of testing for sixteen hours at the end of a two-week sprint, we do it for a half-hour a day throughout the sprint.

But how can this work?

Turning Regression Sideways

In graduate school we thought that regression testing at the end of the build was always necessary. After all, the smallest change could break something in the previous code; we needed to have the code complete before testing. If just one bug was found and fixed, we needed to retest everything, doing a “full regression test.”

What is wrong with that?

First, we never did complete testing in the first place; complete testing is not possible. We always did the best we could with the time we had. If an executive cut testing by a day or two, we would do the same thing we always did, plus create a list of untested things, highlighting risks and perhaps earning a slight deadline extension, especially if there were show-stopper bugs. It was always that way.

Also, when we were trying a “waterfall” process, we didn’t do full retests after the first round. The second round would likely be close in scope to the first, but in the third, fourth, and fifth rounds, we would become lazy, asking ourselves what the smallest amount of reasonable re-testing would be.

Cutting that big list of test ideas down to a few of the riskiest ones... why not do that all the time? Done right, we could separate the concept of re-testing the entire system from the release.

The thing we need to understand before reaching a conclusion is the role of modern test tooling.

What About Automation?

Some of my friends and co-workers like to automate a large number of checks, then run them all on every build. That’s fine; I am a great fan of Test Driven Development, which generates a regression-test suite at a very low, code level. Likewise, programmers who code to an interface may create integration tests. Trying to automate 100% of the GUI checking, on the other hand, leads to some tough questions.
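
To make that concrete, here is a minimal sketch of the kind of low-level check a test-driven workflow accumulates; the discount function and its rules are hypothetical, not from any project mentioned here. Every test like this, run on every build, becomes part of the regression suite for free.

```python
import pytest

# A minimal sketch of the kind of low-level regression check a TDD workflow
# leaves behind. The pricing function and its rules are hypothetical.

def apply_discount(price: float, percent: float) -> float:
    """Return the price after applying a percentage discount."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)

def test_apply_discount_basic():
    assert apply_discount(100.0, 10) == 90.0

def test_apply_discount_rejects_bad_percent():
    with pytest.raises(ValueError):
        apply_discount(100.0, 150)
```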

It might be a safer goal to try to automate 100% of the checks that will run on every release. Even then, that leaves us open to "black swan" problems: the kind that are hard to predict ahead of time but seem obvious in hindsight. Here are a few classic black swan risks that tooling won't catch:

  • Microsoft creates a new browser
  • Apple unveils a new use-case or form-factor for devices
  • A combination of features intersect in a way that makes the software more powerful
  • A new admin user type combines with an old feature to create a security issue
  • Comments about potential defects that seem related appear in the iTunes or Google Play store
  • Customer Support brings credible but hard to reproduce issues to your attention

Personally, I've had the best experience talking to other testers, customers, product owners, or programmers, asking what they are most concerned about and listening to their replies. What are the cases that need to work? What are the things that, if we can get a little more time, we should consider more carefully?

One company I worked with had a tool that listed all the web pages visited, stripping out the IDs, along with time to serve and the number of users in the past month. The fields were sortable, so you could easily bring up the most popular pages or the slowest pages.
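
Here is a rough sketch of how such a report might be assembled; the log format, row shape, and ID pattern are my assumptions for illustration, not the actual tool that company used.

```python
import re
from collections import defaultdict

# Hypothetical sketch: turn raw page visits into a sortable report of
# popularity and speed. The (url, response_ms) row format is assumed.
ID_PATTERN = re.compile(r"/\d+")

def normalize(url: str) -> str:
    """Collapse numeric IDs so /orders/123 and /orders/456 count as one page."""
    return ID_PATTERN.sub("/{id}", url)

def page_report(rows):
    times_by_page = defaultdict(list)
    for url, response_ms in rows:
        times_by_page[normalize(url)].append(response_ms)
    return [
        {"page": page, "visits": len(times), "avg_ms": sum(times) / len(times)}
        for page, times in times_by_page.items()
    ]

rows = [("/orders/123", 120), ("/orders/456", 310), ("/home", 45)]
report = page_report(rows)
most_popular = sorted(report, key=lambda r: r["visits"], reverse=True)
slowest = sorted(report, key=lambda r: r["avg_ms"], reverse=True)
```

Sorting by visits or by average response time answers the same question those sortable fields did: which pages are worth a regression charter this week.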

What It Looks Like

One team I worked with used sticky notes to create a list of risks for the release.

Each risk was written in a way that defined a charter for fifteen to thirty minutes of work. Then we voted on the risks, with each team member getting four dots to place on the tickets. Once we had the risks prioritized, we built a Kanban board.

This idea, of a sorted list of risks to be pulled from as we have time, is the core of continuous testing. It means the testers (or perhaps the whole team) are testing at the system level every day, for perhaps an hour or less.
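
As a sketch of that sorted, never-finished list, here is one way it could be represented; the charters, vote counts, and field names are made up for illustration and are not the team's actual board.

```python
from dataclasses import dataclass

# Hypothetical sketch of the prioritized risk backlog described above.
@dataclass
class Risk:
    charter: str        # fifteen to thirty minutes of testing work
    votes: int = 0      # dot votes from the team
    tested_on: str = "" # filled in when the charter is executed

backlog = [
    Risk("Recheck checkout flow after payment-gateway change", votes=4),
    Risk("New admin role combined with the legacy export feature", votes=3),
    Risk("Slowest five pages from last month's traffic report", votes=1),
]

def next_charter(risks):
    """Pull the highest-voted risk that has not been tested yet."""
    open_risks = [r for r in risks if not r.tested_on]
    return max(open_risks, key=lambda r: r.votes, default=None)

todays_charter = next_charter(backlog)  # test this for half an hour today
```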

The key is that this list never goes away. Instead, we add more and more test ideas. Releases can still be batched (management could choose to release every sprint if they want to), but in any case, the board will serve as documentation for what was tested and, if we are careful, when.

You can also call these 'test cases' and plan them with a tool. The important thing here is to give up on the idea of candidate builds and testing only for a release; instead, shift your time and interleave regression testing with feature testing.

If you can't do that, then the software project team has design and code issues to work on. That is not a testing problem. Still, setting continuous testing as an end-state goal might improve your quality immensely.

How good is your quality, and how good is your test/code team?

This is a guest post by Matt Heusser. Matt is the Managing Director of Excelon Development, with expertise in project management, development, writing, and systems improvement. And yes, he does software testing too.
