This is a guest post by Peter G Walen.
Various experts offer extensive lists of what testers do and what testing is all about. They tell us to write test plans and test cases, to find bugs, to verify conformance to requirements, to tell a story, and to add value.
What does any of this really mean? Have you ever read this and thought, “Something is not quite right?” Do we add value for the customers, for ourselves, for someone else, or because we’re told to? Can we do better work — no matter what type of environment we are in — to improve quality and add value for our customers? Can we be proactive in helping developers and designers prevent bugs in the first place?
A common view of testing — what some consider the “real” or “correct” view — is that testing validates behavior. Tests “pass” or “fail” based on expectations, and the point of testing is to confirm those expectations.
Introducing the notion of “Quality” into this conception of testing brings its own problems. The question of “Quality” is often tied to a “voice of authority.” For some people that “authority” is the late, near-legendary Jerry Weinberg: “Quality is value to some person.” For others, the “authority” is Joseph Juran: “Fitness for use.”
How do we know about the software we are working on? What is it that gives us the touchpoints to be able to measure this?
If you take a course in software testing, read a book, or watch videos on “how to test” or what “good testing” is, you will find a vast array of opinions and contradictory views — in books and articles, online and in print, on Reddit and other sources offering competing ideas.
When you compare current views on software testing to those commonly held 15 or 20 years ago, you will likely find that the core views haven’t changed significantly. Some of the arguments may have become more nuanced, but they are not fundamentally different.
There are reasons for this. At the core, however, lies a thick mud of unclarity. This can come from many sources including what the very word “test” implies to people. So, why do we test?
Bugs & Quality
I’ve asked testers, developers, and managers the question, “Why do we test?” many times.
The common, almost automatic answers include “find bugs” or “improve quality.” These seem reasonable at first, but let’s take a deeper look.
We can find bugs in any piece of software if we look hard enough. This includes software currently in production, software available today for a price, and free software that is downloaded and used by millions of people every day.
Was this software tested? It depends. In some instances, it was tested to whatever standards the company releasing it set. In others, it likely received only light testing, nothing rigorous. If there is a problem with a Google app or with Facebook, a fix can usually be pushed out in a matter of hours – and since people typically don’t pay to use that software, it likely was not tested very much at all. Maybe some automated smoke tests ran when the build finished.
Other software, which people have paid for, sometimes a lot of money, for use in business or work life, might receive more testing. Software in medical devices, aircraft, automobiles, and other applications that can literally be “life or death” will be tested more rigorously. Much the same goes for software dealing with personal records or information.
Still, all software has bugs. That is a given.
The question, “why do we test?” will not be suitably answered with, “to find bugs.” That is one aspect, but not the biggest piece.
To improve quality? That is another common answer. But by itself, testing does not and cannot improve quality. What has been found must be communicated, and someone must act on those results. We might find significant bugs and report them; they get corrected, and after that the tests pass.
Have we improved quality? Or does the software now meet the minimum level of expectation?
Still, the concept of “quality” is central to what we do. Testing, by itself, may not be able to “improve quality,” but it can certainly measure the behavior of the software and inform interested parties about how closely it meets expectations.
Adherence to Requirements
Requirements are an interesting thing. The biggest problem I have seen is that people often treat requirements as a fixed point: a single oracle of truth that must never be questioned. When they consist predominantly of buzzwords and acronyms, there tends to be a level of ambiguity, and that ambiguity leaves the potential for misunderstanding and misinterpretation.
This often stems from failing to ask clarifying questions about the requirements. If we don’t say, “I don’t understand what you mean here. What do you intend?” the chance of having the software designed and written to meet those needs, those expectations, is pretty slim.
To consider “adherence to requirements,” everyone must agree on what the requirements mean. In some environments, this leads to precise discussions around small aspects of how the system behaves, the understanding of how the change will impact it and what the expected behavior will look like.
Without that level of understanding, there is no way to gain any certainty that the “requirements” have been met until the software is completed. By then it is often too late to fix variations without completely missing the delivery target.
If testing isn’t about “finding bugs,” or “improving quality,” or checking “adherence to requirements,” then why do we test?
Perhaps the answer lies in something that people tend to overlook. Complex questions, or problems, usually do not have a single, simple answer. In this case, the reasons why software might be tested are varied and can be complex.
We might test for a “sanity check.” We might test to find paths for a demo at a trade show or conference. We might test to make sure standard or core functionality continues to behave as expected.
These are all different paths: models for considering how the software behaves under certain circumstances. These models serve a common purpose. Each model tells us something about the software.
When we test any of these models and find unexpected or incorrect behavior, it tells us something. When we find some aspect of the software that does meet expectations or requirements, that tells us something. When we see how the software behaves in unexpected circumstances, that tells us something as well.
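To make the “core functionality continues to behave as expected” model concrete, here is a minimal smoke-test sketch. The `checkout_total` function and its expected values are hypothetical stand-ins, not from any real system; the point is the shape of the checks, including probing behavior outside the happy path:

```python
def checkout_total(prices, discount=0.0):
    """Hypothetical function under test: sum prices, apply a discount."""
    subtotal = sum(prices)
    return round(subtotal * (1 - discount), 2)


def test_core_behavior():
    # The "sanity check" model: core functionality keeps working.
    assert checkout_total([10.00, 5.50]) == 15.50


def test_unexpected_circumstances():
    # Probing unexpected circumstances also tells us something:
    # an empty cart and a 100% discount are legal but unusual inputs.
    assert checkout_total([]) == 0.0
    assert checkout_total([20.00], discount=1.0) == 0.0


if __name__ == "__main__":
    test_core_behavior()
    test_unexpected_circumstances()
    print("smoke tests passed")
```

Passing checks like these do not prove the software is good; they are one model among many, each revealing a different slice of the software’s behavior.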
This, perhaps, gets to the answer to the question of “why test software?”
We test software so we can become aware of its behavior. When we are aware of its behavior, we can be informed of the vulnerabilities of the software, and how the software behaves in unexpected ways, both good and bad.
When we are aware and informed of how the software behaves, we can then provide accurate and meaningful information to stakeholders and leadership.
That is why we test software. And testers are the ones who uncover this information and make it available.
Peter G. Walen has over 25 years of experience in software development, testing, and agile practices. He works hard to help teams understand how their software works and interacts with other software and the people using it. He is a member of the Agile Alliance, the Scrum Alliance, and the American Society for Quality (ASQ), an active participant in software meetups, and a frequent conference speaker.