Does It Work?

This is a guest post by Peter G. Walen.

I think the question I dread most is “Does the software work?” It is a question testers get asked on a fairly regular basis, and it often comes with loads of unstated questions, as well as angst. In this article, we’ll look at what might really be getting asked.

Why does this particular question make me pause and think before answering? It is partly because of the various views and understandings of the people asking. Also, like many people in software and testing, I’ve been burned.

If you answer based on your understanding of how the application is supposed to behave, you may have a totally different impression from the person asking the question. People working on the project may frame the question around requirements: have they been met? Does the product address each requirement?

Requirements

This can be a reasonable starting point. What I have learned is to look at the requirements and be certain I am aware of what they say, and importantly, what they don’t say. Let’s assume that there are five requirements asserting “The software will…” These give very specific directions around specific conditions. But what happens if the conditions are not met? Can a data combination exist that is not covered by the documented requirements? If so, how should that be handled?
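To make that concrete, here is a minimal sketch in Python. The discount rule and apply_discount() are invented purely for illustration; the point is writing a test for a condition the documented requirements are silent about, so the team has to decide whether the observed behavior is intended.

```python
# A minimal sketch, assuming a hypothetical discount rule. Suppose the
# documented requirements cover order totals from 100 to 1000 but say
# nothing about totals above that range.
def apply_discount(total: float) -> float:
    """Hypothetical rule: 10% off totals of 100-1000; the spec is silent above 1000."""
    if 100 <= total <= 1000:
        return total * 0.9
    return total

def test_undocumented_condition_has_a_deliberate_answer():
    # No requirement covers 1500. This test records what actually happens,
    # so the team can decide whether that behavior is intended.
    assert apply_discount(1500) == 1500
```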

Are there possibly contradictory combinations? If X and Y are “true,” then Z must be “false.” What is controlling those values? If Z is “true,” can something happen to X and Y to make them both “true”? Does the state of both being true force Z to be false? Does your testing look at each possible combination?
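One way to answer that last question is to enumerate every combination mechanically. The sketch below assumes a hypothetical evaluate() function embodying the rule “if X and Y are true, then Z must be false”; a real system would have its own inputs and invariant.

```python
# A minimal sketch of checking every combination, not just the likely ones.
from itertools import product

def evaluate(x: bool, y: bool) -> bool:
    """Hypothetical logic that derives Z from X and Y."""
    return not (x and y)

def test_invariant_holds_for_every_combination():
    for x, y in product([True, False], repeat=2):
        z = evaluate(x, y)
        if x and y:
            assert z is False, f"X={x}, Y={y} must force Z to be false"
```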

Requirements that are vague or unclear make it challenging, if not impossible, for testers to evaluate software. How do you evaluate a requirement like “the software will respond quickly”? Are there standards the team or organization has in place to follow? If not, what does “fast” or “quickly” mean? Can a tester even model the behavior customers would experience? Can you extrapolate performance in your test environment to what performance would be in the wild?
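If the team does pin “quickly” down to a concrete number, the requirement becomes testable. The sketch below assumes an agreed budget of 200 milliseconds (an invented figure) and a placeholder handle_request() operation; keep in mind that a timing measured in a test environment may not extrapolate to production.

```python
# A sketch assuming the team has pinned "quickly" to a concrete budget.
# Both the 200 ms figure and handle_request() are invented; substitute
# your own agreed threshold and operation under test.
import time

RESPONSE_BUDGET_SECONDS = 0.2  # the team's agreed meaning of "quickly" (assumption)

def handle_request() -> str:
    """Placeholder for the real operation being timed."""
    return "ok"

def test_response_within_agreed_budget():
    start = time.perf_counter()
    handle_request()
    elapsed = time.perf_counter() - start
    assert elapsed < RESPONSE_BUDGET_SECONDS, (
        f"took {elapsed:.3f}s, budget is {RESPONSE_BUDGET_SECONDS}s"
    )
```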

The problem I’m dancing around here is that requirements can be a good place to start, but not to end. Too many times, development teams, including testers, do not delve into the intent or need behind the requirements. Requirements need to be the springboard that starts conversation and consideration.

Expectations

If we can say the requirements have been met, at least to some level, there is something else we can look at. Have the expectations of the people who use the software been met? Have we addressed the needs of the people who will be working with the software on a day-to-day basis?

Requirements often come from people who are not involved in actually using the software or doing the task the software is intended to support. They understand the work at some level, often a meta-level, but the daily nuts and bolts, the sequence of what can be done or what needs to be done, are often misunderstood, if not a mystery.

Can we make sure the software fits these needs? Can we emulate the tasks being done by the people who need to use the application? If so, then, using the requirements as a general guide, we can model the behavior of the people using it.

This gives us a broader range of possible test scenarios. We can set up models to work through combinations of requirements and expectations. We can look at how other pieces of software may interact with what we are developing.
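As a sketch of what such a model might look like, the test below walks through one everyday user sequence rather than checking requirements in isolation. The InvoiceApp class and its steps are hypothetical stand-ins for the system under test.

```python
# A sketch of a scenario test that follows one everyday user sequence.
# The point is ordering the test the way the work is actually done.
class InvoiceApp:
    """Stand-in for the application being tested."""
    def __init__(self):
        self.invoices = []

    def create_invoice(self, customer: str, amount: float) -> dict:
        invoice = {"customer": customer, "amount": amount, "sent": False}
        self.invoices.append(invoice)
        return invoice

    def send_invoice(self, invoice: dict) -> None:
        invoice["sent"] = True

def test_clerk_creates_and_immediately_sends_an_invoice():
    app = InvoiceApp()
    invoice = app.create_invoice("ACME", 100.00)  # the clerk's first step
    app.send_invoice(invoice)                     # then the send, right away
    assert invoice["sent"], "the everyday sequence should leave the invoice sent"
```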

How Can This Fail?

Testing to show that something “works” is straightforward. It is the crux of what most people do. It generally consists of the least complicated scenarios, the most common combinations of data values, and the simplest configurations.

The opposite question, “How can this fail?”, is far more complex. It is an examination of risk, and risk analysis is often glossed over because it is poorly understood.

One exercise I have used fairly often, and later found that Elisabeth Hendrickson uses a similar one, is to consider the worst possible headline that could be written about the software in question. Then test to make sure that headline cannot happen. When that disaster scenario is accounted for, repeat the exercise with a new “disaster” headline. The object is to determine how you would test your software to avoid each of the disaster scenarios.
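Here is a minimal sketch of the exercise. Suppose the worst headline is “App exposes one customer’s data to another.” The AccountStore below is a hypothetical stand-in; a real test would target your actual data access path.

```python
# A sketch of the "worst headline" exercise: assert the disaster cannot happen.
class AccountStore:
    """Stand-in store keyed by user; a real system would query a database."""
    def __init__(self):
        self._data = {"alice": ["alice-txn-1"], "bob": ["bob-txn-1"]}

    def transactions_for(self, user: str) -> list:
        return self._data.get(user, [])

def test_headline_customer_sees_anothers_data_cannot_happen():
    store = AccountStore()
    # Alice's view must never contain Bob's transactions.
    assert "bob-txn-1" not in store.transactions_for("alice")
```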

Failure may not make the national or even the local news, but it can impact your customers. This exercise can help you focus on the vulnerabilities you have not considered while testing either requirements or user expectations.

Does It Work?

To answer that question, we need an understanding of the person asking it. We can understand their expectations. We can understand the requirements as documented. If we have engaged in broader conversations, we can understand the implied requirements.

Importantly, and this is often overlooked, we can understand how the software could go wrong. If we can understand these things, we can avoid the trap of a “yes or no” answer and engage in a meaningful discussion about how the software behaves.


Peter G. Walen has over 25 years of experience in software development, testing, and agile practices. He works hard to help teams understand how their software works and interacts with other software and the people using it. He is a member of the Agile Alliance, the Scrum Alliance, and the American Society for Quality (ASQ), an active participant in software meetups, and a frequent conference speaker.
