7 Types of Testing Beyond What Customers See

This is a guest post by Cameron Laird.

Testers must ensure that the functionality customers see is correct. However, too many organizations — and at least a few testers — still believe that’s all they can do, and they miss many other opportunities to contribute to software quality.

Here are seven easy wins testers can score beyond visible functionality.

1. Access

Developers typically optimize their development environment for, well, development. For a web application, for instance, this might mean operating entirely within Firefox, because the individual programmer has configured their workflows for Firefox.

When the application reaches quality assurance (QA), though, the situation is entirely different. The application needs to work not only with Firefox, but also with Chrome, Safari, Opera and other browsers, with a range of configurations of each browser, with each browser under various resource-starved circumstances, and so on. While no single customer uses all possible browsers, typical products or services are required to operate well across a wide range of them. Therefore, QA has a responsibility to test across that range.

Happily, help is available to meet this responsibility. Several tools can emulate or drive a range of browsers in various ways. A tester can use such a tool to accumulate results about behavior across the whole range far more quickly and conveniently than by maintaining and launching each browser independently.
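As one illustration, here is a minimal sketch of a cross-browser smoke check using Playwright’s Python bindings, one of several tools that can drive multiple browser engines from a single script; the URL and expected page title are placeholders rather than details of any particular product.

```python
# A minimal cross-browser smoke check; Playwright bundles Chromium, Firefox
# and WebKit engines, so one script can exercise all three.
from playwright.sync_api import sync_playwright

BROWSERS = ["chromium", "firefox", "webkit"]

def check_homepage(url: str, expected_title: str) -> dict:
    results = {}
    with sync_playwright() as p:
        for name in BROWSERS:
            browser = getattr(p, name).launch(headless=True)
            page = browser.new_page()
            page.goto(url)
            # Record whether each engine rendered the expected title.
            results[name] = expected_title in page.title()
            browser.close()
    return results

if __name__ == "__main__":
    # Placeholder URL and title, for illustration only.
    print(check_homepage("https://example.com", "Example Domain"))
```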

Also of interest in this domain is Lighthouse, a tool from Google for judging the quality of web pages in several dimensions — including accessibility, in the sense of conformance to standards that help make a page readable by the visually impaired. Browsers specifically for the blind deserve inclusion in full-blown testing, and Lighthouse and related tools help do exactly that.
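To stay in one language with the sketch above, here is a hedged example of invoking the Lighthouse command-line tool (installed separately via npm) from Python and pulling out the accessibility score; the flags and report layout reflect current Lighthouse releases and may differ in yours.

```python
# Run Lighthouse against a URL and report its accessibility score.
import json
import subprocess

def accessibility_score(url: str) -> float:
    subprocess.run(
        [
            "lighthouse", url,
            "--only-categories=accessibility",
            "--output=json",
            "--output-path=report.json",
            "--chrome-flags=--headless",
        ],
        check=True,
    )
    with open("report.json") as f:
        report = json.load(f)
    # Lighthouse expresses each category score on a 0-1 scale.
    return report["categories"]["accessibility"]["score"]

if __name__ == "__main__":
    print(accessibility_score("https://example.com"))
```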

2. Interactions

A second kind of testing for functionality beyond what a typical end-user usually sees involves interactions between distinct elements.
For example, how should an e-commerce site behave when a valid credit card expires over time? This isn’t an error response; everything the user entered was correct at the time. It is an interaction with the passage of time, though, and it demands a specific response.
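A sketch of how such a time-dependent interaction might be pinned down in a test follows; renew_subscription() is a hypothetical function under test, and real suites often freeze the clock with a library such as freezegun rather than passing dates explicitly.

```python
# Verify that a card which was valid at signup but has since expired leads to
# a polite "update your card" prompt rather than a generic failure.
from datetime import date

import pytest

from billing import renew_subscription  # hypothetical module under test

CARD = {"number": "4111111111111111", "expires": date(2025, 6, 30)}

@pytest.mark.parametrize("renewal_date, expected", [
    (date(2025, 6, 1), "charged"),             # still valid: normal renewal
    (date(2025, 7, 1), "card_update_needed"),  # expired since signup
])
def test_renewal_handles_card_expiry(renewal_date, expected):
    assert renew_subscription(CARD, today=renewal_date) == expected
```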

Another interaction most users never see is how to respond when different end-user entries suggest confusion. What does it mean when a 5-year-old requests mortgage bids? What should it mean? Is the business value of defining such responses high enough to be worth QA’s time to specify and verify them?


3. Exception handling

Handling overt errors is just as important, of course. Most users focus on entering correct data and receiving corresponding results, but poorly handled errors can be costly to the user experience.

A vague diagnostic that a proposed password “doesn’t conform to site policy,” for instance, is likely to cause far more frustration and confusion than a specific diagnostic explaining that the same password simply isn’t long enough to meet the site’s requirements. Error diagnostics have great potential to help users feel assisted and protected, rather than victimized, by an application.
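One way to make this expectation testable is to assert that each diagnostic names the specific rule the user broke; validate_password() below is a hypothetical function under test, and the rules are placeholders.

```python
# Assert that password errors are specific, not a vague "policy" complaint.
import pytest

from accounts import validate_password  # hypothetical module under test

@pytest.mark.parametrize("password, expected_fragment", [
    ("abc", "at least 12 characters"),        # too short: say so explicitly
    ("abcdefghijkl", "at least one digit"),   # missing a digit: name the rule
])
def test_password_errors_name_the_broken_rule(password, expected_fragment):
    error = validate_password(password)
    # A bare "does not conform to site policy" message would fail this check.
    assert expected_fragment in error
```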

4. Capacity planning

What happens when load exceeds planned capacity? Does the system degrade gracefully? While we hope that our systems always have capacity in reserve, part of ensuring this is to understand and plan for their behavior when they don’t.

Ideally, a system might generate a warning on the order of “Reserve mass storage is designed never to dip below 22% of total usage, but current reserve is only 14%.” This gives operators an opportunity to intervene before any customer runs out of space.
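Here is a minimal sketch of the check behind that kind of warning, using the 22% design threshold from the example; how the current reserve figure is obtained will depend on your monitoring stack.

```python
# Emit an operator-facing warning when storage reserve dips below design.
from typing import Optional

DESIGN_RESERVE_PERCENT = 22

def storage_reserve_warning(current_reserve_percent: float) -> Optional[str]:
    if current_reserve_percent < DESIGN_RESERVE_PERCENT:
        return (
            f"Reserve mass storage is designed never to dip below "
            f"{DESIGN_RESERVE_PERCENT}% of total usage, but current reserve "
            f"is only {current_reserve_percent:.0f}%."
        )
    return None

assert storage_reserve_warning(14) is not None  # the scenario described above
assert storage_reserve_warning(30) is None      # healthy headroom: no alarm
```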

5. Localization

Another variation on these themes: How does an application respond when configured for different locales? Most end-users will spend little or no time varying their locale; the results they get for their own locale need to be right, though, so testing across the whole range is likely necessary.
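As a sketch of what locale-spanning checks can look like, the snippet below runs the same formatting calls across several locales with Babel, a common internationalization library; exact output strings depend on Babel’s CLDR data, so it checks only coarse properties and prints the rest for review.

```python
# Exercise the same date and number formatting across several locales.
from datetime import date

from babel.dates import format_date
from babel.numbers import format_decimal

for locale in ("en_US", "de_DE", "fr_FR", "ja_JP"):
    formatted_date = format_date(date(2024, 1, 5), format="medium", locale=locale)
    formatted_number = format_decimal(1234.5, locale=locale)
    # Every locale must produce non-empty output; exact strings are reviewed
    # by a human or against golden files maintained per CLDR release.
    assert formatted_date and formatted_number
    print(f"{locale}: date={formatted_date!r} number={formatted_number!r}")
```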

6. Equity

Good user interaction is not only correct, but also consistent, predictable and uniform. An application that’s much slower for the common surname Smith than for the rare one Fernsby — or vice-versa — might trouble users. Good software meets humans’ expectations for equity, even though those users might not be able to articulate exactly what it takes to make an application feel “fair” or “unfair.”
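One rough way to quantify that kind of fairness is to time the same operation with different inputs and flag large disparities; the /search endpoint and the threefold tolerance below are hypothetical, chosen only to illustrate the idea.

```python
# Compare response times for a common and a rare surname and flag big gaps.
import time

import requests

def timed_lookup(surname: str) -> float:
    start = time.perf_counter()
    requests.get(
        "https://example.com/search",  # hypothetical endpoint
        params={"surname": surname},
        timeout=10,
    )
    return time.perf_counter() - start

common = timed_lookup("Smith")
rare = timed_lookup("Fernsby")
# Flag the run if one lookup is more than three times slower than the other.
assert max(common, rare) <= 3 * min(common, rare), (common, rare)
```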

7. Compliance

Users are unlikely to know or care whether software conforms to license requirements of third-party inclusions. But QA might be in a good position to verify those requirements and head off potential litigation or other penalties.

A variety of security and compliance scanners (such as Kiuwan) are available to help in this area. Similarly, tests for compliance with style guides, coverage thresholds and related software engineering metrics are also candidates for help from QA.
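As a small, standard-library-only sketch of the idea, the snippet below inventories the licenses declared by installed Python dependencies and flags anything outside an approved list; the approved set is a placeholder for whatever your legal or compliance team actually allows, and dedicated scanners do this far more thoroughly.

```python
# List installed Python packages whose declared license is not pre-approved.
from importlib import metadata

APPROVED = {"MIT", "BSD", "Apache-2.0", "Apache Software License"}  # placeholder

def license_report():
    flagged = []
    for dist in metadata.distributions():
        name = dist.metadata.get("Name", "unknown")
        declared = dist.metadata.get("License") or "UNKNOWN"
        if declared not in APPROVED:
            flagged.append((name, declared))
    return flagged

for name, declared in license_report():
    print(f"review needed: {name} ({declared})")
```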

Conclusion

Most of what testers test is overt functionality. But the same testing skills also apply to many requirements that are beyond what customers ever see. QA would do well to make all those different dimensions of validation explicit and to manage them effectively.


Cameron Laird is an award-winning software developer and author. Cameron participates in several industry support and standards organizations, including voting membership in the Python Software Foundation. A long-time resident of the Texas Gulf Coast, Cameron’s favorite applications are for farm automation.
