Applying Variability in Automated Tests

This is a guest post by Peter G. Walen

It has been observed more than once that “automated tests” follow a single path and will not necessarily find problems introduced since the last execution. This can be a fair criticism. What if we changed the rules?

One common criticism of test automation is that it will only exercise a single path through the code. This raises the question: what purpose is the automation in question intended to serve?

If the intent of your automated testing is to exercise a single scenario under specific conditions, then a “single path” is exactly what you need. When people speak of “test automation,” however, they often mean the entire collection of functional tests created to support the software since it was first built.

If the team creates 100 tests for a specific project impacting a particular piece of software, and 75 tests get created for a different project, many groups will claim there are 175 tests to execute as “regression.”

Aside from the logical fallacy implied there, one might instead look at the tests created for each project, ask whether they serve a meaningful purpose beyond that project, examine whether their intent would allow them to be combined with others, and attempt to streamline the process.

While the details will vary with context (including the environment, purpose, and the type and nature of the software under test), some ideas may help the many teams wrestling with ever-expanding test code and script libraries.

Basics

My working definition of “Automated/Automation Testing” is this:

Work that is done by a commercial or purpose-built tool to exercise software, freeing human knowledge workers from repetitive tasks and allowing them to focus on work done better by thinking humans than by a computer program.

There’s a lot in there, but the big thing is at the end: automation for testing is another piece of software. It takes a great deal of understanding of, and trust in, one piece of software to use it with confidence to exercise another piece of software.

In the hands of someone trained in software testing and evaluation, tests are designed, created, evaluated, and exercised to determine whether each test has actual value. The result may be a collection of tests, each exercising a different aspect of the product. Once there is a measure of confidence in what the tests are doing, the tests themselves are examined for patterns and overlapping areas among them.

At this point, there is a good set of tests that avoid redundancy and exercise crucial portions of the application, including areas of significance to the customers. Then decisions can be made around creating the automated test scripts and executables.


Defining the Core Tests

Purpose-built tests, created while the application is being developed, can give a good idea of expected functionality. These tend to be very precise, looking at specific aspects of the software under test.

Reviewing the existing automated regression tests reminds us of the crucial functions of the application before the most recent changes. Determining whether a test should be changed or extra regression scenarios added is then a straightforward exercise. Still, the team needs to examine whether the changes being introduced will impact core functions used by customers of the application. Can the new features be covered with the least amount of disruption to the existing tests and the least amount of effort?

These tests go beyond the straightforward tests developed in a TDD environment. They look at more than simple unit-of-work evaluation: at touchpoints, or integration points, within the specific application, where it touches other applications, and where there are dependencies among many applications in a system or systems.
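
As a rough illustration of that difference (the services, ports, and endpoints below are hypothetical, invented only for this sketch), a touchpoint-level test exercises the seam between two components rather than a single unit of work:

```python
import requests  # any HTTP client would serve equally well

def test_order_is_reflected_in_inventory():
    """Touchpoint check: placing an order through one (hypothetical)
    service should be visible through a second, dependent service."""
    order = requests.post("http://localhost:8000/orders",
                          json={"sku": "ABC-123", "qty": 1}).json()
    stock = requests.get("http://localhost:8001/inventory/ABC-123").json()
    assert order["status"] == "accepted"
    assert stock["reserved"] >= 1
```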

Expanding from the Core

Most human-driven test efforts will look for paths where the team can use the same basic tests with changes in variables, conditions, and application settings.

If there are common paths, with differences or variances introduced through the control parameters, the data drawn on for the test, or variables used to route the test through different paths of the code, and these can be exercised manually, is it possible to select them through code? Is it possible to set up the conditions that would allow the person launching the automated test, or the automated test itself, to choose which set of variables to exercise on each logical path?

This could be as simple as conditional logic within the code that is the “automation”: parameters submitted at the time the process runs determine the path through the automation code, and through the code under test.
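
As a minimal sketch of that idea (the scenario names, parameters, and the checkout flow itself are assumptions made up for illustration, not part of any particular product), a run-time argument can steer a single automated test down different logical paths:

```python
import argparse

# Hypothetical path definitions: each maps a scenario name to the
# inputs that route the software under test down a different branch.
SCENARIOS = {
    "guest_checkout": {"user": None, "payment": "card"},
    "member_checkout": {"user": "existing", "payment": "stored"},
    "gift_order": {"user": "existing", "payment": "card", "gift": True},
}

def run_checkout_test(params):
    """Drive the (hypothetical) application with the chosen inputs
    and verify the expected outcome for that path."""
    print(f"exercising checkout with {params}")
    # ... launch the application, feed it params, assert on results ...

if __name__ == "__main__":
    parser = argparse.ArgumentParser(description="Select a test path at launch")
    parser.add_argument("scenario", choices=SCENARIOS,
                        help="which logical path to exercise")
    args = parser.parse_args()
    run_checkout_test(SCENARIOS[args.scenario])
```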

For each parameter-controlled path flow, are there other conditions that might determine possible outcomes? Are there sub-branches that need examination in their own right? If so, is it possible to create a temp-table to track which paths have been executed and which have not yet been executed within the selected major path? Can we drive part of our testing not only to make sure each potential path is exercised, but also to examine combinations of how the software could be executed, and to track potential issues that would not be practical to cover with humans driving the main effort?
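
One way to picture that tracking (a sketch only; the table layout, path names, and sub-branches are assumptions): a small SQLite table records which combinations within a major path have been exercised, so the remaining ones can be chosen on the next run:

```python
import sqlite3
from itertools import product

# Hypothetical sub-branches within one major path.
MAJOR_PATH = "checkout"
SUB_BRANCHES = {"payment": ["card", "stored", "invoice"],
                "shipping": ["standard", "express"]}

conn = sqlite3.connect("path_coverage.db")  # persists across test runs
conn.execute("""CREATE TABLE IF NOT EXISTS executed_paths
                (major TEXT, payment TEXT, shipping TEXT,
                 PRIMARY KEY (major, payment, shipping))""")

def record_path(payment, shipping):
    """Mark one combination as exercised."""
    conn.execute("INSERT OR IGNORE INTO executed_paths VALUES (?, ?, ?)",
                 (MAJOR_PATH, payment, shipping))
    conn.commit()

def remaining_paths():
    """List combinations not yet exercised within the major path."""
    done = set(conn.execute(
        "SELECT payment, shipping FROM executed_paths WHERE major = ?",
        (MAJOR_PATH,)))
    return [combo for combo in product(*SUB_BRANCHES.values())
            if combo not in done]

record_path("card", "standard")
print("still to exercise:", remaining_paths())
```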

Granted, this is more complex than some might need or want to consider. The idea of using complex code to exercise other complex code might seem strange. That is what “automation” really is. The complexity of code will vary according to the nature of the systems under test.

Is it Realistic?

Can we expect “testers” to engage in this level of code development? Of course. Is this really that difficult to imagine?

Using critical “tester” thinking to assist in risk and vulnerability analysis, to drive the logic, and to reason about the paths through the code to be exercised is a huge part of determining how the “test automation” can be implemented.

Expect people to contribute to making things happen, find areas where they can contribute, encourage them to do so, and then let them.

By introducing conditions that can be selected by the operator or tester, we can define a single entity capable of exercising a combination of related scenarios in a controllable manner. With careful thought and intent, it can replace a myriad of “tests” that each exercise one and only one aspect.
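
A hedged sketch of what that single entity might look like, using pytest parametrization (the user types, locales, and login flow are illustrative assumptions): one test body, plus a table of selectable conditions, stands in for a pile of near-identical single-purpose scripts:

```python
import pytest
from itertools import product

# Hypothetical conditions an operator might select. Here every
# combination is generated; a command-line option or environment
# variable could just as easily narrow the set at launch time.
USER_TYPES = ["guest", "member", "admin"]
LOCALES = ["en_US", "de_DE"]

@pytest.mark.parametrize("user_type,locale",
                         list(product(USER_TYPES, LOCALES)))
def test_login_flow(user_type, locale):
    """One test body, many related scenarios."""
    # ... drive the (hypothetical) login flow for this combination ...
    assert user_type in USER_TYPES and locale in LOCALES
```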


Peter G. Walen has over 25 years of experience in software development, testing, and agile practices. He works hard to help teams understand how their software works and interacts with other software and the people using it. He is a member of the Agile Alliance, the Scrum Alliance, and the American Society for Quality (ASQ), an active participant in software meetups, and a frequent conference speaker.
