Moving from Manual to Automated Tests

This is a guest post by Peter G. Walen.

Many organizations want the benefits of automated testing. They believe that the “next step” in testing is to build automated versions of what they manually run. This seems logical, but is it going to give them the improvements they want or need?

The Challenge of Automating Manual Tests

Recently I was approached by three different people with very similar problems. All of them work in organizations that are trying to “go Agile” and for their leadership, this means “Automate All the Things.”

When I talked with them at a local coffee shop, they each described very similar problems. The causes of these problems also seemed very similar. Some arose from how the manual test cases were developed. Some were the result of taking “functional” tests written when features were developed or modified and moving them, as written, into the “regression suite,” where they joined other tests with similar origins. Others were simple “happy path” confirmation scripts, intended to make sure some piece of functionality continued to run.

Some tests seemed to be inherited legacy scripts whose purpose no one could recall and whose workings the people now using the software did not understand. All three people described almost exactly the same problem: loads of manual tests they were expected to run on a regular basis (weekly, monthly, per release), and not enough time both to run them and to do good testing of the new development on which they were “supposed” to be spending much of their time.

One of them, a friend who is a fairly new test lead, complained that she barely had time to get her team to run through the tests on a single OS/platform combination, let alone the plethora of combinations they were supposed to support. Her “solution” was to run the suite on one OS/platform combination each time, and the next time to run it on a different combination.

I was not sure this was doing what she hoped it would. I was gratified to hear that she was certain it was not, but that it was “the best her team could do” given the constraints they had to work with.

The obvious solution was “automate the tests so they can be run more efficiently.” This seems like a reasonable approach, and I said so to each of them. Then I added a caveat: “As long as you have some idea what the purpose of the test is. What do you expect this test or script to tell you?”

Many organizations have huge test suites (functional, regression, performance, whatever) and will boast about how many test cases or suites they have and run on a regular basis.

Automating those suites can be very problematic. The crux of the issue is “Why are we running this test?” We can have a general idea about what we want to check on a regular basis. We can also focus on the core functions of the software working under specific conditions. Identifying those conditions takes time and effort. Simply falling back on “this is how we always do it” may seem reasonable at first, but to me it is often a clue to underlying problems.


Evaluating Tests: Why Does This Test Exist?

When I’m looking at how to test something, whether that is testing new functionality, verifying that existing functionality has not been impacted by recent changes, or examining the performance or security of the software, the core question I try to address with each test is: “What can this test tell me (or the product team) that other tests cannot, or are unlikely to, tell me?”

Many times I’ve seen loads of tests added to test suite repositories that are simply clones of scenarios already in the repository. These tests are calling out to be reviewed, refined, trimmed, and perhaps pruned to keep the full test suite repository as relevant and as atomic as possible. One simple way to surface such candidates is sketched below.
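As a hedged illustration (none of this tooling comes from the article), here is one way to flag candidate clone tests for human review: normalize each test’s steps and group the exact matches. The suite data and the normalization rules are hypothetical.

```python
from collections import defaultdict

def normalize(steps: list[str]) -> tuple[str, ...]:
    # Lowercase and collapse whitespace so trivially reworded steps compare equal.
    return tuple(" ".join(s.lower().split()) for s in steps)

def find_clones(suite: dict[str, list[str]]) -> list[list[str]]:
    # Group test names by their normalized steps; groups of 2+ are clone candidates.
    groups = defaultdict(list)
    for name, steps in suite.items():
        groups[normalize(steps)].append(name)
    return [names for names in groups.values() if len(names) > 1]

# Hypothetical repository excerpt:
suite = {
    "TC-101 login happy path": ["open login page", "enter valid creds", "submit"],
    "TC-487 verify login":     ["Open Login Page", "enter valid creds", "submit"],
    "TC-233 logout":           ["log in", "click logout", "verify login page"],
}
print(find_clones(suite))  # [['TC-101 login happy path', 'TC-487 verify login']]
```

A real review would still need a human to decide which of the flagged tests to keep, merge, or prune; the point is only to narrow the list worth looking at.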

Having redundant tests for variances in environment, platform, or OS may seem thorough, but are people really using them as intended? I have worked in environments where that seemed reasonable, and we abandoned the practice very quickly: changing one step in a process usually meant updating multiple scripts that were effectively identical. Parameterizing the environment, as sketched below, is one way out of that trap.
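Here is a minimal sketch, assuming pytest; the platform list and the normalize_path helper are hypothetical stand-ins for a real cross-platform check. One parameterized test replaces a set of effectively identical per-platform scripts, so a change to the flow is made in exactly one place:

```python
import pytest

# Hypothetical stand-in for functionality whose behavior varies by platform.
def normalize_path(path: str, platform: str) -> str:
    # Convert all separators to the platform's preferred one.
    sep = "\\" if platform.startswith("windows") else "/"
    return path.replace("/", sep).replace("\\", sep)

# One test body, run once per platform, instead of N near-identical scripts.
@pytest.mark.parametrize("platform", ["windows-11", "macos-14", "ubuntu-22.04"])
def test_normalize_path(platform):
    sep = "\\" if platform.startswith("windows") else "/"
    assert normalize_path("a/b\\c", platform) == f"a{sep}b{sep}c"
```

Each platform shows up as its own line in the test report, so per-configuration results are still tracked, without the maintenance cost of duplicated scripts.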

This sort of made sense in the late 1980s and early 1990s (when I was first wrestling with this). Back then, the hack was to track each script’s execution and result in a spreadsheet with every possible configuration listed. Now there are much better options (ahem: Ranorex, for example).

The interesting thing is that those types of tests, the redundant ones intended to be exercised in multiple environments, at least exist with some understanding of what they are for. Oftentimes, other tests are run simply because they are on the list of scripts to be run.

What About the “Automate Everything” Approach?

My concern with the “automate everything” idea, at least when it comes to regression tests, is that the same level of thoughtless conformance will exist among the people writing the code to do the automation. No one will ask why these tests are needed. No one will ask what the differences between the tests are. No one will understand what it is they are “automating.” Finally, and very probably (at least in the instances where I’ve seen the “automate everything” model implemented), no one will ask any of these questions for a long, long time after the “automated testing” is implemented.

When looking at functional testing, the testing done to exercise new or modified code, I’ve seen many instances where the people doing the work have an attitude approaching “check the box.” They write a quick test in the platform of choice without considering what it is they are doing, or why. Many will look for a simple happy path to automate, without looking at potential areas of concern. When asked, their responses tend to focus on “main functionality” rather than “edge cases.”
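To make the gap concrete, here is a hedged sketch of how little a single happy-path check covers compared to the edge cases around it. The apply_discount function is a hypothetical stand-in for real functionality; the tests assume pytest.

```python
import pytest

# Hypothetical stand-in for the functionality under test.
def apply_discount(price: float, percent: float) -> float:
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    if price < 0:
        raise ValueError("price must be non-negative")
    return round(price * (1 - percent / 100), 2)

def test_discount_happy_path():
    # The one case a "check the box" suite often stops at.
    assert apply_discount(100.0, 10) == 90.0

def test_discount_edge_cases():
    # Boundaries and invalid input, where odd behavior tends to hide.
    assert apply_discount(100.0, 0) == 100.0   # no discount
    assert apply_discount(100.0, 100) == 0.0   # full-discount boundary
    assert apply_discount(0.0, 50) == 0.0      # zero price
    with pytest.raises(ValueError):
        apply_discount(100.0, 101)             # percent out of range
    with pytest.raises(ValueError):
        apply_discount(-5.0, 10)               # negative price
```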

Yet odd behavior is not often found on the simple, “happy” path. In my experience, exercising the application with an attitude of “what happens if this should occur…” leads to unusual results. Sometimes they have value to reproduce and add to the manual or automated test suites. Sometimes they do not. It is in these cases, the ones that take careful consideration of how to create the scenario, set up the environment, and define the sequence of events to exercise the behavior, that the greatest value is found.

Recommendations

To make any form of testing meaningful and valuable to the organization, thoughtful consideration is needed. Priority should go to the tests that would provide the greatest amount of information, and the most valuable information. The significance of the information produced can, and should, drive the decisions around which tests to exercise at all, let alone which are worth the time to write automated scripts to carry forward.

Once created, test suites must be reviewed periodically to make sure the tests present are still relevant and fulfill the purpose they were intended to serve.

Do I think automated testing is important? Yes. Absolutely. I find it invaluable in many, many scenarios. Using the right tool for the purpose at hand is critically important if you are to have any level of confidence in the results.

Make sure your automated tests make sense. Make sure you are using automated testing only for those things that the tools at hand are capable of testing reliably.


Peter G. Walen has over 25 years of experience in software development, testing, and agile practices. He works hard to help teams understand how their software works and interacts with other software and the people using it. He is a member of the Agile Alliance, the Scrum Alliance, and the American Society for Quality (ASQ), an active participant in software meetups, and a frequent conference speaker.
