Old Testing Must Die

This is a guest post by Peter G Walen.

In 2011, Alberto Savoia proclaimed “Test Is Dead” at GTAC. His presentation was, to put it mildly, controversial. The problems he raised still exist. People still do the things that make testing slow and expensive compared to what it could be. Why are so many companies still doing them?

Alberto Savoia gave “Test Is Dead” as the opening keynote for the 2011 Google Test Automation Conference (GTAC). He presented, among other things, an argument contrasting an “Old Testamentality” of testing with a “New Testamentality.” Social media among software testing “experts” exploded.

Loads of people came down hard on him and on James Whittaker, who introduced Savoia. They denounced him as a sham and his message as wrong and misguided. Many dismissed his talk and assertions as biased, at best.

Here’s a link to the actual talk for you to decide if this criticism is valid.

There is much in that talk worth considering. What I found interesting is the emphasis on building a product, albeit a sloppy one, and pushing it out to see if there is any real demand for it. In some circumstances, this makes perfect sense. Before investing many thousands of dollars into a product, getting a generally working prototype in front of people might be a decent idea.

If you think a bit, I bet you can recall loads of products that launched with great fanfare and faded into the ether almost immediately. As far as that goes, it seems reasonable to consider if it is worth a company’s time and effort to perfect a product before finding out if anyone wants to buy or use it.

This is a fairly common approach in many industries – the most obvious one people may have encountered first hand is food service and restaurants. Dishes that seem good to the chef and staff will be developed, then floated as a “special” for a short time. If the demand is there, the dish likely gets a bit of polishing, the preparation instructions are distributed broadly, and the whole kitchen crew is trained in making it instead of just a few people. It then becomes a regular menu item.

Are we the only industry that sees an issue with this? Or is it our own sense of self-preservation that made, and continues to make, so many people scream in protest about this idea?

But, Testing

I suspect what had so many people upset was how Savoia dismissed what he called “Old Testamentality” in software. He described this as Requirements leading to Specifications, leading to Design, leading to code and testing. When coding finished, testers tested the code.

Requirement documents and Specification documents were paramount. They drove Design documents and the Test Strategy document(s) and from that Test Plans, with Scripts, Suites, Scenarios, and Cases. The expectation was, of course, that test planning was based on those other documents.

Any scenario not covered by those other documents was impossible because the documents were perfect and covered everything. Variation was not possible. Ever.

Of course, Requirements got defined and pulled together and reviewed and modified and reviewed and modified again. After a while, they would be “agreed to.” In some places I worked, that simply meant the people trying to describe what they wanted the software to do gave up arguing with the IT people who kept insisting it was a bad idea and would not help them get their work done.

The Requirements would be translated into a formal document and signed. Then copies of this got sent to the Specifications team, who developed Specifications based on the Requirements. The Specifications step was complete when all the Requirements were converted to Specifications or the time ran out, whichever came first. These then would be sent out to the Design team.

The Designers would look at the Specifications and build a Design which reflected the Specifications. Sometimes they would go back to the Requirements document and make sure their Design still matched. Then the Design would be printed as a document and sent on for people to write code and build the Test Strategy and Plan.

When the Test Plan was complete, along with the Test Suites and Scenarios and Cases, the test team would wait for the code to finish. And wait. And wait.

After some time, usually after the scheduled code completion date, the code would be completed and able to be tested. There would be much celebration, with pizza and t-shirts to mark the landmark achievement. Lots of work had been done and the project was the best project ever. And the code was only a little late, considering how complex it was. Testing could begin.

And for that reason, the testing team was not at the party with pizza and t-shirts. They were testing. Because their test time was seriously impacted by the code being so late.

So, Testing?

I’ve worked in organizations following paths very similar to this. I can recall four very carefully documented work models and approaches. I still have copies of the documentation explaining the work models and the great benefits from this “new” model.

There were stage gates and checkpoints and signoffs required for each work item, including test plans and scripts. Any variation from the signed-off, agreed-upon path required heavy lifting to get approval.

Don’t get me wrong. There are times when this level of checking is needed. For systems where people could be seriously injured or killed, where life and limb are on the line, this makes perfect sense.

Other applications need a detailed level of confirmation even if lives are not at stake. Systems that handle navigation or communication also need very careful testing, with great attention to controls and details. I’m not arguing that this is not the case.

What I have found is that for most organizations, this extreme level of detail sounds like a good idea. I have also found that most organizations are incapable of producing such detail before the development work starts.

Translated: for most organizations, these development structures will not work. Testing limited in scope or bound by rigid guidelines can be excellent, as long as every possible condition has been anticipated and planned for before testing starts. That is the presumption behind most work models of this kind.

As much as we would like it to, and as much as advocates of the above approach teach that it will, most software will not behave in ways that can be absolutely anticipated.

Testing done in controlled, structured models, by definition, does not allow for variations in possibilities. The presumption is that all situations, and all possible paths within the software, have been identified and considered during the Requirements, Specification, and Design phases of the project. Without a clear vision of a fixed end-state, these conditions are nearly impossible to achieve.

Testing That Works

It is this fixed-scope, pre-defined-and-nothing-else testing that Savoia proclaimed dead in 2011.

He was wrong. It is still very much with us and for many companies, it is the only testing they know. Perhaps that is why it has not yet died.

Testing done when all development is completed is problematic, at best. If the testers start working on code or a function that was written a month or two before, how likely is it the developers will remember the specific piece of code involved? How quickly can they drop what they are currently working on and address each issue for something they have marked as “done”?

Testing takes time. Fixing the defects and issues found takes time. In my experience, in environments like this, fixing the defects takes longer than the testing did; sometimes it takes as long as the original development work.

It is past time for that to change. Spending huge amounts of time working in isolation around what testing should look like for a given project is often less reliable than other options.

I am not advocating “winging it” when it comes to testing. Rather, I’d suggest another approach: conversation. People developing code, people working on design, and people planning tests work together, contributing ideas, thoughts, and concerns as equals. They work in conjunction with each other and with the people whose vision and intent the project team is acting on.

This isn’t some Post-Agile vision of what is supposed to happen. It isn’t a utopian view of people magically doing things and writing on sticky notes and windows with dry-erase markers. It is, instead, a model I have worked in and seen produce good results.

This gives the test team and the developers time and access to each other’s ideas and insights. Everyone contributes to the functionality and the testing around the functionality.

Delivering Value

This makes test planning realistic. It speeds understanding and allows space for changes and variation when encountered. It allows the entire project team to focus on the real needs of customers. It allows the team to deliver software with clear value in solving customer needs and problems.

When I’ve used this approach, it has helped improve communication and understanding. It has helped the team work together instead of as a collection of individuals working on the same project. It allows the testers to focus their efforts on what the changes will look like.

They can drive their planning around how the new and changed software will work. They can look critically at the differences and build meaningful tests that inform the entire project, rather than a green check that something happened.

They can also reduce the effort spent building documentation that may no longer be relevant by the time the product is deployed. Testers can pair with developers to create meaningful unit tests that illuminate behavior no one had considered.
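To make that concrete, here is a minimal sketch of the kind of unit test such a pairing might produce. The function, its rules, and the edge cases are hypothetical, invented only to illustrate tests that probe behavior the requirements never spelled out, rather than tests that simply confirm the happy path.

```python
# A minimal sketch of tests a tester/developer pair might write together.
# The function under test (apply_discount) and its rules are hypothetical,
# used purely for illustration.
import unittest


def apply_discount(price: float, percent: float) -> float:
    """Apply a percentage discount, rejecting out-of-range discounts."""
    if percent < 0 or percent > 100:
        raise ValueError("discount percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)


class ApplyDiscountTests(unittest.TestCase):
    def test_typical_discount(self):
        # The "green check" case: the happy path from the requirements.
        self.assertEqual(apply_discount(100.00, 20), 80.00)

    def test_full_discount_is_free_not_negative(self):
        # A boundary the requirements never mentioned: a 100% discount.
        self.assertEqual(apply_discount(100.00, 100), 0.00)

    def test_rejects_out_of_range_discount(self):
        # Behavior "not considered": what should happen with bad input?
        # Writing the test forces the team to decide, together.
        with self.assertRaises(ValueError):
            apply_discount(100.00, 120)


if __name__ == "__main__":
    unittest.main()
```

The value is less in the code than in the conversation it forces: the second and third tests only exist because someone asked a question the documents never answered.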

In the majority of circumstances, it is the evidence from testing that matters. The carefully prepared volumes of test strategies, plans, suites, and so on can have some purpose in some circumstances. However, they provide no real evidence of what testing was actually done or what the testers encountered.

Instead of making sure “all the testing boxes are checked,” drive the test effort to make sure the product is the best that can be made. This is what should replace “old testing.”

“Old testing” must die to allow testers to contribute to their teams and organizations as fully engaged professionals.

Peter G. Walen has over 25 years of experience in software development, testing, and agile practices. He works hard to help teams understand how their software works and interacts with other software and with the people using it. He is a member of the Agile Alliance, the Scrum Alliance, and the American Society for Quality (ASQ), an active participant in software meetups, and a frequent conference speaker.
