I’d characterize a lot of the software work I’ve done over the years as a feature factory. Usually this factory is kicked into action by a new customer. Salespeople write a contract for a date — and a list of features that don’t exist yet. After the contract is signed, the development team spends the next several months churning through the features one by one. The testing work for these features can range from easy to “what the heck am I supposed to do with this?” but I can always see a way forward.
However, I have spent a lot of time lately with changes that aren’t as obviously customer-facing: refactors, library upgrades and changes to core functionality such as authorization. Let’s talk about what testers can do in these scenarios.
I most often see these changes done in a big-bang style: Product managers stick a card in the backlog for something they don’t really want because it isn’t great for sales demos — library upgrades, for example. Once that card hits the top of the queue, a developer takes it and starts working. That developer goes into the cone of silence for a few days or a week, and when they emerge, there is a new change that might affect the entire product.
Daily testing work usually has a direct flow to the customer. When I start working on a feature, I can ask who will be using it, what their organizational role is and what they care about. But for technical changes, the answers are “Everyone and everything.” That’s not very helpful. When the change hits a build, testers don’t know the important details and are stuck playing context catch-up.
I don’t like big-bang development. I generally prefer to work in a pairing development pattern of a developer and a test specialist working on one change at the same time until the change is ready to push to production. I stay in the pair for these cards and perform my normal role. That means learning in depth about the code that is changing, asking questions when I don’t understand something, and building crucial context.
The phrase “building context” sounds vague, so let’s break it down. Let’s say I am part of a developer pair working on a change to our authentication library. At a superficial level, I would look at that and say, “Yup, we use that to log in.” So all that needs to be tested there is whether a person can log into our product, right?
After working in the pair, I learned that this change also affects sessions, timeouts and application access, as well as logging in and out. So when it came to testing, I had to make edits to my session time and server time to simulate session expirations while watching the network tab in Chrome’s web developer tools to see that the appropriate API calls were made. I also had to log in with various user accounts and see that the role-based security was still respected, and of course see that logging in and out in various platforms worked normally.
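The session-expiry part of that testing lends itself to a small automated check. Here is a minimal sketch of the idea, simulating the clock instead of editing server time; the `SESSION_TTL` value and function name are my own illustration, not our product's actual code.

```python
from datetime import datetime, timedelta

# Hypothetical session lifetime; the real value would come from the
# auth library's configuration.
SESSION_TTL = timedelta(minutes=30)

def is_session_expired(issued_at: datetime, now: datetime,
                       ttl: timedelta = SESSION_TTL) -> bool:
    """Return True when a session issued at `issued_at` should be rejected."""
    return now - issued_at >= ttl

# Simulate the manual check: wind the clock forward past the TTL
# instead of editing session time and server time by hand.
issued = datetime(2020, 1, 1, 12, 0)
assert not is_session_expired(issued, issued + timedelta(minutes=29))
assert is_session_expired(issued, issued + timedelta(minutes=31))
```

A check like this doesn't replace watching the network tab, but it pins the expiry boundary so a library change that shifts it fails fast.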
I know it can be a challenge to stay mentally engaged on these changes. Staying with the pair, or at least as part of the development process, saves time and makes for more effective testing.
Regression testing is ubiquitous in software work. Every time we make a change — code, infrastructure, platform or configuration — we introduce the risk that the product might fail in a way we aren’t expecting. We do regression testing, often in the little slot of time between when a code change is done and before we push to production, in hopes that we might discover some of those surprises before our customers.
Small changes that we work on day to day, such as adding a new field to a page or altering a workflow, hopefully have small side effects. But we can improve the odds of this with thoughtful architecture and a dash of test-driven development (TDD).
I take a precision approach for this scenario. I will look at the code that changed and the code our change interacts with, identify the places in the product where those changes might manifest, and then talk with developers and testers about how they feel about that code. Is it generally stable, or is it delicate, tending to develop new problems whenever we make changes?
It is sometimes less obvious to me how to approach technical changes. Let’s say we are upgrading the version of Elasticsearch used in our product. To do that, a developer has to bump the version of Elasticsearch installed on the server, update the version in a few build configuration files, and then make any syntax changes required to work in the new version.
I would make an assessment of our product to make sure we are testing the right things. There are a few places the user types in text strings to get search results, there are a few pages that have panels with data searched for based on a create date, and then there are some minor reporting features where a person can filter a page based on search criteria. Each of these needs to be tested, but how much? The answer to that lies in what changed between Elasticsearch versions.
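One way to structure that assessment is differential testing: run the same queries against the old and new versions and compare the results. The sketch below stands in for both backends with plain functions; in practice you would point real clients at the old and new clusters. All the names here are hypothetical.

```python
def run_queries(search_fn, queries):
    """Run each query through a backend and collect its result-ID set."""
    return {q: frozenset(search_fn(q)) for q in queries}

def diff_backends(old_fn, new_fn, queries):
    """Return only the queries whose results differ between versions."""
    old, new = run_queries(old_fn, queries), run_queries(new_fn, queries)
    return {q: (old[q], new[q]) for q in queries if old[q] != new[q]}

# Fake backends standing in for the two Elasticsearch versions.
catalog = {"invoice": [1, 2], "report": [3]}
old_search = lambda q: catalog.get(q, [])
new_search = lambda q: catalog.get(q, []) if q != "report" else [3, 4]

diffs = diff_backends(old_search, new_search, ["invoice", "report", "missing"])
assert list(diffs) == ["report"]  # only "report" changed between versions
```

Every query that shows up in the diff is a place where the version notes — or a bug — explain the change, which is exactly where testing time should go.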
I would use a “none, one, some, a lot” heuristic on the searchable pages. That spans searches that return nothing at all, searches that return exactly one item, searches that return a few things, and server-intensive searches that return a large amount of data. To complicate that, I will think about the types of data returned. Do special characters in the text matter? Does the length of the strings being returned matter? These are questions we may want answers to.
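That heuristic is easy to turn into a coverage checklist. The bucket boundary below (50 results for “some”) is my own illustration, not a fixed rule.

```python
def result_bucket(count: int, some_max: int = 50) -> str:
    """Classify a search's result count into the none/one/some/a-lot buckets."""
    if count == 0:
        return "none"
    if count == 1:
        return "one"
    if count <= some_max:
        return "some"
    return "a lot"

# One search per bucket gives a minimal coverage set for each searchable page.
assert [result_bucket(n) for n in (0, 1, 7, 5000)] == \
       ["none", "one", "some", "a lot"]
```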
In some cases, we may have questions about how the new version performs, too. For library changes, I build my strategy keeping in mind what changed between the old and new versions, think about where those changes might manifest in our product, then adjust that coverage map based on how our customer uses the product.
The company I am working with today does a lot of TDD work. We write a test, run it and watch it fail, write some production code to make that test pass, then refactor the code so that it is cleaner, easier to read and more performant. The aftermath is a suite of tests that tells us, in a very fast feedback loop, when a change breaks something.
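In miniature, the cycle looks like this. The `slugify` example is mine, not from the product; it just makes the red-green-refactor steps concrete.

```python
# Step 1 (red): write the test first. Running it now fails, because
# slugify does not exist yet.
def test_slugify_lowercases_and_joins_words():
    assert slugify("Hello World") == "hello-world"

# Step 2 (green): write just enough production code to make it pass.
def slugify(title: str) -> str:
    return "-".join(title.lower().split())

# Step 3 (refactor): clean up the implementation with the test as a
# safety net, then re-run it.
test_slugify_lowercases_and_joins_words()
```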
After a big refactor, we will usually see a lot of old tests fail. This is prime time to refactor tests that were based on the pre-refactor product, but also to review coverage and make sure we are testing the right things. Pretend that you have a monstrosity of a class that handles user management. There are too many concerns in one place, the code is not DRY, and it’s tightly coupled to too many other parts of the product. We want to make the class as small as possible by creating new classes. This will make a lot of tests fail. Rather than just porting all of these tests to their new respective classes, the tester should raise questions that guide the review.
I generally ask developers to step me through a test. We talk about what it covers and why we need it. Maybe we want to see that a method is called, a value is set or an error is raised. Through this process, ideally we end up with more effective tests and new tests that cover important areas we hadn’t thought about before.
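Here is a miniature version of that refactor, with hypothetical names. A password rule is extracted out of an oversized `UserManager`, and the old test is ported so it pins the same behavior in its new home.

```python
class PasswordPolicy:
    """Extracted from a hypothetical, oversized UserManager class."""
    MIN_LENGTH = 8

    def is_valid(self, password: str) -> bool:
        # Long enough, and not purely alphabetic.
        return len(password) >= self.MIN_LENGTH and not password.isalpha()

class UserManager:
    """Now delegates to the extracted class instead of owning every concern."""
    def __init__(self, policy=None):
        self.policy = policy or PasswordPolicy()

    def can_set_password(self, password: str) -> bool:
        return self.policy.is_valid(password)

# Ported test: the behavior the old monolith guaranteed still holds.
assert UserManager().can_set_password("s3cretpass")
assert not UserManager().can_set_password("short")
```

Walking through each ported test like this is where the review questions come from: why does this rule exist, and does it still belong where it landed?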
What to Do Next
The most important thing I want to get across is that a testing specialist should remain active and involved in technical work. We can learn important context that is needed for future testing, help devise a better strategy for regression-testing large changes, and design better test automation. There isn’t an aspect of the development process that doesn’t benefit from testing, and that includes technical, non-customer-facing work.
This is a guest posting by Justin Rohrman. Justin has been a professional software tester in various capacities since 2005. In his current role, Justin is a consulting software tester and writer working with Excelon Development. Outside of work, he is currently serving on the Association For Software Testing Board of Directors as President helping to facilitate and develop various projects.