Testers working on the same product or project for weeks or even months at a time might experience “tester’s block”. Tester’s block, similar to writer’s block, can leave testers in a rut, repeating the same tests over and over without finding any new information. Fortunately, there are mental tools and strategies that can help testers adjust their mindset during such difficult times.
Focusing and Defocusing
James Bach and Michael Bolton have identified two techniques that testers can use to help reframe a problem: focusing and defocusing. Focusing is a strategy testers can use when they are facing information overload. This technique suggests that testers repeat their tests, simplify their environment, and remove inputs to decrease the overall scope of the test being performed. So, instead of testing an entire application at once, a tester who feels overwhelmed might choose one part of the application, or even one aspect of one part (e.g. a checkout page on a retail website), and repeat their tests in that area to make sure that they are getting a detailed idea of how that specific part of the application works.
Defocusing, the opposite of focusing, is a strategy used to broaden the scope of tests by testing for multiple factors, and by trying to confront, analyze, and break patterns in existing tests. Testers who defocus may discover new ideas or patterns by increasing the scope of their examination. For example, in testing a checkout page, the tester can defocus by checking the checkbox that says shipping and billing addresses are the same, and then seeing whether they can fill in the billing address anyway. If the billing fields are still active, the tester can enter a different billing address and try to submit the two addresses, despite having selected the “same address” checkbox, to see whether that produces an error. In this example, the tester is introducing a number of potential problems, and may uncover multiple error states as a result, potentially leading to some new testing ideas.
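The defocused checkout scenario above can be made concrete as a small sketch. The validator below is purely hypothetical (not any real framework's API): it models a form that should reject a conflicting billing address when the “same address” box is checked.

```python
# Hypothetical checkout-form logic, invented for illustration.
def validate_checkout(shipping, billing, same_as_shipping):
    """Return the billing address the order should actually use.

    Raises ValueError when the form state is contradictory: the
    "same address" box is checked, but a different billing address
    was submitted anyway.
    """
    if same_as_shipping:
        if billing and billing != shipping:
            raise ValueError("billing differs despite 'same address' checkbox")
        return shipping
    if not billing:
        raise ValueError("billing address required")
    return billing


# The defocused test: check the box, then sneak in a different billing
# address anyway, as a user with a half-edited form might.
shipping = "62 W 23rd St, New York, NY"
try:
    validate_checkout(shipping, "1 Infinite Loop, Cupertino, CA", True)
    outcome = "accepted"   # silent acceptance here would be a bug worth reporting
except ValueError:
    outcome = "rejected"
print(outcome)  # -> rejected
```

Whether the “right” behavior is to reject, or to silently prefer the shipping address, is exactly the kind of ambiguity this sort of test surfaces for the team to decide.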
Taking Tours
Some time ago, James Whittaker recommended the strategy of taking tours through the software as a means of re-imagining the way components of the software relate to one another. For example, he suggests using a “Business District Tour” as a metaphor for an approach that might include boundary testing on pieces of the application that support core customer interactions. Meanwhile, a “Historical District Tour” might involve testers devising ways to ensure that legacy code is not impacting new code or current features. In his well-regarded book, Exploratory Software Testing, Whittaker offers other tour ideas as well, and of course a tester could always generate their own custom tours suited to the specific context in which they are working.
Testing with Extreme Personae
Very often, project managers will think of “personae” that represent who they believe the typical users of the product are, and acceptance criteria are likely developed with these personae in mind. In some companies, these personae don’t change very much, and may stifle the tester’s creativity. What about the extreme persona? The user who turns everything they touch into a flaming pile of failure; the user who always finds the edge cases and uses the software in ways that nobody imagines? Testers can break out of a testing block by imagining who these people are and what they might do to the product. For fun, it may include giving these extreme personae extreme names and job titles; anything to jog the imagination.
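One way to make an extreme persona concrete is to hand them a list of hostile inputs and run those through a field validator. Everything in this sketch is invented for illustration: the persona, the validation rules, and the inputs themselves.

```python
# A hypothetical display-name validator: 1-50 characters, no control characters.
def validate_display_name(name):
    if not 1 <= len(name) <= 50:
        return False
    return not any(ord(ch) < 32 for ch in name)


# Inputs from our extreme persona, "Flaming-Pile Phil, Director of Edge Cases".
phils_inputs = [
    "",                        # nothing at all
    "A" * 10_000,              # pathological length
    "Robert'); DROP TABLE--",  # injection-shaped text: a *valid* name, so storage must handle it safely
    "名前\u200b",               # non-Latin characters plus a zero-width space
    "line1\nline2",            # embedded newline (a control character)
]

for candidate in phils_inputs:
    print(repr(candidate[:25]), validate_display_name(candidate))
```

Each rejected input raises a question worth exploring further: does the real application reject it gracefully, truncate it, or fall over?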
Permissions Testing
Permissions for creating, reading, editing, and deleting data may seem like a strange place to spend a lot of testing energy, and many testers don’t really explore the complexities of handling permissions and data security through the full use of an application. For example, in a data-heavy application, a tester can explore user types (e.g. regular vs. administrative) to ensure that the correct data is surfaced for each type. A tester can also follow the path of processes like creating or deleting data, and test all of the areas of a product that these processes touch, ensuring that data is created and deleted properly, without causing other issues.
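The permissions walk described above can be sketched as a table-driven check. The roles, resources, and matrix here are invented for illustration; in a real test suite, `is_allowed()` would call into the application under test (e.g. checking for an HTTP 403) rather than reading the same table.

```python
# Expected permissions: role -> resource -> allowed actions (hypothetical).
SPEC = {
    "admin":   {"report": {"create", "read", "update", "delete"},
                "user":   {"create", "read", "update", "delete"}},
    "regular": {"report": {"create", "read", "update"},
                "user":   {"read"}},
}


def is_allowed(role, resource, action):
    # Stand-in for the system under test; a real check would hit the app itself.
    return action in SPEC.get(role, {}).get(resource, set())


# Walk every combination rather than spot-checking a few cells: following
# each user type through everything a process touches is the point.
failures = [
    (role, resource, action)
    for role in SPEC
    for resource in ("report", "user")
    for action in ("create", "read", "update", "delete")
    if is_allowed(role, resource, action) != (action in SPEC[role][resource])
]
assert not failures  # e.g. a regular user deleting a report would land here
```

Enumerating the full matrix, rather than sampling it, is what catches the forgotten cell: the one role/action pair nobody thought to lock down.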
WebApp Performance Testing
When testers think of performance testing, their minds jump immediately to tools like JMeter, or to concepts like how many requests the API can handle at a time. This is a natural and correct way to think about performance testing; however, these are not the only ways a website’s performance can be challenged.
For example, what happens to the webapp when the tester decides to perform multiple clicks on an element that triggers a get or set action? What if the tester opens the site in multiple browsers with the same account? What if the tester overloads the browser and then tries to use the app? Is the app equipped to deal with typical user behaviors that place load on a machine? These are just some of the ways that load is introduced by a single user, but many testers do not think of these as performance issues and might initially ignore testing them. Yet some investment in user-induced performance issues may yield unexpected results.
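The “multiple clicks” scenario above can be modeled in a few lines. This is a hypothetical server-side handler that deduplicates repeated submissions with an idempotency key; in a real test the rapid clicks would come from a browser, but here they are simulated as repeated calls.

```python
# Hypothetical order-submission handler, invented for illustration.
processed = {}       # idempotency_key -> order id already created for it
orders_created = 0


def submit_order(idempotency_key):
    """Create at most one order per key, however many times it is called."""
    global orders_created
    if idempotency_key in processed:
        return processed[idempotency_key]   # duplicate click: reuse the result
    orders_created += 1
    processed[idempotency_key] = f"order-{orders_created}"
    return processed[idempotency_key]


# Simulate a user hammering the "Place order" button five times in a row.
results = [submit_order("cart-42") for _ in range(5)]
assert orders_created == 1        # one order, not five
assert len(set(results)) == 1     # every click saw the same order id
```

A handler without this guard would create five orders from one impatient user, which is exactly the kind of single-user “load” bug the paragraph above describes.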
So Many Ways!
There are so many ways to think about testing that a tester who is feeling stuck does not need to look very far to find new ideas. In fact, the software testing community is continually coming up with new tips and strategies for inspiring creativity and surmounting blocks when they arise – cards, games, dice and apps, to name just a few.
If you’ve been testing for a while, you probably have some strategies of your own. If so, we’d love to hear about them in the comments section below.
This is a guest posting by Jess Ingrassellino. Jess is a software engineer in New York. She has pursued interests in music, writing, teaching, technology, art and philosophy. She is the founder of TeachCode.org.