I was talking with a user experience expert early in my career. We worked for the same company, and although we were in different roles, we spent a lot of time working on the same product line. He was asking about testing and said something along the lines of, “Don’t you just click around?”
I wouldn’t expect someone outside of the development team to understand testing work, but that question made me realize two things. First, it is hard for outsiders to understand what testing is, let alone what good testing is. And second, the group I was working with had done a poor job of describing our work to the people who might benefit from knowing more.
Describing testing is a skill, just like performing the tests. Here are a few baseline ideas to get you started explaining the true nature of a tester’s job.
1. Testing Doesn’t Improve Quality
Every once in a while, I will see a person posting their outrage to Twitter or Facebook because of a software problem. Their banking app won’t let them transfer funds, or their word processor won’t let them set some text to bold, or their insurance provider’s website is showing an incorrect monthly balance. The first question is usually, “Wow, didn’t anyone test this software?”
No amount of testing will improve a piece of software. When you go to the doctor for a yearly checkup, they might note that your cholesterol is starting to get a little high. That doesn’t automatically make your cholesterol levels drop. You have to do something, usually changing diet or activity level. The same is true of software.
Nothing magical happens when a tester discovers and reports a problem—this isn’t the Hogwarts School of Witchcraft and Wizardry. A developer has to make a decision that the problem is important, and then someone has to fix that problem.
2. Testing Is Hard to Estimate (if It Can Even Be Estimated at All)
Early in my career I was in the waterfall version of a pre-sprint meeting. We were reviewing a big batch of features, and developers and testers were giving estimates of how much time they thought the work would take. Each tester told the PM how long they needed to test a code change, but when my turn came, I had no clue how to estimate testing, and I said so. A few heads whipped around to look at me: that was obviously the wrong answer. I finally said two hours, and the meeting moved on.
Estimating testing to some extent means you think you can predict the future. Moreover, estimates only work when you don’t find any problems.
Let’s say you are testing a new change. You are working on a page and the flow feels awkward, so you stop and talk to a developer to see if anything can be done. You find a couple of bugs, so you have to spend time spelunking in the server logs, trying to make the problems happen again, and then waiting for the fixes to land in a new build. Eventually you get to a point where you know coverage is lacking, but you need some data setup to move forward. Each of these testing activities takes time, and they tend to be unpredictable.
My preference is to work closely with developers and have test time be part of development time.
3. Testing Is a Broad Skill Set
It takes a lot of different types of skill and knowledge to be a good tester. Some testers perform their job on an intuitive level, and do it pretty well. They take new software and find important problems every day.
Improving usually means that we have to take a specific skill and study or practice it somehow. Some of the skills I think are core to software testing are experiment design, learning different ways to describe and explain software (also called modeling), problem-solving techniques (also called heuristics), and observation techniques.
4. The Plan Is Not the Work
Take a look at how the scientific method is taught. Schools like to pretend that everything happens in a nice, clean flow—observation, question, hypothesis and so on. Test plans often give us the idea that testing is a clean and sanitized process. First I am going to test authentication, then I am going to test permissions, then I am going to test integration with a third-party product. The reality of testing is that we jump in with an initial idea and learn something new, hopefully quickly. This new information might drastically change the plan or affect how long things will take. New information always puts a plan to shame.
5. Exploration Doesn’t Mean Random
I still occasionally run across people who ask how you know what to test if there is no test plan. According to this bunch, anything that isn’t planned is just random bouts of clicking and typing without any focus. Exploration, or at least good exploration, is done with a mission in mind. Let’s say you are working on a new feature change. The page has a few text boxes and a couple of radio button sets. You can explore these fields in a systematic way that covers how the data you enter interacts with the product, and how those fields depend on each other. After that there are questions about how the customer will use the product, and about things like performance and stability. Each bit of testing work added to the flow is intentional.
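That kind of systematic, intentional exploration can be sketched in a few lines of code. This is a minimal illustration rather than a prescribed process, and the field names and values below are hypothetical:

```python
# A sketch of systematic exploration of a form, assuming a hypothetical
# page with one text box ("name") and two radio button sets.
from itertools import product

# Representative values for each field, including boundary cases.
name_values = ["", "a", "x" * 255, "O'Brien"]  # empty, short, long, special char
plan_options = ["basic", "premium"]            # first radio set
billing_options = ["monthly", "annual"]        # second radio set

# Every combination becomes one small test charter, so nothing is
# skipped and nothing is covered twice by accident.
charters = list(product(name_values, plan_options, billing_options))

for name, plan, billing in charters:
    print(f"try name={name!r} with plan={plan}, billing={billing}")
```

The point is not the code itself but the attitude it represents: the tester has enumerated what varies, chosen deliberate values for each variable, and can say exactly what has and has not been covered.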
6. Your Testing Bottleneck Is Probably a Symptom of a Bigger Problem
The best example of the bottleneck problem is the regression testing cycle. This happens in waterfall shops, and in agile shops that dump a batch of code changes on the test group a day or two before a release. The only thing between the software and its production environment is the test group. The rest of the development team usually says something along the lines of, “How can we do exactly what we are doing now, but make the testers go faster?” This bottleneck is usually created by distinct handoffs between roles on the team, and by poor-quality first builds being sent to the test group. The later testing occurs, and the more bugs there are, the slower things go.
7. Testing Does Not Mean Bug-Free Software
Quality is a relationship between a person and the software they are using. Sometimes it is hard to predict how our customers will feel once they finally get to use a piece of software. I have worked on products that I thought were in good shape when we released. I worked closely with developers, we asked questions, tested, and found and fixed problems, and then we released. But soon after the release, the customer came back with some important bugs that we never imagined. There will always be occasional bugs, no matter how good the testing is.
8. Collaboration Will Probably Help
My favorite example comes from a company that practices Extreme Programming. For any code change, there are two developers and one test specialist. We start each new change with the questions “How will we test this?” and “How do we know when we are done?” We usually start a change by writing a new test and sprinkle more into the development flow where we need them. The test specialist—me, in this case—writes some tests, helps design some tests, and asks questions that shape code design. We build a new product frequently to see how things are coming together, and once we have a somewhat done version of the change, we explore as a group. When we are done, the change is ready to ship.
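The “How will we test this?” question can be answered in code before the change itself exists. Here is a minimal test-first sketch; the funds-transfer rule is a hypothetical stand-in for a real change, not anything from the project described above:

```python
# A test-first sketch: the test below is the answer to "how will we
# test this?", written before the code, and the function is only as
# complicated as the test demands.
def transfer_allowed(balance, amount):
    """Hypothetical rule: allow a transfer only if funds cover it."""
    return 0 < amount <= balance

def test_transfer_allowed():
    assert transfer_allowed(balance=100, amount=50)       # plenty of funds
    assert not transfer_allowed(balance=100, amount=150)  # would overdraw
    assert not transfer_allowed(balance=100, amount=0)    # nothing to move

test_transfer_allowed()
print("all transfer checks passed")
```

Writing the checks first forces the conversation about edge cases (overdrafts, zero amounts) to happen before the code is designed, which is exactly the shaping effect the test specialist has in the pairing described above.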
You don’t have to do radical collaboration like this, but chances are if your developers and testers work more closely together, your product quality will improve.
9. The Tools You Choose Are Important (but Not as Important as the Skilled Tool Operator)
Talk to a tester about automation and the Selenium toolset will come up every single time. Selenium is probably the most fully featured and best-supported UI automation tool that exists today. It is also often used in situations where other tools and strategies would be a much better fit.
A skilled tester will ask questions before jumping into a tool: “What problem are we trying to solve? Should we be testing this thing in this manner? How do we want to design this test?” Not doing that usually gets people a three-hour build and tests that fail randomly for no good reason.
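One common answer to those questions is that the thing being tested does not need a browser at all. Here is a minimal sketch of testing below the UI; the validation rule is a hypothetical stand-in for real application logic:

```python
# A sketch of testing below the UI: instead of driving a browser with
# Selenium to fill in a signup form, test the validation rule directly.
# The rule itself (minimum length, at least one digit) is hypothetical.
def password_valid(password: str) -> bool:
    return len(password) >= 8 and any(c.isdigit() for c in password)

# These checks run in milliseconds with no browser, no build farm,
# and no random failures from slow page loads or flaky locators.
assert password_valid("hunter2hunter2")        # long enough, has a digit
assert not password_valid("short1")            # too short
assert not password_valid("longbutnodigits")   # no digit

print("validation checks passed")
```

A browser-level test of the same rule would exercise page loads, locators, and timing, all of which can fail for reasons unrelated to the rule. Reserving UI automation for genuinely UI-level risks is one way skilled tool operators avoid the three-hour build.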
Your manager won’t know everything about software testing. Their job is different and much broader. But they will definitely have questions about your work. Having a few of these themes ready to explain in a simple, clear way will be good for your manager as well as for your career.
What else would you like your manager to know about testing?
This is a guest post by Justin Rohrman. Justin has been a professional software tester in various capacities since 2005. In his current role, he is a consulting software tester and writer working with Excelon Development. Outside of work, he is serving as president on the Association for Software Testing’s board of directors, helping to facilitate and develop various projects.