When I first heard about risk-based testing, I interpreted it as an approach that could help devise a targeted test strategy. Back then I was working with a product-based research and development team. We were following Scrum and were perpetually working with tight deadlines. These short sprints had lots to test and deliver, in addition to the cross-environment and non-functional testing aspects.
Learning about risk-based testing gave me a new approach to our testing challenges. I believed that analyzing the product as well as each sprint for the impending risk areas and then following them through during test design and development, execution and reporting would help us in time crunches.
But before I could think about adopting this newfound approach into our test planning, I had a challenge at hand: convincing my team.
Tackling the Challenges
Working in a close-knit, startup-type environment comes with its own set of challenges. The team was wary of introducing any new ‘processes’ or fancy terminology. We followed a very raw, to-the-point approach to all our tasks and were good at accomplishing the desired outcomes, so the team and our manager were not interested in adding any overhead unless they were convinced it would add value.
I took it upon myself to provide a convincing argument backed up with facts to support the case of risk-based testing.
I started by studying the user stories and testing patterns of our previous sprints. I went through our Jira items from the past three sprints. I brought out the sprint backlog items of each sprint and researched the testing tasks performed, time spent and the defects logged against each one. Here is an excerpt of what I found:
- The user stories planned for each sprint were obviously not all of the same value or priority, but the trends observed in time spent on test design were almost the same for all user stories of each sprint.
- Test execution tasks varied in time spent on each user story, but that was mostly based on when the user story was delivered within the sprint. We followed multiple drops within the sprint, and the sequence of delivery of the user story mostly depended on when the developers got to it.
- The number of defects logged against each user story varied, not that we counted that as a measure of quality. But what I observed was when the defects were logged. Some were found within the sprint testing, while others later, like in the next sprint or in the final user acceptance testing before release.
Then I studied the user stories from a risk-based point of view and performed a simple point-based risk analysis on them.
Benchmarking Risk and a Focused Approach
Based on my knowledge of the product and experience, I listed risks associated with each user story, giving out numbers for probability and impact and then multiplying the two to get the risk priority number (RPN), as shown below.
| Likelihood | Impact | Risk Priority Number | Extent of Testing depending on RPN |
|---|---|---|---|
| 1 = Very High | 1 = Very High | The product of likelihood and impact | 1–5 = Extensive |
| 2 = High | 2 = High | | 6–10 = Broad |
| 3 = Medium | 3 = Medium | | 11–15 = Cursory |
| 4 = Low | 4 = Low | | 16–20 = Opportunity |
| 5 = Least | 5 = Least | | 21–25 = Report Bugs |
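The scoring scheme above can be sketched in a few lines of Python. One subtlety worth noting: because 1 means “very high” on both scales, a *lower* RPN signals a *higher* risk. The band labels come straight from the table; the function names themselves are just my illustration.

```python
def risk_priority_number(likelihood: int, impact: int) -> int:
    """Multiply likelihood by impact, each scored 1 (very high) to 5 (least).
    Because 1 means 'very high', a LOWER RPN indicates a HIGHER risk."""
    if not (1 <= likelihood <= 5 and 1 <= impact <= 5):
        raise ValueError("scores must be between 1 and 5")
    return likelihood * impact

def extent_of_testing(rpn: int) -> str:
    """Map an RPN to the extent-of-testing bands from the table."""
    if rpn <= 5:
        return "Extensive"
    elif rpn <= 10:
        return "Broad"
    elif rpn <= 15:
        return "Cursory"
    elif rpn <= 20:
        return "Opportunity"
    else:
        return "Report Bugs"

# A high-risk story: likelihood 1 (very high) x impact 2 (high) = RPN 2
print(extent_of_testing(risk_priority_number(1, 2)))  # Extensive

# A low-risk story: likelihood 4 (low) x impact 4 (low) = RPN 16
print(extent_of_testing(risk_priority_number(4, 4)))  # Opportunity
```

Keeping the band boundaries in one place like this also makes it easy to tune them later if the team decides the cut-offs are too coarse.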
This gave me a benchmark to analyze the user stories’ testing tasks and defects found from a risk-based perspective.
I listed and categorized the defects of each user story against its corresponding risk area. The major finding was that the number of defects found in each risk area was not consistent with the extent of testing its risk priority number called for.
For example, a user story related to localization had a risk priority number of 16, which meant it should have gotten only opportunity-level testing. But almost 10 defects were found against it in the current sprint, and more were logged in later sprints. On the other hand, another user story about data transformation in Excel using formulas had a lower risk priority number (meaning higher risk), so it was slated for extensive testing, yet only a couple of issues were logged in the current sprint and a few more in later sprints. The most important issues were actually logged during the final user-acceptance test cycle.
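A mismatch check like the one I describe could be sketched as below. The story data is hypothetical, loosely mirroring the two examples, and the thresholds are illustrative only, not the ones our team used.

```python
# Hypothetical sprint data: (story, RPN, defects found in-sprint, defects found later)
stories = [
    ("Localization", 16, 10, 4),             # light testing planned, many defects
    ("Excel data transformation", 4, 2, 5),  # heavy testing planned, defects escaped
]

def extent(rpn: int) -> str:
    """Map an RPN to its extent-of-testing band (lower RPN = higher risk)."""
    bands = [(5, "Extensive"), (10, "Broad"), (15, "Cursory"), (20, "Opportunity")]
    return next((label for limit, label in bands if rpn <= limit), "Report Bugs")

for name, rpn, in_sprint, later in stories:
    planned = extent(rpn)
    total = in_sprint + later
    # Flag stories where the defect trend contradicts the planned extent:
    if planned in ("Opportunity", "Report Bugs") and total > 5:
        print(f"{name}: planned '{planned}' but {total} defects -> re-assess its risk")
    elif planned == "Extensive" and later > in_sprint:
        print(f"{name}: planned '{planned}' yet most defects escaped the sprint")
```

Running a check like this over a few past sprints is a cheap way to spot stories whose risk scores need revisiting.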
This was indicative of our current test strategy. We were giving equal time, effort and importance to all user stories, doing the maximum test execution we could within the sprint, and logging the defects found within the sprint and in upcoming sprints during regression, as well as further testing. We were relying on the tester’s knowledge to determine how best to test, how much effort to spend on the story and how many issues could be found.
What we were missing was a pointed focus on each user story’s individual needs and importance. This analysis showed my team and manager that in the noted sprint, we could have spent less time on the localization story and devoted more test design and execution to the other stories. We may have found those issues later, in the following sprints, but bugs found earlier would obviously have been of more value.
If we had followed a risk-based approach, our testers would know where to concentrate more. We’d know when to team up on stories because we’d know where the extent of testing needed is higher, instead of everyone focusing on their own assigned user stories only.
This approach helps in task allocation because we know where to focus our test efforts and limited time on the maximum value areas so they can yield the best outcomes—all without adding overhead, cumbersome process or tasks.
Analyzing the data from our own sprints proved to be a convincing argument for my agile team to look at the benefits of risk analysis and to use it for risk-based testing. I hope this encourages you to perform a similar analysis so you can discover the value of risk-based testing from an agile viewpoint.
Nishi is a consulting Testing and Agile trainer with hands-on experience in all stages of the software testing life cycle since 2008. She works with the Agile Testing Alliance (ATA) to conduct various courses and trainings and to organize testing community events and meetups, and she has been a speaker at numerous testing events and conferences. Check out her blog, where she writes about the latest topics in the Agile and Testing domains.