This is a guest post by Justin Rohrman. Justin has been a professional software tester in various capacities since 2005. In his current role, Justin is a consulting software tester and writer working with Excelon Development. Outside of work, he currently serves as President of the Association for Software Testing Board of Directors, helping to facilitate and develop various projects.
I have heard testers working in agile or Scrum talk about the benefits of being involved earlier in the development cycle. They imagine smooth sprints with no last-minute regression testing, and burn down charts that gracefully approach zero instead of looking like a flat plateau with a cliff at the end. They envision features that can be delivered as soon as they are committed to the code repository. In my experience, that usually starts with testers getting invites to planning meetings and, too often, it stops there. Perhaps, if things go well, there is an occasional opportunity to perform light testing in a developer’s environment before the usual waiting begins.
The API presents an opportunity to break through a barrier that prevents testers from being effective earlier in a development cycle.
Most of the companies I have worked with recently have a sprint burn down chart. These charts plot the number of stories in the sprint tracking system that were placed in the ‘done’ column over the last couple of weeks. Managers of the technical team use burn down charts to explain to the people above them how the sprint is moving along. At the majority of those companies, the burn down chart was a flat line, occasionally moving up when product managers added scope, until the last couple of days of a release. Developers were writing code and completing tasks, but the test group couldn’t do anything until a feature had a User Interface change.
The API acts like a middleman for testers. Some of my best experiences with API testing have come while pairing with a developer. While the API is being developed, I can ask questions about the design and build test automation. Building an API is like building any other type of software. Sometimes APIs have good specifications, sometimes the specifications are out of date or wrong, and sometimes they do not exist at all. When there is a specification, I can build tests based on the JSON keys, data types, and workflows it describes. After the developer makes a change and gets it running in their environment, we can run the tests and discover where our understandings of that change differ. When there is no specification, we learn about the feature while it is being developed. The essence of exploratory testing lives in this process of writing new code for an API, asking questions, and building automation to discover where we are thinking differently.
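To make the idea concrete, here is a minimal sketch of a spec-based check. The endpoint, the `SPEC` dictionary, and the payload are all invented for illustration; a real test would pull the payload from the API response (for example, `response.json()`).

```python
# A hypothetical spec for an /orders endpoint: the keys the response
# must contain, and the JSON type each value should have.
SPEC = {
    "order_id": int,
    "customer": str,
    "total": float,
    "items": list,
}

def check_against_spec(payload, spec):
    """Return a list of mismatches between a JSON payload and the spec."""
    problems = []
    for key, expected_type in spec.items():
        if key not in payload:
            problems.append(f"missing key: {key}")
        elif not isinstance(payload[key], expected_type):
            problems.append(
                f"{key}: expected {expected_type.__name__}, "
                f"got {type(payload[key]).__name__}"
            )
    return problems

# In practice this payload would come from the developer's environment;
# here it is inlined. Note the total arrives as a string, not a number.
payload = {"order_id": 1001, "customer": "acme", "total": "19.99", "items": []}
print(check_against_spec(payload, SPEC))
# -> ['total: expected float, got str']
```

A mismatch like the string-typed `total` above is exactly the kind of disagreement that surfaces while the endpoint is still being written, when it is cheapest to resolve.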
Let’s flash back for a minute and think about what normally happens when we wait for a UI. By the time the UI is delivered, there are only a few days left before a push to production. Time is short, and the development team knows that making any bug fixes at this point adds risk. The task of the testing team is to figure out and execute a testing strategy, as quickly as possible, to find the most important bugs lurking in the product. The testing team investigates aspects of the product related to data, workflow, and the look, feel, and user experience, and completes specialty missions such as performance or security testing.
That UI test strategy usually starts with the basic question of ‘What can I do with this?’. I like to creep into the software by entering and submitting some values that should work, in theory. After that, I enter more questionable data: alpha characters in numeric fields, data formats that shouldn’t work, and Unicode characters. Finally, I find out how everything comes together in a full workflow, from account creation through to search, checkout, back-end processing, and reporting.
Teams that have an API testing strategy can cover most of the data and workflow testing before a User Interface ever exists, and much faster than they could through the UI. That leaves time for a smaller, more focused testing mission once the product is closer to done.
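The same progression of data tests described above, from plain values to questionable ones, translates directly to the API layer. This sketch assumes a hypothetical `validate_quantity` helper that an endpoint might call before saving a field; the function and the test cases are invented for illustration.

```python
def validate_quantity(raw):
    """Accept a value that parses to a positive integer; reject the rest."""
    try:
        value = int(raw)
    except (TypeError, ValueError):
        return False
    return value > 0

# Values that should work in theory first, then the questionable data:
cases = [
    ("3", True),       # plain value that should work
    ("0", False),      # boundary: an order of zero items
    ("-1", False),     # negative quantity
    ("abc", False),    # alpha characters in a numeric field
    ("３", True),       # full-width Unicode digit -- Python's int() accepts it
    (None, False),     # missing field
]
for raw, expected in cases:
    assert validate_quantity(raw) is expected, f"surprise for input {raw!r}"
print("all data cases behaved as expected")
```

Running a table like this against a freshly written endpoint takes seconds, and surprises such as the full-width Unicode digit quietly parsing are the sort of finding that never needs to wait for a UI.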
Separation of Concerns
There is usually a flow to testing and reporting bugs. A tester could be exploring a piece of software, entering values, imitating how an end user would navigate, and then discover that they can’t create a discount of $5.50. They might immediately open their shell to copy the exception from the logs, paste that into a bug tracker along with some information about how they found the problem, and then move on. Some testers might spend more time on the report, and some might spend less so they can get back on mission. This flow leaves a big knowledge gap, which manifests when developers click the ‘could not reproduce’ button in the bug tracker.
Where is the bug? Is the problem in the User Interface, the database, or in a section of back-end server code? Or is the problem in a third-party library that creates an interface between your product and another data source? This problem is more obvious, and easier to solve, for products built on an API. Imagine a developer and tester are working on an API change to report on the number of highchairs sold over time. Testing this change happens in isolation. To create a new highchair report that returns data between 11/13/2016 and today, we call that piece of code directly. If an error is thrown, it is directly related to that code. Maybe it doesn’t handle the data types, or the size of the response being returned. Maybe privileges to access that set of data weren’t accounted for. For test groups developing technical testers who can work closer to the code, many of these problems are discovered while testing an API change with the developer who made it.
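A sketch of what calling that piece of code directly might look like. The `highchair_report` function and the in-memory `SALES` records are invented stand-ins for the real endpoint and storage; the point is that any failure here implicates exactly this code and nothing else.

```python
from datetime import date

# Invented in-memory sales records standing in for the real data store.
SALES = [
    {"product": "highchair", "sold_on": date(2016, 11, 20), "units": 4},
    {"product": "highchair", "sold_on": date(2017, 1, 5), "units": 2},
    {"product": "crib", "sold_on": date(2016, 12, 1), "units": 7},
]

def highchair_report(start, end, records=SALES):
    """Total highchair units sold between start and end, inclusive."""
    if not isinstance(start, date) or not isinstance(end, date):
        raise TypeError("start and end must be datetime.date values")
    return sum(
        r["units"]
        for r in records
        if r["product"] == "highchair" and start <= r["sold_on"] <= end
    )

# Calling the code directly: a failure here points at this function,
# not at the UI, the database, or a third-party layer in between.
print(highchair_report(date(2016, 11, 13), date.today()))  # -> 6
try:
    highchair_report("11/13/2016", date.today())  # wrong data type
except TypeError as err:
    print(f"isolated failure: {err}")
```

When the data-type error surfaces, the tester and developer are looking at it together, with the offending call right in front of them, instead of a ‘could not reproduce’ ticket weeks later.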
Being able to isolate changes is also a risk reduction strategy. One product I work on has a coupling problem: it is hard to predict how a change made in one part of the software will affect everything else. When we make large changes, it usually takes days of investigation and working through automated test failures to discover everything that went wrong. An architecture built on APIs creates a way to make very small changes and, hopefully, predict most of the places a change will cascade to. For example, imagine that a new highchair is added that has size and color options. Will the report break? Will the broken report cause other bad things to happen on the page that plots that data into a graph? This is easy to discover: make a GET call to the report endpoint, specify a date range that includes a product with this new data, and see if the data is returned correctly. Testers can focus on the exact change, without a lot of setup or effort.
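That cascade check can be sketched in a few lines. Here a hypothetical `get_report` function stands in for the GET call to the report endpoint, and the record carrying `size` and `color` fields represents the new highchair variant; all names are invented for illustration.

```python
from datetime import date

# Invented records: the second one is the new variant, carrying
# extra fields (size, color) that the older records lack.
SALES = [
    {"product": "highchair", "sold_on": date(2016, 11, 20), "units": 4},
    {"product": "highchair", "sold_on": date(2017, 2, 1), "units": 1,
     "size": "large", "color": "red"},
]

def get_report(start, end, records=SALES):
    """Stand-in for GET /reports/highchairs?start=...&end=..."""
    return [r for r in records if start <= r["sold_on"] <= end]

# Specify a date range that includes the product with the new data,
# then check that the report still returns it correctly.
rows = get_report(date(2016, 11, 13), date.today())
assert any("size" in r for r in rows), "new variant missing from report"
print(f"report returned {len(rows)} rows, including the new variant")
```

If the extra fields were going to break the report, or the graph that consumes it, this single focused call would reveal it before anyone opened a browser.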
A Brief Note on Careers
Technical testing approaches appear to be in higher demand than ever. Not because of a conference talk claiming that testing is dead, but because development strategies are evolving more and more often. Testing approaches must adapt to the shift in development, and one way to do that is by learning to use more tooling and testing closer to the code. There are lots of ways to be technical; code is only one of them. I don’t think the testing field is at risk of becoming smaller, but learning technical skills like testing an API will make you stand out.
Products built on an API offer a powerful option for software testers. Rather than being forced to wait until a day or two before a release to test real software, they can be productive as soon as a couple of lines of code are written. New bugs can be described more accurately. Making smaller changes means testers can focus on the exact change, rather than testing the change and then doing general ‘regression testing’. The API might be a good place to start for testers who are interested in developing their technical ability, or getting closer to the spirit of agile.