This is a guest posting by Justin Rohrman.
Performance testing always felt foreign to me. My first exposure was in the early 2000s, through the tool JMeter. I recorded a few scripts, ran them a couple of times against some product builds, and quickly realized I didn’t know what I was doing. (Luckily, we passed that work off to a professional performance person who definitely did know what she was doing.)
The main ideas behind performance testing haven’t changed much since then, but the practice is far more accessible now to anyone who knows basic statistics. More and more testers want to add some performance testing capabilities to their repertoires.
If you have a passing interest in performance testing but don’t necessarily want to use a dedicated tool, here are five good places to start.
Your Instinct
Your most basic tool for performance testing is your instinct.
A few months ago I was working on a page that stores labels for different uploads — PDF, audio, video, scanned files — in our product. The page was an editable data grid with two columns: one for the label name and the other for an external ID that was used by third-party software systems. I began testing with one row in the data grid to see how different types and lengths of data were handled. Pretty quickly I began to wonder what would happen when there was a lot more data.
With the help of a developer, I opened the Rails console and wrote a loop that called the function responsible for adding new labels 50 times. In a matter of seconds we had 50 rows of data on the page. The page was slightly less responsive, but it didn’t seem like a big deal. I added another 50 rows, and then a hundred, and things started to feel slower. I didn’t have any data — this wasn’t based on any sort of rigor — but it seemed slower to me. I talked with a developer, and they agreed that we should probably update the page to use paging or lazy loading to prevent it from loading too slowly when the data got too big.
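The loop itself can be very simple. Here is a minimal Ruby sketch of the idea; the Label model and its columns are hypothetical stand-ins, since in a real Rails console you would call your own model's create method:

```ruby
# Hypothetical stand-in for the real model; in the Rails console
# you would use the actual ActiveRecord class instead.
Label = Struct.new(:name, :external_id)

labels = []
1.upto(50) do |i|
  # In a real Rails console this line might look something like:
  #   Label.create!(name: "Label #{i}", external_id: "EXT-#{i}")
  labels << Label.new("Label #{i}", "EXT-#{i}")
end

puts "Seeded #{labels.size} rows"
```

Run the loop again with a bigger count (100, 500) and reload the page between runs to feel where responsiveness starts to degrade.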
There were no tools needed, no statistics necessary and certainly no spreadsheets — just my gut.
But what if you need some data?
Browser Tools
The first place I go when I want to substantiate my gut is browser tools — in Chrome, of course. (Yes, even if your product is predominantly used in Internet Explorer.)
Before you open the page in question, open the developer tools in Chrome and select the Network tab. You can do a typical baseline-versus-new-data comparison, as you would with simple performance testing, or you can inspect individual calls. Initially, you’ll see a chart displaying the length of time for each call, along with a list of every time your browser had to talk with the server. This is your HTTP traffic. You can filter that list based on the type of call if you need to.
You probably aren’t interested in every call on the page. In my previous example, I was interested in the GET to the endpoint that returned the list of labels. Select the call you are interested in learning more about and you’ll see a few tabs you can use to view more information. I normally look at Headers, Response and Timing. Response will show the exact data and the structure of that data returned by the server. Timing shows some detail about how long your call takes.
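When you want numbers you can sort and share, the Network tab can also export everything it captured as a HAR (HTTP Archive) file. Here is a minimal Ruby sketch that lists calls from such an export, slowest first; the inline JSON fragment is fabricated for illustration:

```ruby
require "json"

# Fabricated HAR fragment for illustration; a real one comes from
# the Network tab's "Save all as HAR" option.
har = JSON.parse(<<~HAR)
  { "log": { "entries": [
    { "request": { "method": "GET",  "url": "/labels"     }, "time": 420.5 },
    { "request": { "method": "GET",  "url": "/styles.css" }, "time": 12.3  },
    { "request": { "method": "POST", "url": "/labels"     }, "time": 98.7  }
  ] } }
HAR

# List every call, slowest first, so the expensive ones stand out.
slowest = har["log"]["entries"].sort_by { |e| -e["time"] }
slowest.each do |e|
  puts format("%-5s %-12s %7.1f ms",
              e.dig("request", "method"),
              e.dig("request", "url"),
              e["time"])
end
```

Comparing two such exports, one before a change and one after, turns a gut feeling into a simple baseline comparison.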
These statistics are useful when you are making a change that is supposed to directly affect product performance. I tend to leave browser tools open any time I am testing in a browser. There is too much information there to ignore.
Number of Calls
Looking at the number of calls a page makes can point you toward something worth exploring. Some people refer to this as the “chattiness” of a page.
Let’s say you are on the same label page we were exploring earlier. This page is in production now and has been seeing some heavy use, and the customer called to complain that the page has been slowing down.
I open the page in production in the customer environment, and the page does feel slow. Maybe there is too much data on the page, maybe the edit controls we are using are too memory-intensive, or maybe the server or database is starved for resources. There are lots of things that could be going wrong, and I want to start with the easiest and most effective way to learn about what is happening.
In this case, I’d either lean back on Chrome developer tools or open a tool that can display more information, like Charles Proxy. The first thing I look for is the number of calls. If you open a page that your customer has performance concerns about and think, “Huh, that is a lot more calls than I would have expected,” then you can probably start a conversation. More than once I have opened a page and noticed a call per row of data.
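One quick way to quantify chattiness is to export the traffic (both DevTools and Charles Proxy can save HTTP traffic as a HAR file) and count calls per endpoint. A minimal Ruby sketch, using a fabricated HAR fragment; collapsing numeric IDs makes a call-per-row pattern obvious:

```ruby
require "json"
require "uri"

# Fabricated HAR fragment; a real one comes from a DevTools
# or Charles Proxy export of the page's traffic.
har = JSON.parse(<<~HAR)
  { "log": { "entries": [
    { "request": { "url": "https://example.test/labels/1" } },
    { "request": { "url": "https://example.test/labels/2" } },
    { "request": { "url": "https://example.test/labels/3" } },
    { "request": { "url": "https://example.test/app.js"   } }
  ] } }
HAR

# Count calls per path, collapsing trailing numeric IDs so a
# call-per-row pattern groups together as a single endpoint.
counts = Hash.new(0)
har["log"]["entries"].each do |entry|
  path = URI(entry.dig("request", "url")).path.gsub(%r{/\d+\z}, "/:id")
  counts[path] += 1
end

counts.sort_by { |_, n| -n }.each { |path, n| puts "#{n}x #{path}" }
```

Seeing one endpoint hit once per row of data is exactly the kind of finding that starts a useful conversation with a developer.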
Browser Plugins
If you want a constant source of performance and user experience data without leaving your browser, there are a number of plugins you can use. Page Load Time displays how long every page you visit takes to load. BlazeMeter records HTTP traffic so you can import that set of calls into JMeter and run it again later as a performance test. YSlow is a classic that records HTTP traffic and offers information about why your page might be loading slowly.
Production Monitoring
Most companies that build software-as-a-service (SaaS) products run monitoring tools in their production environments. Simple usage of these tools usually involves capturing a handful of API calls or specific HTTP traffic. Each call is compared to a threshold configured in the tool — say, 30 milliseconds. A person gets paged every time a call takes longer than that threshold to complete.
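The core of such a threshold check is simple. Here is a minimal Ruby sketch of the idea; the field names and sample data are hypothetical, and the 30 ms figure follows the example above:

```ruby
# Threshold from the example in the text; real monitors make
# this configurable per endpoint.
THRESHOLD_MS = 30

# Return the calls whose duration exceeded the threshold.
def slow_calls(samples, threshold_ms)
  samples.select { |s| s[:duration_ms] > threshold_ms }
end

# Hypothetical captured timings.
samples = [
  { endpoint: "/labels", duration_ms: 12 },
  { endpoint: "/labels", duration_ms: 45 }, # this one would trigger a page
  { endpoint: "/login",  duration_ms: 8  }
]

slow_calls(samples, THRESHOLD_MS).each do |s|
  puts "ALERT: #{s[:endpoint]} took #{s[:duration_ms]} ms (threshold #{THRESHOLD_MS} ms)"
end
```

Real monitoring tools layer alert routing, deduplication, and escalation on top of this basic comparison.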
These notifications are calls to action telling the team something may be wrong and needs to be investigated. That investigation might cover server resources and configuration, database configuration, and product code and any surrounding configuration. More sophisticated tooling will run calls at some interval and against specific networks to simulate usage patterns based on where a person is located in the world and what time of day or season of the year it is.
Configuring and monitoring these tools is typically the realm of your local operations or DevOps person. But getting involved in these processes as a tester would be a prime education in performance, monitoring, architecture and modern testing practices.
So, You Want to Try Performance Testing?
The shift to production monitoring has made performance testing less and less the domain of specialists who have to know performance theory, tooling and lingo. If you want to break into performance testing, start by actually thinking about product performance. After you get a gut feeling, decide what sort of data you want to collect and where. Let your existing tester skills be your guide.
Justin Rohrman has been a professional software tester in various capacities since 2005. In his current role, Justin is a consulting software tester and writer working with Excelon Development. Outside of work, he serves as President of the Association for Software Testing’s Board of Directors, helping to facilitate and develop various projects.