I went to a peer workshop last fall, and one of the topics that came up was testing as a specialty. If you aren't familiar with the peer workshop format: a person quickly presents a topic, usually for 30 minutes at most, and then there is a facilitated question and answer session that lasts as long as there are questions. We spent the better part of an hour talking about whether mobile testing should be considered a specialty. In the end, we concluded that, while mobile testing rests on skills used in every other testing specialty, it also has its own set of skills and knowledge that don't overlap with other parts of testing.
So, what are the skills and bits of knowledge that make someone a mobile testing specialist?
The Skill Set
Most traditional web applications fail to take movement into account in any way. The laptop is expected to stay in the same place, have consistent access to power, and hold a steady wireless connection, and this will be true 99.9999% of the time. In the rare case when power or the internet fails, the user will blame the network, not the application. These exceptions are things the programmer, and likely the tester, can simply ignore. For mobile applications those risks are real, and the user is much more likely to blame the application. Here are some of the most common issues in mobile testing, and how they play out.
Connectivity: Mobile devices slip in and out of connectivity as a person moves between WiFi and 3G coverage zones. It's important to ask what happens when a person is moving in and out of connectivity while entering data, and what happens to user authentication. There are proxy tools that can simulate these conditions, but the best bet is moving around with the device so that you experience the same randomness a customer might. Testing data and session persistence, and how they relate to connectivity, is unique to mobile devices.
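One concrete way to think about data persistence under flaky connectivity is as an offline queue: writes made while the device is disconnected must be buffered and replayed once the network returns, not lost. The sketch below is hypothetical (the class and method names are invented for illustration), but it models the behavior a tester would probe while walking in and out of coverage:

```python
class OfflineQueue:
    """Buffers writes while the device is offline and flushes on reconnect.

    Hypothetical model of the persistence behavior a mobile tester probes:
    what happens to data entered while connectivity flaps?
    """

    def __init__(self, send):
        self.send = send      # callable that pushes one record to the server
        self.online = True
        self.pending = []     # records captured while offline

    def write(self, record):
        if self.online:
            self.send(record)
        else:
            self.pending.append(record)  # nothing is lost while offline

    def set_online(self, online):
        self.online = online
        if online:
            # replay buffered writes, in order, once connectivity returns
            while self.pending:
                self.send(self.pending.pop(0))


# Simulate a user entering data while walking out of WiFi coverage.
sent = []
q = OfflineQueue(sent.append)
q.write("a")           # online: delivered immediately
q.set_online(False)    # walk out of coverage
q.write("b")           # buffered
q.write("c")           # buffered
q.set_online(True)     # reconnect: buffered writes replayed in order
print(sent)            # ['a', 'b', 'c']
```

A tester exercising a real app asks the same questions this toy answers by construction: are offline writes kept, and do they replay in order?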
Power Constraints: All types of software use resources. Web browsers use some amount of memory and drain the battery to some degree. If I'm using a resource-intensive application on a laptop, I know I'll need to stay near power. Most mobile users don't, or can't, plan to be near power. There are two sides to power on mobile devices: does your software use resources at a reasonable rate (and if not, why?), and what happens to your software when power starts to dwindle? A few years ago Skyhook, a friend-location-finder application, launched at South by Southwest. Skyhook was feature complete, but it also continually monitored the location of friends, leading to intense power drains that destroyed the application's reputation.
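A power-friendly app typically adapts its behavior to the battery level rather than polling at a fixed rate. Here is a hypothetical sketch of that idea; the thresholds and intervals are invented for illustration, but it is the kind of policy whose absence sank the app in the anecdote above:

```python
def location_poll_interval(battery_pct, base_secs=30):
    """Return how often (in seconds) to poll friends' locations.

    Hypothetical power-saving policy: back off as the battery drains,
    so the app stays useful without draining the phone.
    """
    if battery_pct > 50:
        return base_secs       # plenty of power: poll normally
    if battery_pct > 20:
        return base_secs * 4   # getting low: poll less often
    return base_secs * 20      # critical: nearly dormant

print(location_poll_interval(80))   # 30
print(location_poll_interval(35))   # 120
print(location_poll_interval(10))   # 600
```

The testing question is then twofold: does the backoff actually happen, and does the app degrade gracefully rather than silently stop working at low power?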
Input: A touchpad and a full-sized keyboard are luxuries mobile software does without. That leads to new kinds of usability issues, not to mention worrying about whether the user can comfortably enter data on the pop-up keyboard. They'll also need to make selections in radio buttons, date pickers, and checkboxes. I like to ask, "Do I really need a keyboard here?" Typing on a mobile keyboard is no fun, and many fields that would normally be typed into should be a search field or a drop-down list on mobile devices.
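The "search field instead of typing" idea comes down to narrowing choices as the user taps, so a few characters replace a full typed entry. A minimal, hypothetical sketch of that filtering behavior:

```python
def filter_choices(choices, prefix):
    """Case-insensitive prefix filter: the behavior behind a search field
    that narrows options as the user taps, instead of forcing full typing."""
    p = prefix.lower()
    return [c for c in choices if c.lower().startswith(p)]

# Two taps get the user to a short list instead of typing a whole country name.
countries = ["Canada", "Chad", "Chile", "China", "Colombia"]
print(filter_choices(countries, "ch"))   # ['Chad', 'Chile', 'China']
```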
Gestures and Rotations: Mobile devices have functionality with no desktop equivalent, namely gestures and rotation. People might rotate the device to get a different view, but what happens when it is rotated repeatedly and randomly?
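The "rotate repeatedly and randomly" check can be phrased as a small property-style test: whatever the user has entered must survive any sequence of rotations. The model below is a toy, but it mirrors a real Android failure mode, where rotation recreates the screen and any state that was not explicitly saved is lost:

```python
import random

class FormScreen:
    """Toy model of a screen that must preserve user input across rotation.
    On real Android, rotation recreates the Activity; entered text survives
    only if it is explicitly saved and restored, which is what this probes."""

    def __init__(self):
        self.orientation = "portrait"
        self.text = ""
        self._saved = ""

    def type(self, s):
        self.text += s

    def rotate(self):
        # simulate save-state / recreate / restore-state
        self._saved = self.text
        self.orientation = (
            "landscape" if self.orientation == "portrait" else "portrait"
        )
        self.text = self._saved

random.seed(7)
screen = FormScreen()
screen.type("hello")
for _ in range(50):              # rotate repeatedly and randomly
    if random.random() < 0.5:
        screen.rotate()
assert screen.text == "hello"    # input must survive every rotation
print("input preserved across rotations:", screen.text)
```

Against a real app, the same loop would drive the device's rotation and re-read the field; the invariant being checked is identical.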
Technical Work and Tooling: Mobile testers have a whole host of technical approaches and tooling to learn that will help them test native, hybrid, or mobile web apps. The most popular relate to UI automation: tools like Appium, and platform-specific drivers such as ios-driver or AndroidDriver for controlling the mobile user interface. There are also platform-specific IDEs, emulators, simulators, HTTP traffic monitors, build orchestration and distribution tools, and various device clouds to consider. Chances are, all of these will fit into your mobile test approach at some point.
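As a taste of that tooling, an Appium session starts from a set of "capabilities" describing the device and app under test. The capability names below follow Appium's conventions, but every value is a placeholder, and the actual connection (which needs a running Appium server plus a device or emulator) is only indicated in a comment:

```python
# Capabilities for an Android and an iOS session. The "appium:" prefix marks
# vendor capabilities in Appium 2. Paths and device names are placeholders.
android_caps = {
    "platformName": "Android",
    "appium:automationName": "UiAutomator2",   # Android UI driver
    "appium:deviceName": "Pixel_6_Emulator",
    "appium:app": "/path/to/app-under-test.apk",
}

ios_caps = {
    "platformName": "iOS",
    "appium:automationName": "XCUITest",       # iOS UI driver
    "appium:deviceName": "iPhone 14 Simulator",
    "appium:app": "/path/to/AppUnderTest.app",
}

# With the Appium Python client you would then connect with something like
#   driver = webdriver.Remote("http://127.0.0.1:4723", options=...)
# which requires a live Appium server and device, so it is omitted here.
print(android_caps["platformName"], ios_caps["platformName"])
```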
Heat and Bandwidth: Similar to power constraints, mobile applications that make a phone hot or consume a lot of bandwidth can cause users to abandon the application and even review it poorly. Constant bandwidth use on a cellular connection can lead to network charges that don't show up for a month or more. Excessive cellular use, just like power or heat problems, can destroy an app's reputation.
Each of these is something to remember for your test strategy, or the beginnings of a test technique. The skill comes in knowing when, and how much of each of these behaviors should be included in your mobile testing and why.
Each mobile operating system (iOS, Android, Fire OS, Windows Mobile, and a handful of others) behaves and looks different from the others.
A few years ago I was working on a mobile app being developed for iOS and Android in tandem. One day we would get a new build for iOS, and the next day we'd get a new build for Android with matching features. When I was testing software in browsers, I would normally do in-depth testing in the most important browser (Internet Explorer at the time) to get a feel for how the software was supposed to behave. After that, I would use the behavior in IE as a point of reference: if something acted differently in Chrome, I could refer back to IE and call the Chrome behavior a bug, because IE was the standard.
This is very difficult to do between mobile platforms. Each mobile operating system has its own distinct style and user experience. Large stylistic or usability differences might be perfectly fine, and only someone familiar with how these platforms work will be able to make that judgment.
There is also the matter of how your software will interact with the platform as a whole. A few weeks ago I was talking with a development manager at a company that makes IoT devices. Their latest product was a mobile app that gave the user control over all of their devices (smart thermostats and light switches, for example) from one interface. Access from phone to device happened over either WiFi or a Bluetooth connection. The team was doing well with iOS, but the manager referred to Bluetooth on Android as the "Wild West": sometimes it worked, sometimes it didn't, and they were having a hard time figuring out why.
Knowing where a platform excels, and where it falls short, can shape your test strategy in significant ways. But only people with deep platform knowledge, or the ability to root out these sorts of problems, will find these hidden traps.
Modern software development philosophy says that testers and developers should work together as closely as possible, and with good reason: it helps with information and culture sharing, saves time on what is traditionally a find, fix, build, retest cycle, and lets each group guide what the others do. In my experience, mobile products tend to drift back toward separation and phasing. Developers write some code using something like Xcode and maybe do some cursory testing in an emulator. After that, they spin up a new test build, make it accessible via a distribution tool like HockeyApp, and the testers go forth and do their testerly duties. This creates the same efficiency problems it always has.
Skilled mobile specialists can be effective closer in time to when the code is being written, which can lead to fewer testers finding more important defects earlier. These testers might start by investigating the API this new mobile feature relies on. Then they could use emulation tools against the feature branch to find problems early before a build is made available through distribution tools. Once a build is available, they can kick off an automated test suite against different hardware sets and operating systems using device clouds. By the time they need to use a real device, most of the low hanging fruit bugs have been found, and many device and platform combinations have been covered. The remaining work on real devices is for behavior you can only experience on a real device, like walking around, gestures, and rotation.
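The fan-out step above amounts to running one suite over a matrix of hardware and OS combinations. This is a hypothetical sketch (device and OS names are illustrative; a real run would hand each pair to a device cloud as a separate job):

```python
# Hypothetical device/OS matrix a suite might fan out across on a device
# cloud before any real-device testing begins. Names are illustrative.
os_versions = {
    "Pixel 6":    ["12", "13"],
    "Galaxy S22": ["12", "13"],
    "iPhone 13":  ["15", "16"],
    "iPhone SE":  ["15", "16"],
}

matrix = [(device, v) for device, versions in os_versions.items()
          for v in versions]

for device, version in matrix:
    # each pair would become one cloud job running the same automated suite
    print(f"run suite on {device} / OS {version}")

print(len(matrix), "combinations")   # 8 combinations
```

By the time this matrix has run, only the behaviors that need a physical device (movement, gestures, rotation) are left for hands-on testing, which is exactly the division of labor described above.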
Chances are this skill set doesn’t live in one person, and if it does they will be very expensive. Having the mobile tester work closely with the developer makes it easier to get past those hurdles.
Marc Andreessen coined the phrase "software is eating the world" in 2011. That is still true today, and might be more accurately stated as "mobile software is eating the world." The stakes of each mobile release are higher because the release cadence is slower and less predictable. Identifying these skills and approaches, and where they fit on your team, can mean the difference between a successful release and an uninstall.
What skills do your mobile testers have? What skills does your team need? It’s time to write them down, and figure out how to address the gaps.
This is a guest posting by Justin Rohrman. Justin has been a professional software tester in various capacities since 2005. In his current role, Justin is a consulting software tester and writer working with Excelon Development. Outside of work, he is currently serving on the Association For Software Testing Board of Directors as President helping to facilitate and develop various projects.