Tester’s Diary: New Adventures in Accessibility


This is a guest post by Rachel Kibler.

I moved teams a few months ago, in part to start up an accessibility campaign for our website. When I started poking around with accessibility (haphazardly, I might add), I found issues and tried to get people to talk about them. No one wanted to.

Jenna Charlton was a great sounding board, coaching me in how to test for accessibility as well as how to build a community of practice around it. She recommended finding allies — people who were really interested in accessibility — and getting them more pumped up about it and willing to talk about it.

Those allies have been few and far between. I didn’t realize how hard this would be. I continued on with my testing for accessibility from time to time, but when I reached out to other testers, I was told that a tester would be coming on to focus on accessibility “soon.” That person didn’t seem to materialize.

A little while after that, the directive came down that each team would be responsible for its own accessibility. Our teams are mostly divided up by customer segments, so the website pages have some overlap. None of us has any formal training in accessibility, and I think I'm still the only one who has tried testing for it.


A few weeks later, my product owner asked me via chat to “start doing accessibility testing.” I had already been playing around with the WAVE tool from WebAIM for a few weeks; it flags contrast issues, heading-hierarchy issues and missing alt text.
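Tools like WAVE do this kind of scanning for you, but the idea behind one of those checks — finding images with no alt text — is simple enough to sketch in a few lines of Python. This is a toy illustration of the concept, not how WAVE itself works, and the sample HTML is made up:

```python
from html.parser import HTMLParser

class AltTextChecker(HTMLParser):
    """Collect the src of every <img> tag that has no alt attribute."""
    def __init__(self):
        super().__init__()
        self.missing_alt = []

    def handle_starttag(self, tag, attrs):
        if tag == "img":
            attr_map = dict(attrs)
            if "alt" not in attr_map:
                self.missing_alt.append(attr_map.get("src", "<no src>"))

# Hypothetical page fragment: one image with alt text, one without
page = """
<img src="logo.png" alt="Company logo">
<img src="hero.jpg">
"""

checker = AltTextChecker()
checker.feed(page)
print(checker.missing_alt)  # → ['hero.jpg']
```

A real audit has to go further — an empty `alt=""` is correct for decorative images but wrong for meaningful ones, which is a judgment call no scanner can fully make. That gap between what tools catch and what humans must evaluate is a recurring theme in accessibility testing.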

I asked how far he wanted me to take this, and the product owner paused. In response, I started sending him things that were wrong with our site. After the sixth or seventh, I told him I could keep going, and he was going to need to tell me when to stop.

He narrowed the scope to accessibility testing of “new development” only. That was an unsatisfactory answer to me, and I asked if I could slowly file accessibility bugs for older parts of the site too. He agreed to that, and he told me that our design team is already coming up with options for our contrast issues.

The next step will be learning accessibility testing together through mob testing (also called ensemble testing). I’ve learned that there are a variety of tools to use, the most prominent being the WebAIM tools, and their site has a bunch of great resources about how to use screen readers and such.

My next personal steps, in addition to learning the tools better, will be to go into our site and do more things without using the mouse or looking at the screen. I’m apprehensive about this. I’ve come to realize just how much I rely on all my faculties to get through flows, and I am nervous about how unyielding our site might be.

Resources that have helped me learn include the Ministry of Testing website, with all the prior TestBash talks; Jenna Charlton; talks about accessibility from Instructure, which makes Canvas and puts a heavy emphasis on accessibility; and Twitter users talking about #a11y (the first and last letters of “accessibility”, with “11” for the eleven letters between them).

As the CEO of my company, 1-800 Contacts, says: “Every person who comes to our site has a level of visual impairment.” This is a powerful incentive to make sure our site works with blurry vision, but I can’t forget about people with motor impairment, hearing impairment, developmental issues or many other types of disabilities.

I keep coming back to the mantra that when we build better software for accessibility, we build better software for everyone.


Rachel Kibler is a tester at 1-800 Contacts. She can be found on Twitter @racheljoi or on her website at racheljoi.com.
