Screen readers are not testing tools
Testing with assistive technologies is an important part of any accessibility review. However, especially when auditing against the Web Content Accessibility Guidelines (WCAG), they should not be the primary tools to use for testing. Here’s why:
Screen readers are specialized software made for people who need them. That means they have incredible features to make computer use easier, especially when users cannot see the screen. They are also not only for the web: they are used across the rest of the operating system.
This complexity means that they are not as straightforward to use as people make them out to be. “Just test with a screen reader” can have unintended consequences. Many testers do not know that screen reader users rarely navigate websites with the Tab key; instead, they use what I call “arrow key navigation”. In this mode, a virtual cursor traverses the page, including non-interactive elements, and reads whatever that cursor selects.
Deque has a good overview of screen reader shortcuts (and gestures for screen readers on touch screens). To read the next section, character, line, or sentence, JAWS uses the arrow keys with modifiers; Narrator uses Caps Lock plus arrow keys; NVDA uses Insert plus arrow keys; and VoiceOver uses VO (by default the combination of Control and Option) plus arrow keys. But this is just linear traversal. Screen readers also have dedicated keys for jumping to headings and landmarks, getting a list of links, and interacting with more complex UI elements.
Screen reader testing pitfalls
I often see testers criticize buttons made of <div> elements that have no keyboard event handler, like keyup, as inaccessible to screen reader users. But that is not the case. They are inaccessible to people using the keyboard without a screen reader, for example when using switch or sip-and-puff devices. Screen readers, however, can simulate mouse clicks, and some do so by default. Focus a <div> with a tabindex="0" attribute, press VO+Space, and it will be clicked.
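To make the difference concrete, here is a minimal sketch of the pattern described above (the `save()` handler is a hypothetical placeholder):

```html
<!-- A screen reader's virtual cursor can find this element and
     simulate a click, so it often "works" in screen reader testing.
     The people actually locked out are sighted keyboard users, who
     can focus it (thanks to tabindex) but cannot activate it with
     Enter or Space, because no keyboard handler exists. -->
<div tabindex="0" onclick="save()">Save</div>

<!-- A native button gets the button role, focusability, and
     keyboard activation for free. -->
<button onclick="save()">Save</button>
```

The point is that testing the first element only with a screen reader can hide a real failure that affects other keyboard users.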
Even elements without roles, tabindex, or keyboard event handlers can be found and activated by screen readers. The experience would be bad because it is impossible to know that you could interact with something or what happens when you do. Screen readers are built to work with badly coded interfaces if needed.
That means some faults might not be easy to spot while a screen reader is running. Another example of this is the focus indicator. Most websites have bad focus indicators, so for low-vision users, screen readers provide an always-on focus indicator that also highlights the non-interactive sections of the page as they are read.
Another thing I often see is that testers who do not use screen readers all the time tend to expect more explanation from the screen reader than regular users do. Two recent examples I came across: a date picker with two <select> elements whose accessible names were “Select month” and “Select year”, and a list of conversations where, to select a conversation, multiple buttons started with “Show conversation with (name)”. The role of the <select> element already conveys that you can select something, and in a list of conversations, the information that you are selecting a conversation to show is redundant.
While neither is a WCAG failure (both still describe what the button or select does), redundant information can be a barrier and slow users down significantly. My general advice is to not add details beyond what is visible on screen to accessible names, unless the control would be difficult to understand otherwise. Usually, if there is visual context for a button, that context is also available to screen reader users (or at least should be, when following WCAG 1.3.1 Info and Relationships). The conversations might be in a labeled section or follow a “recent conversations” heading, so the repetition is not needed.
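A minimal sketch of the date picker example above, assuming the selects are labeled with ordinary `<label>` elements:

```html
<!-- Redundant: the select role already tells screen reader users
     that this is a control for selecting something, so "Select" in
     the accessible name is announced twice in effect. -->
<label for="month">Select month</label>
<select id="month"><option>January</option></select>

<!-- Concise: the visible label plus the role carry the same
     information with less verbosity. -->
<label for="month2">Month</label>
<select id="month2"><option>January</option></select>
```

The same reasoning applies to the conversation list: a “Recent conversations” heading or labeled region provides the context once, instead of repeating it in every button name.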
What to look for before testing with a screen reader
Many failures that you would find when using a screen reader can be found more conveniently with dedicated testing tools. A11y-tools.com has excellent bookmarklets that can, for example, show you the focusable elements on a page. You can find all links and buttons and show their accessible names, even comparing them with the visual labels. That is much more convenient than navigating around with a screen reader, trying to locate issues.
Polypane has similar tools, and it is what I use for testing. Using Polypane Peek to quickly check the accessible name of a button or link is extremely convenient. If the accessible name is wrong, I can instantly check in the elements tree or accessibility tree where it comes from and formulate a solution for clients with this information.
Knowing that something is broken is only a tiny part of testing. Actionable audits identify the root cause or misunderstanding behind an error and provide information on how to resolve the issue and avoid repeating it. Without looking at the code, it is impossible to know why the output is not as expected: Is it an error on the page, a setting of the screen reader, or a quirk of the screen reader/browser combination? And if you need to look at the code to determine that anyway, why not look at the code directly?
Screen readers show the symptoms of bad code, but not the actual problems. They are an indirect way to test.
When to use screen readers
There are some interactions that are better tested with screen readers at the moment. Tooling around ARIA live regions, for example, is abysmal. Using a screen reader to check them is the easiest option. The same goes for complex interactive patterns. Even if the roles and accessible names are all present and working in theory, verifying that the interaction makes sense when you do not have the visual reference is important.
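As an illustration of the live region case mentioned above, here is a minimal sketch (the status message is a hypothetical example):

```html
<!-- A polite live region: screen readers announce text inserted
     into it without moving focus. Whether the announcement actually
     fires, and how it is phrased, varies by screen reader and
     browser combination, which is why checking with a real screen
     reader remains the easiest way to verify it. -->
<div aria-live="polite" id="status"></div>

<script>
  // Simulate a status update, e.g. after a form saves.
  document.getElementById('status').textContent = 'Changes saved.';
</script>
```

Automated tools can confirm the attribute is present, but only a screen reader reveals whether the announcement is actually made and whether it is understandable without the visual context.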
This is also the time for usability testing with screen reader users. Find all the low-level accessibility issues, such as wrong accessible names and missing or misleading roles, before you conduct usability testing. There is little more embarrassing than finding simple mislabels when a user tests the site, because it wastes valuable time that could be spent on more detailed interactions.
Conclusion
Testing with screen readers requires significant insight into how they work and how people use them in practice. For testers who are not screen reader users themselves, this insight is difficult to come by, and it takes a lot of experience to identify the source of a perceived issue.
Dedicated testing tools will get you to your result more quickly and with more insight. Verifying the result with a screen reader afterwards is still good practice.