This is the third in a series of posts reviewing the advice in Think like a UX Researcher, by David Travis and Philip Hodgson, from the perspective of involving people with disabilities in UX research (see Part 1 and Part 2). In this post, we’ll look at their advice on usability testing and the ways that you can avoid making common mistakes when running evaluations with people with disabilities.
In a recent survey of UX professionals, usability testing was the most widely used UX-research method. In a nutshell, you observe participants attempting representative tasks using a digital resource, identify difficulties the participants encounter, and use this knowledge to identify ways to improve the design of the product.
When you involve people with disabilities as participants, usability evaluations can provide user experience insights that you might not uncover from an accessibility audit.
Five common mistakes and how to avoid them
Like any research method, the validity of a usability evaluation's results depends on the care with which it is carried out. In Think like a UX Researcher, Travis and Hodgson describe five common mistakes in running usability evaluations that can affect the reliability of your results, and they provide helpful guidance to ensure you don't repeat them. These mistakes can take specific forms when working with participants with vision, hearing, cognitive, and mobility disabilities, as can the steps you take to avoid them.
1: Talking too much
Usability evaluations should focus on observing participants, which means giving participants the peace to attempt tasks without intervention. This can go against a natural instinct to want to help when participants struggle. Silence and inaction are difficult to endure.
With participants with disabilities, there is a high chance that they will encounter barriers caused, at least in part, by accessibility shortcomings. Limiting when you intervene lets you focus on understanding their behavior. And when you're working with participants who use screen readers, keeping quiet allows them—and you—to hear the screen reader's speech.
2: Explaining the design
You might be tempted during a usability study to describe the design, either up front or in response to a participant's question. Resist the urge! Explaining a design makes a usability evaluation a less realistic representation of users' behavior, since participants will adjust their behavior based on the insights your explanation gives them.
The risk that you could affect the behavior of participants with disabilities is particularly high for people who are blind or have low vision if you explain the layout of a page, or the location or behavior of a control before or while they interact with it. These descriptions give assistance that wouldn’t be available to the participant if they were interacting with the resource on their own. Save your design explanations for the end of the study, when the participant is reflecting on their experience.
3: Answering questions
When participants get stuck trying to complete a task, it's natural for them to ask for help. As a moderator, you may be tempted to oblige, but when you do, you lose an opportunity to learn more about users' behavior when things go wrong.
When an accessibility-related issue seems to be behind a problem a participant is asking about, use your judgment in deciding how to respond. You might encourage them to find a way to continue by answering their question with a question of your own: if they ask for help, ask them in turn what they would do next in a situation like this. Or you might decide that the issue is such a significant blocker that there's nothing more to learn from watching the participant attempt, and fail, to get around it.
4: Interviewing rather than testing
Usability testing is about watching participants interact with a system. Any questions that you have about their experience should wait until after they have interacted with the system.
For evaluations with people with disabilities, you or other project stakeholders may have lots of questions about how participants use technology and how they deal with specific challenges. This is especially so for observers who may never have seen someone with a disability use assistive technology to interact with a website, app, or other digital product. Curiosity is great, but don't let it get in the way of the evaluation. Save any questions about their wider technology experience for the end of the session.
5: Soliciting opinions and preferences
Since the goal of a usability evaluation is to observe behaviors and figure out which parts of the user interface prevent users from completing tasks, you should also be careful about asking participants for their opinions and preferences. In our experience, participants with disabilities are generally extremely keen to share their experiences and views about accessibility in general, which means that usability studies are an opportunity for you to broaden your understanding of what makes an accessible user experience. But to make informed decisions about how best to enhance a product's design, you do need to be cautious of comments that aren't related to the product being evaluated.
That said, a usability evaluation involving people with disabilities can be a valuable opportunity to gather some data on preferences about particular attributes of the design, such as the presentation of information or the way specific interactions are supported. For example, suppose that during early prototyping you have concerns about candidate color combinations being considered for a visual design. A short usability study with people with a range of vision impairments that affect color perception can help. Gather data by observing participants as they interact with the different color designs, so that you can compare the designs objectively based on observed behavior. Add to this by asking participants for their feedback on the legibility of each color scheme.
One last observation
A common thread running through these five mistakes is too much talking by the moderator. But when you work with participants who communicate non-verbally, for example through a sign language or an augmentative and alternative communication (AAC) device, make sure you are deliberate in providing opportunities for them to give feedback.
Usability evaluations and other user research activities with people with disabilities are a great way to learn from an audience that has historically been neglected in user research and is likely to be especially keen to provide input. The next post in this series will consider what Think like a UX Researcher has to say about how to communicate the results of user research.