In the previous post in this series, we highlighted the importance of evaluation in the user-experience design process. This post explores the practicalities of usability testing—one of the most valuable evaluation methods—and demonstrates its value in creating accessible digital products.
What is usability?
Usability is a term that’s closely associated with user experience—so much so that they are sometimes used interchangeably. It’s helpful to think of usability as a key contributing factor within the larger context of user experience.
Usability relates to the ease, efficiency, and satisfaction with which a user can complete a specific task using a product. Tasks can be as diverse as reading a newspaper article, finding a store’s closing time, making a reservation at a restaurant, making a stock trade, locating reliable online health information for advice on a medical condition, or comparing product reviews for refrigerators.
There is a range of usability evaluation methods, from expert inspection of an interface to using cognitive modeling software to simulate human behavior. This article focuses on usability evaluation with people, or “users,” as an especially effective way to discover potential issues with a product from the perspective of the people we expect will be using it. (The term “user testing” is sometimes used to describe usability testing—but it’s a misleading term because we’re testing the product, not the user.)
Usability testing involves asking test participants to carry out one or more tasks using the product, and observing and recording what they do in response. We use this data to identify the nature of any barriers preventing task completion, and to help us figure out how we might adjust the design to reduce or remove these barriers.
We can collect additional data from a participant by asking them questions about what they did after they attempted a task. In some cases, we might ask them to tell us what they’re doing while they’re attempting the task (a technique known as “think-aloud protocol”). We might also gather relevant information about the user to help us place our findings in context: For example, it might be helpful to understand their prior experience or expertise with the product that we’re evaluating or with similar products.
It’s important to stress that usability testing’s primary focus is observing people meaningfully using the product, ideally in a context that’s natural and familiar to them. Watching someone interact meaningfully with a product yields far more valuable insights than simply showing them the product and asking what they think of it.
When we design with users in mind, we should evaluate our designs with users at the earliest possible opportunity. This helps reduce the effort and cost of adjustments that we may need to make later in response to what we learn from the evaluation. It reduces the risk that we discover a user-experience problem so late in the life cycle that it will be too difficult or costly to fix. This goes for accessibility too. Early evaluation means that we can identify and resolve accessibility issues before they become entrenched in the design.
Practicalities of usability testing
There are a few areas to consider when planning and conducting a usability test. Underpinning all of these areas are available resources—namely, time, expertise, and budget—which inevitably influence our decisions.
Purpose of usability testing
Firstly, what’s our goal for running the test? In a UX design process, we might want to conduct a usability test of a component that has been built in a development phase. In this case, our focus would be on learning how we can optimize the design of that specific component. Or we might be evaluating the usability of a completed product that we plan to overhaul or replace, in which case we would want to have a much broader focus on issues across the product.
We need to carefully define tasks that have some real-world validity that will prompt test participants to meaningfully interact with the product. These tasks should encourage participants to use areas of the product that we’re most interested in evaluating. We might include some very specific tasks each with one clear method of completion. We might also want to set vaguer tasks with multiple potential completion routes. Additionally, we may employ stepped tasks, if we’re interested in features’ discoverability as well as their usability.
We need to think carefully about who we involve in the evaluation. Ideally, participants should be representatives of our target audience, covering a representative range of experience and skills. The number of participants we involve will depend on budget, time, and what we want to find out. But recruitment can take time, so if we need results quickly we might decide to recruit co-workers as participants. If we do, we need to watch carefully for behavior that is influenced by their increased familiarity with the product, and avoid making design decisions that are unduly biased towards users with extreme product familiarity.
Location and method
Ideally, usability testing takes place face to face. For all but the most formal and advanced studies, this needn’t require an expensive lab. Look for a quiet location where a test participant can interact with the product while a moderator can observe and take notes.
However, in many situations—especially in the current period, when the coronavirus pandemic has restricted travel and required social distancing—face-to-face testing won’t be possible. Thankfully, technology lets us conduct remote testing: we can use video conferencing software to connect with participants, then observe and record their experience as they use the product on their own device. Remote methods also allow us to involve people from a wider geographic range than would be practical for face-to-face testing.
When time and resources are especially limited, we may also want to consider unmoderated testing. In this case, we send participants a series of tasks and ask them to go through them on their own, recording their results to send back to us. With this approach, we lose the opportunity to observe and ask questions in real time. But it does let us quickly gather data from multiple people in multiple locations without having to commit time to observe each participant.
When we involve people in research activities and capture data associated with them, we have an ethical obligation to ensure that we treat our participants and their data with respect and care. A usability test might not be on the same scale as a federally funded research project. But it’s still worth following a recognized code of ethics for human-centered research, such as the principles presented in the Belmont Report.
Including people with disabilities in usability testing
Usability evaluation provides a valuable opportunity to get first-hand perspectives of people with disabilities as representatives of a product’s target audience. That’s why it would be misguided to consider evaluation with people with disabilities as a separate, accessibility-focused effort conducted alongside “conventional” usability testing. Instead, make a conscious effort to include people with disabilities whenever you’re gathering perspectives from prospective users.
We need to ensure that our testing method is inclusive when involving people with disabilities in usability testing:
- Identify known accessibility issues with the product being tested and make efforts to resolve them before the usability test. Alternatively, define tasks so that test participants won’t encounter known barriers.
- Make sure that test materials are accessible to participants with disabilities—including the participation consent forms and methods for participants to record data.
- For in-person studies, make sure the test location is accessible to participants. For remote studies, make sure that the technology used to run the session and capture data is accessible.
- Make sure that the data recorded includes any additional relevant information that helps put findings in context, like the type of assistive technology that a participant uses and their level of experience in using it.
A usability evaluation’s findings are so valuable because they are the observed experience of people using a product for its intended purpose. Fixing the barriers people encounter helps you create a more usable, effective product.
An easy way to identify screen reader barriers on your website is to use TPGi’s free tool, JAWS Connect. JAWS Connect lets you effortlessly collect feedback from JAWS screen reader users about your website’s accessibility. Users can anonymously report barriers or trouble spots they encounter, giving you the power of crowdsourcing without any heavy lifting on your part.
In the final post in this series, we’ll reflect on the user experience process and its role in an organization’s digital accessibility strategy. We’ll also consider some of the other parts of an accessibility strategy that need to be in place to produce great accessible digital experiences while minimizing the risks associated with inaccessibility.
This article is one of a series of introductory articles explaining the importance of user experience (UX) to digital accessibility strategy and practice. Read all posts in the series:
- UX Series 1: Universal Design and Digital Accessibility
- UX Series 2: User Experience and Digital Accessibility
- UX Series 3: Digital Accessibility and the UX Design Process
- UX Series 4: Digital Accessibility and the UX Testing Process
- UX Series 6: Connecting UX with Digital Accessibility Strategy
For more in-depth information, read our Inclusion Blog’s UX articles. To learn more about how we can help you integrate UX best practices into your digital accessibility strategy, view our UX services or contact us.
David Sloan is User Experience Research Lead with TPGi. He joined TPGi in May 2013, after nearly 14 years researching, teaching and providing consultancy on accessibility and inclusive design at the University of Dundee in Scotland. He is an active participant in a number of W3C accessibility-focused groups, and is an Advisory Committee member of the annual W4A Cross-Disciplinary Conference on Web Accessibility.