Best Practices for Structuring Accessibility Testing: Part 2

In part one, we covered the building blocks to prepare for accessibility testing.

Teams must define a tangible, achievable level of effort and commit time to testing. The next step is for teams to build a repository of shared accessibility knowledge.

ARC’s KnowledgeBase provides the foundational resources to prepare for accessibility testing, including:

  • Role-Based Training: The accessibility considerations for each role in delivering an online experience – for developers, designers, digital marketers, copywriters, and product owners.
  • Testing Methodologies: Documented step-by-step manual testing procedures for each WCAG Success Criterion, including those in WCAG 2.2.
  • Design Patterns: A guide to creating common design patterns and website elements in an accessible manner with functional techniques and expected behaviors.

In Part Two, we cover best practices in performing accessibility testing. Being deliberate in how accessibility testing is structured is critical to embedding it in an organization’s processes.

When working with clients, we find that one of the most common challenges teams face when trying to get buy-in on accessibility testing isn’t a lack of will or skill but a lack of clear steps.

With clear steps to prepare for and perform accessibility testing, even teams with less accessibility expertise can perform full-scale, full-coverage Web Content Accessibility Guidelines (WCAG) audits.

WCAG 2.1 Level AA is the prevailing standard for online accessibility – it’s made up of 50 guidelines, or in WCAG terms, Success Criteria.

Each Success Criterion represents a check with its own testing procedure; that’s 50 checks to apply to every page in your application. Each check takes time and a specific skillset.

Being deliberate in how an audit is organized is fundamental to delegating and performing the testing efficiently and accurately.

Manual or Automated Testing: Which Comes First?

Accessibility testing includes both manual and automated testing.

Manual testing is the only way to cover all WCAG Success Criteria. Therefore, the best practice is to perform manual testing first to become aware of all issues, then apply automated testing to track progress on some of them.

However, manual testing requires a higher level of effort. Automated testing is the fastest way to start getting test results and understanding the current state of accessibility.

Perform Manual Testing

1a. Plan Out Your Testing

It’s not feasible for testing to cover every page of a site. Instead, testing should be based on a representative sample of page content.

A representative sample must:

  • Include the template components, such as the navigation bar and footer, and other recurring components on the site.
  • Be diverse enough to capture the range of accessibility issues that could occur on the site – for example, in different sections of the site.
  • Cover the site’s highest-visibility, highest-traffic user journeys. Be sure to test the content that is most interacted with.
  • Consider the current accessibility of the site. If parts of the site are thought to have critical accessibility issues, these parts should be selected to ensure that these issues are documented and worked on first.

To create an accessibility test plan, divide your website into a set of samples. For each sample (see the sketch after this list):

  • Take a screenshot of the sample.
  • Note the steps to reach the sample – what actions must be undertaken, e.g. entering specific form values to trigger the component to appear.
  • Give the sample a name and a numerical ID to aid communication between people in multiple departments.
  • Note the kind of user interface (UI) element, e.g. a tab panel or a drag-and-drop interface.
    • Include references to ARC’s KnowledgeBase for expected behaviors and accessibility requirements for the kind of UI element. This helps your team understand the accessibility expectations.
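
As an illustration, each sample can be captured as a small structured record so the plan is easy to share and divide among testers. The sketch below is hypothetical – the field names are ours, not an ARC Capture format – but it covers the details listed above.

```typescript
// Hypothetical record for one representative sample in a test plan.
// Field names are illustrative; ARC Capture has its own format.
interface TestPlanSample {
  id: number;                  // numerical ID for cross-department communication
  name: string;                // human-readable sample name
  screenshotPath: string;      // screenshot of the sample
  stepsToReach: string[];      // actions needed to make the sample appear
  uiElementKind: string;       // e.g. "tab panel", "drag-and-drop interface"
  knowledgeBaseRefs: string[]; // expected behaviors for this kind of UI element
}

const filterPanel: TestPlanSample = {
  id: 4,
  name: "Search results – filter panel",
  screenshotPath: "screenshots/sample-04.png",
  stepsToReach: [
    "Go to the search page",
    "Enter a search term and submit",
    "Open the 'Filters' disclosure",
  ],
  uiElementKind: "disclosure (show/hide)",
  knowledgeBaseRefs: ["KnowledgeBase: disclosure pattern"],
};
```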

With a test plan and its collection of samples, coordinating testing becomes seamless. The work can be delegated and easily divided among multiple testers.

When testing is handled in ARC Capture, it is scoped to the representative samples identified in the test plan. ARC Capture provides step-by-step instructions for each WCAG Success Criterion and kind of user interface element. This makes the testing repeatable – there’s consensus about the results. Plus, everyone on the team can view and track results in real time as testing progresses.

Pro-Tip: Make each representative sample require around the same level of effort to test. This makes it easier to delegate work equally and to track the work of individual testers. When testing is complete for 3 of 10 samples, a manager can clearly state that testing is 30% complete.

1b. Group Related Accessibility Guidelines

There are 50 accessibility guidelines (for WCAG 2.1 Level AA). That’s 50 checks for each sample.

It saves time to group related accessibility guidelines. Grouping also means testing can be delegated to people with varying skillsets – each tester needs only the skillset for their group of guidelines. Moreover, grouping makes testing easier, with less jumping back and forth between different testing tools or assistive technologies.

Recommended grouping of accessibility guidelines (see the sketch after this list):

  • Keyboard and Focus.
  • Color Contrast and Use of Color.
  • Images and Graphical Elements.
  • Page Structure and Navigation.
  • Interactive Elements and ARIA.
  • Responsive Design and Text Formatting.
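
As a sketch, this grouping can be written down as a simple map from each group to its success criteria, so each group can be handed to a tester with the matching skillset. The criteria shown are representative examples from WCAG 2.1, not a complete mapping.

```typescript
// Illustrative mapping of guideline groups to example WCAG 2.1 success
// criteria (representative, not exhaustive).
const guidelineGroups: Record<string, string[]> = {
  "Keyboard and Focus": [
    "2.1.1 Keyboard", "2.1.2 No Keyboard Trap", "2.4.3 Focus Order", "2.4.7 Focus Visible",
  ],
  "Color Contrast and Use of Color": [
    "1.4.1 Use of Color", "1.4.3 Contrast (Minimum)", "1.4.11 Non-text Contrast",
  ],
  "Images and Graphical Elements": [
    "1.1.1 Non-text Content", "1.4.5 Images of Text",
  ],
  "Page Structure and Navigation": [
    "1.3.1 Info and Relationships", "2.4.1 Bypass Blocks", "2.4.6 Headings and Labels",
  ],
  "Interactive Elements and ARIA": [
    "2.5.3 Label in Name", "4.1.2 Name, Role, Value",
  ],
  "Responsive Design and Text Formatting": [
    "1.4.4 Resize Text", "1.4.10 Reflow", "1.4.12 Text Spacing",
  ],
};
```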

TPGi’s ARC Capture provides guided manual testing focused on empowering testers, even those with limited accessibility experience, to reliably carry out full-scale, full-coverage WCAG audits.

To streamline testing, ARC Capture divides the 50 WCAG Success Criteria by topic. It goes a step further and breaks each WCAG Success Criterion into a set of smaller accessibility tests, which makes the tests easier to follow and helps ensure that multiple testers will reach the same conclusions when applying them.

1c. Select Browser and Assistive Technology Combinations

The behavior of assistive technologies is the ultimate test for a user interface.

The industry standard is to test with JAWS/Chrome. It’s the most popular combination of desktop screen reader and web browser, according to the (non-scientific) WebAIM Screen Reader Survey. NVDA/Chrome or NVDA/Firefox can also be used if a team does not have a JAWS license.

The combination of macOS VoiceOver and Safari has value, but it’s important to note that this combination is the primary desktop screen reader setup for less than 5% of screen reader users (according to the WebAIM Screen Reader Survey).

If possible, test a website on an iPhone using iOS VoiceOver and Safari instead – this is the most common mobile screen reader combination (in the USA). Given that perhaps 50% of web traffic comes from mobile devices, it’s an excellent way to increase confidence in your test results.
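
Putting those recommendations together, a team’s assistive technology test matrix might look like the following sketch, ordered by priority based on the survey figures cited above:

```typescript
// Screen reader / browser combinations in suggested priority order,
// based on the WebAIM Screen Reader Survey figures cited above.
const atTestMatrix = [
  { screenReader: "JAWS",      browser: "Chrome",            platform: "Windows desktop", note: "most popular desktop combination" },
  { screenReader: "NVDA",      browser: "Chrome or Firefox", platform: "Windows desktop", note: "free alternative if no JAWS license" },
  { screenReader: "VoiceOver", browser: "Safari",            platform: "iPhone (iOS)",    note: "most common mobile combination" },
  { screenReader: "VoiceOver", browser: "Safari",            platform: "Mac desktop",     note: "primary for under 5% of desktop users" },
];
```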

Perform Automated Testing

Once you’ve found your website’s accessibility issues with manual testing and remediated them, automated testing helps guard against regressions. It may not be possible to set aside time for full-scale manual audits once an app is in production, and ongoing automated testing provides instant feedback that fits into existing development and design release schedules.
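
For instance, a regression guard can run on every build. ARC Monitoring has its own scheduling and integrations; the sketch below instead uses the open-source axe-core engine (discussed in the next section) with Playwright, purely to illustrate the pattern. The URL is a placeholder.

```typescript
import { test, expect } from '@playwright/test';
import AxeBuilder from '@axe-core/playwright';

// Regression guard: fail the build if new automatically detectable
// WCAG A/AA issues appear on a key page after manual remediation.
test('home page has no automatically detectable WCAG A/AA issues', async ({ page }) => {
  await page.goto('https://www.example.com/'); // placeholder URL
  const results = await new AxeBuilder({ page })
    .withTags(['wcag2a', 'wcag2aa', 'wcag21a', 'wcag21aa'])
    .analyze();
  expect(results.violations).toEqual([]);
});
```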

A further strength of ARC Monitoring is that its accessibility rules are paired with tailored resources from ARC’s KnowledgeBase. With these clear steps, testers with less accessibility expertise can reliably carry out tests.

Deciding on a Rules Engine

Both Axe and ARC are accessibility rules engines, but they have some crucial differences. The Axe rules engine is open source, and its main selling point is that it gives no false positives. To honor this philosophy, ARC Monitoring only returns Axe results that are automatic fails, i.e. errors. “No false positives” is a restrictive standard: the web is complex, and there are a lot of edge cases. As a result, Axe flags fewer issues, and engineers with less accessibility expertise who rely solely on automated tools could miss issues and in turn produce inaccessible content.

In contrast, the ARC rules engine is more concerned with overall code quality. There are so many permutations of browser, assistive technology, and operating system that delivering high-quality code provides more value and predictability. “No false positives” is not the main goal of ARC’s rules. Instead, ARC helps detect code smells – a developer term for a code pattern that could be indicative of larger problems.

Learn more with an in-depth comparison of ARC, Axe, and automated accessibility testing tools.

Creating User Flows for Automated Testing

Like manual testing, automated testing works best when it’s scoped to components. With ARC, testers can set up User Flow monitoring. User Flows provide explicit instructions to ARC on:

  1. the segments of the screen to identify as components.
  2. the steps to navigate an application, so automated testing can reach deep into user journeys.

Page-level testing provides signals about the overall accessibility health of a website, but it involves a degree of redundancy: the same element is flagged on multiple pages, which means more results to inspect. Scoping results to specific components helps facilitate the remediation process.
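
ARC defines User Flows through its own interface; to make the two ingredients concrete, here is a hedged sketch of the same idea – navigation steps plus component scoping – using axe-core with Playwright. The URL and selectors are hypothetical.

```typescript
import { test, expect } from '@playwright/test';
import AxeBuilder from '@axe-core/playwright';

test('search flow: scan only the results component', async ({ page }) => {
  // Steps to navigate the application, reaching deep into the user journey
  await page.goto('https://www.example.com/'); // placeholder URL
  await page.getByRole('searchbox').fill('blue widgets');
  await page.getByRole('button', { name: 'Search' }).click();

  // Identify the segment of the screen to treat as the component,
  // avoiding re-flagging template-level issues found on every page
  const results = await new AxeBuilder({ page })
    .include('#search-results') // hypothetical component selector
    .analyze();
  expect(results.violations).toEqual([]);
});
```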

While ARC Monitoring is housed within the ARC Platform, it’s also easy to apply automated testing using TPGi’s free browser extension, ARC Toolkit. With ARC Toolkit, you can run scans on any web content open in your browser – including pages that require authentication, a specific point in a user journey, or even local file URLs.

ARC Toolkit allows testers to visually inspect the issues reported in ARC Monitoring. Plus, it includes tools to visualize a page’s tab order, heading structure, ARIA usage, and more. ARC Toolkit is great for creating shareable screenshots that clearly show an accessibility issue.

Track Your Progress

Accessibility managers are accountable to leadership and other stakeholders at their organization. The ARC Platform provides defensible data to validate ongoing accessibility work. ARC Monitoring’s WCAG Density Score reports the number of errors automatically detected, providing a measure of code quality.

Without ARC, you would need to provide defensible data by tracking the time spent on accessibility tickets or the number of accessibility tickets resolved through software like JIRA. You could also run automated scans on the websites of competitors in your sector to compare your progress. But none of these are as simple to share with leadership as an ARC report.

Conclusion

Accessibility is a multi-faceted topic which cannot be the responsibility of a single individual or even a single team. Accessibility requires organization, effort, and clear steps so each person understands their role in the testing process.

To support your accessibility journey, schedule a demo of ARC.


About Aaron Farber

Aaron is the Senior Accessibility Platform Consultant at TPGi. In this role, he supports the ARC Empowerment Program, which trains organizations to plan and execute in every area of accessibility using the ARC Platform. Aaron has supported web accessibility initiatives at organizations ranging from small businesses using WordPress and Shopify to the largest technology companies in the world. Aaron’s goal is to make accessibility easier to understand and to turn it into a mindset that people consider in every situation. Aaron is a former web developer and coding bootcamp instructor.