- Good morning. Good afternoon, everyone. My name's Anthony Priore. I'm the digital marketing specialist at TPGi. We're just gonna wait one minute as people sign in, and we'll get started shortly. - Yeah, I see people trickling in. This is great. Yeah. Hey, add in the chat where you're located. I'm here in Northern California, Sacramento, to be specific. - [Anthony] And I'm in Pittsburgh, Pennsylvania. So a little far off from where Aaron is. - That's what's so beautiful. My favorite thing about the webinar is seeing these different locations. We're starting with my favorite part, 'cause I just think that's why I'm in accessibility. 'Cause the web is so amazing at how it connects people around the world. And we're all connected by this cause, by this challenge. And it's beautiful. We got upstate New York, Atlantic City, Minneapolis, Los Angeles. I lived there a long time. San Francisco. I lived there too. Yeah. Janesville, Wisconsin. Chattanooga, upstate New York. Okay. I've been to the Baseball Hall of Fame once, a long time ago. Frankfurt, Germany; south Florida. I've been there many times. Knoxville. Great. That's beautiful. - [Anthony] Yes. There will be a recording emailed out after the webinar. We'll make sure everyone gets that. - Albuquerque. Okay. Vancouver, Washington. Yeah. Yeah, I've thought about moving there. I guess if this webinar goes well, maybe you can help me. Why haven't you? Do you work for the travel board there? I'm just kidding around. No, it's great. I know. It may happen soon. We'll see. - [Anthony] I still see a couple people trickling in, so we're just gonna give it one more minute. But yeah, it's great to see where everyone's logging in from and how accessibility brings us all together. - Imagine where accessibility on the web will be in 10 or 20 years. It's really something to think about. And that's actually one of the things that most excites me about the future. BC, nice.
AI will do most of the work? I don't think so. I think in the future we will be using website templates, app templates, that are accessible out of the box. That's the future, I feel. I'm happy to talk about it more at the end if you have questions. I think you're already starting to see it, things like Webflow. - [Anthony] Alright, I think we can dive in and get started. It's good to see a lively group today. So thank you everyone for joining us for our webinar, How to Perform Accessibility Testing Part Two, with TPGi's Aaron Farber, senior accessibility platform consultant. Before we get started, I just have a few housekeeping items I'd like to go through. Firstly, as we pointed out in the chat, this session is being recorded, and we will email everyone the recording after the event. Secondly, we do have captions available. They're auto-generated, so feel free to turn those on and use them as needed. Next, we will have time, as Aaron said, for a live Q&A later in the webinar. So if you could, please use the Q&A box. Sometimes if you send your questions to the chat, they can get lost. We'll monitor both, but if you could, try to make sure they go to the Q&A box instead. We'll answer as many of the questions as we have time for at the end of the presentation. And then lastly, if anyone needs any accessibility support, training, or usability testing, we will send out an email with a link to schedule a time to speak with one of our experts after the session. So with that, I will let Aaron get started and take it away. Thanks everyone. Thanks, Aaron. - Hi everyone. Welcome to How to Perform Accessibility Testing. We had part one, How to Prepare for Accessibility Testing, last month, and that webinar recording is available. Just go to our TPGi.com blog, find the webinar, register, and get that recording. So this is part two, and the final part. There's no part three; we saw how that turned out for The Godfather, right?
So just two parts, and now let's move on to a bit about me. In 2016, I changed careers from public policy to technology. Yeah, I was kind of a political brat working in the capitol for a few years. And from 2017 to 2019, I ran an accessibility testing and development shop focused on small businesses, the kind that used WordPress or Shopify. I would audit their site and then actually handle the development myself. And then in 2019 I joined TPGi. It's an opportunity to work with the biggest brands in the world, where you see the greatest impact on accessibility, and I'm always so grateful for this opportunity. For the first two years I was here at TPGi, I worked as an accessibility engineer, which is primarily delivering manual audits, testing whether applications conform to the Web Content Accessibility Guidelines, WCAG. Then in 2021, I became embedded with a large tech company, one of the largest at least, helping them establish an internal accessibility program. And then in 2022, I came back to TPGi, the mothership, supporting our ARC users, ARC being the Accessibility Resource Center. And we'll get into that. So with this kind of experience implementing accessibility from small teams to large teams, I've gained a lot of experience in how teams can build effective accessibility programs. And I will say that today is focused on accessibility programs for large organizations. I think a lot of the things we cover today are not things that would make sense for a small or mid-sized business to do themselves. So I'm focused on organizations which have lots of development teams and probably a centralized accessibility program management team. If you're an accessibility program manager, this applies to you. Alright, so our agenda today is, yes, an overview. Then we'll talk about ARC Capture. We're gonna cover manual and automated testing. We're gonna go over test by rule, and I'm gonna explain what that means.
It's very important and fundamental to our philosophy here at TPGi. And then generated reports and HelpDesk. Okay. All right. So the Accessibility Resource Center, ARC. Yes, it contains the tools and resources for every aspect of managing an accessibility program. And I wanna say that today, don't focus on the product, focus on the vision. I'm demonstrating our product, but all of the ideas and concepts that are built into our product are universal. They are ideas worth implementing in your own program. Let's go. Alright, so ARC is made up of really three parts. Provider, which we really covered more last month, and that is to plan, create, and manage accessibility reviews. Capture, which we'll be talking about mainly today, is to perform manual and automated accessibility reviews. Lastly, we have ARC Monitoring, which is to run automated scans between manual reviews, right? For instantaneous feedback. Alright, so when it comes to running an accessibility program at a large organization, time presents a very real challenge. Enterprise organizations must focus on increasing efficiency in manual accessibility testing and reducing barriers to entry. What do I mean by that? I mean, there is actually a pipeline problem in accessibility, I feel. It seems like there's a limited number of accessibility experts, and the best organizations are able to take reasonably skilled technologists, bring them into accessibility testing, and make them able to do that. Testing infrastructure grows over time, and it requires significant initial investment. And that's something to consider. Everything I show today, you could hand roll yourself, but with our product, yes, we have a product, you can get all of this instantly. Accessibility programs must demonstrate their value to internal stakeholders.
I think that manual testing is quite time consuming. And with accessibility, it's almost like security, right? When you do a good job of accessibility, you don't hear anything. People just use the app normally, as they would normally use the web. Well, WCAG presents a challenge too. Why? Well, WCAG success criteria have nuance and evolving interpretations. WCAG is used in legal and governmental settings, as it should be, and it must be precise, but that leads to a significant use of jargon. WCAG is not bedtime reading; it's not something someone sits down and reads end to end. And the thought of someone doing a manual test as they're actually learning WCAG and becoming familiar with the success criteria, that's a very big challenge. It's very difficult. Okay, well, could you test using WCAG's official documentation? Yes. WCAG does provide documentation on how to test these success criteria. WCAG's Understanding documents and Techniques are kind of its manual testing resource for each WCAG success criterion. The Understanding and Techniques material for 2.1.1 Keyboard is 5,000 words. I mean, it's been so long since I've read a book, I think I forgot how. 5,000 words is very substantial, that's a lot of stuff to go through, and I think that's intimidating for anyone, and it's probably unrealistic that someone would just go and refer to this resource. So instead, test by rule rather than guideline. ARC Capture breaks down WCAG's success criteria into smaller, easily repeatable manual tests. So for example, here on this slide we see a screenshot of 2.1.1 Keyboard, Level A, and we see a sample of the manual rules. I see we have a rule for character key shortcuts, right? That's not something that a computer can detect automatically.
We have content on hover or focus, keyboard navigation interaction, no keyboard traps, on focus and on input, all of which map to WCAG success criteria. Alternatively, yes, you can test by guideline. ARC provides manual testing procedures for every WCAG success criterion, including draft WCAG 2.2 success criteria. And I'm happy to take questions about WCAG 2.2 at the end. Yes, they're easy to understand, and I've just said test by rule, not guideline. Well, the guideline is helpful for custom elements which don't fit neatly into established design patterns. That's ARC Capture's test-by-rule approach. Now, why do you test by rule? Well, it's easier to apply individual guided tests than to test interfaces against the entire guideline. This enables testers with minimal accessibility experience to reliably carry out full-scale WCAG audits. Testing by rule reduces nuance. It is fundamental to accessibility testing that multiple accessibility testers reach the same outcome when evaluating an interface. When you are testing things against a general guideline, that leaves more room for subjectivity. And yeah, that's why. But it does take work to break down a guideline into these individual rules. And that's what we've done at TPGi. When I worked at a large tech company, they were doing that work themselves. They were going through all the different audits they had done in the past and coming up with a list of individual rules to test for. It was a big effort for them. The largest companies, as I said, with significant internal accessibility programs all audit using that approach. And granted, I only have exposure to a limited number, but I have not seen any large company with an internal accessibility program that actually tests using the WCAG success criteria directly. Instead they test all these individual rules, which map to WCAG success criteria. Happy to talk further about that.
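To make the test-by-rule idea concrete, here is a minimal sketch of a rule catalog as a data structure. The rule IDs, wording, and schema here are illustrative, not ARC's actual format; the point is just that many small, repeatable tests map back to the success criteria they cover:

```javascript
// Hypothetical rule catalog: each small, repeatable manual test
// maps back to the WCAG success criterion it helps evaluate.
const rules = [
  { id: "keyboard-interaction",    criterion: "2.1.1",  test: "Operate every control with the keyboard alone." },
  { id: "no-keyboard-trap",        criterion: "2.1.2",  test: "Tab through the page; verify focus never gets stuck." },
  { id: "character-key-shortcuts", criterion: "2.1.4",  test: "Check single-character shortcuts can be remapped or turned off." },
  { id: "content-on-hover-focus",  criterion: "1.4.13", test: "Hover/focus content is dismissible, hoverable, and persistent." },
];

// Look up all the small guided tests an auditor runs for one criterion,
// instead of testing against the whole guideline text at once.
function rulesForCriterion(criterion) {
  return rules.filter((r) => r.criterion === criterion).map((r) => r.id);
}
```

An auditor assigned 2.1.1 would then just walk the list returned by `rulesForCriterion("2.1.1")` rather than interpreting the guideline from scratch.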
So I talked about this in our first session: group related accessibility guidelines, group the guidelines by topic. This allows testing to be divided among team members and their respective skillsets. You can delegate accessibility responsibilities to more individuals, and this empowers teams to start accessibility testing on their own. Say you're an accessibility program manager working with a development team that is new to accessibility. You may not want to assign them certain testing responsibilities, like a dialog, something that's kind of the boss level of accessibility, right? But that team of reasonably skilled developers is certainly able to carry out testing on keyboard and images and other things like this. So you can again group the guidelines and say, your team can work on these right now; we'll come to the rest when the accessibility team has bandwidth. So here are the core groups of related guidelines. I'm gonna breeze through this, but at the start of this year I did an Understand, Test, Solve webinar series where we focused on each one of these individual groups. Once again, you can find those webinars on our blog. Keyboard and focus; color contrast and use of color; images and graphical elements; page structure and navigation; interactive elements and ARIA; and responsive design. And each of these groups is again intended for a specific group with a defined skillset. So again, I'm gonna breeze through these slides. I'm including them strictly so that when we share the slides afterward, you have some of the divisions of WCAG success criteria, especially if you're hand rolling this whole program yourself: color contrast, images and graphical elements, page structure and navigation, interactive elements and ARIA.
I hope the wheels are turning as we're going through this, that you're thinking, "Oh yes, I could see who could do color contrast testing." Oh, interactive elements and ARIA, that's gonna require familiarity with a screen reader. So you see how we're dividing the testing by skillset. Responsive design. So ARC goes further; ARC divides the WCAG success criteria even further into 20 topics. You see, for example, we don't just have images, we also have sensory issues, repetitive content, links, sequence, focus, keyboard, pointer and motion, native and custom controls, all these other things. I didn't include them in my list because I think most websites are not going to have issues with, for example, pointer and motion. But as you can imagine, at TPGi we audit some very complex applications. Alright, so manual and/or automated testing? Well, when you are going about accessibility testing, clearly distinguish what can be tested via manual and automated means. I think among accessibility experts, there's a level of criticism towards automated testing. And I think automated testing is really underestimated in its value. Automated testing enables teams to rapidly get an aerial understanding of accessibility for an application. As I said, time is a real challenge for internal accessibility programs. If someone on this call is an accessibility program manager and your team has no backlog of applications to evaluate, hey, let it be known in the chat. I don't believe you. Okay? So automated testing helps you get a quick sense of the accessibility of an application, and teams can run their own scans and start working on some of these issues. All right. Combine manual and automated testing. ARC Capture separates each WCAG success criterion into a set of automated and manual rules. Automated rules, yes, they flag issues which can be hard to find during manual testing.
And I love this screenshot here because it demonstrates a very good example. Here we see a screenshot of ARC Capture, and we see that the keyboard guidelines are broken down into automated and manual rules. Under automated rules, the first one here is "access key used multiple times." For the newer people on this call, you may not be familiar with accesskey; we don't see it on the web very often anymore. But accesskey is an attribute built into HTML which defines a keyboard shortcut. These shortcuts are not exposed to assistive technology. There's no way, just in your browser or just on your own, to pull up a list of whether a site uses accesskey. So that's a great example of the value of automated testing: it can easily scan the page and find those accesskey shortcuts. And accesskey presents a number of accessibility issues. They're kind of old school, honestly, and you don't see them as much anymore, which is even more reason why it's helpful to have automated scans. I can't tell you how many developers I talk to that are not even familiar with this attribute, and they'll have these on their website, because of course they kind of get left there. People forget they're even there sometimes, honestly. Alright, so when I'm doing testing personally, I think first run automated scans, ARC Capture scans, whatever. I almost said ARC Capture is a desktop application, but that's not really the case; it's a web application, it's built into your browser. So ARC Capture scans whatever is open in your browser viewport, making it easy to scan pages which require authentication, or at a specific point in a user journey, right?
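As a rough illustration of why this check is easy to automate and hard to do by eye, here is a sketch of an accesskey scan over a raw HTML string. A real scanner would query the live DOM (for example `document.querySelectorAll('[accesskey]')`), and the function names here are hypothetical, but the idea is the same:

```javascript
// Rough accesskey scan over raw HTML. A real automated check would
// walk the live DOM instead; a string scan is enough to show the idea.
function findAccessKeys(html) {
  const matches = [...html.matchAll(/accesskey\s*=\s*["']?([^"'\s>]+)/gi)];
  return matches.map((m) => m[1]);
}

// Flag the "access key used multiple times" condition from the slide:
// the same shortcut key assigned to more than one element.
function duplicateAccessKeys(html) {
  const seen = {};
  for (const key of findAccessKeys(html)) {
    seen[key] = (seen[key] || 0) + 1;
  }
  return Object.keys(seen).filter((k) => seen[k] > 1);
}
```

Running this against a page with `<a accesskey="s">` and `<button accesskey="s">` would surface the duplicate "s" shortcut instantly, something a manual tester could tab past without ever noticing.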
This is a really significant issue I find for accessibility programs at big companies. What do accessibility programs at big companies spend a lot of time doing? Just getting access to applications. And especially with automated testing, very often it requires scripting or some API or something in order to access the environment. Well, with ARC Capture, you just have it open on your machine and you can run the scan. You don't need to grant ARC access to your application or go through any of those steps. And that of course would apply to using any of these open source accessibility APIs. I will say, for those of you who use those open source accessibility APIs, one option is that you may be able to run them in your JavaScript console in the developer tools. But I think that is such a clunky way of doing it. But it could be done. Automated testing does have a level of noise, right? So manual testing and verification of automated results is necessary. Manual testing is always necessary, not only for verifying that something is a legitimate issue, but to gauge its impact on user experience. Something might technically represent a WCAG violation, but when we are assessing priorities, especially when we're working with developer teams that have limited bandwidth, we don't want them to work on every issue necessarily. We have to prioritize the issues for them, tell them what to work on first. I think that is actually really important work for an internal accessibility program. Next, with ARC Capture, curate the results from an initial scan. Capture enables teams to dismiss automated findings which don't violate WCAG or have an impact on user experience.
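That curation step, dismiss the noise, then rank what remains by user impact, can be sketched as a small triage pass. The finding names, impact levels, and `dismissed` flag below are hypothetical, not ARC's actual data model:

```javascript
// Hypothetical triage pass over automated findings: drop what a human
// reviewer dismissed as not a real WCAG violation, then sort the rest
// so a team with limited bandwidth knows what to fix first.
const findings = [
  { rule: "aria-hidden-used",      impact: "review",   dismissed: true  },
  { rule: "missing-alt-text",      impact: "critical", dismissed: false },
  { rule: "autocomplete-missing",  impact: "minor",    dismissed: false },
  { rule: "non-list-item-in-list", impact: "moderate", dismissed: false },
];

const severityOrder = { critical: 0, serious: 1, moderate: 2, minor: 3, review: 4 };

function prioritize(items) {
  return items
    .filter((f) => !f.dismissed) // curated out during manual verification
    .sort((a, b) => severityOrder[a.impact] - severityOrder[b.impact])
    .map((f) => f.rule);
}
```

The output is the ordered to-do list you hand to the development team: critical issues first, dismissed noise gone entirely.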
Here we see a screenshot of ARC Capture and a list of automated findings, such as aria-hidden used, unable to determine text contrast against an image background, autocomplete attribute missing, non-list item child of list. And you can go through these results one by one. We have testing resources on how to determine whether each is a legitimate issue. Okay, well, Capture provides steps for manual testing which are focused on the tools and interface that testers are actually using. We were talking about WCAG's Understanding and Techniques documents earlier. That material kind of ignores how people actually go about their testing. So here with our resources, we'll talk about how to use dev tools, how to use ARC, our toolkit, all these different tools that you'll actually be using. And that is much easier to understand than the general guidance provided there. And I don't wanna say general, 'cause it's very precise, but it's tool agnostic, as it should be. It's WCAG; they shouldn't be endorsing companies. Okay? Now for the clever people on this call, you're probably thinking, "Well, you've talked about the easy stuff. You've talked about automated issues, you've talked about testing keyboard. How do you test custom UI elements?" Capture provides testers with resources on the specific accessibility expectations for design patterns which go beyond what is available in native HTML. So here we see a list of manual rules for accordion, breadcrumb, carousel, checkbox. This works very well, because when an auditor comes to a component, they recognize certain UI elements in that component, and they can easily go to the manual rule for that kind of UI element and apply those tests, see if it matches those expectations. Again, somebody new to accessibility is unlikely to understand the user expectations for, let's say, a combo box.
So this is a great tool, and this is where we really make it powerful. And this is something that applies to your organization as well: define solution templates. This saves QA from having to reinvent the wheel every time they do an audit, and it adds guardrails to the manual audit findings. It helps ensure that testers are not flagging elements of a dialog, or whatever, as an issue when it's actually not a WCAG failure. And I think that is very important for internal accessibility programs. I think that flagging issues which are subjective really can threaten the credibility of accessibility. Patrick Lauke did an excellent webinar on what's not a WCAG issue. It's worth looking up. So from these templates, testers can add and subtract. Here we see a screenshot of ARC with a dropdown list for the dialog rule, and we have "select a solution." I see: the modal dialog does not follow the established design pattern. We have that same solution template, but for iOS, so ARC Capture can be used for native mobile audits too, not just web audits. Keyboard focus is not moved to the relevant element. Keyboard focus is not maintained within the dialog until it is dismissed. Keyboard focus does not return to the place or control that triggered the dialog. And one great thing about ARC Capture, and I don't talk much about this because I'm trying to not make this all about ARC Capture but really about the ideas here, is that ARC Capture auto-updates. So as we add more solution templates, you'll just have those within ARC Capture. And we're always adding new ones. The web is vast and complex; there's a lot of stuff out there. And honestly, writing these solution templates is absolutely one of my favorite things to do at TPGi.
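A solution-template library like the one described for the dialog rule is, at heart, a lookup of canned findings paired with canned remediation steps. The IDs, wording, and structure below are a hypothetical sketch, not ARC's actual templates, but they show how a tester picks a known failure instead of writing one from scratch:

```javascript
// Hypothetical solution-template library for the dialog rule: each entry
// pairs a pre-written finding with pre-written remediation guidance, so
// testers stay within known WCAG failures instead of improvising.
const dialogTemplates = [
  {
    id: "focus-not-moved",
    finding: "Keyboard focus is not moved into the dialog when it opens.",
    fix: "On open, move focus to the dialog container or its first control.",
  },
  {
    id: "focus-not-trapped",
    finding: "Keyboard focus is not maintained within the dialog until it is dismissed.",
    fix: "Keep Tab and Shift+Tab cycling within the dialog while it is open.",
  },
  {
    id: "focus-not-returned",
    finding: "Keyboard focus does not return to the control that triggered the dialog.",
    fix: "On close, restore focus to the triggering element.",
  },
];

// A tester selects a template by ID rather than authoring a new finding.
function template(id) {
  return dialogTemplates.find((t) => t.id === id);
}
```

The guardrail effect comes from the fixed list: if a tester can't find a matching template, that's a prompt to escalate to a senior reviewer rather than file a subjective finding.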
The technology industry is about growth and mentorship. Watching people grow in their careers is a beautiful thing. And I am so grateful to TPGi for having this process, where they took me when I was so amateurish at accessibility and grew me, really, by using these tools. What facilitated that is that ARC Capture provides a built-in QA system enabling senior accessibility engineers to review the work of junior members of the team. This facilitates collaboration and encourages junior members to clearly state which issues they have questions about. Again, this is a process that plays out everywhere, that has played out so many days in my career here: yes, I can identify most of these accessibility issues, maybe they're very clear cut, but then I come across a mobile check deposit, right? You take a picture of the check and deposit it with your phone. I am not sure exactly what the accessibility expectation is there. I can write a comment on an issue about that and say, "Hey, can you take a look at that? Does it require an announcement when you move the camera?" All these kinds of things. And I think this is really helpful because, I'm a former developer, and I guess this isn't unique to technology, but for junior members of an organization it can be really difficult to ask for help. It's like pinging somebody on Teams: "I've got a question," all of this. It's really nice to just have it within the actual tool. I think it actually removes some of the anxiety associated with that. Again, I'm projecting, right? Alright, so now with ARC Capture, you can provide findings in different formats. That's very helpful, because Capture auto-generates the findings from an audit, and Capture provides three different reports intended for different audiences.
So Capture auto-generates a report, which is a comprehensive Word document containing all of the issues. A workbook, which is more technical: it contains all of the information about the issues as well as the exact address for those HTML nodes on the page. So someone can easily paste those in and find the issue on the page if there's confusion about where the error is located. And that makes sense, right? Because we don't have screenshots within the workbook, but we do have the screenshots within the report. So the workbook is a comprehensive spreadsheet, and it's ideal for importing into Jira. That's one way I see a lot of people use that report generated by ARC Capture. Last, we have a PowerPoint presentation, a high-level report intended for executives and other external stakeholders, probably not part of the development team. Now, what's the most difficult part of accessibility? It's not testing. No, it's remediation. That's the truth, right? The most difficult thing is actually solving the issues reported in an audit. And with these solution templates, we provide clear steps which empower designers and developers to solve those reported issues. However, yes, accessibility can be complicated. So a HelpDesk is a must for extended support. ARC provides a HelpDesk for teams to get continued support on issues. So again, you give them the report, and they have a way to follow up with you and file tickets about exact issues in that audit. In our HelpDesk, the tickets are tied to an exact finding within the audit. This removes a lot of effort of, "Hey, here's the issue I'm talking about, and I'm gonna write a paragraph telling you where it is and how to reproduce it and all that." No, we know exactly, we have agreement, we have consensus, we know what we're talking about, both of us, the developer and the tester. And that's facilitated by our HelpDesk.
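The workbook idea, one spreadsheet row per issue with the exact node address, is easy to picture as a flat export. Here is a hypothetical sketch; the column names and issue fields are illustrative, not ARC's actual workbook format:

```javascript
// Hypothetical issue-to-spreadsheet export: one CSV row per finding,
// with a CSS-selector "address" so a developer can paste it into
// dev tools and jump straight to the offending node.
function toCsv(issues) {
  const header = "Rule,WCAG,Selector,Summary";
  // Quote every field and double any embedded quotes, per CSV convention.
  const escape = (v) => `"${String(v).replace(/"/g, '""')}"`;
  const rows = issues.map((i) =>
    [i.rule, i.criterion, i.selector, i.summary].map(escape).join(",")
  );
  return [header, ...rows].join("\n");
}
```

A file like this drops straight into a Jira CSV import, which matches how the workbook is described being used above.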
So HelpDesk also helps accessibility programs track their work and demonstrate their value and expertise. The number of HelpDesk tickets you've solved is a really quantifiable and helpful measure for accessibility teams. It's impossible to track just random emails from teams, "I got a follow-up question," all of this. So again, I really encourage every organization that has a big accessibility team to use a HelpDesk, whether that be Zendesk or whatever, and then you can take the number of tickets you've solved and use that to demonstrate the value of the program, the work that you're doing. In fact, if someone emails you, I would actually encourage you to just create a ticket for their email, you know? Alright, so that kind of takes me through ARC Capture and the big ideas here on how to have an accessibility program at scale. Enterprise scale. We talked about dividing manual and automated testing, testing by rule rather than guideline, developing solution templates, and having a built-in QA system where senior engineers, which you have fewer of, can review and confirm the work of junior engineers and auditors. These are all fundamental parts of having an accessibility program. And I can't emphasize enough the significance of the QA program, because we deal with this at TPGi, I know every tech company does: there's turnover, people are gonna move on to other jobs, different opportunities. So it's really important to always be growing your own internal institutional knowledge about accessibility, and this process facilitates that so much. And I'm excited for your organization to take on these lessons. So again, ARC Capture: reach out to us and schedule a demo. It is an excellent way to reduce the level of effort and accessibility expertise required to perform manual audits.
You can more easily bring new people into your accessibility team. You don't need to crawl LinkedIn thinking, "Okay, I gotta find some accessibility engineer who's already got all the experience." That's impractical, and second, I dislike it. I think that's bad for accessibility, bad for the web. We wanna grow new accessibility professionals, and you can do that in your organization. So, all right, let me see. I don't think we got many questions, but again, I'm happy to chat about accessibility at enterprise scale, about individual WCAG success criteria. We can look at WCAG 2.2 resources, all of this. So let me see the chat. I don't see anything, oh wait, this chat, here we go. "Oh, it turns each specific test into a widget." Yeah, I think so. I mean, kind of, yeah, we have tests built for each kind of UI widget. "Getting content creators to do it right the first time to minimize the need for remediation." I agree, that is fundamental. The amount of time it takes to report an issue and solve an issue is so much greater than just shifting left, earlier in the design and development process, and removing the need for remediation. And that is a big part of the ARC platform: we provide Tutor modules, which are self-paced accessibility training modules. What's great about them is that they're role-based. So I might just drag it onto the screen here, 'cause it's just easier. Well actually, I'll just grab this tab. Lemme see this. And you see here that we have TPGi Tutor. Right now I'm displaying our ARC web platform. So these accessibility training modules, they're role-based. You see, we have one intended for front end developers, for marketers and product manager types, for iOS and Android developers, for those who create PDFs or other documents, even for HR, you know.
So again, each person in an organization has a role in accessibility. And I find one thing that is difficult about a lot of the accessibility trainings that are free online is that they're too big. They cover a lot of things that are out of scope for someone's actual role. So these are much smaller. Each of these modules probably takes 15 minutes to an hour to complete. And if you were to go through all of them, it would certainly prepare you to receive an accessibility certification, such as from the IAAP. So I bring up Tutor to say that another good way accessibility teams make an impact at their organization is through training, through these self-paced trainings. It's very difficult to embed accessibility in different tech stacks and development teams, but assigning trainings is something easy to do, in a sense. It's just a matter of consuming it. And then later, and I can only think in a legal way, if accessibility issues come up in your audit, it's like, well, I don't understand, your engineers consumed this training. They know that a generic div element is not a button. So it puts the onus on people to be responsible. They consumed the trainings, they should be aware, and again, it just saves everyone work down the line. All right, let's see, I got another question here. It says, "I am a tester for our team and we regularly use W3C's report tool. What's your thoughts on this service relative to ARC?" You know, let me look, I'm gonna actually pull it on the screen. I've never seen this, well, what is this? No, I don't know this one. But here's the issue I have with it: I'm looking at the website now, and I've moved on to step four, audit sample.
It has multiple steps, and you see what they're doing: they're testing by guideline. They say "all non-text content," and it's just the guideline text itself. Whereas within ARC — let's actually look at our resource on testing 1.1. This is our resource on testing against that general guideline, broken down into the individual rules we've defined. You see here that we have test procedures for image elements, IMG elements; test procedures for CSS background images; test procedures for SVGs and Canvas; and so on. For the accessibility testers and developers here, you know that each of those kinds of non-text content behaves very differently online. So it's not so simple as saying, oh, test non-text content — each one involves a really different testing procedure, and people may not even think of some of them, especially CSS-generated background images, because that's a very difficult thing for automated testing to flag, or CSS pseudo-content, those kinds of things. So we have a testing procedure for each of those things, rather than this one general guideline, which requires more experience and judgment to apply. When you have individual, distinct tests, they're much more easily repeatable, and that makes the whole process way more efficient. So I think it's great that the W3C produced this evaluation tool, but to me it falls into the problem I mentioned earlier: it sets a high expectation of people's knowledge. It requires people to have a lot of accessibility expertise in order to carry out the testing, and that's the problem.
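To make the "one procedure per kind of non-text content" idea concrete, here is a minimal, assumed sketch (not ARC's actual checks) using Python's standard-library HTML parser. Each branch is a distinct, repeatable test — missing `alt` on `img`, no accessible name on inline `svg`, and a manual-review flag for `canvas` — rather than one vague "test non-text content" instruction:

```python
from html.parser import HTMLParser


class NonTextContentChecker(HTMLParser):
    """One distinct check per kind of non-text content.
    Simplified: e.g. it ignores <svg><title> children, which can
    also provide an accessible name."""

    def __init__(self):
        super().__init__()
        self.findings = []

    def handle_starttag(self, tag, attrs):
        a = dict(attrs)
        if tag == "img":
            # alt="" is a valid marker for decorative images,
            # but the attribute must be present.
            if "alt" not in a:
                self.findings.append("img: missing alt attribute")
        elif tag == "svg":
            if ("aria-label" not in a and "aria-labelledby" not in a
                    and a.get("role") != "presentation"):
                self.findings.append(
                    "svg: no accessible name or presentation role")
        elif tag == "canvas":
            # Canvas fallback content can't be judged automatically.
            self.findings.append("canvas: verify fallback content manually")


def audit(html: str) -> list[str]:
    c = NonTextContentChecker()
    c.feed(html)
    return c.findings
```

Note that CSS background images never appear in the markup at all, which is exactly why the transcript calls them hard for automated tools to flag — they need their own procedure against the computed styles, not the HTML.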
But again, it's a good resource, and it's free — thanks for sharing it. While we still have people here, I'll go over something cool that I like, which is the WCAG 2.2 resources we've recently created. So right now I'm reviewing our knowledge base. I get in my silo, my bubble — we have so many different parts of ARC and I end up not describing each one. The knowledge base, to me, is the greatest value of using ARC. TPGi has been around a long time, since 2001 or thereabouts, and the knowledge base is the culmination of all of that work. We have modules ranging from the WCAG reference, which summarizes each accessibility guideline, to the accessible web content development module, which provides our battle-tested solutions and front-end development techniques. We have ones for iOS and Android accessibility. But now I'm going to show you the WCAG manual web testing resource. This one is our manual testing procedure for every WCAG success criterion. I'm going to scroll to the bottom and pull up the success criteria added in WCAG 2.2. You see here that we have resources on 2.4.11 Focus Not Obscured (Minimum), for example. These are great resources, and they make it comprehensible — we strive not to use jargon. So with ARC and ARC Capture, we're totally ready for WCAG 2.2, and you can be ready to start doing WCAG 2.2 testing. All right, let's see if there are any more questions. Going once. - [Anthony] Yeah, I'm not seeing any, so- - [Aaron] Twice. - [Anthony] Unless, Aaron, you have anything else you'd like to go over, we can probably give everybody a few minutes back. Overall a great session, and thank you all for attending.
- No, that covers it. I encourage you all to reach out to me over LinkedIn or wherever else; I'm always happy to have conversations with accessibility people, or people passionate about accessibility as a topic, as a business, everything. And if you're interested in scheduling a demo of ARC, follow the instructions on the last slide. Otherwise, I want to thank everyone for being here. Accessibility doesn't happen by accident; accessibility is not the default. Accessibility requires commitment and leadership, and that's what all of you are showing today — just by being here, you've made a substantive effort to learn more about how to be accessible. And I see we have one last question: "With you speaking so much about ARC, can you provide a range of costs?" No, I can't. One beautiful thing about my job is that I don't deal with contracts — I don't have a Salesforce ID, I don't do any of that. I just give advice, and it's a beautiful thing. But I will say that ARC pricing depends on the scale and complexity of your organization, so the cost is really related to that. It's such a complex question that, again, the best thing is to get in touch with us and we'll take it from there. I know we have pilot programs and things like that to give you a taste before you sign on the dotted line. Let's see, I've got a comment here: "Thank you. Lots of good information. We're just getting started. I feel better armed to get management buy-in and support for better tools to facilitate and accelerate our testing and remediation efforts." Let me tell you one thing that I think is underestimated in terms of management buy-in, which is creating accessibility testing plans.
I think when you start dividing an application by components and demonstrating that your testing resembles the way teams actually work, that gets management buy-in — it makes it clear you're part of the team, embedded in it. The other thing that helps get management buy-in is benchmarking, and that's where automated scans can be really helpful: you can scan the application prior to the audit and again after remediation, and you can see the difference in scores. One thing that's really difficult when it comes to accessibility is the lack of quantifiable data, but that's what these automated scans through ARC help give you. All right, I've got another question here, from an anonymous attendee — what kind of name is that? "You mentioned that organizations are typically not focusing on WCAG success criteria directly. Can you speak more to that? How does that work on a VPAT and accessibility confo-" Sorry, I'm just saying abbreviations; I have to expand the full abbreviation — that's a Level AAA WCAG success criterion, you know. So: a VPAT is a Voluntary Product Accessibility Template, and an ACR is an Accessibility Conformance Report. Okay, organizations not focusing on WCAG success criteria directly — can I speak more to that? Yes, this is actually an excellent question, and it's one of the biggest things I've noticed working with big organizations. When they wrote WCAG, they were smart enough to make it general enough to apply to all kinds of web content. They knew they could not predict all the different things that would be on the web in the future. In 2008 or whenever they first created WCAG 2.0, they had no idea there was going to be something like Pokemon Go, right? There's no way to predict that.
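The before-and-after benchmarking idea is simple arithmetic, but it is the quantifiable story management responds to. A tiny illustrative sketch — the component names and counts are made up, and this is not ARC's scoring model — of comparing issue counts per component across two scans:

```python
def benchmark_delta(before: dict[str, int], after: dict[str, int]) -> dict[str, int]:
    """Issues resolved per component: pre-audit scan minus post-remediation scan.
    Components absent from the second scan are treated as fully remediated."""
    return {component: before[component] - after.get(component, 0)
            for component in before}


# Hypothetical scan results, keyed by component (matching how teams work):
pre_audit = {"navigation": 12, "checkout": 30, "search": 4}
post_fix = {"navigation": 2, "checkout": 5}

resolved = benchmark_delta(pre_audit, post_fix)
# e.g. {"navigation": 10, "checkout": 25, "search": 4}
```

Reporting "25 issues resolved in checkout" per component mirrors the testing-plan structure from earlier in the talk, so the numbers land with the teams that own each component.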
And so applying these generally written guidelines requires a lot of nuance and judgment from auditors: okay, what exactly counts as non-text content? What's the behavior for CSS-generated content — is that announced by assistive technology? There are so many things like that in accessibility that it's better to break the guidelines down into individual rules. It means people with less accessibility expertise can carry out the audits. The other thing I'll tell you — and this is my hottest of hot takes; maybe it should be a talk, let me know in the chat if you think so — is this. Think about why WCAG is significant in the first place. When courts find an application inaccessible, they say, "Hey, we don't have the specialized knowledge to determine what accessibility is online," so they use WCAG as the remedy — this big consortium of people around the world has determined the minimum level of accessibility required for applications. However, courts also say you may not have to use WCAG if you already have your own internal accessibility guidelines. So I want to clarify something: I find that Microsoft, Google, Amazon, these companies are defining their own internal accessibility guidelines, but that does not mean they are ignoring WCAG. It's just that the internal guidelines have to map to WCAG. What we're seeing is that they're not testing against WCAG directly; they're testing against their own guidelines, which they've defined and which are supposed to be equivalent to WCAG.
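That "internal guidelines map to WCAG under the hood" arrangement is, mechanically, just a mapping table. A hypothetical sketch — the rule IDs are invented for illustration, not any real company's guideline set — showing how each internal rule records the WCAG success criteria it covers, so equivalence can be demonstrated on a VPAT/ACR:

```python
# Hypothetical internal rule IDs mapped to the WCAG success criteria
# they are meant to satisfy (one rule may cover several criteria).
INTERNAL_RULES: dict[str, list[str]] = {
    "IMG-ALT-01": ["1.1.1"],              # images need text alternatives
    "BTN-NATIVE-01": ["4.1.2", "2.1.1"],  # controls need a role and keyboard support
    "CONTRAST-01": ["1.4.3"],             # minimum text contrast
}


def wcag_coverage(rules: dict[str, list[str]]) -> set[str]:
    """Which WCAG success criteria the internal rule set claims to cover."""
    return {sc for criteria in rules.values() for sc in criteria}
```

Testers work day to day against the simpler, concrete rule IDs, while `wcag_coverage` (or its real-world equivalent) is what lets the organization show an auditor or a court that the internal standard still maps back to WCAG.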
And I think one of the reasons you're seeing this more and more — every big company seems to be doing it now — is that WCAG is getting bigger and more complicated, with more guidelines added each version. Some of the newest WCAG 2.2 success criteria are, I think, ruffling some feathers. So internal guidelines are partly a way to avoid some of WCAG's pain points. Again, testing by rule reduces the level of expertise required to do audits, and it saves you from having to test some of the WCAG success criteria that are likely out of scope, or just unlikely to actually prevent someone from using the application. That's why I keep talking about this: I think it's a really big deal, and you're going to see more and more organizations use their own internal accessibility guidelines that, under the hood, map to WCAG. And I think it's a good way to go for some of these organizations, because it's simpler than WCAG — better to say, hey, we have these ten guidelines with their associated rules, versus the 57 or however many WCAG success criteria. Next: "What about Accessibility Creation Plans?" I said "Creation" like that because it's capitalized in the question. Okay: "Can't large organizations just make it an evaluated requirement for all employees and build it into content creation workflows, to certify that basic accessibility elements have been considered and designed in from the start?" You know one of my favorite expressions — I say it a lot, and it's attributed to Yogi Berra — there's no difference between theory and practice, except in practice. So yes, organizations create these mandates, but they've got to be enforced. Who's enforcing them? That's always going to be the challenge with those kinds of things.
And one thing I want to add here — I didn't include it in this presentation because it's not directly related to Capture, but it's for the accessibility program managers on this call. I was embedded with one big company, and one tool they used that was really effective was letting teams file waivers for accessibility issues they are not fixing. The waiver requires them to state why they did not fix it: oh, it's a third-party tool, we don't have control over the code base, and there are no other vendors we can use; or, we did not fix this issue because we offer an alternative, accessible way of doing the same thing within the application. A really big way to get management buy-in is to provide this waiver system. Because if things are presented as though there's no way around an issue, and a team feels they can't fix it, then they don't say anything — and I think that happens a lot. The waiver system encourages teams to be open, and it becomes something you can track; it's almost a CYA policy for the accessibility team, a record of something you can always fix later, like I said. And then you'd have a board — probably your head of accessibility programs — evaluating those waivers and determining which ones to accept. So again, I really encourage that. All right, I've got another question here.
"To your point on what WCAG success criteria being streamlined, it totally makes sense also, not every CMS, that's a content management system, can match every WCAG success criteria, but you should try." Yes, I agree. You know, that's a major challenge, you know, for using CMSs, right? Especially like WordPress, right? Like is you have limited control over that code base. And I think that's ultimately why the biggest organizations in the world, they control their entire code base, right? they're not using these prebuilt templates and things because they need to be able to make those kinds of changes. And that's why again, I think this presentation today was for larger organizations rather than the small people that use WordPress or Shopify or whatever. But it's an excellent point about CMSs. One thing I encourage for people that use CMSs, I think this is really underestimated, is when you're considering some WordPress plugin or whatever, go to the review page and see if other people have made complaints or feedback about its accessibility. Very commonly I'll see that I'll go there and they're like, "Hey, we received an accessibility complaint about this booking engine. We let them know, they haven't been responsive," therefore stay away from that plugin, see what other options are out there. And people also put good feedback. So again, that can be an easy way for you to make that decision. Now you're an accessibility professional, I assume, so you can kind of test the widget yourself, but for business owners, other people that don't have that technical understanding, that's a really good way to just kind of do one little check. All right, well, we're over the hour. I'll let everyone thank you for your attendance and yeah, I am a resource for all of you here, so feel free to reach out and take care. Thank you. - [Anthony] And thank you Aaron, appreciate your time. Always a pleasure. Bye everyone.