- [Kari Kernen] Good morning, good afternoon, everyone. We'll get started in just a couple of minutes while attendees make their way into the room, so please be patient with us, and thank you all for joining us today.

All right. Again, good morning, good afternoon, and thank you for joining us. My name is Kari Kernen and I am the senior manager of Sales Development here at TPGi. A couple of housekeeping items before we get started. This session is being recorded, and we will email everyone the recording, usually within one to two days after the event. Captions are available, so feel free to use them as needed. We'll try to answer as many questions as we can throughout the presentation, and at the end of the presentation, please use the Q&A box, not the chat, to ask questions; we sometimes miss questions that come through the chat box. Lastly, if anyone needs accessibility support, training, usability testing, et cetera, I'll send out an email after the webinar with a link where you can schedule a time to speak with one of our experts. With that, I'm going to let Justin introduce himself, and we will get going today.

- Thank you, Kari. Hi, everyone. Thanks for joining us today. We're going to be talking today about shifting left and baking accessibility into your Software Development Lifecycle. A few topics I want to cover with you: first, why shifting left is an important step for you and your development teams. It's a little buzzwordy and it's getting a lot of traction all over the place, but what does it really mean, and how is it going to help you and your teams build more inclusively while also managing your risk long term? We're also going to look at how the ARC Platform can help you test earlier and test often. "Test early, test often" is a longstanding phrase from test-driven development: as soon as you start developing code, you begin testing it, and you keep that test-and-retest cycle going. We'll talk about how the ARC Platform and some of our tools can help you with that. And finally, we're going to go through some of the tools that will help you shift your accessibility testing left and what you can begin doing today. We have a number of free tools that will help you and your teams get started with that process right now.

Real quick, a little bit about me; I'm not going to spend a whole lot of time here. I'm a product manager for the ARC Platform and its related products. I've worked in accessibility since 2001, originally as a US federal contractor working with the likes of the Department of Labor, the Department of State, and USAID, which have sadly been in the news a lot recently. I've been with TPGi for the last eight years. I started off as a developer, was a consultant with TPGi for a time, I've been a business analyst, and I've done test automation.
I've worn a whole lot of different hats since 2001, all focused on digital accessibility, and that background helps me see all the different steps of the Software Development Lifecycle, put those different hats back on, and think about what I would need to lead a team, or to work within a team, focused on accessibility. So, that's a little bit about me.

Let's jump in and level set real quick. When we talk about the Software Development Lifecycle, or the SDLC, as I'm going to abbreviate it because "Software Development Lifecycle" doesn't exactly roll off the tongue, we're talking about a structured process that outlines how a team moves forward and delivers software: everything from building it, testing it, and deploying it to maintaining it. It's a core set of steps. Some of the common ones are planning, discovery, and design work, and design isn't necessarily just the UI; it can also be architecture design, how all the pieces are going to fit together. Then development, the actual building of something. Testing can be everything from developer-led, test-driven development with unit and integration tests, all the way through QA, automated testing, automated browser testing, and exploratory testing; it covers a wide gamut. Once something makes it through testing, we need to get it out there and deploy what we've built. And then, long term, what does our ongoing maintenance look like? How are we going to maintain this thing over time?

So, where does shifting left come in? In the traditional model, accessibility testing really comes in toward the testing side of things, late in the development cycle. All that work happens, we get it deployed, and then we start testing it. That results in a lot of costly rework, and maybe some frustration from developers who feel the goalposts are shifting on them, and we want to limit that. It can also mean missed deadlines: maybe not everyone knew accessibility was a requirement of the deliverable, and now the team has to go back and fix all these things they never took into consideration at the start. Or you might actually deliver it, and now your application is in a non-compliant state, which opens you up to legal or financial risk, depending on your target audiences.

The shift left model is all about eliminating that risk by focusing on the accessibility requirements at the start. We're going to be looking more at the planning and discovery phases, before we even really get into development: loading up our team and getting them the tools and training they need, so that we can mitigate those long-term risks and the rework we would otherwise have to pay for, and keep our deadlines in place, so we're not redoing things at the end of our sprints or right as we're getting ready to deploy.
And best of all, in that shift left model, before we even get out of the development phase and into testing, we've got a more inclusive product that everyone can use. That includes our testers: it's going to be easier for them to test if the development team is actually building something accessible before they even commit their code.

So within this process, where do the ARC Platform and our various tools fit in? We've got a number of tools out there. Some are built into the ARC Platform. Some are publicly available, open source tools that we make available on GitHub, like our Color Contrast Analyser. Others are freely available, like ARC Toolkit, a browser extension for Firefox and Chromium browsers that we'll talk about in a minute. No matter whether you're running a waterfall project, a small lean shop, a scrum shop, or a big enterprise shop following SAFe, these individual phases have some similarity to the way you build software. So what I want to walk through today is where ARC can help you along the way.

In the planning phase, we're going to talk about ARC Tutor, our on-demand, course-based education software. It takes you through various courses and lets you train not only yourself but your whole team, so that going into a project everyone is starting off with the same foundational knowledge of accessibility.

For discovery, we'll look at ARC KnowledgeBase. Discovery is all about figuring out what it is we're trying to build, and having a library of knowledge like the KnowledgeBase to fall back on helps you answer questions such as: what design patterns might we need to include before we even start building, what core knowledge does the team need, and how is something going to be defined? As a product manager, I spend a lot of time during discovery on acceptance criteria and story writing, and if you're building out some new UI pattern, the KnowledgeBase is a great way to figure out which specific line items I need to include in my acceptance criteria.

Then we start talking about design, where we're starting to build things out. If you're working in Figma or any other design tool, having the Color Contrast Analyser to test your colors is a great way to make sure the colors you're using achieve the correct contrast ratio.

When we get to development and testing, ARC Toolkit really comes into play. We want to front-load that on the developer side, so that as developers code out a webpage, a new component, et cetera, they have the tools right there in their browser to test and make sure what they're building is accessible before they even commit that code. For testing, it's the same tool.
Give your QA testers the ability to use the same tool the development team is using, so they can confirm, "Yes, this thing the developer just sent me to test passes all the same tests in ARC Toolkit." It's a checks-and-balances sort of thing: QA is testing and affirming that the developers are actually running the tests as part of their development process.

When we look at deployment, we have a couple of very similar tools, Playwright and WebDriver, that we'll talk about in a few minutes. They're great tools: wrappers around our ARC rules engine that allow you, as part of your CI/CD pipeline or in any browser test automation framework you may have, to run the ARC rules engine against pages, components, whatever you want, and fail a build in the event that a certain error, or a certain number of errors, is returned from the scan.

And then finally, once we get past deployment, we've gone about as far left as we want to go. Planning, discovery, design, development, testing, deployment: that's where we want to shift. Ongoing maintenance is really too late in the game. But from a safety-net perspective, ARC Monitoring lets you set something up on a schedule so that even when something slips through, even when everyone got the right training, all the acceptance criteria were there, everything passed, and your CI/CD pipeline was testing everything, sometimes things still make it through. That's particularly true if you're building in a component-based architecture, where maybe it's not until everything is integrated into the same webpage that problems start to creep up. So we'll talk a little bit about ARC Monitoring and what you can do there from an ongoing-maintenance perspective.

We've got a question already: "Is there any type of Google extension that could help developers use ARIA when coding?" Our ARC Toolkit, which we'll talk about in just a minute, does test ARIA; it tests whether the ARIA being used in the page is correct and valid. We have a number of tests there: for example, if you use a particular role, we make sure the required attributes or states are defined correctly. Those tests show up in ARC Toolkit, which is available in any Chromium browser, so Chrome, Microsoft Edge, and Opera, which is a Chromium browser now. Firefox is not a Chromium browser, but we support Firefox too. So that type of extension would help you test for that. It's not necessarily going to lead the way in telling a developer which roles or attributes to apply when building out a component, though; it's not designed that way. However, we do have content in the ARC KnowledgeBase, and some Tutor modules, that cover that, particularly around design patterns and the different ways to use ARIA. That's where I'd want to start developers off: with those KnowledgeBase articles or Tutor modules, building up that knowledge before they even start building something.

So first off, we're going to talk about ARC Tutor. As I mentioned, this is part of the planning phase.
The great thing about ARC Tutor is that it allows you to train yourself or your team in all facets of digital accessibility. We've got courses on everything from an introduction to accessibility all the way through really deep-dive development techniques, not only for the web but for Android and iOS, and we have PDF Tutor courses now as well. There's an absolutely wide array of courses you can come in, review, and use to train yourself and your team, so that when you move into discovery and the later stages of the SDLC, it's not quite so scary. You've made your initial investment, you've learned the direction you want to go, and by the time you get to discovery and design you can better shape your expectations around what it is you're going to build.

The ARC Tutor and KnowledgeBase courses are maintained by TPGi's team of expert engineers. These are folks who have been in accessibility even longer than me, and I've been in this for 24 years now. The wealth of knowledge we have between ARC Tutor and our KnowledgeBase content is really amazing. It's not our best-kept secret, because we talk about it a lot, but it is a gem in our little crown. If you subscribe to the ARC Platform, you can come in and take the Tutor modules, and it's great.

Let's jump over to what Tutor looks like real quick. This is our Introduction to Digital Accessibility course. If you're brand new to accessibility, this is a great place to get started. Each of our Tutor modules or courses has an estimated course duration; in this particular case it's telling me the course is 55 minutes long. There's a brief course introduction, and then topics such as accessibility, disability, and how people are affected by disability, which depends, to a certain extent, on the type or types of disability they have. We'll introduce you to those concepts and to the impact accessibility has on people with disabilities. Then we cover assistive technology, advocacy and litigation, role-based accessibility, and the various guidelines we use to determine whether or not something is accessible. All of these are lessons within this one Introduction to Digital Accessibility course. And as I said, we have courses across a number of different topics, everything from brand-new-to-accessibility introductions to all the core concepts, all the way through native mobile development, web development, building accessible PDFs, et cetera.

Moving on to discovery, we've got the ARC KnowledgeBase supporting us. The KnowledgeBase is an accessibility library containing detailed instructions for creating accessible code for your digital content. It is literally an encyclopedia of everything you need to build something to be accessible, with tons of examples of what not to do and how it should be changed, and it walks you through, at a very granular level, exactly what you need to do. Again, it's maintained by our expert engineers and covers a wealth of topics: web development, Android and iOS development, design patterns, accessible user experience. There's a vast wealth of knowledge available in our KnowledgeBase.
Looking at the KnowledgeBase, I'm just going to bring up one of our more popular modules, the Accessible Web Content Development module. Some of the topics available just for building accessible web content: we've got information on cognitive disabilities, complex images, CSS layouts, and a number of articles on messaging and errors for forms. If you're working on logins and authentication, there are four separate articles on the different patterns and concepts you need to keep in mind when building those out. Skipping around, if you're working with media, say embedding a YouTube video or some other media player, or building your own media player, we've got articles on that, and on single page applications.

I'm going to dig into tables real quick, because this shows how granular we go: individual articles on using the caption element, on the use of headers and ID attributes for complex tables, and on editing tabular data. It's a great wealth of very targeted, very specific content that helps you understand, at that low level, exactly what you need to keep in mind when, say, you're adding a table with a filter. It's definitely something you want to use in those early stages, so that when your team moves into the design and development phases, they already have an understanding, or at least a resource they can use to inform their development, rather than coming back later and having to figure out how to recode a particular component, in this case maybe a table with a filter, to make it accessible.

Before we move on, a couple of questions are coming in. Oh, one disappeared. "Can the Tutor training modules be added to a learning platform such as Cornerstone?" If you subscribe to the ARC Platform, we do make our Tutor training modules available in a SCORM-compliant format. Please reach out to one of our sales folks and they can talk you through getting set up with that. SCORM is a very interesting specification, and I don't know the ins and outs of Cornerstone, but if Cornerstone allows you to import SCORM-compliant content, then yes, we make that content available in that format. We support a couple of different SCORM versions; I don't have them off the top of my head, but we can get that to you if you reach out to us.

Great. Let's see, moving back. So, that's our discovery phase. So far we've talked about planning and discovery. We've brought the team in and gotten everyone, hopefully, some basic amount of accessibility knowledge, so we're not going into this cold. One of the things I like to talk about is that there are a lot of similarities between accessibility and security. If you've done any type of work in security, there are a lot of things you don't know until it's too late, and accessibility can be much the same way.
So learning as much as you can before you start building is going to save you and your team a lot: it lowers your risk, it lowers your rework, and so on, as you continue to build. But once we get out of discovery, we start getting into how we're actually going to build this thing, and we enter the design phase of the Software Development Lifecycle. While we don't have any architecture tools, from a UI/UX design perspective we can look at our Color Contrast Analyser.

This is a free, desktop-based application that you can find on the TPGi website. When we distribute these slides, I've got links for both the Windows and Mac editions; it's the same tool, just cross-platform. What's great about the Color Contrast Analyser is that it allows you to very quickly and easily check your foreground and background colors, not only against the WCAG success criteria around text but also for UI components, which is important to help ensure proper contrast for individuals who may be colorblind or have low vision. There are a number of tools out there today that do color contrast analysis, some built directly into Figma, some browser based. But the thing I keep coming back to with the Color Contrast Analyser is that it's one tool I can use to test almost anything: I can test Figma designs, I can test a PDF I'm working on, I can test inside a PowerPoint, I can test any other application that appears on my desktop. A tool baked into my browser can only really test a webpage; it can't test things outside the browser. That's why having a desktop application is particularly useful; it has a lot more utility for the team as we build different things.

I'll show real quick how the Color Contrast Analyser works. I'm going to switch back to my browser here. We've got a little demo site: if you ever need a good worst-case scenario for accessibility issues, we have tpgiarcdemo.com. It's just a little site we built, but it is chock-full of everything you should never do, so it's a good test. I'm going to bring up the Color Contrast Analyser. What's really nice about it is the eyedropper tool. I can enter a particular color in HEX or any number of other formats; we support HEX, HEX with opacity or alpha channels, and RGB. I can drop values in here directly from my Figma designs to test things out, or I can use the eyedropper tool. In this particular case, maybe I want to test this "awesome store" text. For the background color, I'm using the eyedropper to grab this red, and I'll use it again to grab the white of the text. That shows me that these two colors together have a contrast ratio of 6.4 to one. This passes WCAG success criterion 1.4.3 Contrast (Minimum) at Level AA. Not only does it pass for regular text, it also passes for large text.
If we look ahead at AAA, trying to pass the AAA success criteria, it's going to pass for large text but fail for regular text, because it does not achieve that seven-to-one ratio. We also have additional information in here that provides some background on what we're looking for and how we perform this particular test. If I had an icon or some other graphical UI component, we also have the test for 1.4.11 Non-text Contrast, and these two colors pass that as well.

In the event that your colors don't pass, there are some really neat tools available in here. Let's scroll down and take a look. We've got this "women collection" text here. The text color is the same white, but let's see what the background color looks like, so let's snag that. In this particular case, you can see that all of our success criteria fail because the contrast ratio is 2.6 to one. It doesn't even achieve the base three-to-one ratio we need for larger text at AA or for non-text contrast. So, I want to find a better color for this. Maybe my design is set up so that I really want to leave this kind of mustardy yellow background color in place, but is there a foreground color that lets me keep that base color and just change the text? In this case, I'm going to switch to RGB, enable the color sliders, and then synchronize the color values; I can check this box to synchronize, and then use the sliders to change my foreground color. As I do, it recalculates the contrast ratio as the value changes. Once I start getting into the dark grays, my contrast ratio drops all the way down to one to one, and now it's heading back up to 2.9. Once I cross three to one with this dark gray, and I've got a little sample here showing me what it looks like, I've passed three to one, but this isn't considered large text, so I need to get above the regular-text threshold. I'll keep dragging until I get it right about there. Now I know that 393939, this dark gray, is a potential color that will pass the minimum contrast ratio and satisfy success criterion 1.4.3 Contrast (Minimum).

There are some other really great things in here. If you're doing testing and you've got colors loaded, you can copy the results directly out of the tool in either a long-form or short-form mode, and the settings give you the ability to really dial in exactly how you want to convey that information to your team. Maybe you're going to copy and paste it into a Jira ticket; we give you the ability to define what both of those formats are and how the information shows up wherever you're pasting it, so you can communicate back to your team exactly what needs to change, or what values you found during testing.
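If you're curious where those numbers come from, the ratios the tool reports follow the WCAG relative luminance formula. Here's a rough sketch of that calculation, purely for illustration; this isn't the Color Contrast Analyser's own code:

```typescript
// Rough sketch of the WCAG 2.x contrast ratio calculation (illustrative only,
// not the Color Contrast Analyser's implementation).

// Convert an sRGB channel (0-255) to its linearized value.
function linearize(channel: number): number {
  const c = channel / 255;
  return c <= 0.03928 ? c / 12.92 : Math.pow((c + 0.055) / 1.055, 2.4);
}

// Relative luminance of a "#RRGGBB" color.
function relativeLuminance(hex: string): number {
  const [r, g, b] = [1, 3, 5].map((i) => parseInt(hex.slice(i, i + 2), 16));
  return 0.2126 * linearize(r) + 0.7152 * linearize(g) + 0.0722 * linearize(b);
}

// Contrast ratio: (L1 + 0.05) / (L2 + 0.05), lighter luminance over darker.
function contrastRatio(foreground: string, background: string): number {
  const l1 = relativeLuminance(foreground);
  const l2 = relativeLuminance(background);
  const [lighter, darker] = l1 >= l2 ? [l1, l2] : [l2, l1];
  return (lighter + 0.05) / (darker + 0.05);
}

// Example: mid-gray text on white is roughly 4.5:1, the 1.4.3 AA minimum for regular text.
console.log(contrastRatio('#767676', '#FFFFFF').toFixed(2)); // ~4.54
// AA thresholds: 4.5:1 for regular text, 3:1 for large text and non-text contrast (1.4.11);
// AAA (1.4.6) raises regular text to 7:1.
```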
So, it's a very useful tool, not only for designers but for developers, testers, et cetera, to test color contrast across not just webpages but anything that appears on your monitor. That was our Color Contrast Analyser; again, a free tool available on both Windows and Mac.

Looking ahead at development and testing, I mentioned ARC Toolkit earlier. This is another one of our free tools. What's great about ARC Toolkit for both development and testing is that it allows you to quickly uncover and resolve accessibility issues on a webpage. It's very easy to run a scan, see the results, and then, if you're a developer, start working through the issues that were found, fixing each one and re-running the scan. It's very lightweight and very fast, so you get a quick turnaround when you're testing what's actually being rendered in your browser. ARC Toolkit uses the exact same rules engine we use within the ARC Platform; anything that runs our automated testing uses the same rules engine. There's no difference between what you test in ARC Toolkit and what runs in the ARC Platform, or, as we'll discuss later, in Playwright and WebDriver: it's the same rules engine across all of them.

Another cool thing about ARC Toolkit is that it lets you easily highlight other areas of interest for accessibility. If you want to know where all the headings are on the page, or at least everything that has been defined as a heading, because sometimes we make things look like a heading that aren't actually headings, we can draw boxes around those and highlight them. We can also show you the tab order: the path someone using tab navigation will follow through the interactive elements on the page. As I said earlier, the tool is available for Chrome, Edge, Firefox, Opera, Brave; there are a lot of Chromium browsers. If you've got a Chromium browser and you can install extensions from the Chrome Web Store, ARC Toolkit is for you, and it's also available for Firefox for any of my fellow Firefox users out there.

So, let's take a look at what this tool does and how I might use it. Going back over to our awesome store, I'm going to bring up ARC Toolkit. It's similar to other accessibility testing tools in that it lives in the developer tools, alongside the console, the inspector, the network tab, and everything else. I'm just going to have it run the tests against this particular page. When it runs those tests, we see a number of results come back. We categorize results from an automated scan into three categories. First are errors: things we know to have failed a specific automated test, definite issues we want to notify you about. Second are alerts: things we think are likely to be errors, where the conditions are right for this to be an accessibility issue, but which require some level of human review to confirm and to determine, "Yes, this actually is something I need to address," or, "No, this is not something I need to address."
It's actually built this way, and thank you, ARC, for letting me know you thought it could be an issue. And then finally, we also identify best practices. There are a number of industry-recognized best practices that we as a company, and really as an industry, have looked at and said: these don't necessarily fail a specific WCAG success criterion, but from a best-practice standpoint they are things you should address. Errors and alerts all test against WCAG success criteria; best practices look more broadly at what it takes to deliver a good user experience for someone using the page through a screen reader or other assistive technology.

Each one of these issues provides a description. I'm just going to open up this aria-hidden one. In this particular case, aria-hidden was used on a parent of a focusable element: it's telling us that aria-hidden="true" is used on an element that contains one or more focusable child elements. It then lists all of the individual elements, and where they occur within the DOM, that we need to go look at and fix. I can highlight these, and if I'm a tester who needs to drop this into a ticket, I can copy the XPath or copy the source. All of this information is available to move into a ticketing system. Now, ideally, because we're shifting left, none of these issues should make it out of development. As a developer, I want to go through and fix all of these items, not just this particular one, but everything in this list. Or, if I'm looking at specific components, I also have the ability to run my tests against a specific section of the page. So you can jump in, look at specific sections or components within a page, and ensure that the one component you're working on today is accessible, and that the code doesn't get committed until all of the errors are resolved, you've satisfied the alerts, and, if your organization is looking ahead to best practices, you've addressed those as well. There's a lot of capability and flexibility in ARC Toolkit for seeing where you need to focus your attention from an accessibility perspective.

I also talked about the highlight functions, the ability to highlight certain aspects of the page. Here, I'm going to highlight our headings. That brings up a list of everything ARC Toolkit found as a heading on the page, and I'm just going to drag this off over here. Oops, not that far. It also draws visual boxes around everything, so I can look down my page and ensure that everything I think is supposed to be a heading was actually identified as a heading. It's a very useful tool for testing: not only using automation to quickly run the accessibility checks that can be automated, but also helping with softer testing, making sure that things like headings, landmarks, and tables are all marked up and defined the way we expect them to be.
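To make that heading check concrete: what the toolkit is surfacing is essentially what the page actually exposes as headings, as opposed to text that merely looks like a heading. Here's a rough sketch of the same idea in plain DOM code, purely for illustration; it's not ARC Toolkit's implementation:

```typescript
// Illustrative only: list everything the page exposes as a heading, in DOM order,
// so you can compare it against what visually looks like a heading.
function listHeadings(root: Document | Element = document): string[] {
  const headings = root.querySelectorAll<HTMLElement>(
    'h1, h2, h3, h4, h5, h6, [role="heading"]'
  );
  return Array.from(headings).map((el) => {
    // Native headings get their level from the tag name;
    // ARIA headings get it from aria-level (default level is 2).
    const level = el.matches('[role="heading"]')
      ? el.getAttribute('aria-level') ?? '2'
      : el.tagName.charAt(1);
    return `h${level}: ${el.textContent?.trim()}`;
  });
}

// e.g. run in the browser console on the page you're working on:
// console.log(listHeadings().join('\n'));
```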
Let's see, we've got a couple of questions coming in before we move on. One: "Can the ARC tool be used in SCORM environments?" I'm not sure, so I'm going to answer that as written, though I feel like there's probably more nuance to it. If the question is whether ARC Toolkit can be used in a SCORM environment, or any type of LMS: because it is a browser extension, it will work on anything you can load up in your browser. So if your LMS is web-based, even if it's behind your internal firewall, and you want to test whether your LMS content is accessible, then yes, you could go page by page through that content with ARC Toolkit and test it.

The next question: "Can ARC Toolkit be configured to test against various WCAG standards? For example, WCAG 2.0, 2.1, 2.2, et cetera." Yes, it can. By default, we always test against the latest WCAG standard, but you can come into the settings and tell ARC Toolkit to use a particular version. You can also configure whether you even want to show errors or best practices, whether you want it to test the shadow DOM, and whether there are specific topic areas you want to exclude from your testing. So yes, you can focus your tests on a particular standard; going back to WCAG 2.0, for example, would eliminate any automated tests we added specifically for success criteria from WCAG 2.1 or WCAG 2.2. All right, if we have some time I'll come back to that last question in a little bit, but I want to keep moving.

Again, ARC Toolkit is a great tool for developers, great for testers, and great for anyone else on your team who needs to quickly test for accessibility: content managers and content editors, for instance. If you're plugging content into WordPress and looking at the live preview, did you add alternative text to all of your images, did you mark your table up correctly, all of those things. And again, it's freely available for Chrome, Edge, Firefox, and all those Chromium browsers.

Moving on to deployments. For deployments, we have two tools. They're really the same tool, but focused on two different methods of browser automation. One is ARC Playwright; Playwright is fairly popular these days and is a great tool for automating the browser. The other is WebDriver, or Selenium, a very similar set of tools for driving the browser and testing things in it. For both of these, we've developed a Node.js wrapper around our rules engine that lets you very easily plug the ARC rules engine into Playwright or WebDriver. So if you have existing tests in either of these tools, or you're thinking about getting started with them, you can very quickly get up and running and start adding ARC tests to them. In the screenshot I have up here, there's a test against the homepage asserting that the homepage should not have any automatically detectable accessibility errors.
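Just to give a feel for the shape of that kind of test, here's a minimal sketch in Playwright. The ARC-specific names here, the package and the scanner class, are hypothetical placeholders for illustration, not necessarily the documented API, so check the ARC documentation for the actual integration:

```typescript
// Minimal sketch of an accessibility gate in a Playwright test suite.
// NOTE: the ARC-specific package name and scanner API below are hypothetical
// placeholders; refer to the ARC documentation for the real names.
import { test, expect } from '@playwright/test';
import { ArcScanner } from '@tpgi/arc-playwright'; // hypothetical package/export

test('homepage should not have any automatically detectable accessibility errors', async ({ page }) => {
  // Navigate to the page under test (any URL your automation can reach).
  await page.goto('https://your-site.com');

  // Create a scanner for the loaded page and run the ARC rules engine against it.
  const scanner = new ArcScanner(page);    // hypothetical constructor
  const results = await scanner.analyze(); // hypothetical method

  // Gate only on errors; alerts and best practices are triaged separately.
  // A single error fails the test, and therefore the CI/CD build.
  expect(results.errors.length).toBe(0);
});
```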
And what we're doing as part of that test: using Playwright, we tell it to go to a specific page. In this case it's your-site.com, but internally it could be any page your browser automation tests can reach. We then tell Playwright to create a new ARC scanner within that page and to run the analyze function on it. Then we assert, or expect, that the results return no errors, that the number of errors that comes back equals zero. We're not looking at the count of alerts or the count of best practices; we're really looking at those errors and checking whether that equals zero. If I write this as an automation test and my CI/CD pipeline is set up so that all of these tests have to pass for the build to succeed and move on to the next stage, then if I get to this stage without having addressed all of my accessibility errors and even one error is returned, this test fails, which stops my build process and thus stops my deployment process. So this can be a very easy way to automate testing so that issues don't get out past your build pipeline. It's very useful if you have component libraries you want to test: maybe you have a component library that you want to write a series of browser tests for, or that already has browser tests. You can easily plug the ARC Playwright wrapper into that, and within a couple of lines you're testing that component for accessibility alongside the tests for its functionality. So it's a very useful tool, and very easy to integrate into those browser automation tests.

One of the great things about the ARC Playwright and WebDriver solutions is that you get complete control over the scope of the scan, far more than we give you through ARC Toolkit or through ARC Monitoring, which I'll mention in a minute. We give you really detailed ways to scope down a test. If you want to run just a single test, you can. If you want to run only the WCAG 2.1 Level A tests, you can run just those. So there's a lot of flexibility in how you scope and control that particular scan.

Let's see, before we move on: "Does ARC provide suggestions for remediation, and when issues are found, does ARC provide the WCAG reference for traceability?" Yes, and yes. We provide remediation guidance and we provide reference information. It's easier to see in ARC Toolkit, so let's open something like "empty label text," and let me make this a little bigger so we can see it. When this test fails, we have remediation text here; the description tells us the label associated with the form control is empty, and we're also letting you know that this is part of Success Criterion 1.3.1 Info and Relationships. From here we also link you off to the Understanding document for SC 1.3.1, so you can get more information there. In some upcoming releases of ARC Toolkit, we're also going to link you into the ARC Platform, which will give you direct access to that KnowledgeBase content.
So for empty label text, we can link you directly into the ARC Platform, where a specific article explains all about empty label text, why it's important, and, in even more detail, how you should address it.

As I mentioned earlier, once we get to ongoing maintenance, we've really left the realm of shift left; even deployment is late in the game. But from an ongoing-maintenance perspective, having something like ARC Monitoring in place is incredibly useful as a safety net. You want something on a schedule that is always scanning, to make sure there's no blip and nothing makes it through your releases. Sometimes a developer might think an alert isn't an issue when it really is, and testing might agree with them, because maybe not everyone was on the same page with the original training. When we write our browser automation tests, maybe we don't cover that alert as an edge case, or maybe we're only testing for errors. So it's about making sure, from a long-term historical view of our application, that we're monitoring and seeing how the team is doing over time.

Monitoring is also great if you're just getting into accessibility and you've got an older website or application out there. We do trend analysis, and we start that scanning as soon as you begin monitoring. I live in Colorado, so everything's a ski hill out here: you've got a nice slope, and as your team improves accessibility, you watch that curve trend down. It's a great feeling to see all of the work your team is doing over time trending in the right direction. So having a safety net out there is important.

From a monitoring perspective, you can do a domain scan, where we crawl each page much like a Google bot does. You can also use user flows to monitor specific paths. If there's a specific path in your application, say you're an e-commerce site, maybe you want to make sure the search on the homepage is accessible; everything else on the homepage is less important, but that search is key, along with the product page and the checkout process. You can monitor those very specific paths to make sure they're accessible, and use domain scans to make sure entire pages are accessible as well.

Our new dashboard gives you a lot of capability for viewing and filtering your automated findings, so you can tell exactly the story you need in order to work with your team. A lot of folks like looking at the criticals and highs first; they want to lower their risk by focusing on those, so we give you the ability to filter your results over time and see your risk through just the criticals and highs. You can also quickly access really useful remediation guidance and learning opportunities for each finding. I'll show that real quick. For our "no image role" finding, we can see some basic information here: the number of sources and components it was tested on, and how many instances have been found within the current period, which for us right now is the month of February. This is also a scan of our awesome store. We can see that this is an automated test from our interactive controls topic area.
It was categorized as an alert with a severity of high, and it's a failure of success criterion 1.1.1 if we determine that it's not just an alert but actually an issue that needs to be resolved. It affects screen reader users. We've got remediation guidance in here on what needs to be addressed: assistive technologies may not announce the element as an image, so what should we do? We should add role="img" to that element. Then we've got a number of inline related articles: things about hiding SVGs, providing the correct role, state, and properties, and more information about success criterion 4.1.2. And finally, we list all of the places where this was found. Because we're monitoring a number of different pages, we can show you every page where this appeared; on our product page in particular, we found eight different SVGs where this was an issue, and we provide you with information on where each of those SVGs exists, so we know where to go and what to fix.

So again, monitoring is a great way to put a safety net in place, so that even though you're shifting everything left and trying to front-load your process as much as possible to ensure no accessibility issues make it out of development or testing, you still have that long-term backstop.

In review: shifting your accessibility efforts left, earlier in the development process, is really going to help you lower your risk. I can't say that enough; there's so much risk around accessibility these days. There's a question here about EN 301 549, which is an EU standard, and with the European Accessibility Act coming up here in June, a lot of people who are in the EU, do business in the EU, or have websites serving the EU are going to need to address their accessibility issues before then. You want to lower that risk as soon as you can, and especially once you do fix everything, you don't want to introduce new bugs and new accessibility issues. Lowering your risk by focusing more of your effort earlier in the Software Development Lifecycle, shifting left, is a great way to manage that.

It can also help you reduce the long-term costs associated with rework. The later something enters the process, the more cycles it takes to address. A lot of times, accessibility issues just get put into the parking lot or into technical debt, someplace we don't have the capacity to get back to until maybe the end of the year, or a closeout sprint right before release where we clean up as much as we possibly can, because a lot of teams are focused on feature development, bug fixing, and maintenance. So you want to address accessibility as early in the process as you can and eliminate those long-term costs. And of course, you're creating a more inclusive product from day one: a product that, when you ship it, you know is accessible from the start.

As I mentioned, the ARC Platform includes several tools, some of them free, some of them not, to help you understand and test for accessibility at every stage of the software development process.
No matter what your process is, we have a tool that can help, from the planning and discovery stages, through design, and all the way through the Software Development Lifecycle. So definitely get started with the free tools. Get out there, test, find out what you don't know about your application or your website, and then start to build up some knowledge. We're happy to talk to everyone about accessibility, of course. Most of all, we really want to see everyone succeed at it. I've been in this for a number of years, and I would love for accessibility to become a non-issue and put me out of a job. Anything we can do to help you get better at accessibility is a win for us.

With that, I know there are a couple of questions, so I'll start looking through those real quick. Yep, here's one: "If we have sites that are also in the EU, can the tool also check against EN 301 549?" Yes. Just like Section 508 in the United States, the EN standard has different versions. The current version, which I believe is 3.2.1, harmonizes with WCAG 2.1, so for the most part you can run just the WCAG 2.1 tests and look at everything through the lens of WCAG 2.1. Coming up soon, we have mappings coming for all of our tests; we're going through and remapping them so that you can view the results through the lens of EN 301 549, the same way you can through the WCAG standards. If you switch over to that standard, then all of the remediation and all of the references we show in ARC Toolkit or in ARC Monitoring will point specifically to the sections within the European standard.

"Does the ARC tool provide accessibility test reports, and can the reports be exported to Word, PDF, et cetera?" ARC Toolkit does not provide that capability. It's meant to be more of a developer-focused tool, where you're working within a particular page or component, and the goal of that development cycle is for none of those issues to make it into version control, so there's not a way to get that information out of ARC Toolkit specifically. In ARC Monitoring, though, if you come into the ARC Platform, we do have dashboards, and we have more coming later this year, like an executive dashboard, as well as a dashboard planned for components, which is more of a developer-focused view. In those cases, yes, you'll be able to get some of those reports out so you can share them with your team.

Okay. Oh, we are right at time, aren't we? Let me do one more question. "Regarding testing, can ARC Toolkit drill down and test the contents of iframes if found in web applications?" Yes. ARC Toolkit will drill down into iframes and test within the context of the iframe, and we'll even tell you in the results which iframe specific issues were found in. We also treat the shadow DOM much the same way as iframes: we'll look into the shadow DOM, test what we can within there, and then report back up, saying that within this custom component we found these types of issues.
There are some things we can't do across the shadow DOM boundary, but everyone has that problem; it's just the way the shadow DOM works. But yes, we can drill down into iframes and test there. And if you have a particular component that's encapsulated within an iframe, you can test just that iframe and ignore everything else on the page.

All right. Oh, we have one more; I'm going to answer it real quick. "Does it output VPAT certificates if all issues are resolved and passed?" No, we don't generate a VPAT from any type of automated testing. The reason is that automated testing can only test so much. Even in the future, when we add AI capabilities to our automated testing, there's no way to test everything; you need a person to go in and perform some accessibility tests to evaluate certain success criteria. So we only generate a VPAT from the results of one of our manual audits.

And with that, I want to thank you all, and thanks for the extra two or three minutes. I hope I gave you lots of good answers. Happy Wednesday.

- [Kari Kernen] All right, yeah, thanks again, everyone. If you have any additional questions that we didn't get to, feel free to reach out to ida@tpgi.com, that's I-D-A, and we will make sure to connect you with someone who can answer your questions. Again, thanks for participating today, and we look forward to seeing you at our next webinar.

- Thank you.