- [Stefani] Hi everybody. I've just started the webinar a few minutes early to let everyone join in. We'll get started in a couple minutes. Thanks for joining us today. We still have quite a few people joining in. I am gonna do a quick introduction and a few housekeeping items in just a minute. Well, okay, it's just noon and good morning, good afternoon, even good evening from wherever you're joining. My name is Stefani Cuschnir and I'm part of the business development team here at TPGi. I wanna thank everyone for joining us today for "Advance Your Return on Accessibility with Testing." Charlie Pike is the presenter today. I have a couple of housekeeping items and then we'll turn it over to Charlie. Just so that everyone knows, this session is being recorded, and we will email everyone the recording after the event. We have captions available, so feel free to use them as needed. We will also have time at the end of the session for live Q and A. Please use the Q and A box and we'll answer as many of the questions as we can at the end of the presentation. If you put them in the chat, I'll try to monitor, but sometimes they get missed if they're in the chat. Lastly, I'd also like to mention that if anyone needs any Accessibility support, training, or usability testing, or would like to get some demos of our tools, I will be sending out an email with a link to schedule a time to speak with one of our experts after the webinar. And with that, I will let Charlie get started and provide an introduction to himself. Thanks. - [Charlie] Very good, thank you very much. Thank you, Stefani. So my, yeah, my name is Charlie Pike. I am the Director of Platform Success with TPGi. What that means is I'm really the liaison between the customer and our platform, which I'll be showing you a lot of today, and an advocate for the platform. I've actually been with TPGi for a long time. I was one of the founding partners of TPG. So I've been in Accessibility for nigh on 20 years now. And if you notice, there's a little sort of illustration of me here on the front slide with a shamrock. Yes, I'm in Ireland, which is where I'm talking to you from right now. So today we're gonna talk about Advance Your Return on Accessibility, or ROA, with Testing. So we're gonna be very focused on the areas of testing, how to test, where to test, and how to make the most out of your Accessibility testing, and particularly the different kinds of testing that you can do in Accessibility. So just a quick look at the agenda. I'm gonna do an introduction and really explain how Accessibility testing breaks down, and then talk a little bit about the various phases of Accessibility with regard to testing: from discovery, to rolling out a testing program, to a mature state where you're testing in your pipeline and capturing issues before they ever make it to production, and in production you're monitoring. And I'll show you a little diagram of how we see that working. I will leave 10 minutes, hopefully we'll have 10 minutes or so, for Q and A. As Stefani said, put your questions in the Q and A panel and we will go through them at the end when we come back to it. This is largely going to be a demonstration using actual example data in our platform. I do have a few slides here, and I'll hopefully not spend too long going through them. Then we'll get into a demonstration where you can see the kinds of things I'm talking about actually in action. So, the analysis onion.
So I wanted to talk first of all about Accessibility analysis in general and the different types you can do. Anyone who's been to one of my webinars or presentations will recognize this analysis onion diagram. This is my running metaphor for Accessibility testing. There are a lot of, if you like, false ideas about Accessibility testing, particularly around automated and manual testing and how to use them effectively, with some organizations focusing much more on what's automated and other organizations focusing almost purely on manual expert analysis, and leaving out things like AT testing and so on. The reality is you need to do all the elements of analysis to get a complete picture of your Accessibility. And you need to do them in the right places, because some of them are much more time consuming and costly than others. So you need to think about the right type of analysis in the right situation. And that's really what I'm going to be talking about today. So when we say the Accessibility testing onion, or the analysis onion, I'm really talking about the different layers of testing. But all of it kind of has to be done to get a complete picture. So, here in the onion we've got an outer layer, which we call domain scans in the ARC world, ARC being our Accessibility Resource Center, our testing platform. Domain scans would be scans of an entire website or an entire web application. So it would usually be scans over, you know, a hundred, 500, 1,000 pages or more, depending on what you want to scan. So that's a broad picture, and you're using that kind of analysis not to understand your compliance. Automated testing like this is not going to tell you how compliant you are, for example, with the WCAG guidelines, because you can only test a percentage of those guidelines. Various vendors argue over what percentage, and we constantly get asked how many of the WCAG guidelines we test. But in reality, that entirely depends on what you're testing, how large the test set is, and so on. It varies a lot between sites and applications how many we actually test effectively. It's not going to give you the complete picture. And if you rely on this to tell you how compliant you are, you are probably getting a very skewed view, okay? And you might look great, but in fact not be accessible at all. So you don't use those broad level scans to tell you whether you're 80% compliant or 60% compliant. You use those broad scans to understand, for instance, where you're most exposed, okay? Where there's a greater clustering of issues, and what are the processes that are causing the issues, if you treat your issues as symptoms. And I'll show you a domain dashboard in action where you can see those kinds of trends and so on. That broad picture is very good for the larger analysis: where do I direct my efforts, in that discovery phase particularly, but also when you're monitoring in an ongoing way, when you've done a lot of work on Accessibility and you're trying to maintain that. These broad level scans will help you to spot problems as they come up and address them there. A layer in, a deeper layer if you like, from the domain scans are what we call user flows, again from the ARC world. User flows would be key flows. For example, if you're a hotel website, it might be your booking process, okay? That's the most important flow for you, where your customers are coming in and doing a booking, all the steps in that flow.
And you might already prioritize those and say, we wanna make those accessible first and foremost, make sure they work. But you can also use those flows to help zoom in on particular processes, or particular templates and/or components, say a dialogue box or whatever, and analyze those in greater detail. See what effect they have on your Accessibility. So that's a finer, more zoomed in level of analysis, usually on a smaller test set. When we do user flows, we're typically testing, you know, 15, 20 to 30 individual components, a dialogue box, a page, and so on. It's a much more focused automated analysis. And I'll show you, when we're doing the demo, how you can use your domain scans to identify where you want to do that focus. Manual expert audits are something that we, TPGi, that's our bread and butter, we do those every day. We're doing hundreds of them a month. These are the full analysis. For example, an analysis of your application against all the WCAG guidelines, or Level A and AA. An expert would basically go through all of those guidelines, do all of the tests, there are hundreds of them, and give you a complete report. That is full compliance testing, and that will tell you how compliant you are, for example, with WCAG. But that work is obviously labor intensive and costly, so you can't be doing that all the time everywhere. So we'll look again at how you can use your automated testing to help you identify when and where to do that analysis, and then use that analysis to extrapolate when you're doing your broader scans. And then finally, the core of all of this, the middle of the onion, the gold nugget in the middle of the onion, is always going to be your user testing, okay? We've included in that AT testing, which is not strictly speaking user testing, but it gives you an understanding of users and their particular problems, particularly in Accessibility. Understanding the assistive technologies and the challenges users of those technologies have is very important. User feedback, where your users are responding to issues on your site and providing you direct feedback, is another way of gathering user information. And user research is that richest vein of data, if you like, in terms of testing, which will tell you just how effective your work is. At the end of the day, no matter how compliant you are, that user perspective is always needed to understand what the user experience is. Is it actually efficient, is it quick to use, is it effective? Have you used the right labels? Have you developed things in a way that users find intuitive, et cetera, et cetera. That's the cream at the end, if you like, to do that level of analysis and then to fully understand and prioritize any remaining issues. So, in that onion you have your domain scans, your broad level scans, user flows, your manual expert audits, and your user testing and AT testing as well. You need all of those to get a complete picture. Only then can you really see your level of Accessibility. So let's just look at some phases. If you want to do all of those levels of testing throughout your organization, how do you go about that? How do you even organize the work in terms of rolling it out? So as I mentioned, we have the domain monitoring, and you would start at the high level. Logically, if you have a lot of stuff, a lot of sites and applications, you wanna start off with your domain monitoring and scanning at the broadest possible level.
So there you're scanning websites and web applications. As I mentioned, this is a screenshot I have up of ARC, of our Accessibility Resource Center, and a dashboard. And there we've got various visualizations that you can use. We've got our WCAG density score, which I'll talk about in a little bit, which is kind of an average of Accessibility failures across the site. And you've got your trend line, which tells you really what's going on at a particular time. So it's very effective for monitoring. This is where your domain monitoring will help you identify targets, if you like, priorities that you need to address. Then, as I say, at that point you can use your domain monitoring to zone in on particular components and things. For instance, you may find from your domain monitoring that you've got a problem with a header on a website, as an example. You can pull that out into a user flow. And as I said, a user flow tends to be on a smaller number of components. User flow scanning includes the likes of interactive tasks, such as that booking flow on a hotel website, and individual components such as a header, a footer, a dialogue box, a data table, et cetera. So you can separate them out. So you go from domain scanning down a level to look at things more closely. You're zooming in and you're finding things that you want to focus on. Then you do your manual expert audit. In the ARC world, we call those engagements, okay? As I said, you can't do these everywhere. So it helps that you first do your user flows and you focus in on those components that are giving you particular issues or that are particularly important, as I said, like a key flow. And there you do a full expert analysis. It's ideal if you do that on a flow or user flow that you're already scanning automatically on a regular basis, because then you can monitor your progress when you start fixing the issues that you find. So you can use your automated testing to track the progress of the remediation. And that's the engagements. And then we talked about the user testing. We have a service in the ARC platform, if you are monitoring a site, you can turn it on; it's called JAWS Connect. And JAWS Connect allows JAWS users who come to your domain that you've been monitoring to provide feedback. And it does that in a very, very non-obtrusive way for the end user, which is the most important thing. They get a notification that feedback is available, they can provide the feedback and then go back to the tasks they're doing. And they don't have to install anything or figure out how your feedback works. It's just very, very fluid for them. And that feedback actually goes directly to your ARC dashboard that I was just showing you. So it doesn't go into some inbox, it goes into your dashboard, right to the people who are in charge of monitoring the Accessibility. And you can compare that to the data you have in the dashboard, see if you've tracked those problems that users raise, and you can create, if you like, a dialogue with your users. So that's one way of getting user feedback directly into the resources that you're monitoring, and it comes with JAWS. So once you do your discovery and your different elements of analysis, you'll want to do a rollout, you'll want to get testing throughout your organization. Here's a key element of Accessibility testing and remediation.
At the end of the day, if you are testing things after they've gone into your production environment and been released, you are already losing, in the sense that it's much more expensive to remediate issues and you are probably, you know, most likely to fail. Because at the end of the day, you're not addressing things upstream. You know, we see plenty of organizations remediate and then lose the work essentially, because the next release of the product doesn't have the Accessibility they worked into it. So it can be very costly to do at that point. What you always wanna do, and as soon as possible, is roll out that testing throughout your organization. A lot of organizations, in a typical kind of Accessibility path, start off with one or two Accessibility experts, and maybe they build a bit of a team there. Those experts tend to do most of the testing, okay? You might have testing going on in a sporadic way in your organization, a bit of knowledge here and there, but a lot comes down to the Accessibility team. That simply means that, you know, that puts them in a bit of a ghetto, which is very difficult to get out of. They're always busy, they're doing all of the testing. And because they're doing all of the testing, they have no time to look up and actually get the oversight and the management of the program. What you wanna do is push that testing throughout the organization. So there are a number of ways through the ARC platform that you can do that. We have API integrated testing. So you can move your ARC testing, your ARC rules testing, into whatever it is you use internally in terms of your development and your testing processes. You can integrate that using the API. We also have the ARC toolkit, which is a browser extension. That's easy to install, and because it's a browser extension, you run it on your desktop, so you can run it in quite deep environments before code has even been committed. And it's a simple installation. So you can run the same tests you do in the ARC platform with your toolkit, and do your test and recheck, fix things and retest through that. We also have a node package, particularly for integration with your CI/CD pipelines, with your automated testing there. Again, using the same tests. A key thing about testing that you have to think about here: one, obviously, as I mentioned, is pushing that testing upstream. Two is line of sight, okay? A great difficulty for a lot of the organizations we work with is that the testing is done with different tools and different standards throughout the organization. So a team is testing with one rule set here, another team is testing with something else there. Another team has got a tool in their development environment that's doing some tests. You can't normalize that data. It's very difficult for the Accessibility program team to see what's going on and to compare and contrast, because it's all slightly different data, it doesn't fit together. You want to get that line of sight. That's even more important than this whole business of how many automated tests you can run, or how many criteria you can test. It's far more important that you have consistency, so you actually see what's going on, particularly in your lower environments. So using the same rule set throughout is key. And then finally, when you do that testing, you wanna have a way of reporting and workflow. So you wanna be able to get data out.
We have a Zapier integration, which I'll show you, which allows you to integrate your issues. For example, the ARC issues that you find, you can get them out into Jira, into your bug tracking services, whatever you might use. Likewise, the API to integrate testing reports into your development environments. You want to send relevant data to the relevant people in the relevant role. You want to provide high level data and scoring and things to your executive team, and you want to provide actual details of issues and so on to your development team so they can remediate. So it's important, again, that you have that consistency of data that you can send through and track. So where you're trying to get to is a mature state, and I'm gonna come back to this diagram later, but what I'm showing you here is a kind of imaginary software pipeline where you've got standard phases. You've got your monitoring phase, your planning phase for your next release, your coding, and you commit the code, there's a test phase, retesting and so on, and a release. And then you're back to monitoring, okay? And it should be a sort of virtuous figure of eight we've got here in the program. So I'm just showing you some examples of where you can do these different types of testing. There's any number of ways to configure this. This is just a pure example, but it'll give you a sense of what a mature Accessibility pipeline might look like, particularly from the point of view of testing. So looking at it from the point of view of ARC and the ARC world, you've got ARC monitoring, and that's where you really, as I said, want your line of sight, you wanna have complete oversight over your program. So you can track progress, prioritize, set policies. You really don't wanna be in the weeds tracking specific issues or testing a specific product yourselves. You want your teams doing that. You wanna watch the big picture. You wanna make sure that issues that have been fixed stay fixed, and that broadly you're following an effective program. You can use JAWS Connect at that level as well to get user feedback and identify issues from users. And then in your planning phase, you can use tools like JAWS Inspect. JAWS Inspect is our JAWS companion product for testers, which gives you a transcript of JAWS speech that you can use, for example, to share with your developers and so on. So it makes it very easy to do your JAWS testing. JAWS testing here would help you prioritize user issues, things that are gonna have a big impact on users. So your compliance testing obviously helps your developers understand exactly what they need to code, but JAWS Inspect and those user tools will help you understand what has the impact on users and therefore what you should prioritize. Then I showed you the ARC toolkit there. You can use the ARC toolkit at very deep levels to run tests on the code. It's very easy for developers to use and test before you commit. And then hopefully issues are found and resolved right there, before they even get to the commit phase. And you can set gates, and I'll talk about that later on during our demo, to make sure that code really doesn't come out that isn't already fixed. But one way or another, once you get into your test phase, you've got things like the ARC API that I mentioned and our node kit to ensure that the code is accessible, and you can use JAWS Inspect there to do regular high level tests.
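Just to make that test-phase gate concrete, here is a minimal sketch of the pattern in TypeScript. Everything in it is invented for illustration: the `runScan` helper stands in for whichever integration you actually use (the ARC node package, the API, or another runner), and the issue shape and URL are placeholders, not a real interface.

```typescript
// Minimal sketch of a CI accessibility gate (hypothetical helper names).
// Replace `runScan` with whichever real integration your pipeline uses.

interface ScanIssue {
  rule: string;                        // e.g. "no_link_text"
  wcag: string;                        // e.g. "2.4.4 Link Purpose"
  severity: "high" | "medium" | "low";
}

// Placeholder stub so the sketch runs; swap in a real scanner here.
async function runScan(url: string): Promise<ScanIssue[]> {
  console.log(`scanning ${url}`);
  return []; // pretend the page under test came back clean
}

const MAX_HIGH_SEVERITY = 0; // the gate: no high-severity issues may ship

async function accessibilityGate(url: string): Promise<void> {
  const issues = await runScan(url);
  const high = issues.filter((issue) => issue.severity === "high");
  console.log(`${issues.length} issues found, ${high.length} high severity`);
  if (high.length > MAX_HIGH_SEVERITY) {
    process.exitCode = 1; // a failing exit code stops the release step
  }
}

accessibilityGate("https://staging.example.com/checkout");
```

The point is simply that the same rule set runs in the pipeline as in your monitoring, and a non-zero exit code stops inaccessible code before it is released.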
I talk about that in another webinar, which I'll be doing next week, about a simple Accessibility test that your QA teams can run as a kind of simple and quick testing process for catching major issues. So that's what a mature assembly line would look like. As I said, that's just an example, but it gives you a sense of how it might work. So now I'm gonna switch over and start looking at our ARC platform and what these things look like in the real world. So this is our ARC platform. It's a cloud-based platform. We call it a platform as opposed to a product because it actually contains and runs with lots of different products, like I showed you there. But this is the cloud service, our client portal. The client portal is really where the various parts converge, okay? So whoever's in charge of managing the Accessibility on your sites and products, this is where all the data comes together, no matter where the testing is being done. And it consists of various parts, or cloud services as we call them. Workspaces is where we put our analysis; they're kind of flexible containers for that. We also have a knowledge base. And the knowledge base contains articles on all aspects of Accessibility. You know, this is content that's being prepared and managed by TPGi, and we're known, if we're known for anything, it's for our technical expertise. This knowledge base is actually maintained by our knowledge center team, who also maintain all our Accessibility testing rules and so on. They do nothing else. They just maintain this on a constant basis. And every time we have a release, there's new or edited content in here. And this content flows throughout the ARC world, if you like. So you'll see it popping up in different contexts, in different tools. It's all the same content, because this is really what ARC is about. It's one platform with a standardized set of rules and knowledge, so that you get that consistency and line of sight that I was talking about. So no matter what tool you're using, in what environment, you're using the same rules and the same knowledge, okay? And guidance. Once you move beyond simply addressing issues and getting prescriptions on fixing issues, to actually embedding skills, then we have ARC Tutor, which is our training material. This is very flexible. You can integrate the SCORM files into your learning management system or use them here. Again, we cover all aspects of Accessibility, from PDF remediation to testing fundamentals, inclusive design principles, et cetera. It's all here. And finally, the help desk. The help desk is where you connect directly to TPGi consultants and you can address particularly problematic or difficult questions. And you can build up a knowledge base of your own stuff, okay? How you do tree controls, or how you do complex controls. We'll provide specific advice on addressing those, and you can then use that as a library of fixes for your teams. So we are basically concerned today with analysis and testing, which lives in the workspaces, but all of these things are tied together. So we talked about our testing onion, okay? And here are the different layers. We've got our domain scan, which is of this whole demo site we have here, the awesome store we call it. So there's our domain scan, which is a scan of the entire store. We've got a couple of user flows: the user flow of the search and add to cart process, very key for a web store, and also key templates and components. And we also have an engagement done on particular components in the store.
Again, engagements is our term, if you like, for our manual expert audits. So the beauty of this is that it can all be put together, compared and contrasted. And out of this you can actually compare, for instance, you can see here that this user flow has a particularly heavy failure density, if you like, a lot of issues here. So you can immediately start to prioritize items. You can see what your top WCAG priorities are across the board, and those will tell you something about what's going on immediately. If you know your coding practices, you know your components and your teams, you can already start to see, okay, well if it's name, role, and value, that's because of X. Likewise, top assertions. Assertions are failures of our rules, so they're closer to the actual technical problem that's related to the WCAG criteria. For example, you know, no link text and duplicate labels used, et cetera. So they're much closer to the actual technical problem. And again, you can probably tell a lot from these kinds of data straight away. So, having a look at the domain scan, for instance. We start off during our discovery phase doing these broad scans. We don't necessarily want to capture and scan everything. What we want is to establish a baseline. We wanna know where we sit normally. So we're less interested in anomalies or outliers, we wanna get a steady picture. And the reason is we wanna look past the issues themselves, the symptoms, to the actual causes. And one of the measures that we use, as I said, is the WCAG density score. The WCAG density score is an average of issues per page. So it tells you a bit about how issues are clustered, whether they're global items, maybe appearing in headers and footers, or they're appearing heavily on landing pages and front pages, as you might see a lot on a store website. And we use it generally in our automated scans to understand a little bit about how those issues are clustered. We do our scans here on 100 URLs on a regular fortnightly basis, but you can do it on a monthly basis and so on. You can configure those scans however you want. And then we've got this performance visualization. This trend line, as we call it, is really key because it's set against dates. So you can see, for instance, here in our example, we've had a fairly steady state, and then it's gone up rather dramatically on a certain date, and then it's gone up again. So you may not actually be particularly interested in what those issues are, but more interested in what happened then. Did we introduce a new component? Did we change some aspect of our design? Did we do something with our header and footer? What did we do on that date that caused that issue? And what does it reveal about our process? Did we not do enough Accessibility testing before release? Why? Can we change that to make sure that we don't get spikes anymore? So this is what I mean by using the automated scans most effectively, to find the process problems that are lying behind the issues. And then we've got our pie charts, we've got our issues by particular component. We also pull out our remediation priorities. So these three WCAG failures here make up 91% of the total WCAG failures that the scan has shown. Again, it's not a compliance tool in the sense that addressing those issues doesn't get you to 90% of your Accessibility, but it does throw up what are probably very, very common issues throughout your site, and it'll tell you a little bit more about how that site is doing as well.
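As a back-of-the-envelope illustration of what an issues-per-page average looks like, here is a tiny sketch with made-up page data. The real WCAG density score also weights by WCAG priority, as mentioned above, so treat this as a simplification of the idea rather than the actual metric.

```typescript
// Simplified sketch: an "issues per page" average, loosely analogous to a
// density score. The real metric also factors in WCAG priorities.

interface PageResult {
  url: string;
  issueCount: number; // automated WCAG failures found on this page
}

function issueDensity(pages: PageResult[]): number {
  if (pages.length === 0) return 0;
  const total = pages.reduce((sum, page) => sum + page.issueCount, 0);
  return total / pages.length;
}

// Made-up scan data: landing pages often cluster issues.
const scan: PageResult[] = [
  { url: "/", issueCount: 12 },
  { url: "/products", issueCount: 7 },
  { url: "/contact", issueCount: 2 },
];

console.log(issueDensity(scan).toFixed(1)); // 7.0 issues per page on average
```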
So if we have a look at how these break down, as I said, the assertions are a little more atomic than the WCAG criteria. For this WCAG criterion, name, role, and value, we've got a lot of different particular failures, bad ARIA role, et cetera. Likewise, priority two link purpose, we've got this no link text. So let's just dig into this issue here. And you can see immediately, if you look at the trend line for this particular issue, which is no link text, that it matches the general trend line for the site. So this no link text issue is definitely related to that spike that we have, both spikes in fact. And we get a description here. I mentioned that the knowledge base appears throughout in context. Here it is. So for our no link text issue, we can see guidance on what that issue is, how to fix it, and what it means for WCAG conformance. We also get information on the affected groups, the cognitively impaired, low vision, et cetera. You can even add your own commentary, that's just gobbledygook in the demo, but you can use it to keep track of what you're doing with that issue. For instance, we're going to fix this issue in the next release, or we're not sure what to do with that issue right now. So you can track your own commentary on the item. So you can see this is all about management and oversight of your Accessibility. It's really about putting the tools in your hands to be able to manage the data effectively. And when we look at the breakdown of where the no link text issue occurs, we'll see that it appears pretty consistently through lots of pages. So this suggests that it's in a global item, it's in a header or a footer. And if we dig into a particular page, again, we've got our trend line. So we're now looking at a page where that no link text issue occurs. And if we go down and have a look at all the issues we found on that page, and we dig into no link text, you can see here the particular code where the issue occurs. So we dig down to that level, and again, we've got knowledge base items to go with it, so we can understand how to fix it. But if you are using this for managing and gaining oversight of your program, you may not be terribly concerned about this level of data. This is really in the weeds. In fact, what you can do is go to your team and say, look, we've got a problem on this page. You need to test this page or this template before it comes out, and you can test it with the likes of the ARC toolkit. So as I said, the ARC toolkit is a browser extension. It runs in Chrome and Edge Chromium, and we have a version coming for Firefox and Safari very soon. It's a very, very easy tool to use. It doesn't get into all the intricacies of WCAG. Instead, it organizes these issues by topics that developers and content authors will understand, and it identifies the issue. So we've got our no link text, and if we look at it here, it highlights for us on the page where the issue is, shows us the relevant code, and gives us a brief description of the issue. This obviously integrates with the developer's own developer tools in the browser, so they can easily find issues, fix them, retest, and then if it's all addressed, release it into another testing phase and into production. So developers can use this tool to do that testing there. And then what you want, obviously, as a result of that, is that when you are monitoring, then in here, in the ARC client portal, you can see that trend line go down. This is where you're in a good testing process.
They're doing the testing, whether it be with the API or some other tool or the toolkit, and you are watching progress here, okay? And as you do those broad level scans, you start to identify things that you want to look at more closely, as I mentioned. And that's where our user flows come in. So, looking at this example, obviously we have the search and add to cart one, but we also have this templates and components flow. In this example, the organization has identified a bunch of templates and components that are causing particular issues. So we saw the header and the footer have a number of issues, so we've pulled them out, okay? And we've pulled them into a specific user flow. And you'll notice immediately that this WCAG density is higher. That's because we've more or less boiled off the other stuff that we've tested and focused down on just the worst performing, most problematic components. So we've got a clearer view of where our problems are and we can focus on those. So we're just scanning particular components. As I said, user flows can cover interaction. So for instance, you can record, using a Selenium script, a process such as opening a dialogue box, analyze that dialogue box for Accessibility, and close it again. ARC, through headless Chrome, can run that process for you. So with user flows you can test interactive processes, single-page apps, anything interactive, just by scripting either in ARC or with another tool and using Selenium scripts to capture that. So here we've got our header, we've pulled that out separately, and we can scan that specifically and monitor that. And then if we wanna go deeper, we can take that same user flow. And this is one of the advantages here in ARC. If we decide to do a manual test on that user flow and those components, we can simply pull them out separately and do our manual audit on them. Same components, so we can be monitoring those components at the same time with our automated testing. The engagement, or manual test, obviously provides richer data. You're testing against all the WCAG criteria here. So when we get into it, we've got a lot more data about these specific issues. Okay, so for instance, if we look at no image role here, again, we've got descriptions, we've got the same knowledge base material provided in context, the commentary and so on. But we've also got things like complexity and severity, which, although we do have a severity score for automated testing, are more easily tracked by an Accessibility expert. And we've got the code examples of where the actual instances of the issue occur. And we've got the auditor's notes as well. So we've got a richer set of data in here. We can also track in our engagements things like the JAWS speech for the component. So, this is our JAWS Inspect product that I mentioned. JAWS Inspect gives you a transcript of JAWS speech. So in this case, this is what JAWS says if you run the say all command in JAWS on this particular component, okay? It reads in a linear way from top to bottom of the page, and you can see all of the components there as it reads them. This is great, by the way, for showing Accessibility issues to folks who may not know much about Accessibility, executive teams and so on, because it's just very clear.
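Going back to the scripted user flows mentioned a moment ago, here is a rough sketch of what one step of such a script can look like, using Selenium WebDriver for Node. The site URL, the CSS selectors, and the `analyzeCurrentState` hook are all placeholders I've made up for illustration; they are not ARC's actual scripting interface, just the general shape of "open a dialogue, analyze it, close it again."

```typescript
// Rough sketch of a scripted user-flow step with Selenium WebDriver for Node.
// Selectors, URL, and the analysis hook are placeholders, not a real interface.
import { Builder, By, until } from "selenium-webdriver";

// Placeholder: stands in for whatever automated analysis runs at each step.
async function analyzeCurrentState(label: string): Promise<void> {
  console.log(`analyzing state: ${label}`);
}

async function bookingDialogFlow(): Promise<void> {
  const driver = await new Builder().forBrowser("chrome").build();
  try {
    await driver.get("https://staging.example.com/rooms");

    // Step 1: open the booking dialogue and analyze it.
    const openButton = await driver.findElement(By.css("button.book-now"));
    await openButton.click();
    await driver.wait(until.elementLocated(By.css("dialog.booking")), 5000);
    await analyzeCurrentState("booking dialogue open");

    // Step 2: close it again and analyze the restored page.
    const closeButton = await driver.findElement(
      By.css("dialog.booking button.close"),
    );
    await closeButton.click();
    await analyzeCurrentState("booking dialogue closed");
  } finally {
    await driver.quit();
  }
}

bookingDialogFlow();
```

In practice, the analysis hook would be whatever automated check runs at each captured step of the flow.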
Issues are immediately apparent when the JAWS speech transcript, for instance, doesn't make any sense or things are missing. So you can get rich data with those engagements, and then you can send that out to your teams and start remediating the issues that you find. The beauty of using the same components to then monitor is that, as the teams actually remediate that work, you can be monitoring it with the same user flow. So, you've got your trend line in here, and you can see whether they're going in the right direction, okay? That trend line should be going down. If it's going up, then maybe they haven't understood the report from your manual audit correctly and you need to intervene and get them on the right track. If it's going down correctly, you track that, and eventually you should reach a point where the WCAG density is zero. And that might be a point where you organize a retest, okay? And go in and verify that the work was done correctly and everything is fixed, on some subset of what you tested originally. And then in the long term, you want to be monitoring those components. So hopefully this is down at zero, and if there's any sudden spike, then that's the time to jump right in and say, what are you doing? What's happened? What have you changed? Have you changed your processes? What have you done here? Again, focusing all the time on the processes and making sure the work is being done, and the testing work is being done, at the development end, at the QA testing end, not at the production end, as you go along. So those are your different layers of analysis. And as I mentioned, you can use your monitoring tools, your automated testing, to help you identify what you're going to do the manual testing on, okay? Because you can't do that everywhere. And likewise, what you learn from this manual testing will help you understand what you're looking at with your data. In other words, where you find you've got a problem in the header, if you're using that same header component elsewhere, for instance on other apps, well, you can guess that you're going to have the same problems as you found during the audit. So you can direct some of what you found in that report to those teams to fix those there. It'll tell you a lot more about what's going on and about your coding practices and so on, so you can organize your testing accordingly. Then there's that other layer, which is the user layer, and I mentioned the JAWS Connect service, and we can just see that here. So we are monitoring the site here, as I said, in our domain dashboard. And if I go to the usability feedback panel, we'll see the feedback. So, JAWS users come to this particular site and get a notification to say feedback is available. A new window opens, and if they choose to give feedback, they can put their feedback in there, it is of course accessible, and then they can close that window and go back to what they were doing. It's very non-obtrusive to them. They don't have to install anything in JAWS. And this is the feedback. So a user here has given feedback on a form, and the form has unlabeled fields, and the user can choose whether they want to provide any contact details and an email so that you can get back to them. But one way or another, you can then go back to your dashboard and check whether that labeling issue showed up in your scanning or in your manual audit. And if not, why not?
And if it did, now it's probably a big priority. It's obviously affecting users, they're providing feedback on it. So that helps you prioritize. That layer of testing helps you prioritize there. So, I also mentioned the rollout. You do all that testing, and another part of that testing process is always going to be rolling the data out into the right hands, into the different roles. You've got your executives, you've got your developers, you've got your QA testers, you've got your content authors, your user experience designers. They all need relevant information and they need it in the right place. ARC has a nice flexible integration with Zapier. Zapier, as you know, is a middleware application. It's been around for a long time and it connects thousands of apps, right? It's got the whole Microsoft suite, the whole Google Suite, Drupal, lots and lots of apps in there, everything you can think of pretty much. And it allows you to create workflows between them. So between ARC and Jira, for instance, as we have set up in this case. So that means that you can set up flexible, easy integrations, okay? One of the challenges we find, a lot of organizations come to us and say, hey, do you integrate with Jira? And we say, yes, we've got the Zapier integration. And their problem is their organization has lots of different versions of Jira and teams are using different versions. Some of it's on the cloud, some of it's on the desktop or on the network. They've got other bug tracking systems over here, different teams with different versions. So you need to be able to quickly switch between them. This is the challenge. And what the Zapier integration does is it provides that flexibility. You can set up your zaps very quickly, in minutes, okay? So you can connect to different apps, set up your connection. So if you're working with a particular team at a particular time, just set up an integration and send them the data. For instance, just to give you a simple example, we might take some of these issues and decide, okay, we're going to address this no link text issue that we found. It's an error, it's high severity, let's address it. I can actually send this to Zapier with the Zap assertion command. I can also, at the same time, zap all the assertions, all the issues, at once if I want to. And then we can see, here's my Jira instance and here's the issue appearing here. So I've sent it directly to Jira, and it's just one click. And I've modified the Jira instance here to include things like the WCAG criteria, and I've included a link to the full issue. This is totally customizable in Zapier, you just map one set of data to another. So that's the data from ARC mapped onto your issue or bug tracking template in Jira. And then you can send them the data and they can start to address the issues. So that gives you a lot of flexibility and power. You obviously have the API as well. The ARC API will allow you to take that data and send it anywhere. I have an example here of a spreadsheet, for instance. This is just an Excel worksheet, but it is actually connected through the ARC API to the ARC account, that same ARC account we've been looking at here, our demo account. So it automatically pulls down this data, and it's a very easy thing to set up, and it runs in Excel on the desktop. So I've got this executive level report, which I can pull out and run at any stage I need.
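To illustrate the kind of thing a script behind a report like that might do, here is a small sketch of pulling summary data from an API and classifying it against thresholds. The endpoint URL, field names, token variable, and threshold values are all invented for this example; the real ARC API has its own endpoints and schema, so this only shows the pattern of reusing the same dashboard data in an executive-style view.

```typescript
// Illustrative only: the endpoint, field names, and auth header are invented,
// not the real ARC API. The pattern: pull the same dashboard data
// programmatically and reshape it for an executive-style report.
// Assumes Node 18+ for the global fetch.

interface SiteSummary {
  name: string;
  densityScore: number; // issues-per-page style metric
}

// Thresholds reflect your own risk appetite; these numbers are made up.
function ragStatus(density: number): "green" | "amber" | "red" {
  if (density < 2) return "green";
  if (density < 5) return "amber";
  return "red";
}

async function executiveSummary(): Promise<void> {
  const response = await fetch("https://api.example.com/v1/sites/summary", {
    headers: { Authorization: `Bearer ${process.env.ARC_API_TOKEN ?? ""}` },
  });
  const sites: SiteSummary[] = await response.json();

  for (const site of sites) {
    console.log(
      `${site.name}: ${site.densityScore} (${ragStatus(site.densityScore)})`,
    );
  }
}

executiveSummary();
```

The threshold function is the piece you would tune to your own criteria, which is exactly the kind of transformation described next.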
I've actually transformed some of this data. So I've got a little color coding here that says, essentially, if the WCAG density score is below a certain level, color it green, and if it's over a certain level, make it red. I can customize that based on my own criteria, my own risk appetite if you like. And I can generate things like this bar chart to show, you know, which are the worst and which are the best sites, and what I should prioritize. So you can use something like that. For instance, we have one customer who includes Accessibility data in their overall risk or GRC reports to their executives. So it's just one line item in their overall risk reports. And they can use the ARC API to just pull this data into their report and transform it using the metrics that they use internally. So there's no need to do some massive build of custom reports through the API. They can simply take that same data that's available in the dashboard and integrate it into executive style reports and so on. So just going back quickly to our mature Accessibility assembly line. You're hopefully getting to a state where, as I said, the Accessibility team are managing things. They're monitoring, tracking progress, prioritizing, setting policies. They're not engaged in chasing down individual issues or even doing a lot of testing, right? They've got the oversight, the line of sight, and they're getting that line of sight because the tools that are being used to test in the lower environments and the lower parts of the pipeline are using the same rules, the ARC rule set in this case, through the ARC toolkit, the ARC API and so on. So your teams can be doing most of the testing, and you can roll out that testing gradually, exposing them to more and more tests and so on, and they're learning more about Accessibility as they go. They're doing that testing, and there are several QA levels to check and make sure that things are accessible before they make it to production. And then if you've got a good test set, as I've shown you during the discovery phase where you're identifying user flows and doing your manual testing, you'll have a very good view of your organization and all your resources, and good representative samples through your user flows, et cetera. You'll be able to see when things change and when things come up, so that you can maintain the levels of Accessibility that you've worked so hard to achieve. And I just wanted to mention, before we go on to questions, that we also have another part of ARC called provider, which enables you to do your own manual testing. It actually includes a browser extension which provides guided testing, including templates, test scripts if you like, for your teams to follow even if they don't have a lot of experience, so you can carry out manual testing and roll it out across your organization. It's worth noting that. So I'll turn it over to questions, at which point I'm gonna start looking at what we have in the Q and A box here. - [Stefani] Yeah- - [Charlie] So first question, sorry Stefani, were you going to- - [Stefani] No. I was just gonna say there's quite a few questions in the queue. - [Charlie] Yeah, very good, thank you. So we have one question. Can you talk about Accessibility testing and monitoring in WordPress? Well, obviously through our API you could certainly do an integration there, but the other thing you can use is that WordPress, like any content management system, stages and previews your pages.
You can use the toolkit in the browser to test those effectively. And again, the ARC toolkit uses the same ARC rules, so it's doing the same tests, and you'll be able to see those results. And then obviously when your site gets into production, you can use ARC to test the actual finished site and monitor it from there. Another question, how does the ARC toolkit cloud portal differ from the browser extension? Well, it's not the ARC toolkit cloud portal, it's simply ARC in the cloud. The toolkit is obviously a browser extension. The toolkit does automated tests, but it does them on a page by page basis. So you're testing whatever template you're working on. For example, in WordPress, you're doing a page, you test that page; it doesn't crawl, okay? The ARC domain scans that we saw there in the ARC portal, the client portal, would crawl a hundred pages, a thousand pages at once and give you that high level data. So that's the chief difference. Obviously the portal has things like the knowledge base, the tutor. It provides the whole gamut of Accessibility tools that you need. The ARC toolkit is purely about doing the analysis, so it's really for your developers and content authors to test as they build. Another question, how is the failure density measured? So it's an average. There are a few complexities to it, but it is, generally speaking, an average of the number of issues per page, the number of WCAG failures per page. So we're testing against a certain set of WCAG criteria that we can test automatically, and out of those we average it out. We also use the priorities, priorities one and two, to weight that density score. Another question: my audience is cognitively disabled adults who are not visually disabled. They have a short attention span and get confused with too much stimulation. I'm completely aware of WCAG rules on this topic, but remain confused. If non decorative images- We're getting a little bit into the weeds of specific criteria here, so in the context of what we're talking about, I don't really wanna get into specific WCAG rules and what should or should not be done with those. We're really talking more generally about how to do overall Accessibility testing. You can talk to us, obviously, and our experts about specific criteria and how you would address or test those. Another question, how does ARC tool-aided Accessibility testing join the user research process? So we've got a number of different ways of doing that. One thing to bear in mind is that you typically want to use something like the ARC tool to find and address issues before you go into the user research phase, because you want to eliminate things that are obvious and clear before presenting them to users. Address the things you can address. So you would want to do some testing and fixing before you go into user research. Then, what these tools, what I would call in this context technical testing tools as opposed to user testing tools, give you is the ability to translate the issues you find in user research into the technical issues you need to fix. To give you an example, say you were doing user research and your users found that they couldn't use a form because the labels weren't clear, or there were no labels that, for instance, assistive technology could pick up. So with unlabeled forms, they can't understand what to fill in in the form. This is an issue, okay, that you found in user testing.
You could then use the likes of ARC to identify the lack of labels and where in the code that problem exists. In other words, why isn't that label showing up for the user? And then you can give your developers the code they need to fix, right? This is how you need to do the labels on that form to address that problem for the users. So that's one of the ways that you can use user testing and the technical testing in tandem, right? The technical testing gets you to the core technical issue; the user testing helps you identify the issue and how it affects the users. Another question, does the TPGi ARC tool work as well, as helpfully, on mobile as it does on tablets and laptops? Any difference? So yes, in the tool we test different modalities, so we can test mobile layouts, for example, in the browser. And coming very shortly, we have a beta version of our tool for testing native apps. So we'll have an Android version out very soon, imminently, and then we're working on an iOS version. So that will test native apps in the same way, in terms of a set of rules testing your apps. We're also coming with a new version of ARC very soon which will allow you to set the modalities in your browser, set it to a particular screen size and then test it, et cetera. So we'll be able to cover all of those different scenarios. Do bear in mind that an awful lot of these Accessibility criteria are the same whether you're looking at a webpage or an application in the browser. Between your mobile and your web criteria, a lot of these things are the same. If a website is in a development stage, is it appropriate, wise, helpful to run ARC tests before live publish? So yes, that's very much what I'm saying to do. Absolutely test before live. You're not going to test your raw code, because Accessibility testing is all about testing the UI, what the user interfaces with, but you want to test as early as possible. You want to test in your development environments, with the toolkit, with the API integrated, however you wanna do that. But yes, you wanna push that testing upstream. Another question, can the ARC dashboard capture consolidated Accessibility testing results from mobile, web and kiosks in one portal? At the moment it consolidates web. Through manual testing, on engagements, you can have data on mobile and kiosks where you do that. But for automated testing, we don't include mobile or kiosks in the dashboard just yet. So we've only got web, and for our manual testing, our engagements, we've got things like mobile and kiosks. Another question, for the upcoming native app testing, is that all manual or a mix of automatic and manual testing? I believe it's largely automated testing for our native apps. But that's developing fast, so I'm sure there's going to be a mix of manual and automated over time. Okay, I hope I have answered those questions. - [Stefani] Yeah, it looks like you've answered all of the questions. Appreciate that, just in the nick of time. Thanks everybody for attending our webinar, and again, if you're interested in speaking to us further or deeper about some of your specific needs, you can email us at IDA@TPGi.com and I'll put that in the chat. Thank you.