- [Kari] Good morning everyone. Just give us a few minutes while we let people join the room before we make announcements and get started, so just be patient with us. Good morning, good afternoon. We're gonna wait one more minute, so just hang tight; we'll get started shortly. Thank you everyone for joining us today. Again, my name is Kari Kernen, and I am the sales development manager here at TPGi. I just wanna go through a few housekeeping items before we get started today. The webinar is being recorded and will be made available within a few days after the webinar; you'll be able to access it on our website. If you have any questions during the webinar today, please use the Q&A panel, not chat, to ask them. At the end of the webinar, Charlie will try to get to as many questions as possible. If for any reason we're unable to connect with you or get to your question, we will respond and get back to you afterwards. And as always, if anyone is in need of any accessibility support, accessibility training, et cetera, feel free to reach out to us at IDA@TPGi.com. That's I-D-A at TPGi dot com. And with that, I want to introduce Charlie and let him get started with his webinar today: using screen reader testing tools to evaluate the accessibility of a user journey, part one. Charlie, I'll let you take it from here.

- [Charlie] Thanks Kari. And yeah, that title is quite a mouthful. So my name is Charlie Pike, and I'm the Director of Platform Success here at TPGi. I've been in the accessibility business, if you like, for a long time, since about 2003 or so, and was actually one of the founding partners of the original Paciello Group, the TPG that became TPGi, way back then. I've been involved in all aspects of accessibility, from user research to project management, to user testing, to audits and all the rest of it, including our products and our platforms here at TPGi. What I'm gonna talk to you about today is using user journeys to do your screen reader testing, or, to look at it another way, how to use screen reader testing tools, specifically in this case JAWS Inspect, our screen reader testing tool here at TPGi, in conjunction with user journeys to create, if you like, a test process that's robust but quick, and that focuses on tasks rather than just compliance. All of those good things. If you have attended one of our JAWS Inspect webinars before, we'll cover a lot of the same ground just to introduce people to the product and so on, but I'll also be looking at some new things that we haven't discussed before. This is part one, so we're looking at having user journeys, and evaluating user journeys, as a theme, and talking a bit more about test processes and approaches to testing, 'cause it's more complex than it seems on the surface. And please do hold onto your questions; by all means put them in the Q&A panel as they occur to you. I won't address them until the end, but I will try to leave a good amount of time at the end of the session to go through those questions and answer them for you. So, just to look at the agenda: obviously an introduction, specifically into what the tools are, what our screen reader testing tools are, what JAWS and JAWS Inspect are, et cetera. Then we'll look specifically at user journeys, what they are and the different kinds of uses for them, because it's not necessarily a commonly understood term.
And then we'll go into a demonstration of testing in action, where you can see the product again and get some suggestions about the types of tests that you can do, et cetera. So we'll have about a 50/50 split between introducing the topic and going through some slides, and showing you the product in action as we go.

So first of all, what is JAWS? JAWS stands for Job Access With Speech. It is the world's most popular Windows screen reader. It is a tool that our wider company, Vispero, makes and sells. It's been around for a long time; it's a very well established, very mature product, used in countries around the world, and it enables blind and visually impaired users to read the text that is displayed on the computer screen. Okay? And "read" is a key word here, because that's where JAWS Inspect comes in. When you're doing testing with screen readers, you're always going to include JAWS, okay? There are obviously lots of other screen readers out there, including screen readers that are built into operating systems, but JAWS would have to be a fundamental part of any test set because of its popularity and ubiquity, if you like, around the world.

So why test with JAWS at all? Testing compatibility with screen readers is an obvious reason. You want to make sure that whatever you do, your websites and your applications are compatible with screen readers and work with them correctly. Then, evaluating the user experience; in other words, going past the guidelines to grasp accessibility. We've talked a bit about that. Obviously at TPGi, day by day we're doing audits, we're doing accessibility testing, and we tend to focus on compliance with the likes of the WCAG guidelines, but also technical compliance: using the right code, using the right attributes on elements to make sure that assistive technologies can work with your applications and so on. But there's another part to this, which is simply understanding how efficient and easy your software or your website is to use with the assistive technologies, okay? That's more than just narrow compliance; it's much more about really assessing and testing with the assistive technology to make sure it is usable. And user challenges are easier to understand with assistive technology. If you're new to accessibility and you just come from the point of view of guidelines, Section 508 or something similar, it's a little bit more difficult to actually understand what the underlying issue is, what problems users face. Okay? When you test with these technologies, it is easier to understand what those issues are, versus the technical or code issues that might be involved. And something may be technically accessible, but is it usable? At the end of the day, the ultimate arbiter of accessibility is going to be how usable it is with your assistive technology.

So JAWS is obviously the screen reader that's being used around the world by thousands of users. JAWS Inspect, a product that we brought out in about 2017, vastly simplifies accessibility and JAWS compatibility testing. It's really aimed not at the end users who use JAWS day by day for their web access, their work, et cetera. JAWS Inspect is aimed specifically at people who want to test compatibility with JAWS, who want to test AT.
So instead of speech, what you have with JAWS Inspect is a text transcript, which gives you both the time to properly assess whether the text that JAWS announces is correct for the element that you're testing, and the ability to share it. Okay? For somebody who's testing just with JAWS and listening to the audio that's being read out, it's very difficult to share what they hear with developers and so on if they find a problem. JAWS Inspect makes that very easy, because you've got the text, and text is simply easier to deal with when it comes to finding issues, diagnosing them and sharing them. So that's really what JAWS Inspect is for.

The benefits: it simplifies JAWS use for testers. JAWS is a complex product with a lot of features for end users. JAWS Inspect is a much simpler product from that point of view; it takes away a lot of that end user complexity and gives you just a testing-focused interface. It gives you a text transcript of JAWS output for use in bug tracking and compliance programs. It helps with the identification of issues; I'll show you examples in my demonstration of just how much JAWS Inspect helps you identify issues so you can spot them. It organizes things in terms of elements and certain topics, and makes it much easier to see what is correct and what is not. And it's a rapid rollout with limited training required: you don't have to learn a complex set of keyboard shortcuts or control menus, et cetera, because we want people to be able to roll it out and pick it up very quickly. It also demonstrates the impact of accessibility on users, as I was talking about earlier.

So here's a screenshot I have from the Amazon website of an image of a product, and the title on it is Device Deals, but that's an image. To the left of that we have the actual text that JAWS speaks: "Amazon device deals link graphic", okay? If you were using JAWS, you would just hear that audio, "Amazon device deals link graphic". But with JAWS Inspect you get the text, which shows you exactly what JAWS would announce for that particular image, and also what helper text JAWS provides to the user, in other words "link graphic". This is not something that's written by the developer into that image; rather, it's provided by JAWS to give the user some context for what they've found, in this case a graphic, which is also a link. So it's a clear visual illustration of what JAWS users experience on your site, with no need for accessibility expertise to read and understand the reports. And this is key, okay? You can very easily judge whether that text, "Amazon device deals", correctly, accurately and concisely conveys to the user what the image is and what purpose it has. You can see that without having to check the alt attribute or what kind of code is behind it, and you can see very clearly what the issue is if that text is not helpful.
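To make that concrete, here is a minimal sketch of the kind of markup involved. This is not Amazon's actual code, just an illustration of where the announced text comes from:

```html
<!-- The developer supplies the alt text; JAWS appends its own helper
     text ("link graphic") based on the element's role. -->
<a href="/device-deals">
  <img src="device-deals-banner.png" alt="Amazon device deals">
</a>
<!-- Announced (approximately): "Amazon device deals link graphic" -->
```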
So when and how, in your overall accessibility test process, do you test with JAWS? I always compare accessibility testing to an onion, an onion that's got layers and layers to it. As I said at the start, it's not always as straightforward as it would seem. The reality in accessibility testing is that you've got both automated and manual testing.

By automated I mean that you're scanning a website or an application, running it through a set of accessibility rules, and the system is coming back with results: either there's an error, or something is correctly tagged, in which case there's no error. But only so many of, for example, the WCAG criteria can be tested accurately using automation. Okay? Lots of vendors will argue about the different levels and so on. Our ARC platform has its own rule set, and a lot of these rule sets are very similar; every vendor's got a different number, but at the end of the day you're simply not going to capture all of the accessibility issues with automated testing. That said, automation is very good for what we call domain scans, broad-level scans where you can understand what's going on from one site compared with another, where your high priorities might lie, and what the most common issues are across a vast number of pages and so on. So that is used in monitoring; it's really for getting a benchmark of where you sit across all of your various resources, your websites, your applications and so on, and what your priorities are for running an accessibility program.

But to understand more deeply what actual level of accessibility exists in a particular application or site, you need to go to a layer of what we call user flows. A user flow, if you were a hotel website, might be the booking process on that site. It's some key user story, as we'll be talking about today, which is essential to your business and which you definitely want your users to be able to complete. So again: hotel website, you want them to be able to book. Shopping website, you want them to be able to add items to their cart, pay for them, buy them, essentially. So you might prioritize those things at the very start, if you're starting out an accessibility program, just to make sure those key flows work, okay? And you can monitor those in the same way with automation.

But you're also going to have to do what we call manual expert audits from time to time, and those manual expert audits are complete compliance audits. This is what TPGi do all the time; every day we have teams working on these audits. Essentially we're taking all of, for example, the WCAG criteria and testing an application against them, okay? But we may well use user flows as a basis for that. We may select a particular user journey, such as that booking flow, and test along that journey. Again, we can be very task-based: we can identify the key tasks we want to make sure are absolutely accessible from the get-go.

And then finally, that nugget in the middle, that gold in the center if you like, is the likes of AT testing, user feedback, user research, okay? Getting the user's view. As I said, the ultimate arbiter of just how accessible a site or application is, is going to be actually testing with the AT and testing with the users, doing that user research, because that's going to tell you whether it really is efficient. You could be compliant with all of the WCAG guidelines and still have an application that is very difficult to use. To give you a simple example, you might have alternative text descriptions nicely in there for all your non-text elements, but those text descriptions might be too verbose, or might be on things that don't need them, decorative images and so on.
And so you might look technically compliant but have an awful lot of inefficiencies, if you like, in that interface. We'll be looking at some today when we do the demonstration.

So that brings us to the importance of user journeys. If you're doing all of that testing at different stages, for example in your pipeline, with different teams doing different levels of testing, user journeys are going to be a key factor, because you're going to have to be selective about what you focus on. For instance, you may do your monitoring every month, on a broad level, because it's automated, but you're only going to do your audits every now and again, and you're going to do frequent QA testing, which is not going to test everything. This is really where user journeys come in.

User journeys give you user-centered design. Again, not focusing so much on compliance, but focusing on the things that are going to be key for your users. They help you better identify user needs: putting together your user journeys, putting together your personas, et cetera, is going to bring you closer to your users and to a better understanding of what they're trying to do on your site. And they help you build useful software and websites. What we see a lot here at TPGi is that, because the user is almost secondary to the whole development process, companies and organizations are very often building software, often lots of software, that actually has no purpose. It doesn't really meet a user need, and that's because users are not the central focus of what they're doing; it's driven by the technical teams themselves, by what people can do versus what they should do. So user focus is always going to be key in terms of what we build, from the very start.

So where do user journeys come in, in terms of JAWS testing? They let you focus on core tasks. You won't be able to test everything all of the time; these tests are very intensive, so focus in on those core tasks first. Task completion and efficiency, not compliance. In compliance testing you may well spend an awful lot of time evaluating the accessibility of something that users hardly ever visit or use, or that is not key to them. This is really about task completion; it's all about making sure that they can actually finish that shopping cart process, because that's what you need them to do. And it gives you a cross section of UI and content. We have a customer of our ARC platform, a university, who set up an automated user flow; in other words, they're evaluating with automated testing a flow that goes across their whole university experience for students. They've mapped out a flow where the student logs into their university portal, goes and checks their profile and their email, then goes on to their courseware and checks any assignments that they have, any grades and so on, and then goes on to another application, and so on. And that's a cross section. Who knows how many potential screens there might be across the whole university: hundreds of apps and sites. There's no way to test them all, but this is a great way to cut right across them, okay? And where they see problems at some step, for example if they found issues in the courseware, that might be the time to go in and do a full audit of that particular piece of courseware.
But this cross section that is focused on the user's experience is going to be much more successful when you're evaluating and fixing things, because at the end of the day these are the things that matter most to those students. It's a very good way of looking at your accessibility, especially when you've got a lot of stuff, too much really, to get your arms around.

So, building user journeys. There are lots of ways of building user journeys, and the first thing to say is that if you've got an existing UX practice, a user experience practice, in your organization, they may already have done this for you. So when it comes to doing your AT testing, you may already have existing resources to work with, which is great: if you've already got personas and you've already got user journeys mapped out, that's the key place to go. I'm not going to cover the whole process of setting these up here, it's not really for this audience, but just to give you a view.

Obviously, personas are the first part of any user journey. You first need to know your users, and you would build these personas, typically and if possible, out of actual user research: out of questionnaires, out of focus groups, out of talking to users. From that you can get some commonality and a good understanding of typical types of use cases. You draw up these people, and in many ways, even if they're amalgams of several people, they become real people to your team, with their particular needs. What technology are they using? That's obviously very important for your AT testing. What are their abilities and familiarity? What is their attitude? In this case, we have a user who is quite a power user and is prepared to persist: a digital native, an early adopter, who persists until she gets it. But she has her own particular pain points: unclear navigation and a lack of useful descriptions for products and images. That's a particular bugbear. Okay? So she's persistent, but there are certain things that are going to frustrate her if she's using your application.

So you build out personas, and those personas must include AT users. Okay? If you have existing personas but you don't have any around AT users, then you need to add those. As for the typical things that go into personas, different organizations have different ways of doing this and different levels of detail, but you want, obviously, a background to the person, some goals and motivations, some pain points, and the technologies that they're using or are comfortable with. And that, as I said, will be key when it comes to actually doing the user journeys.

There's another related aspect of user journeys, which is in UX design, particularly around specification building for software. That's when your teams are actually deciding the features, and if they're good at what they're doing and very user focused, they will build out these user journeys. Here's an example of a user journey from a product: a case of an "add users and assign roles" user activity. So this is adding users and roles to a particular application, and this is the process. You can see "create new role" is the first step, and under that there are various capabilities: you need to be able to view a list of roles, edit the role, archive and un-archive the role, delete the role, okay? And then you move on to "invite a new user".
And for that you need to be able to view the list of users and edit users. So in software development and building specifications, this is a very good way of looking at user journeys and at what is needed to support them. You're already starting on a task basis: you take your persona and say that user needs to be able to do these things, and in order to do that they must have these various capabilities available to them. It's a great way of building out a list of requirements for your software. Again, if your teams are doing that, it's a very good source for the user journeys you're going to use in your AT testing, to build them out. So: list the core tasks to perform, capture the functionality.

And then here's a full-on user journey. I have an image here of an online shopping customer journey map, and we'll be looking at online shopping as our example today. In this customer journey map you've got a full user journey map, something that UX folks, marketing folks and customer service teams would all use to track the core tasks that a user performs on the site. They also record the feelings of the user at the various steps. Okay? You're trying to gauge how effective each of those steps is, where the pain points are, and how you can fix them as you go along. You might be wondering what this has got to do with AT testing, but essentially you want to be able to follow a similar route. Again, if your organization has these, this is the place to start, because you want to evaluate the accessibility based on how easy or difficult, how frustrating or effective, it is for the user to carry out these tasks on your site. And that includes where they might have come from before they came to your site or your application.

So, looking at a simplified version of that that you might use in your AT testing: again we've got the same persona, Victoria, and a scenario where she needs to buy new sneakers for her son. She's price sensitive, but prefers sites that provide detailed information about their products. You can have a very simple scenario. Her expectations are to find a large range, easy browsing, the ability to compare brands, and clear pricing. Okay, that's what Victoria wants when she carries out this task. And we've broken it up into stages, the obvious shopping stages: browses the site, evaluates products, pays, okay? Under each of those we have specific activities: checks out the deals, browses categories, selects a product, adds the product to the cart, completes the checkout process. And then for each of those activities you have actions, okay? Browse the homepage, select the shop sale, browse sale items, select Men's from the shop menu, browse the Men's category, and so on. This is very simplified, but it becomes the basis of a test script, okay? And your actual user journey then perhaps becomes the place where you record the results, for instance in the notes field here, where you note how you're doing. This is the kind of test you can do on a regular basis if you have a QA process where you're testing before each release, okay? And obviously you're doing all kinds of technical testing on your code, et cetera; you're probably doing technical accessibility testing to make sure that things are compliant.
You can do this kind of QA testing as well, where you're doing a high-level test on your key user flows to make sure that the new code is not breaking the accessibility and the efficiency of that interface in any way. So your user journey becomes something that you can test on a regular basis, and your AT testing is what you do at that stage.

So, flipping over now, oops, sorry, to have a look at this process in action. Obviously this is very high level, and an actual test script would be a lot more detailed, but I just wanted to give you an idea of how these things work together. So now I'm gonna show you JAWS Inspect in action, on a sample shopping site. We're gonna have a look at testing a user journey with JAWS Inspect. JAWS Inspect has very little interface; I mentioned earlier that we focused it on testers, and a lot of the complexity of the end user product is therefore hidden. In fact, JAWS runs in the background with JAWS Inspect, okay? You can actually run full JAWS in the background, and I'll do that today. But the interface is minimal. For instance, if I want to change the settings in JAWS Inspect, I select the Windows tray icon here and right-click on the JAWS Inspect icon, and I can see my settings. Okay, so that's the main interface of JAWS Inspect, but where it works in general is in your web browser. I have my browser open here, and if I hold the Control key and right-click the browser window, I get a JAWS Inspect menu. So that's the interface: a very, very simple one.

Okay, I've got several reports here, and I'll show an example of one just to give you a sense of how the product works. I'll pull up a simple example page with a whole load of images on it, and again Control and right-click to open the menu, and I'm gonna run a full page report. So now JAWS Inspect is running; what's happening is that JAWS is actually reading through this page from top to bottom, all of the content on the page. We can't hear JAWS, obviously, because the audio is switched off, and JAWS Inspect is generating reports. So I have a new window open here with a JAWS Inspect report, and the report is basically a transcript of everything that JAWS announced when it went through that page, okay? We've got the details that we need for bug tracking, the likes of what browser it was, what version of JAWS, et cetera, and then we have the various output of JAWS listed underneath. If we look at the graphics section, for instance, we'll see exactly what JAWS said as it encountered the graphics on the page. We've got, for instance, "Home visited link graphic". I mentioned earlier that JAWS provides this helper text, which in JAWS Inspect we show in a different font and in italic; it is provided by JAWS itself, whereas the title of the image, or the alternative text as the case may be, is provided by the developer. Okay? So "Home visited link graphic" describes that image, okay? Then we've got the next image: "shopicon.png visited link graphic". You can immediately see that whereas "Home" is correct and helpful text, "shopicon.png" is not. That comes from what JAWS frequently does: if there's no text description for an image, it will try to find something, either nearby or from the actual name of the file, in this case shopicon.png. So you can see an issue there.
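Here is a hedged sketch of the markup difference behind those two announcements. The file names are illustrative, not the demo site's actual code:

```html
<!-- With no alt attribute, JAWS repairs a name from the file name,
     so the user hears the unhelpful "shopicon.png". -->
<a href="/shop">
  <img src="shopicon.png">
</a>

<!-- One possible fix: short, meaningful alt text. -->
<a href="/shop">
  <img src="shopicon.png" alt="Shop">
</a>
<!-- Announced (approximately): "Shop link graphic" -->
```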
So if you were just listening to JAWS audio and testing with JAWS, which of course you could do, you'd hear these things, but it would be difficult to go back and constantly redirect JAWS to repeat them if you didn't hear correctly the first time, et cetera. JAWS Inspect gives you the time, with this text transcript, to diagnose issues and see what's correct and what's not. For example, here we've got text announced by JAWS that is just, you know, nonsense. It's some kind of auto-generated ID or file name, and it's not telling the user anything, okay? So this is just one of the reports that JAWS Inspect produces, but it's a very good way to understand how usable and understandable your interface is for users. For example, we've got "shoe graphic", "shoe graphic", "shoe graphic". There's no differentiation there; this is telling the user very little about those particular items. Okay? I mentioned earlier that you really don't need a good familiarity with the likes of the WCAG guidelines and so on to understand these reports, and to understand what the issue is for the user. This is why: you can see immediately how that would affect a screen reader user. They don't know what that image is, so why would they select it, and how are they going to get in and look at that product? It's very clear, and it's a very simple way to test. You can focus in on things like graphics, headings, links, et cetera very effectively.

So that's a typical kind of report that we use with JAWS Inspect. However, for our user journeys we're going to focus on a particular tool that we have here. Again, I'm gonna do my Control right-click to pull up my JAWS Inspect menu in the browser, and I want the speech viewer, okay? The speech viewer is much closer to, I guess, the direct JAWS experience. It's a live transcript of JAWS speech: as you interact with the application or the site, it feeds back live what JAWS says along the way. Now, I'm actually going to start JAWS itself in the background so that I have access to all of the various JAWS navigation keys, and I'm gonna show them in operation. So I'm gonna clear my log here, and I'm gonna quiet JAWS, so it's gonna run in the background but quietly, and I'm going to start testing my flow. Here you would probably have a script based on one of the user journeys that we saw there, in this case the user journey involving the shopping cart: browsing for the item, finding the item, et cetera. Let me close that down and just clear this. All right, it's always difficult to do these things while demonstrating.

So we're going to run through the steps of that user journey and see what our output is. Straight away, when I put focus on the top of the page, I see "Home ARC demo site, Microsoft Edge", which is exactly what JAWS would announce. And we're in the banner region. The developer has split up their page into regions, which automatically makes it a little bit easier for the JAWS user to understand the structure and get around. And we've got our first link, which is "visited, heading level one, link, Awesome Store", okay, which is that logo item there.
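As a hedged illustration (not the demo site's actual code), region announcements like "banner region" typically come from markup along these lines:

```html
<!-- Landmark regions give JAWS users a map of the page: <header> is
     announced as "banner region", <nav> as "navigation region",
     <main> as "main region". -->
<header>
  <h1><a href="/">Awesome Store</a></h1>
  <!-- Announced (approximately): "heading level 1, link, Awesome Store" -->
  <nav aria-label="Main">
    <!-- site menu here -->
  </nav>
</header>
<main>
  <!-- page content here -->
</main>
```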
And we're going to start tabbing through the interface. And as we do, there we go: "heading level one, Awesome Store link". I hope everybody can see this; I know that text is quite small, but essentially this is my transcript for the items I'm focusing on as I go through the menu. Here's "Home link", "Shop link", and notice that as I tab, I'm having to tab through all of the items in the submenu. They're being announced correctly, but I have to go through them all to get to the next top-level item. So the developer might consider a different keyboard interaction there, where the arrow keys go through the submenu but the Tab key goes from top-level item to top-level item. Then we've got the More menu, and we go through those items, and then, as I tab on through the interface, focus goes back to the "Home link graphic". So clearly there's an issue there: we were already on that at the start, and now focus has returned to it. There are clearly two links in the same place. Okay? The same goes for the shopicon.png that we looked at earlier: there's no helpful alternative text there for the user, but also we've already been on the shop menu and opened it, so this is a redundant link. That's the kind of thing that might not show up in your technical accessibility testing, because, certainly in the case of the home icon, there's an image with alternative text, so it's technically accessible, but it's an unnecessary link. Okay? This is the kind of thing you find out when you do this kind of testing.

We've got a login button, that's correct. We've got a "cart, one item" button, that's good, it's conveying all the information we need. We've got a search box and a search button, and then we're down into the content: a "shop now" link, okay? And then we're looking at the actual products. We've got a carousel control, and it says "previous product button", so that seems nice; the user is browsing through. So again, imagine you're here: you are basically following a script where you're carrying out a set of fixed tasks based on that user journey, okay? And here we've got images, but the images don't have very good alternative text: "I'm a product link graphic", "I'm a product price". So there's repetition going on there. We've got "I'm a product link graphic" and then we've got "I'm a product price". You might want to consider the efficiency of that: the user is having to hear the same thing, "I'm a product", again. These kinds of inefficiencies are the kinds of things you can find, particularly if you've fixed a lot of the accessibility issues and improved your site to a high level; then you're really down to things like this, the wording, the verbosity of your links, et cetera, that might be affecting your users.

So we'll go back to the top of the page here, and we'll go into the shop menu, and you can see the items being highlighted as we go through. We wanna look for men's sneakers in our process, so I'm gonna hit that now, go to that page and load it. Okay, so as we go through here, because I have JAWS on, I can hit the H key and go to the first heading. Or do I have JAWS on? Yeah. One second. So yes, I have JAWS in the background, so I'm now going to hit the H key, and it takes me to "heading level two, Men's Sneakers". JAWS users can use the H key to cycle through the headings on the page. Lemme just drop this. Oops, I feared that might happen. One second now. This is the fun of testing with JAWS while doing webinars. Let's start this up again, start our JAWS Inspect again. One second. Right, JAWS is up in the background.
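Going back to that menu tabbing issue for a moment: one common way to get the suggested behavior is the disclosure navigation pattern, sketched below. This is a hypothetical illustration, not the demo site's markup, and the arrow-key handling itself lives in script that isn't shown:

```html
<nav aria-label="Main">
  <ul>
    <li><a href="/">Home</a></li>
    <li>
      <!-- The button toggles the submenu; aria-expanded tells JAWS
           whether it is open. -->
      <button aria-expanded="false" aria-controls="shop-menu">Shop</button>
      <ul id="shop-menu" hidden>
        <!-- tabindex="-1" keeps submenu links out of the Tab order;
             script (not shown) moves focus between them with the
             arrow keys, so Tab jumps from top-level item to
             top-level item. -->
        <li><a href="/shop/mens" tabindex="-1">Men's</a></li>
        <li><a href="/shop/womens" tabindex="-1">Women's</a></li>
      </ul>
    </li>
    <li><a href="/more">More</a></li>
  </ul>
</nav>
```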
Okay, we'll go back to our menu. So, as I said, using the H key we can go to the first heading. Lemme just open our speech viewer again. Okay. So heading level two is our first item, and we are on the filter menu. We're gonna look at the filter menu and see how it behaves. We'll see here that we've got Category as the first item, and it says "Category button expanded". This is a tree-style control: it opens and closes and reveals nodes or branches underneath, and the end user, the JAWS user, needs to understand what state it is in. In other words, if it's collapsed, they're going to tab right past the options below it; if it's expanded, they're going to be able to open up and interact with the items below it in the category area. Okay? So that information is very important, and it is conveyed correctly by the developer here: we've got "Category button expanded". If I collapse this control, then, if we interact with it again, it's now, I'll just scroll down so you can see it correctly, it's now collapsed: "Category button collapsed". So it's providing the important information to the user about that control and how to interact with it. I'll come back down and press the H key again to come back to that area.

Likewise, if we select the Price item, it's "button collapsed", and if I then tab in there, I can see 19.75 as the value set as the minimum price on this particular filter, and "to increase or decrease, use the arrow keys". So I'm going to use the up arrow key here to increase the minimum price, and there it announces the new minimum price of 29.76, and it tells me "nine items matched your search criteria". So it gives me an update on the search results based on my filter. Okay, this is good; this is essentially what the user needs to know to run through this application, and as I interact with it further, JAWS is going to update the user on any changes to the filters.

Okay, likewise we've got colors: "black, checkbox". And, sorry, I always have to adjust this to make sure you can see: we've got size options, and now we've expanded that size section. I'll just scroll down, and here we've got a series of checkboxes. So what does the user need to know about a checkbox to make sure they can work with it? They need to know, first of all, that it is a checkbox. They need to know its state, whether it's checked or not checked. Okay? And they need to know the label, what they are checking; in this case it's size 7. Okay, so all of this information is conveyed correctly: we have "7 checkbox not checked", so I know what state it's in, I know it hasn't been checked as part of the filter. Now if I hit the space bar, it should update me, and indeed it now tells me that the item is checked. Okay, so this is working correctly, and I'm just going to clear the filters now so that we can see all the various data, and I'm gonna continue with my testing.
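For reference, here is a minimal, hypothetical sketch of the markup patterns that would produce those announcements (again, not the demo site's actual code):

```html
<!-- Disclosure filter section: aria-expanded is what JAWS turns into
     "Category button expanded" / "Category button collapsed". -->
<button aria-expanded="true" aria-controls="category-options">Category</button>
<ul id="category-options">
  <!-- category options here -->
</ul>

<!-- Price slider: the ARIA value attributes let JAWS announce the
     new minimum price; script (not shown) updates aria-valuenow as
     the arrow keys are pressed. -->
<div role="slider" tabindex="0" aria-label="Minimum price"
     aria-valuemin="0" aria-valuemax="200" aria-valuenow="19.75"></div>

<!-- Size checkbox: a native input gives the user all three
     essentials: role (checkbox), state (not checked), name ("7"). -->
<label>
  <input type="checkbox" name="size" value="7"> 7
</label>
<!-- Announced (approximately): "7 checkbox not checked" -->
```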
So now we have a sort control, and we're gonna open it up. I'm going to hit the arrow key, and now the menu is available to me. It says "sort by menu, one of six, to move through items press up or down arrow", so I can use my down arrow, and I see "newest", which is two of six, "price low to high", which is three of six. All of this is conveyed correctly, and it's modal: as I use the arrow keys, I'm going to cycle around all the options until, for instance, I hit the Escape key and come out of there. Okay?

And here we are at the sneaker products. So now we're going to select a sneaker product. Again, as we've seen across this website, there is an issue with repetitive and, particularly, non-descriptive images, so that's something you're going to note down as you do your testing. And we'll go in and have a look at the product. You can see we're not testing all the screens in this website by any means; we are focusing in on a specific user journey, but we are capturing the key elements of the UI as we go. When TPGi do their manual audits, we never test all of the screens in an application, because it's too work-intensive, right? It typically takes anywhere between three and four hours to test a particular component or screen against all the various WCAG criteria. So we always test something we call a sample: representative samples of the UI, for example an example of how we do a dialog box, an example of how we do a button, et cetera, and we use those to represent the whole. So you want to capture the UI, and a good practice when you're designing these user journeys is to make sure that they include all the different cases, if you like, of the UI.

I'll hit my H key again and jump down to the nearest heading, and then I'm going to Shift+Tab up to have a look at the key controls in this process. So again, I'm capturing the controls that are appearing throughout. We've got a select box; we've got a color picker, if you like, which is in fact a checkbox control; we've got this sort of widget for selecting the quantity; and we've got our buttons. Okay? So we're capturing representative UI along the way. And again, one of the advantages of doing this type of testing is that you can do regular QA testing as you go that's quick. You might, for instance, have a QA process where, with each release, you quickly test things like headings, alt text, and the name, role and state of controls like these along your user journey, whereas you might do a more thorough, complete accessibility test on a broader cadence, every quarter, every couple of months or whatever. This can be a very effective and quick type of testing.

So again, let's go through our menu and select items as we go. Let's increase the quantity there; again, "to set a value, use the arrow keys". And now we're going to add that to our cart. We're running through the whole cart process here, and now we're in the cart. Note that as soon as we add to the cart, the focus moves to the cart dialog box. It's announced correctly as a dialog by JAWS, and our focus is there, which means we can focus on the "close cart widget" button. Nice and descriptive. Again, we've got an image there with a description, but it might be a bit verbose. And we've got our widget here where we can select or change quantities, and I can do that with the keyboard. Keyboard testing is always gonna be a part of this; keyboard access is your baseline for accessibility, because the keyboard affects so many different assistive technologies and users. So we can change quantities and so on, we can close that dialog if we want, but we'll go on and view the cart. Okay, so that's how it runs.
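The dialog announcement is worth a quick sketch of its own. Assuming the demo site uses standard ARIA here (which the announcement alone can't confirm), the markup would look something like this:

```html
<!-- role="dialog" plus an accessible name is what lets JAWS announce
     "dialog"; moving focus into the dialog on open (and back on
     close) is done in script, not shown. -->
<div role="dialog" aria-modal="true" aria-labelledby="cart-heading">
  <h2 id="cart-heading">Your cart</h2>
  <button>Close cart widget</button>
  <!-- quantity controls, checkout link, etc. -->
</div>
```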
You go through the whole process, and, as I said, you try to capture representative UI all the time. You are watching to see: is the information correctly conveyed to JAWS? Is it sufficient? Can they use it? Is it efficient for the user? And then when you're done, you can pause the speech viewer and save all of that into a transcript. As it should have done there; let's try that again. I'm not quite sure why that's not working straight away. But you can save all of this into a CSV file or into a JSON file for sharing with a bug tracking system. It'll essentially give you all of this text, which you can open up in a spreadsheet or, as I said, share out with the likes of Jira, et cetera. My system is just running slow here. And then you can point out to your developers exactly what line of code an issue occurred in, for example where we had that alternative text issue. You can send that directly to your developers, and they can see where the issue is and what images have that particular issue.

You can also run this speech viewer in a kind of headless mode: you can run it without this interface, in the background, and the output again will be a CSV file or a JSON file; you can specify the format and then review the output of JAWS as you go along. So you can test your user journeys using the likes of Selenium for test automation and capture the JAWS output in the background as you go, so that you can always include it. These user journeys become great ways of doing your constant, regular testing: capturing your transcripts and then quickly reviewing them to find out whether there are issues. And you can imagine that if you've tested the same journey a number of times, you can just look at the transcript and see whether there are any differences from what you had before, or whether there are any new components. So that's how it works: how JAWS Inspect can work with user journeys, and how user journeys can be effective in this process.

I'm now gonna jump over and have a look at some of your questions, 'cause we've only got a few minutes to go, and see if I can answer them. First question: how do we determine which images are brand-supporting images worth having descriptions, hero images, stock photos, versus images that are purely decorative and not needing alt text? Well, it's a good question, and of course there are no hard and fast rules. Our own Steve Faulkner wrote a very good, and quite long, guide for the W3C on the different kinds of use cases for images. Accessibility for images and alternative text is actually one of the most technically simple and yet genuinely complex issues in accessibility: what is the right alt text for the right image? Using the likes of JAWS Inspect will help a lot, because it will help you understand what the user needs to know. For instance, in that full page report I showed you, with all the different alt text, you'll be able to see, looking at the image with the alt text beside it, is this actually useful alt text? Does the user need to know this? And as you go through the page with the speech viewer and try it out: are you coming back to this image? If it's an image that's on all the pages, is it then too verbose? Do we need to hear this again and again and again? It's these kinds of things that you'll learn by testing with the likes of JAWS Inspect, and of course by doing user research and trying things out with users. But yes, some things are on the border between decorative and useful, and in those cases you've really got to decide what to put. Always try to be concise.
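On the decorative side of that border, the usual technique is an empty alt attribute. A minimal sketch, with hypothetical file names:

```html
<!-- An explicitly empty alt attribute tells JAWS to skip the image
     entirely, so the user never has to hear about it. -->
<img src="divider-flourish.png" alt="">

<!-- An informative image, by contrast, gets short, meaningful alt. -->
<img src="sneaker-red-hightop.png" alt="Red high-top sneaker">
```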
A question here: does it also inspect PDFs? The answer is yes. JAWS Inspect is unique among our accessibility tools in that it's a desktop product, installed on the desktop, and it uses, as I showed you, JAWS in the background. So anything that JAWS interacts with on the desktop, whether it's PDF or Microsoft Word, PowerPoint, whatever it is, Inspect, using the speech viewer there, will give you the transcript. So you can use it to test PDFs, you can use it to test your Word documents, whatever; it can test all of those desktop formats with the speech viewer.

Another question: can you play a part of what JAWS would say, for those of us who are blind? Yeah, that's tricky. I mean, you could. Actually, one of the initial uses of JAWS Inspect was for JAWS users who were testing their companies' sites and products but found it very difficult to convey to developers what JAWS was saying, right? They heard something, they knew it was an issue, but how do you actually tell your developers, "hey, I heard this"? Once they had a transcript, that solved the problem: they were able to share that transcript with their developers. But when I'm demoing, it's obviously difficult for me to do both: showing what JAWS is actually saying versus what JAWS Inspect is giving as a transcript.

Another question: is the tutor text part of the readout, the "use arrow keys" and so on, exclusive to JAWS and not part of the markup? That's correct. The tutor content is provided by JAWS, not by the author. Obviously, JAWS bases that content, for example when it tells you something is a link graphic, on the attributes that are marked up on the item. If it's marked up correctly... we saw the example of the shopping cart dialog box, right? That was announced by JAWS as a dialog to the user. The developer didn't put a title element or something in there that said "that's a dialog", but they did, in that case, use the correct ARIA attribute to tell the assistive technology that it was a dialog box. Okay? It could be marked up as anything, it could be just a div container, but once you've got the attributes right, JAWS is able to tell the user it's a dialog. The tutor text itself, though, the guidance on how the user can use the arrow keys or the space bar or the Enter key, comes from JAWS. JAWS just needs to know what it's dealing with, if you like; it needs to know what the control is, and that has to be provided by the developer.

Okay, we'll give it a moment more to see if there are any other questions. It looks like we're nicely finished just before the end of time, so yes, I hope that was helpful. Oh, a question here: what is AT testing? Assistive technology testing; that's what I mean by AT testing. Another question: "I assume there is no way to use JAWS Inspect on Mac." Well, actually, I'm showing it to you on a Mac right now; I was using a virtual PC to do that. We are, however, planning, there are plans in motion, to deliver JAWS Inspect as a service, and that would mean it's totally platform agnostic.
You'd be able to grab that JAWS transcript no matter what platform you were on. A question: the invite says part one, so when is part two? We haven't decided that yet, and we would be keen to get feedback from anybody who's attended today on what they would like us to cover under the theme of user journeys with JAWS Inspect. I'm thinking we're going to go deeper into other types of user journeys and different types of tests that we could run, but I'm happy and very open to any suggestions about what people feel they need covered and what they need to understand better.

- [Kari] Looks like that's it for our questions. I am adding the IDA email again into chat. If you have suggestions for this series moving forward, for what you would like to see, feel free to reach out to that email address, and we can then discuss 'em internally and work on getting that scheduled. And keep an eye out for our newsletters, upcoming webinars, et cetera, to know when that part two will be released. Again, this webinar was recorded; the recording and the slides will be sent out, usually within one to two days after the webinar is over. If you have any additional questions, you can also reach out to us at the IDA@TPGi.com email address. Charlie, thanks again for the presentation, and everyone else, thanks for joining. We look forward to seeing you next time. Bye.

- [Charlie] Thank you all. Bye now.