- Welcome to the "State of Accessibility" podcast from TPGi on LinkedIn Live. I am Mark Miller, and this is my co-host, Dr. David Sloan, chief accessibility officer for TPGi, co-author of "What Every Engineer Should Know About Digital Accessibility," and a user research and usability strategy specialist. - And Mark is the sales director for TPGi and a member of the W3C Web Accessibility Initiative, Accessibility Maturity Model Task Force. So I guess this month we should start off with an apology. We are sorry that January's podcast was rather abruptly ended without notice, just while we were in full flow talking about the European Accessibility Act. Turns out we had a webinar scheduled straight after using the same Zoom meeting room, and somebody arrived early to start the webinar, which caused the podcast to stop. So that's a good lesson for us to learn for next time. - I think I was in the middle of some soliloquy, David, so when it just cut me off, I thought maybe it was on purpose. - Not at all. - No, that's true, it was unfortunate. - And we weren't able to find time to resume the podcast. It's been a busy time since then, but I'm pretty sure we'll be returning to the European Accessibility Act and its impact on the Accessibility community later this year. - For sure. Well, thanks for that, David, for our second podcast of 2025, which is what we're on now, believe it or not, we're going to return to two more abbreviations that are really the focus of a lot of chat that we hear right now. One is AI, which I love to say backwards, and AT, which stands for assistive technology, for those of you guys who are wondering what that acronym is. AI we hear about all the time. It's in every context out there that you can think of. So I'm sure people are used to hearing that, but we're gonna talk about assistive technology as well, and particularly how the two connect. And to help us talk about how artificial is impacting assistive technology and digital Accessibility, we have a special guest with us, Ryan Jones. Ryan is Vispero's Vice President of Software and Project Management, and he himself is a daily screen reader user. Welcome, Ryan. - Hey guys, thank you so much. I'm happy to be here today. - Well, we are absolutely happy to have you and we really appreciate you bringing your expertise to this conversation. When David and I were wondering who we wanted to pull in to talk about these things, you were the first person we thought of, so thank you for joining us. So let's dive right into it, Ryan. The first question that I have is, how is AI changing how assistive technology behaves? - Yeah, I mean, this is something we struggle with and grapple with every day. So in my role at Vispero, I oversee all of our assistive technology software, product management, engineering, support, training, as well as our enterprise accessibility software development. And so this is questions that we're dealing with every single day. How is AI impacting assistive technology? And the answer is, we're only starting to figure this out. We're only seeing the tip of the iceberg right now. But what I have seen is that AI, in the last 12, really 18 to 24 months, AI has opened up some doors that we've not been able to open in the last 20 years with assistive technology. And so if we think about screen reading in particular, some of the long-term barriers that that we faced as people who are blind or low vision using screen reading and screen magnification are access and interpretation of visual information. 
Whether that's an image that I'm looking at, let's say I'm shopping on a website or looking at something that's an image or a chart or a graph or a screenshot that someone might send across in an email, there's all kinds of graphically communicated information that traditionally screen readers have had a lot of challenges in interpreting and understanding. And we've had technology like optical character recognition, where you can basically interpret text that's on the screen and be able to read it. But what you can't do with OCR is interpret the visual presentation or formatting of that text. And you certainly can't describe what an image looks like. And so AI has really broken that barrier down for us. And you know, in JAWS in particular, about a year ago, in March of 2024, we released AI technology to describe images in JAWS. So if you're using JAWS on a website and you're coming across an image, whether it's, maybe I'm shopping on Amazon or some other retailer's website, and I want to know what the sweater actually looks like, right? The alt text, or alternative text, that's provided by the webpage tells me that it's a gray cardigan sweater, but I wanna know, what do the buttons look like? What do the sleeves look like? I wanna get more details. And AI is letting us get very detailed descriptions of images like that. And so that's been a barrier that we've all faced, and really that barrier has been destroyed in the last year to two years. And so we're seeing AI coming in and helping us break down these challenges that we've faced for a long time, and the real trick is how do we leverage AI in the right way to do that? And how do we target those barriers? You know, what are the big barriers that we face? Can AI help us break those down? Images were a big one. One other quick note, the other one I'd say is that training and learnability of assistive technology has always been a challenge, especially for people who may start using assistive technology later in life. Maybe they've lost their vision or are losing vision later in life, they've never used assistive technology, they've never been trained on it. They're now learning to use the computer in a different way. AI is really good at synthesis of large amounts of information and describing and turning questions into very well-crafted answers. And so we're seeing a great use of AI in helping people learn how to better use their assistive technology. We did this in JAWS a few months ago. We released something called FS Companion where people can ask questions about how to use JAWS or how to use Office or the browser. And rather than going and digging through documentation, you can just now ask general questions and then ask follow-up questions as it gives you answers. So these are just common challenges that we've seen that AI is helping us break down. So I think it's been very practical, the way that AI has been benefiting us in assistive technology.
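For anyone who wants a concrete feel for the pattern Ryan is describing, here is a minimal sketch of the "describe an image, then ask a follow-up question" flow, assuming a generic vision-capable chat API (the OpenAI Python SDK is used here) and a hypothetical local file named sweater.jpg. It is not Vispero's Picture Smart implementation, just an illustration of the underlying idea:

```python
# Illustrative sketch only -- not Vispero's code. Assumes the openai package is
# installed and OPENAI_API_KEY is set; sweater.jpg is a made-up example file.
import base64
from openai import OpenAI

client = OpenAI()

def image_as_data_url(path: str) -> str:
    """Read a local image and encode it so it can be sent to the chat API."""
    with open(path, "rb") as f:
        return "data:image/jpeg;base64," + base64.b64encode(f.read()).decode()

messages = [{
    "role": "user",
    "content": [
        {"type": "text",
         "text": "Describe this product photo for a blind shopper: color, "
                 "buttons, sleeves, anything the alt text might leave out."},
        {"type": "image_url",
         "image_url": {"url": image_as_data_url("sweater.jpg")}},
    ],
}]

first = client.chat.completions.create(model="gpt-4o", messages=messages)
print(first.choices[0].message.content)

# Follow-up question about the same image -- the conversational
# "interrogate the picture" behavior discussed later in the episode.
messages.append({"role": "assistant", "content": first.choices[0].message.content})
messages.append({"role": "user", "content": "What do the buttons look like?"})
second = client.chat.completions.create(model="gpt-4o", messages=messages)
print(second.choices[0].message.content)
```

The same request-plus-follow-up shape is what lets a screen reader move beyond a single fixed alt text toward a back-and-forth about whatever detail the user actually cares about.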
- The visual comment that you made, I think sometimes for me, it helps to sort of relate things that happen in the digital world to things that happen in the real world, right? I dunno if the internet reflects real life or real life reflects the internet, or something in there. But a colleague of ours, Ryan, when we were all off site, she is a screen reader user as well, and blind, low vision, and the problem that she had is when she got into the shower at the hotel, there were three bottles stuck to the wall. One was shampoo, one was conditioner, and one was body wash. And she has no way of understanding what those are. But she used, in real life, a similar technology to what you are talking about that is now integrated within the JAWS screen reader. It's a similar AI function where she essentially took a picture of it, and AI was able to say, "This is shampoo, this is conditioner, and this is body wash." And I don't know, other than having somebody come in and explain that to her, how she would've handled that on her own prior to that type of ability. And I just imagine in the vastness of the web, that's gonna be a huge, huge advantage to be able to do that. - It is. It's bringing independence back to people where we had no independence. And I've dealt with that exact same scenario many, many times. In fact, another just practical one: I travel a lot for work, so I'm in Ubers and Lyfts and vehicles, and so I use AI now on my phone. When the vehicle pulls up, I can ask it, "What kind of car is that?" And yesterday it said, "It's a blue Honda Odyssey," and I knew I was looking for a blue Honda Odyssey. So right there, I already had a lot of evidence that that's my vehicle. And then of course I confirmed with the driver, but that gave me a lot of knowledge right off the bat that this is most likely the vehicle that I need to find. - That's brilliant. - Yeah, I love the two parallel examples you've given. One is making AT, in this case screen readers, more powerful and providing people with more independence, but it's also giving people the tools to use existing functionality of AT in a more effective way, including the AT working with a workplace application, you know, whatever it may be. And you know, I think that helps to address another of the sort of digital accessibility divides, where people have assistive technology but don't have the skills to use it to its full potential. And I feel like that's sometimes the part of AI that we forget about. It's a way to help us use what we already have in more effective ways. So I really love that example you gave there. - Yeah, I mean, it's funny, like even those of us who've been using, I've been using JAWS, for example, for over 25 years. A lot of our other developers and testers have been using AT products for a long time. Even we are learning from our own AI, right? So like, I've been asking it certain questions and getting answers, and I'm learning things about JAWS I didn't even know. And so are some of our engineers. So like, the vastness of knowledge that it can bring together is really powerful. The key to all this, though, that we've had to grapple with is getting the right data into the AI, right? Garbage in, garbage out. That's kind of the programming term we've always used. You have to have the right knowledge and information available to the AI, otherwise it just makes stuff up, right? Everyone's seen this with the mainstream large language models where they hallucinate, they make up answers. And so we have to deal with that too. And the good thing that we've done for so many years is produce a lot of content about how to use screen reading software and JAWS, ZoomText, and so on. And the great thing is we're seeing a huge payoff on that now because we can feed it our information, which we know is right, because we wrote it, and that's used to generate the answer.
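Ryan's point about feeding the AI information you know is right is, in essence, retrieval grounding: the model is only allowed to answer from a curated set of documentation. Here is a minimal sketch of that idea, with made-up documentation snippets, naive keyword retrieval, and an arbitrary model name; it is not how FS Companion is actually built:

```python
# Illustrative sketch of grounding answers in trusted documentation -- not
# Vispero's implementation. Assumes the openai package and an API key; the
# DOCS snippets and retrieval logic are deliberately simplistic placeholders.
from openai import OpenAI

client = OpenAI()

# Imagine these snippets come from the vendor's own, verified help content.
DOCS = [
    "To hide the selected column in Excel, press CTRL+0.",
    "INSERT+F1 reads screen reader help for the current control.",
    "INSERT+DOWN ARROW starts a Say All from the current position.",
]

def retrieve(question: str, k: int = 2) -> list[str]:
    """Rank snippets by naive keyword overlap -- a stand-in for real retrieval."""
    words = set(question.lower().split())
    return sorted(DOCS, key=lambda d: -len(words & set(d.lower().split())))[:k]

def answer(question: str) -> str:
    context = "\n".join(retrieve(question))
    system = (
        "You are a screen reader assistant. Answer with keyboard commands, "
        "never mouse instructions, and use ONLY the documentation below. "
        "If the answer is not there, say you don't know.\n\n" + context
    )
    resp = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "system", "content": system},
                  {"role": "user", "content": question}],
    )
    return resp.choices[0].message.content

print(answer("How do I hide a column in Excel?"))
```

Constraining the model to vetted content like this doesn't eliminate wrong answers, which is the risk Ryan turns to next, but it narrows what the model can make up.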
So the AI is not always right, even still, but we're getting much better accuracy than if you just go out to ChatGPT or Gemini, which is using the entirety of the internet, which is full of good things, but also full of incorrect information. And that's something from an AT perspective that we have to think about with AI. What's the risk if the AI is wrong? You know, what's the risk if the AI tells me what's on this PowerPoint slide that I'm looking at, and it's the wrong thing, and then I get up to deliver a presentation and I start spouting off the wrong information in front of a board meeting, for example? So there's a risk and a balance here that we're always looking at. - When you talk to JAWS and ask JAWS questions, you're specifically talking about FS Companion, right? - Yeah, yeah, exactly. - And that's something people can actually, there's a beta version of that, if you're listening right now, that people can play with, is that true? - Yeah, it's fully functional. If you go to fscompanion.ai, or it's actually built into JAWS as well. But I always just tell people, go to the website. You can use it on your phone, your computer, whatever, fscompanion.ai, and just start asking it things about JAWS or how to use Office. The great thing is it's gonna give you answers from the perspective of a screen reader. If I ask it, "How do I hide a column in Excel?" it's not gonna start telling me what to click on. You know, "Click on the top of the column," or, "Click the green button." It's gonna give me keyboard commands because it knows it's answering from the perspective of someone who's a screen reader user. So it's giving you actual commands and things that you can do as a screen reader user, which is really important for us. - We have folks using that, Ryan, too, that are using JAWS Inspect to do screen reader testing. They're using that FS Companion to ask it questions to understand how a screen reader is experiencing things as well. So it's actually going just even beyond JAWS. - Yeah, exactly. - So Ryan, you touched on something I wanted to explore in a little more detail. You touched on one of the risks with AI, which is the potential inaccuracies of the responses it provides. I'd like to explore some of the other risks that we know about with AI, and especially generative AI based on large language models. You have the concerns with embedding ableist biases in the content that these models learn from, as well as security and privacy concerns. You know, if you're asking an AI to describe an image that may be sensitive or may not be something that you would like other people to know you're asking AI about, how have you addressed those risks? You know, I guess it must be challenging when you want to adopt new technology to make the product better, but at the same time, you have many thousands of users who need to be reassured that the AI being introduced to AT like JAWS is being done in a responsible way. How have you addressed those risks? - Yeah, these are really good questions too. So I'll start with sort of the AI and the bias that AI sometimes has. And I think we're still exploring how we can impact that, right? So we're a consumer, even our products are consumers of mainstream AI products. You know, we don't have the capacity at Vispero to build a large language model, and so we use models that are already out there. And as part of that, we try to research how they're trained. Are they trained on a variety of scenarios and a cross section of people?
So I'll give you a quick example. A couple years ago, we introduced Face in View, which is a technology in JAWS that uses your camera and helps you make sure that you're lined up, that you're looking at the camera in the right way, that your face is centered in the view, for people who are on web-based meetings. We used a model for that, an AI model. But we researched it to make sure: is it trained on people with various eye conditions? Because some people with certain eye conditions, their eyes don't look the same as someone without that eye condition. So we actually validated that that model had been trained on people with a lot of different scenarios, whether it was, you know, some people wear sunglasses because of light sensitivity. Can the AI model find your face when you're wearing sunglasses and it can't see the pupils of your eyes, for example? Different body types, different ethnicities have different eye configurations and placements. And so like, that was a scenario where we really had to make sure, because we know our users fit all those different scenarios, and does the AI account for that? And in this case it did. So that is something we have to try to investigate. I think we're all still learning about the bias that comes into AI. We're fortunate in that the areas where we're using AI are maybe a little bit less subjective than things like AI for HR, where AI's being used to recommend job candidates, for example. We're really not using it in those ways where bias can have a massive impact on things. You know, when we get into describing images and answering questions and those areas, we don't see a huge impact around it right now. But we do recognize that that is something that we have to keep considering. I think the bigger question that we deal with and that we hear about from customers is around the security and privacy aspect. And when we introduced the Picture Smart AI feature last year that describes images, that was one of the first questions that people were bringing up to us when we were developing it. "How do you deal with security?" And so the stance that we took is, we are not using any free AI models for this. Because with the free AI models, often one of the reasons they're free is that they take everything that you send to them and use it to train the model to help the model get better. Which is great, but for a commercial screen reader that is being used by people all over the world and by large and small enterprises and government agencies, that's not something that they're interested in. And so we make sure that we're using paid versions of these models, and those companies, you know, OpenAI, Anthropic, and other ones, all of their paying customers want privacy and assurance that their data is secure and is not being used to train the models. And so we have those agreements in place so that anytime you're using our AI, it's not being used to train the models. So if you're just getting a picture described, it's not being stored anywhere. It's not being saved for later investigation. It's only used at that moment in time, and then that image is gone and it's not living somewhere else for long-term purposes. So we've been having a lot of discussions with enterprises. Some are adopting this and recognizing that, you know, there's security in place and that they're willing to accept that, and some are not. And I think the enterprise and government agency levels are still all figuring this out as well.
What risk are they willing to accept around AI? The good thing that we're seeing is that from the user perspective, people are recognizing how powerful the AI is and how it's breaking down those barriers, and they're advocating inside of their organizations to allow these kinds of technologies to be used. And that's bubbling up to us, because, you know, IT security professionals are coming to us and saying, "Hey, our JAWS users really want this really bad because they see how powerful it is. Can you have the conversation with us about protection of our data and how it's used and if it's stored?" And then we can help assure them of the guidelines and the procedures that are put in place to safeguard it. Whereas if the users weren't advocating for that, they probably wouldn't have brought this up to us, and their IT security would've said, "Nope, sorry, it's AI-based. We're not using it." But we're actually seeing, in a lot of cases, some good cooperation between end users and their enterprise-level IT folks. And so I'm thankful for that, and I encourage other users of AT, you know, if you're having challenges or you feel like you're having challenges using AI at work, bring it up. Bring it up to accommodations teams, IT teams, and let those conversations start happening so people can be better educated about how assistive technology and AI can really enhance productivity for people. - My, that was a lot. - I know, right? - It was good, though. - Something I deal with all the time. - It was good. No, well, you bring up a lot of good points, Ryan, too. And I think the overarching thing I'm hearing here is that one of the things we have to be careful with when we see a technology like AI and we start incorporating it into things that people are using every day and all that, is that technology tends to run ahead of everything else. It runs ahead of security, it runs ahead of the law, it runs ahead of everything else. So to your point, you know, it requires organizations to be very thoughtful and deliberate about using that technology and how they use it and how they develop with it, you know, to all your points about how carefully you would look into the technology to, you know, check for biases in the way that it might evaluate somebody's face, like David said. So I'm really wondering, like, what do you see as coming next? So with the increasing number of large language models and generative AI tools like DeepSeek, for example, what does that mean for AT development in the future? - If I had the answer to that question, I would feel much better. I think the answer, the truthful answer, is we don't fully know. Now, we do know some things, but we don't know everything. So things that I think we do know more of: one of the challenges with AI is much of it has to be done off device. In other words, information is sent to AI on a server somewhere, and it's processed. Like, that's how ChatGPT, DeepSeek, Google Gemini, all these main large language models typically run, off device. So your information's sent, it's processed, and then the results are sent back. I think within the next couple of years, two, three, four years, our PCs, our phones, our tablets, the devices that we use, many of them will support on-device AI. We're already seeing that now, but that's gonna proliferate a lot more over the next couple of years.
And the great thing about that, from an AT perspective, is now I'm not depending on an internet connection to describe the image, for example, or to tell me how to use the AT product that I'm using, like what FS Companion does. I don't need internet now. From a security perspective, my image that I'm having described, or whatever it is that I'm doing, is not being sent to a third party. So the security and privacy aspect gets neutralized to some degree. So I think there's some really great things coming with on-device AI. Now, obviously that takes a long time to proliferate down to hardware. Because not everyone's gonna go out and buy a new PC next month when the latest onboard AI chip comes out. So this is years of proliferation, I think, that happens. But we're starting to see the beginning stages of that now. I think that's clearly going to be a big benefit to assistive technology users. The rest of it, it's really hard to tell. It's happening so fast. I mean, literally when I look at news a couple times a week, I'm seeing something new that sparks my interest: should we be looking at this? Should we be working on this? There's AI technology that starts to automate your use of a computer. The term that you might hear is agentic technology, where you have agents working on your behalf. So I now might tell my computer agent, "Go set up an email to Mark and tell him that he does a great job on the podcast, and send one to David and tell him he does good, but the last episode got cut off." You know, like, you versus me going and doing 25 steps to do all those things, I now give a command, and my agent goes and works on my behalf. So that's coming. There's no question of that. That's really not even related to AT, but that's going to affect and positively impact those of us who depend on AT, because it helps us be more productive. So that's a couple of examples that I can see coming, but there's a lot of blurriness in the future because things are happening so quickly. - I'm very happy with AT if it's going to let me off the hook and blame David. - Right? I knew you'd like that. - That works well for me. No, I think that's, and I assume, like, you know, just as you're talking, I'm thinking about, like, walking into a store, and I now see laptops that, you know, advertise Snapdragon processors, for example, which are ARM, if I have this right, and this is- - That's right, yeah. - Are ARM-based, and those are designed, question mark, those are designed to be able to run AI on board? - Yeah, yeah, they are. - They are. Okay, so it's, again, it's all catching up, right? Like, we've gotta have the hardware catch up with the capability of the software so that things like it running on board can happen. And then of course people have to go out and buy all that stuff and be using all that stuff. So I can see there's a lag, you know, a lag that's just natural that has to occur. But it's really exciting to think about not having to connect to the internet, particularly for someone who's using it for assistive technology to access these things, and the security things that it solves and all of that. It's really pretty exciting to think about. - Yeah, and it's funny, because, you know, over the last few years we've seen more proliferation toward less powerful hardware, because everything's processed in the cloud, right? So PCs actually have less power than they used to. And now we're seeing it flip the other way, where you want more power on your PC, on your phone, because now you want AI locally and not in the cloud. So we're flip-flopping back to where we were 10, 15 years ago right now.
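For a concrete sense of what "on-device" means here, the sketch below loads a small open model once and then answers prompts with no network call at question time, in contrast to the send-to-a-server pattern Ryan describes. The specific library and model are illustrative assumptions, not what any particular AT product ships with:

```python
# Illustrative sketch of on-device inference -- not any vendor's actual stack.
# Assumes the transformers and torch packages are installed; the model is a
# small instruction-tuned example that downloads once, then runs locally.
from transformers import pipeline

# Small enough to run on a laptop CPU or NPU; swap for whatever fits your hardware.
local_llm = pipeline("text-generation", model="Qwen/Qwen2.5-0.5B-Instruct")

prompt = "In one sentence, what does a screen reader's Say All command do?"
result = local_llm(prompt, max_new_tokens=60)

# Nothing in this call left the machine: no image or question was sent to a
# third-party server, which is the privacy benefit Ryan points to.
print(result[0]["generated_text"])
```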
- Yeah, and that's, you know, I listen to all this and I think about, you know, the impact on global energy consumption, you know, the increasing demands for processor power to generate large language models. I just hope that this trend towards on-device AI is also somehow addressing the energy concerns that go alongside AI development. So I guess we'll keep tracking that as well. - Yeah, that's a good point. - Yeah, and it tends to be a bit of a pendulum, right? Like most things do. Like, in the early days of computing, I can remember everything was sort of server-based, right? And you had terminals, and then everything went to PC, and then we started, you know, working on virtual machines. And, you know, it's not exactly the same thing again, but it sort of switches, starts to swing back and forth, like, where's the processing done? And I would imagine, David, that if we start needing more computing power, that somebody's gonna come along and say, "How can we make this more efficient?" And I think, actually, the ARM processors are very efficient too. Is that not- - That's one of their benefits. They're much more efficient from a power consumption perspective. I mean, you're getting massive boosts in battery life, for example. So there's a lot of hardware things happening at the same time that we have all these software things, and it's all kind of coming together right now. - Brilliant. Well, is there anything else, Ryan? We're getting close to time here. Or David, did you have another question for Ryan? - Well, I was just thinking about the impact on the other side of the coin. You know, at Vispero, we have the assistive technology side of things, and we also have the digital accessibility consultancy side, supporting the effort to create digital resources that are usable by people who use AT, and when AT functionality changes, then there is going to be an impact on how we design digital resources to work well with that AT. And maybe this is a topic for another podcast, you know, if a screen reader can do a better job of writing a text alternative for an image than a human could do, does that mean that that no longer becomes a requirement? You know, I can see both sides of that argument. But what I do know is that certainly for some rich images, an AI tool can generate a really good description of that image. Now, that's a contentious point. But it seems like, just from the feedback on the functionality that JAWS has in terms of image description, people are really valuing what it provides, even though there is obviously a chance that something may be inaccurately described. If that gets better, then that reduces the burden, maybe especially for legacy images that have been online and still don't have alt text. So, but yeah, that's- - Well, it also transfers, to your point, David, it transfers the burden or the choice, or however you want to think of things like verbosity, onto the user. So now a screen reader user can actually say, "This is how much I wanna know about this image," instead of some developer or content creator deciding this is what should be put in there for alternative text. Which everybody's doing the best they can.
There's nothing wrong with that in today's world, but actually, just thinking through your example completely, it could be an even better scenario, right? I dunno, how do you feel about that, Ryan? You probably- - Yeah, I mean, putting the power in the user's hands to get the information they want is key. And that's really what this is allowing us to do. Yeah, it is great to have a professionally-written alternative description of something, but that's only covering the one scenario that that alternative text covers. What if I wanna know something different about that image for whatever reason? And so having the AI there to now let me start customizing what I want to know is really powerful. Whereas the alternative description gives me the base, I know generally what this is, and now I can explore it more. Or if I'm using assistive technology that doesn't have access to the AI, I still need a way. Not everyone has access to this, right? And so you have people using legacy AT and things that don't have the AI yet, and that will continue for years to come. And so you really do need both. And AI is helping us meet in the middle a little bit more. And again, as you said, we could have a whole separate discussion on this topic. But I see AI as helping us decrease the divide between assistive technology and digital accessibility. There's always a gap there, there always will be. AI is helping AT move forward to close that gap, and it's also helping the digital accessibility world to scale accessibility a little bit better than maybe was done before. But that's just kind of my general broad-stroke view of it. - And I think it's really important to emphasize that we can't get too excited about what the cutting edge could do for people who have the means to access that. It, you know, it's not like we can just abandon certain accessibility concepts because, oh, AI fills in those gaps now. Not everyone has access to those AI solutions and may not do for a long time yet. - Yeah, exactly. - It's exciting though. - Absolutely. - Really exciting. And I do, I mean, just overall, the concept of transferring sort of the power, if you will, back to the user, if that is the big net effect of what AI does for assistive technology, that's gonna be a positive. Because of everything that Ryan said, like being able to say, "This is what I wanna know about this image." And I assume that with Picture Smart in JAWS right now, can you push against it? Can you say, "Hey-" - Yeah, absolutely. - "Does this image contain, you know, the number two somewhere?" Because for some reason you're looking for the number two. You can, I mean, that to me is brilliant if it can do all that. - Yeah, I mean, so you can ask it questions, you can ask follow-ups, you can argue with it if you're thinking, "I don't know if that image really contains that." You ask it, and it'll say, "Yeah, it does, and here's why I think it does." Or sometimes when the AI's wrong, it'll say, "Nope, you're right. It doesn't actually have that in the image." So you can have a full conversation. It's like talking to a picture or a chart or whatever. I mean, when you think of presentations, like, think about education. I know so many students are using this, dealing with charts and graphs in math or economics or other social studies, things that they didn't have access to before and they had to depend on a person to describe it. And they were subject, then, to the bias of that person.
Now, they can interrogate and ask questions, and they don't feel like they're wasting, you know, taking up someone else's time. They're not being a hindrance to a peer or a teacher in their mind. They're freely able to interact with these things. That's really, I mean, I've seen so many younger people benefiting from this, and it's really just blown their minds how they can participate with their peers now in discussions around charts and graphs and images and things. - I think you also just said, Ryan, that Picture Smart could save your marriage, because you can go argue with it instead of arguing with your spouse. Is that what you just said? - Well, so it's funny, because shopping is one of those things that, you know, for people who are visually impaired, when you're going to buy clothes, right? And the alternative text says it's a pair of pants, or whatever. If I'm gonna buy something for my wife, I want to know what it actually looks like. I don't wanna know that it's just a, you know, black pair of pants, right? Or whatever the thing is. I use the example all the time where I was buying a coffee maker a couple years ago and I wanted to make sure it had buttons. I didn't want one with a touchscreen. And so I used our AI to ask it questions. "Does this have buttons or is it a touchscreen?" And it said, "Yeah, this thing has five buttons on it." And like, those kinds of practical examples are just, they're just practical things that change people's lives and make people more productive and independent. - And I love the way that it will help people understand how to interact with images. You talked about asking images questions. You know, that's a skill that people can develop to really get the maximum out of something that was available before but, you know, gave a fraction of the potential information, perhaps because there was one alt attribute that said something pretty simple and gave a very short summary. Now encouraging people to use an AI tool to question, to interrogate an image and gather really rich information from it, I think that's really cool. - Yeah. - It is. Well, and the educational implications that Ryan mentioned, just for students to be able to, because I think when you're in those years of your life, you're naturally curious, too. So who wants to just settle for what somebody decided you need to know about the image? I'm sure students love to be able to dig into exactly what the graph is saying or exactly what the image is, or whatever else that I probably can't even think of. We're a little bit over time, so I want to find out, Ryan, if you have anything final you wanted to add, and David, if you had anything final you wanted to ask before we officially wrap up? - No, I think this was great. I mean, there's so much happening. So I think, kind of, it's akin to, like, a rollercoaster. We've gone up the hill, we saw the hill, you know, when you hear the click sounds when you're on a rollercoaster and you know you're going up, that was about 12 months ago. We've gone over the top, and now I think we're coming down that first hill and we're still accelerating. I suspect there's gonna be loops, there's gonna be rolls ahead, probably more hills up and down. But like, we're on the journey now, which I think is really exciting, and also a little terrifying too, to be honest. Because we have to figure out where this goes and we have to make sure that we're supporting people and making ourselves more independent.
But it's a fun journey and I'm really excited that we're actually on it and it's moving now. - I love that analogy. And that's that noise I can hear, all that whooping and hollering- - That's right. - On the AI and AT rollercoaster. - [Ryan] Yeah. - Well, yeah, I mean, I guess- - Arms up in the air. - [Ryan] Yeah, that's right, no hands, no hands. - So I know, I mean, there's gonna be a lot more conversation about AI and its impact on AT and on digital accessibility, and we're gonna be returning to this topic later in the year in the podcast. But Ryan, thank you so much for your time and insights today. It was great to talk to you, and I'm sure we'll want to have you back on to talk more later in the year. - Excellent, thank you both. I appreciate it. - Yeah, thank you so much, Ryan. It really was a fascinating discussion, and I just, I love hearing both of the perspectives you bring, as an AT user yourself and as an expert in this field who's investigating and looking at these things. So thank you, thank you so much. Well, now you know the state of accessibility. I am Mark Miller, thanking David Sloan and Ryan Jones, and reminding you that the state of accessibility is always changing, so please help us effect change.