- [Mike] Thank you for joining us today. We'll begin momentarily. All right, good morning, good afternoon everyone. My name is Mike Mooney, I am the Digital Marketing Manager at TPGi. Excited for today's session, A Webinar of Anxiety: Developments in Accessibility for People with Anxiety and Panic Disorders, with our guest and presenter today, Dr. David Swallow. Before we get started, I just want to go through a few housekeeping items. This session is being recorded and we will email everyone the recording after the event. We do have live captions available, so feel free to use those as needed. And lastly, if we have time towards the end of the session, Dr. Swallow will answer Q&A, so please use the Q&A box provided by Zoom to submit your questions so we can keep track of them on our end. And if anyone needs any accessibility support, training, or usability testing, please reach out and follow up with us, and we can discuss those options with you. And with that, I will let Dr. Swallow get started. David? - [David] Okay. Thanks, Mike. So welcome to A Webinar of Anxiety. I'm David Swallow, I'm Principal UX Consultant at TPGi, and today I'm going to be giving you an overview of developments in accessibility for people with anxiety and panic disorders. And I'm just going to turn my camera off, 'cause I get a bit of slowdown otherwise, and see how it goes. So awareness of this topic has grown in recent years: high-pressure sales tactics and deceptive design patterns have caught the attention of regulators, lawmakers, and researchers. Banks have introduced new measures to support vulnerable customers. Social media companies have recognized the impact of their services on users' mental health. But has this made any difference? Is the web a less stressful place? Let's find out. I just wanted to make a quick note about my choice of language in this webinar. You might notice I use the term deceptive patterns rather than the perhaps more familiar dark patterns. While the latter is commonly used, and has been for years, the term also reinforces the idea that dark is bad, which is problematic and something that the industry as a whole is striving to address. Even Harry Brignull, who coined the term dark patterns, has renamed his concept deceptive design. So that's what I mean by deceptive design patterns. Okay, before we go any further, I'll just explain what I mean by anxiety and panic disorders. For this, we turn to the Diagnostic and Statistical Manual of Mental Disorders, or DSM-5, which is the industry handbook for classifying mental disorders. So anxiety disorders comprise a range of mental illnesses that are characterized by excessive feelings of fear, apprehension, and dread. For example, you've got social anxiety disorder, which is the intense fear of being embarrassed, humiliated, or judged negatively by others in a social or work setting. You've got claustrophobia, which is the fear of confined spaces. Agoraphobia, the fear of being in a situation that would be difficult to escape from. Health anxiety, which is where people have a preoccupation with the idea that they have, or will have, a physical illness. And then panic disorders describe sudden, frequent, and intense feelings of panic or fear, sometimes for no clear reason. People often have more than one anxiety disorder, and they may also have other psychiatric conditions such as depression; the most common combination, at least in the UK, is mixed anxiety and depressive disorder.
So that's a very brief, very brief overview of the types of disorders we're talking about. It's in no way exhaustive, and in no way am I any kind of medical professional. Now, anxiety disorders are the most common mental illness in the US; they affect 40 million adults in the US alone, or 18.1% of the population, every year. Now, I previously approached this topic from a medical perspective, focusing on people with clinically diagnosed anxiety, but it soon became clear from the feedback that I received on blog posts and previous talks that anxiety is something that impacts pretty much everyone to some extent. And many of the issues that I raised struck a chord with a broad cross-section of people, suggesting that you don't need a clinical diagnosis of anxiety for these things to make you anxious. In her book, "Anxiety For Beginners: A Personal Investigation", Eleanor Morgan writes, "If someone tells you they never experience anxiety, they're either A, lying or B, a sociopath, really." And in the latest version of the Diagnostic and Statistical Manual of Mental Disorders, antisocial personality disorder with psychopathic features is described as being characterized by a lack of anxiety or fear. So just something to bear in mind. All this is to say this isn't exclusively for people with clinically diagnosed anxiety and panic disorders. So in a way, this is for everyone. Almost four years ago, when I first became interested in this topic, I took the very scientific approach of asking Twitter. I said, "Generally speaking, what features of websites and apps make you feel anxious or stressed?" And this generated a surprising number of responses and made me realize that perhaps I'd touched upon something that impacts many people. And then, after asking about it on discussion forums, message boards, and social media, and talking to people with anxiety and panic disorders, several common themes emerged, which I wrote about in a series of blog posts, spoke about on podcasts, and talked about at conferences and meetups. I've generally just been trying to bang the drum about it for the last few years. Now, during this time I've been keeping tabs on the topic and monitoring any developments, and there have been various initiatives to try and address some of these issues. And I also still keep periodically checking in on Twitter. So now, four years on, I thought it would be a good time to provide an update on these developments and revisit these themes. We'll take a look through the four themes I identified, consider whether there's still a problem, and look at what, if anything, has been done to address them. And I'm not suggesting that these are the only four things that trigger anxiety; they're just a snapshot based on the most commonly cited causes of anxiety from a small sample of people. They're also not universal anxiety triggers: anxiety is very personal and subjective. That said, the first theme, urgency, was cited by a lot of people I spoke to and was really what sparked my investigation in the first place. So anything that provokes a sense of urgency or scarcity was a commonly cited source of anxiety. And one of the most common ways of provoking a sense of urgency or scarcity is through countdown timers and persuasive notifications. So anyone booking a holiday, if you still do that anymore, will have encountered persuasive notifications. Things like "Hurry, only two tickets left." Or "Book now as six other people are viewing this hotel."
And we've all fumbled to find our credit card details as an ominous timer counts down the number of minutes left to complete a transaction. So the web is awash with these so-called deceptive patterns, designed to convert visitors and part them from their money. And while they can be a source of irritation or even stress for many people, they can be complete showstoppers for people with anxiety or panic disorders. But are they still a problem today? Yes, they certainly seem to be. When I recently asked again on Twitter about features of websites and apps that make people anxious or stressed, numerous responses referred to time limits when booking or buying tickets, unreasonable time limits on form submissions, "10 other users are looking at this offer." And it wasn't just countdown timers; people also mentioned videos that play automatically on Netflix, animations and auto-playing anything, moving carousels, "anything moving that I didn't ask to move, be it videos or gifs or whatever." And notifications, and the pressure to respond to them, are a common source of anxiety for many people. It's basically anything that creates a sense of urgency that makes many people feel hurried and harassed. Now, instilling a sense of urgency and scarcity as a sales tactic is undoubtedly a common deceptive pattern that still persists on many websites. And there are now even browser extensions that are designed precisely for stripping out these persuasive notifications and creating a less stressful experience. So there's one called ShutUpBooking.com, another one called No Stress Booking, and this one, Booking.com De-Stressor, which claims to improve the user experience by not only removing persuasive notifications, but also rewording certain messages so that they sound less anxiety-inducing. Interestingly, the author does caution about throwing the baby out with the bath water, noting that some of Booking.com's messages are useful, if a bit alarmist. He then explores how to allow the helpful notices to get through while filtering out the unhelpful ones. Now, this is an interesting point to make, and it raises the question of whether such tactics are not all bad. So is a little anxiety ever a good thing? This is a question that's vexed regulators, lawmakers, and researchers in recent years as they've tried to regulate and legislate against deceptive patterns. For instance, the high-pressure sales tactics of booking websites have caught the attention of the Competition and Markets Authority, the CMA, in the UK. Ruling that such practices are misleading, aggressive, and wholly unacceptable, the CMA has brought enforcement action against six leading websites in this sector. Not that this seems to have had much of an effect. Anecdotally, I certainly still encounter these tactics. There have also been articles such as this one, suggesting hotel booking websites are still duping customers despite the CMA clampdown. Now, accompanying the ruling by the CMA was a set of principles, and these clarify the CMA's position on what online hotel booking companies need to do to ensure that they comply with consumer law. There's a lot in there, but it describes how persuasive notifications are permissible under certain conditions. For instance, principle 29 says statements about popularity and availability must: A, be clear; B, disclose the assumptions, limitations, and qualifications that are relevant to the statement; and C, be substantiated by the hotel booking website's data.
So again, this seems to be an acknowledgement that a little anxiety can be a good thing, or at least an acceptable side effect of certain design decisions. So these high-pressure sales tactics and deceptive patterns undoubtedly still exist and are still a huge source of anxiety. But what if there is a genuine reason for creating that urgency? What if the number of rooms left is genuinely small? What if this information would be genuinely useful to users looking to book? It really must be based on actual data, though, otherwise people might start getting suspicious. So Ophir Harpaz, a security researcher and blogger, was in the process of booking a flight and encountered a persuasive notification that said 38 people are looking at this flight. Thinking this seemed like a lot, Ophir decided to check how they came up with 38. So they went into dev tools to inspect the element containing this curious number and found that its class name was view_notification_random. They then explored the algorithm behind it and discovered that the site was simply choosing a random number between 28 and 45, with the intention of nudging people towards booking their flight faster. So is a little urgency a good thing? I'd still recommend that we remove unnecessary time limits and countdown timers and give users enough time to comfortably complete their tasks. But providing it's presented clearly, is based on actual data, and provides a useful service to users, then a little urgency might not be such a bad thing.
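To make that concrete, the logic behind such a counter might look something like the following minimal sketch. It is a reconstruction for illustration only: the class name comes from the story above, but the function names and everything else are assumptions, not the site's actual code.

```typescript
// Illustrative reconstruction of a random "viewers" counter of the kind
// Ophir Harpaz uncovered. Only the class name is taken from the story;
// everything else here is assumed for the sake of the example.
function randomViewerCount(min = 28, max = 45): number {
  // A random integer in [min, max] with no connection to real traffic.
  return Math.floor(Math.random() * (max - min + 1)) + min;
}

function showViewNotification(container: HTMLElement): void {
  const note = document.createElement('p');
  note.className = 'view_notification_random'; // the class name that gave the game away
  note.textContent = `${randomViewerCount()} people are looking at this flight`;
  container.appendChild(note);
}
```

The honest version of the same feature would derive that number from actual session data, and say nothing when the data isn't there, which is essentially what the CMA principles quoted above require.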
So let's take a look at the next theme, which is unpredictability. The unpredictable nature of certain websites and apps was another commonly cited source of apprehension, and this manifests in a variety of ways. One example is when interfaces themselves don't do what you expect them to do. I previously cited the example of Instagram where, unlike most websites, double-tapping on an image unexpectedly likes the post rather than zooming the image; then you have the added anxiety of trying to unlike the post and wondering whether the owner will be notified. This issue of accidental likes is definitely still a problem today. As one Twitter user commented, "I have a hand tremor, just enough so this happens a couple of times per day. I didn't mean to like the post announcing your tragic news, I was just trying to scroll past it." One respondent feared that they might accidentally share something or unknowingly allow an app or a site to post to Facebook on their behalf. Another respondent mentioned the infinitely scrolling newsfeed of Twitter and said, "I'm always worried that I'll accidentally tap the screen, causing it to go back to the top of the page, and I'll never be able to find the post I was in the middle of reading. This happens at least a couple of times a day." Several people said they were often reluctant to click on links, the very building block of the web, either because they might not be able to return to the previous place using the browser back button, or simply because they didn't know what would happen when they clicked the link or where it would take them. So unpredictable user interfaces are one aspect of this. Another aspect is online forms. We'll all be familiar, I think, with trying to decipher the doublespeak of opt-in or opt-out marketing checkboxes. "Tick this checkbox if you do not want to receive our newsletter." And this on the slide is a currently live form from an insurance provider in the UK. There's a disclosure widget that says, "Tell us if you don't want to hear from us." And hidden within this, it says, "If you would prefer not to receive communications about our products and promotions, please indicate below. You can always tell us if you change your mind." And then it gives you checkbox options for email, post, et cetera. So the uncertainty and unpredictability of this sneaky deceptive pattern continues to be a common source of anxiety. Now, the good news is such shady practices should have been outlawed, at least in Europe, by the European General Data Protection Regulation, the GDPR. The GDPR is a legal framework that sets guidelines for the collection and processing of personal information from individuals who live in the EU, and amongst various other requirements, it requires consent to be opt-in: organizations need to seek explicit consent to use people's information. So in theory, there should be no more "Please tick this box if you do not wish to receive further information from us" type situations. Now, I say in theory, because the GDPR is taking a while to have an impact. The law has certainly increased data protection awareness, but big issues remain over implementation and enforcement. There's a huge backlog of unaddressed complaints, big tech is reluctant to ditch its data-harvesting business models, and actually greater consent has resulted in greater anxiety for some. Even if you only spend a tiny amount of time online, it's impossible to escape cookie consent notices. GDPR was meant to make web tracking easy for everyone to understand and give us more control over how our data is handled. Four years after its arrival, cookie consent notices are an absolute blight on the web. Many respondents cited cookie pop-ups as a source of irritation and stress. One said, "I wonder how people feel about consent requests. They are often in a language many won't understand, as they're vague, so some worry that they've consented to too much, and often they've not even been implemented in an accessible way." One respondent said on Twitter, "Many of them obscure the content and can't be clicked with my accessibility tooling. So I guess I'm not reading that article or using that website." And similar tactics apply when it comes to purchasing or signing up for a newsletter or a premium option or subscription. Guilting users into opting into something is referred to by the deceptive design website as confirmshaming. The option to decline, whatever it is, is typically worded in such a way as to shame the user into compliance. It's also styled to be practically invisible in comparison to the massive confirmation button. And if you go to confirmshaming.tumblr.com, you'll find some hilarious examples of this practice. Several people, including my mother, have been in touch to tell me about the anxiety-inducing practices of a certain large retailer, particularly its burning desire to get you to sign up to its membership scheme at any cost. Now, these practices defy many established UX principles. They fail to make important information clear, such as whether you've actually signed up or not and what you're actually paying for. Crucially, they fail to explain to users what will happen, leaving them confused about next steps and uncertain about the consequences of their actions. And it's this confusion and uncertainty that's a huge trigger of anxiety.
Now, as with the hotel booking websites, these practices haven't escaped the attention of the authorities, and following complaints, the UK's advertising watchdog, the Advertising Standards Authority, has made several rulings, deeming such practices unclear and misleading and insisting that sign-up options must be presented clearly in future. And the European Parliament's Committee on the Internal Market and Consumer Protection, IMCO, recently held a specific hearing on the risk of harm arising from the use of deceptive patterns. This was a very, very interesting session, which highlighted the difficulties that lawmakers are facing in trying to regulate deceptive patterns. The tricky thing is that deceptive patterns are so varied, so they often fall into different legislative areas, making regulation very difficult. Some can be considered misleading advertising practices, others relate to the unintended disclosure of data, and this makes it difficult to create all-encompassing legislation. Different interface design strategies have different degrees of intrusiveness and different degrees of harm. So how do you quantify this? How much nagging to sign up to your premium service is too much? And also, new deceptive patterns are being developed all the time. So defining them too precisely in legislation poses a danger of the legislation quickly getting out of date, but providing too loose a definition could result in the legislation being circumvented, especially given the deep pockets of big tech, who are all too willing to find loopholes and workarounds. And there's also that fear of throwing the baby out with the bathwater again, and banning potentially useful patterns, as with the persuasive notifications. So the unpredictability of websites and apps is undoubtedly a big problem that needs addressing, but how do you define such patterns in a way that doesn't impinge upon things that help users, and how do we legislate and regulate without becoming too intrusive? Lawmakers are acutely aware of how the GDPR has proven difficult to enforce and annoyed everyone with cookie pop-ups. So moving on to the next theme, which is powerlessness. And this goes hand in hand with the unpredictability of websites and apps: it's the sense of powerlessness that they can provoke. Hiding key information, such as contact details or account deactivation instructions, in difficult-to-reach corners of websites can result in users feeling helpless. And similarly, targeted advertising is another commonly cited source of anxiety, and many respondents feel powerless to stop the intrusion of those ads. So a classic example of hiding account deactivation instructions is Facebook, right? It's buried deep within the website. And even if you do manage to find the option and try to deactivate your account, it brings up photos from friends' accounts that try to guilt you into staying. So again, it's that confirmshaming deceptive pattern at play: don't you want the benefits of staying in touch with your friends? Do you hate your friends or something? And it's not just Facebook, lots of sites do it. I recently tried to cancel my HelloFresh recipe box subscription and had to click through about half a dozen screens of "Are you sure you don't want to stay? What about this?" and so on. Putting options out of reach in this way is a prime example of friction, and friction in UX design is anything that prevents or gets in the way of users accomplishing a task.
And powerlessness can often stem from unnecessary friction, and this is a common source of anxiety. Now, I've previously recommended that we remove this friction in user interfaces, put important information up front, and make it easier for users to make contact in a way that suits them. But what if there is a reasonable justification for introducing friction? Giving control to users might actually be as much about applying friction as removing it. So last time, I talked about how designers at the UK online bank Monzo have thought about deliberately applying positive friction to financial transactions to better support customers with mental illnesses. To avoid people with bipolar disorder making unnecessary, frivolous purchases late at night, Monzo prototyped a method of delaying transactions until the following morning, allowing time for users to reflect on the purchases. And this approach is something that's really snowballed in the last couple of years, particularly to support customers in vulnerable circumstances, such as having a gambling addiction. Several banks in the UK have introduced gambling blocks, allowing customers to block spending with recognized gambling merchants. And to overcome the impulse to turn off the gambling block, some banks add even more friction by requiring customers to talk to customer support or by imposing a further delay. So these are all applications of positive friction, or deliberate powerlessness, to ultimately improve the user experience. And just as a side note, for advice on how the Web Content Accessibility Guidelines can support people with gambling addictions, I can highly recommend a blog post by my colleague, Liz Certa. I haven't put the URL on there, but if you search for "problem gamblers TPGi", you'll find it.
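As an illustration of what positive friction can look like in code, here is a minimal sketch of a payment decision that defers, rather than declines, certain transactions. It is not how Monzo or any bank actually implements this; the types, category names, thresholds, and timings are all assumptions made for the example.

```typescript
// A minimal sketch of "positive friction" applied to payments. Illustrative
// only: the types, category names, thresholds, and timings are assumptions.
interface PaymentRequest {
  merchantCategory: string; // e.g. 'gambling', 'groceries'
  amountPence: number;
  requestedAt: Date;
}

type Decision =
  | { action: 'process' }
  | { action: 'defer'; until: Date; reason: string };

function nextMorning(from: Date): Date {
  const d = new Date(from);
  if (d.getHours() >= 9) d.setDate(d.getDate() + 1); // already past 9am, so wait until tomorrow
  d.setHours(9, 0, 0, 0); // 9am, when the customer can confirm or cancel
  return d;
}

function decide(payment: PaymentRequest, gamblingBlockEnabled: boolean): Decision {
  const hour = payment.requestedAt.getHours();
  const lateNight = hour >= 23 || hour < 7;

  // Gambling block: hold spending with recognised gambling merchants.
  if (gamblingBlockEnabled && payment.merchantCategory === 'gambling') {
    return {
      action: 'defer',
      until: nextMorning(payment.requestedAt),
      reason: 'Your gambling block is on. Confirm in the morning if you still want this.',
    };
  }

  // Late-night cooling-off period for larger purchases.
  if (lateNight && payment.amountPence > 10_000) {
    return {
      action: 'defer',
      until: nextMorning(payment.requestedAt),
      reason: 'Held overnight so you can confirm or cancel with fresh eyes.',
    };
  }

  return { action: 'process' };
}
```

The same idea extends to turning the gambling block off: adding a delay or a conversation with support before the block is lifted, as some of the banks mentioned above do, is just another deliberate deferral.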
Something I also mentioned last time was targeted advertising, and this is something that continues to be a commonly cited source of anxiety. One respondent described targeted ads as "the bane of my life", and they noted that ads on YouTube are "probably the worst offenders for me, especially suddenly being randomly assaulted by them at intervals throughout an interesting video. I find many ads that seem to be forced on us against our will a kind of invasion of our personal space." Yet targeted advertising is not universally loathed. A powerful article by Payal Arora, called "The biggest myths about the next billion internet users", examines the digital habits and behaviors of people in developing countries and busts some common myths and misconceptions, one of which centers around privacy. A quote from the article says, "I found young people from the slums of Ludhiana and Hyderabad generally enthusiastic about Facebook's highly targeted advertisements. They felt special. They felt recognized. They felt respected." One person remarked that targeted ads made his time online more efficient, crucial when you have a limited data plan. So is a little powerlessness a good thing? I think it depends really on the intentions behind the friction. Reducing the likelihood of user error or protecting vulnerable customers is one thing, but if it's done simply to track users, or maintain engagement, or to funnel them towards a contact method that leaves them more vulnerable, then it may not be so acceptable. So while I'd still recommend that we strive to remove unnecessary friction in design, it's not just a case of making a blanket decision to create completely friction-free experiences. I suggest that we also consider where it might be appropriate to apply friction as well. These are just minor speed bumps in user journeys that are offset by the reassurance and comfort they provide to anxious users. Which brings me on to my next and final theme, which is sensationalism. And this relates not to UI design, but to content. So we're living in an era of so-called fake news and media sensationalism, where information is deliberately distorted to push agendas and generate business. And this is something that a number of respondents mentioned as a particular source of anxiety. Relying on sensationalism, speculation, and exaggeration as a sales tactic is undoubtedly a common deceptive pattern, and this style of reporting has certainly not gone away. It's particularly the case for medical information, where irresponsible or careless reporting can have serious consequences for anyone with health anxiety. One respondent said, "I hate reading a cute story about dogs and then randomly getting slapped with a link to an article entitled 'Putin threatens to nuke', for example. This is one of the main ways that I accidentally stumble across events which trigger my anxiety in recent times." And another respondent described how websites take advantage of people not being able to resist the lure of reading articles which are anxiety triggers, due to the morbid fascination element. That's how these websites operate. They know that anxious people find it hard to resist triggers such as breaking news. More often than not, breaking news is pretty routine news anyway; it used to be called news. Now everything is sensationalized for dramatic effect. And a very topical example is the coronavirus pandemic. Now this, at least in the UK press, and I'm sure elsewhere, has been reported in very sensational and alarming tones. From the early days of the pandemic, we saw headlines such as "UK on killer virus alert" and "World War Flu", which do absolutely nothing to help the situation. While it might shift more newspapers or gain more clicks, all it does is raise the anxiety of people who are sensitive to these things and erode trust in the media. The social media giants, Twitter, Facebook, Google, et cetera, have attempted to fight the spread of misinformation, in part by trying to promote authoritative sources. So if you search for COVID or coronavirus on Twitter, you're met with a banner that reads "Know the facts", with a link to your country's health service. And all this is not to say that there's no cause for concern, absolutely not. You know, millions of people have died from the coronavirus, but reporting things in this way just raises anxiety and causes panic unnecessarily. Another place we see sensationalism is in social media. Now, social platforms have been engineered to incentivize building a following and going viral, but what earns likes and follows, more often than not, is what provokes or surprises. So rather than encourage actual discussion, the like-and-share model encourages provocation and antagonism. And when this is coupled with algorithms that serve up more of the same, what we see represents not a balanced take on a topic, but an increasingly narrow and increasingly sensational slice of it. But it doesn't have to be this way. We're already seeing the beginnings of a movement in a different direction. Instagram, for instance, is experimenting with hiding its like counter and even its follower counts.
There's a trial underway that removes the iconic hearts from posts, leaving followers with no idea of how many or how few likes an update has received. It's a simple tweak, but it shifts the focus from a popularity contest to, hopefully, more meaningful engagement, and it discourages sensationalism and the pursuit of vanity metrics. And as I mentioned, sensationalist reporting, particularly in coverage of health and medical issues, is definitely a problem. But what if there's a genuine reason for creating that sensationalism? What if we need that sense of alarm to spur us into action that we might not otherwise take? And what if sensationalist reporting or advertising conveys an important message that has a lasting and positive impact? Now, it's undoubtedly a complex issue to address, particularly as even reputable sources can appear to get it wrong. I used this example last year, which is an NHS Be Clear on Cancer poster campaign from 2015, which one respondent took issue with. It has a picture of a doctor holding a sign that says, "Just tell me." And it says, "If you've had blood in your poo or looser poo for three weeks, your doctor wants to know. It could be the early signs of bowel cancer. Finding it early makes it more treatable and could save your life." Now, according to the respondent, this was an example of overemphasizing the worst-case scenario. Delivering accurate and timely medical information without causing unnecessary panic and anxiety is a difficult balancing act. Alarming headlines may be distressing for some users, but that must be weighed against the raised awareness and the number of lives potentially saved. The Be Clear on Cancer campaign is a few years old now, and there's been considerable evaluation of it. The evaluation found a statistically significant increase in awareness of the campaign message and a statistically significant increase in urgent GP referrals for suspected cancer. So as alarming and sensationalist as these posters might be, this campaign has raised awareness and ultimately saved lives. So in the majority of cases, I think there is no place for sensationalism. And similarly, I think that the shift in focus away from likes and follower counts, and the sensationalism that these metrics drive, is a positive development for mental health. But it's important we don't shy away from difficult subjects that people might not want to hear about. I think, provided it's handled in a sensible, responsible, and accurate way, and there's a genuine reason for it, then some degree of sensationalism is perhaps appropriate. And as with each of these other things, it boils down to the intentions behind it really: if it's ultimately providing benefit to users, then perhaps a little sensationalism is a good thing. So I think it's fair to say that anxiety is still a problem on the web. As one respondent puts it, "I don't think that such things have improved. In fact, they seem to have upped the ante over the past few years, in my opinion. I can often sense something rather Orwellian about a lot of it." But with regards to what we should do about it, as we've seen, it's not always clear-cut. Simply attempting to ban these deceptive patterns could result in restricting potentially useful functionality, and attempting to define deceptive patterns in a way that is meaningful and future-proof is fraught with complexity. So what can we do about it? Well, we could just do nothing.
And this is exactly the position taken by one person who got in touch with me. They argued that much of what I was recommending was generally good advice, but that it might be problematic for people with clinical anxiety. Deliberately removing anxiety triggers from websites, they argued, is essentially encouraging avoidance, which, according to the cycle of anxiety, may result in short-term relief from anxiety but ultimately leads to long-term anxiety growth. So instead, they suggested that the triggers should remain and that people with anxiety should be encouraged to confront their triggers. Now, I'll admit this seemed counterintuitive to me, as surely anxiety triggers on the web don't have to exist at all; they don't have to become triggers if they don't exist. And designers and developers are responsible for shaping user experiences from scratch and should surely be looking to remove or avoid any kind of barrier rather than deliberately leaving them in. That said, I'm no expert here and I'd hate to be recommending actions that conflict with established medical guidance. So I'm keen to get people's views on this issue. Does removing anxiety triggers on websites essentially encourage avoidance? Please drop something in the chat. But if we were to do something about anxiety, then what? Following established guidance is always a good approach, and in my blog posts and talks, I've referenced various resources, including the Web Content Accessibility Guidelines. Despite not explicitly mentioning anxiety and panic disorders, WCAG includes a number of success criteria that address many aspects of cognitive accessibility. For example, there are techniques to turn off or extend time limits and to encourage clarity in forms (a sketch of one such technique appears a little further down). TPGi's Inclusive Design Principles are particularly relevant and can help to avoid anxiety-inducing practices in design. For example, the principle "give control" encourages the notion that people should be able to access and interact with content in their preferred way. And the UK Home Office team produced a series of dos and don'ts posters of inclusive design best practices for different user groups, one of which was designing for users with anxiety. So using these resources will go a long way towards creating less stressful experiences. However, with the exception of that dos and don'ts poster, these resources weren't written with anxiety and panic disorders in mind. In fact, the diverse nature of cognitive and learning disabilities has always been a bit of a challenge for accessibility resources such as WCAG. Now looking to address this is the W3C WAI Cognitive and Learning Disabilities Accessibility Task Force, known as COGA. The task force is working to produce techniques, understanding documents, and guidance that address the cognitive space, including anxiety and panic disorders. And one of its publications is a comprehensive resource that provides tailored design patterns and technical guidelines, known as Making Content Usable for People with Cognitive and Learning Disabilities. So, for example, you've got Objective 1, which is to help users understand what things are and how to use them. Objective 3 is to use clear and understandable content. And many of these have clear parallels with the issues that I've mentioned today.
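Picking up the WCAG timing techniques mentioned above, here is a minimal sketch of a session time limit that warns users and lets them extend it, in the spirit of success criterion 2.2.1 Timing Adjustable. The names, timings, and use of confirm() are illustrative assumptions; a real implementation would use an accessible dialog.

```typescript
// Minimal sketch of an adjustable session time limit, in the spirit of
// WCAG SC 2.2.1 Timing Adjustable. Names and timings here are illustrative.
const SESSION_LIMIT_MS = 20 * 60 * 1000;  // 20-minute limit
const WARNING_BEFORE_MS = 2 * 60 * 1000;  // warn 2 minutes before it expires

let warningTimer: number | undefined;
let expiryTimer: number | undefined;

function startSessionTimer(onWarn: () => void, onExpire: () => void): void {
  if (warningTimer !== undefined) window.clearTimeout(warningTimer);
  if (expiryTimer !== undefined) window.clearTimeout(expiryTimer);
  warningTimer = window.setTimeout(onWarn, SESSION_LIMIT_MS - WARNING_BEFORE_MS);
  expiryTimer = window.setTimeout(onExpire, SESSION_LIMIT_MS);
}

function handleWarning(): void {
  // A real implementation would use an accessible dialog rather than confirm().
  if (window.confirm('Your session ends in 2 minutes. Do you need more time?')) {
    startSessionTimer(handleWarning, handleExpiry); // extend by restarting the clock
  }
}

function handleExpiry(): void {
  // Save the user's work before signing them out, rather than discarding it.
}

startSessionTimer(handleWarning, handleExpiry);
```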
But I should stress that Making Content Usable is currently supplemental guidance that's beyond the requirements of WCAG, so following the guidance is not required for WCAG conformance, but it will increase accessibility for people with cognitive and learning disabilities. Something I've mentioned before, but I'll mention again, is Design Patterns for Mental Health. This is a public domain pattern library that aims to improve the design of online mental health products and services. Essentially, it's a set of evidence-based, positive design practices designed to counteract the web's deceptive patterns. It's a resource that's been around since 2018, but thanks to an injection of funding, it's recently been updated. Like any of these resources, the patterns are a work in progress and they're always looking for people to contribute, as you might have noticed from the big contribute button in the header there. So if you're interested in that, do get involved. And something that I'm consistently asked about when I've given these talks is whether it would be possible to design some sort of tool or plug-in to tackle anxiety. I know researchers who are exploring the idea of a calm mode for web browsers that is less anxiety-inducing. Some people use reading mode, or reader view, in browsers for this purpose, as it strips away a website's formatting and empty space and ads and all the other anxiety-inducing clutter, to help you concentrate on the text. One person I spoke to relies upon ad blockers in a similar way to help reduce potential triggers. There are distraction-free word processors such as Calmly, which strips away all but the essentials of writing. There are also calm modes in video games, which promote a less stressful experience; that's a whole different talk. I've even heard of cars that have a calm mode, which turns off everything except the speedo to avoid nagging distractions and allows you to focus on driving. But one person wondered whether you could have something similar to the prefers-reduced-motion CSS media query, which is a way of signaling to browsers to minimize the amount of non-essential motion for the benefit of people with motion sickness and vestibular disorders. So yeah, something similar to that, for anxiety.
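For comparison, honouring the existing prefers-reduced-motion preference is already straightforward, both in CSS and, as sketched below, in script via the standard matchMedia API. The carousel functions and the CSS hook class are assumptions for illustration; a hypothetical prefers-reduced-anxiety query could be consumed in exactly the same way, but no such media feature exists today.

```typescript
// Honouring the existing prefers-reduced-motion preference from script.
// The matchMedia API and the media feature are standard; the carousel
// functions are assumed, page-specific stand-ins.
function stopCarousel(): void { /* pause auto-advancing carousels, autoplaying video, etc. */ }
function startCarousel(): void { /* resume them for users who haven't opted out */ }

const reducedMotion = window.matchMedia('(prefers-reduced-motion: reduce)');

function applyMotionPreference(prefersReduced: boolean): void {
  if (prefersReduced) {
    stopCarousel();
    document.documentElement.classList.add('reduced-motion'); // hook for CSS to disable animation
  } else {
    startCarousel();
    document.documentElement.classList.remove('reduced-motion');
  }
}

applyMotionPreference(reducedMotion.matches);
reducedMotion.addEventListener('change', (event) => applyMotionPreference(event.matches));
```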
Now, I'll admit that a prefers-reduced-anxiety or prefers-reduced-tension query is a really appealing idea, but my response is usually the same: where do you start with that? Getting companies to comply would be difficult. Deceptive patterns, at least in the short term, are simply too profitable, and many companies care little about how anxious or stressed their websites make you feel. Plus, there are so many triggers for anxiety, and different triggers affect people in different ways. How would you tackle them all? The Booking.com plug-ins that I mentioned at the start could be seen as a first step, but they target a very specific instance on a very specific website. You could take the approach of these calm modes and kind of strip away everything, but there's a slight sense of throwing the baby out with the bath water, plus some anxiety triggers can't be taken away so easily. And that takes us back to the problem that legislators are having: how do you pass legislation that realistically addresses all current and future deceptive patterns, and which does so without removing potentially useful functionality? You could certainly legislate against some of the worst deceptive patterns with some very specific language and precise definitions, but there's no way you're going to catch them all. So instead of trying to legislate for every permutation, it might be worth encouraging companies to abide by principle-based regulations. One possibility, which was raised by deceptive design expert Harry Brignull at the European Parliament discussion I mentioned earlier, was to adopt a similar approach to that used in the UK finance industry. So rather than detailed guidance or rules, companies instead are legally obliged to follow a set of principles on Treating Customers Fairly, almost like a code of conduct. UK folks may be familiar with the key facts document that you receive when you're purchasing a new financial product, which gives you a clear breakdown of all the information you need to know. That's an outcome of the Treating Customers Fairly approach. And any firm that is found not to be treating its customers fairly can find itself in very hot water and subject to eye-watering financial penalties. So perhaps a similar approach could be taken towards deceptive patterns and anxiety-inducing practices: rather than using that whack-a-mole approach of trying to precisely define deceptive patterns, instead enshrine ethical, responsible behavior in legislation and give companies the awareness to be able to make informed choices about the patterns that they employ. So, in summary, anxiety is undoubtedly still a problem on the web. We've looked at the theme of urgency, which highlights the risk of throwing away potentially useful functionality in the pursuit of banning deceptive patterns. We've looked at unpredictability, which demonstrates the difficulty of defining deceptive patterns in legislation. We've looked at powerlessness, which suggests that making the web a less stressful place may be as much about applying friction as taking it away. And we've looked at sensationalism, which cautions against removing anxiety triggers at the expense of communicating information effectively. This all just highlights what a tricky area this is to tackle and why lawmakers, regulators, and researchers are struggling to get to grips with it. As for what we can do about it, as I say, one approach could be to do nothing and encourage users to confront their fears. Existing accessibility resources such as WCAG go some way to creating less stressful experiences, and upcoming resources such as Making Content Usable might take this further. We might also tackle deceptive patterns on a case-by-case basis through various calm modes, browser extensions, and plug-ins. And whilst lawmakers might attempt to tackle some of the most egregious anxiety-inducing patterns through tightly defined laws, a more fruitful approach might be to encourage companies, through legislation if necessary, to do the right thing and treat users fairly. So in our industry, we've got these concepts of accessibility and inclusivity, and we strive to go beyond the limitations of accessibility conformance with the intent of designing a more inclusive and equitable experience for people with disabilities.
So when it comes to anxiety-inducing and deceptive patterns, perhaps we need to focus less on techniques to avoid these dodgy practices and more on encouraging companies to treat users fairly and with respect. After all, creating an environment in which users and customers want to stay and engage with your product is surely a more positive, long-term approach than relying on short-term, sneaky tricks and deceptive design to fool them into staying. And I think common to each of these themes was a need for user research and user-centered design. It's not simply about removing these potential anxiety triggers; it's about considering the risks and consequences of these triggers and weighing them against the needs of your users. So designers and developers need to make important ethical decisions about whether to include aspects of UI and UX design that may well trigger feelings of anxiety but ultimately benefit users. Okay, thank you for listening. I think we've got time for a few questions if there are any, or you can find me on Twitter @davidofyork or on LinkedIn. I think you can find people on LinkedIn. - [Mike] Awesome. Thanks, David. Yeah, there's a lot of chatter going on in the chat, so there are some really good conversations that you've stimulated here. One of the questions that I pulled out was: most of big tech seems to only be concerned about making money, how do we shift their views and have them start making meaningful updates to their software? - [David] Yeah, that's true. I mean, that's the same conclusion I had. I think it's going to be difficult, and I think, as I was saying towards the end there, it's going to need legal regulations to enforce this. And as I say, things like Treating Customers Fairly, that is a legal requirement for financial companies. So something like that, just to motivate that good behavior, I think is the way forward there. It's definitely tricky. - [Mike] Yeah. Awesome. Got another question. The first person says, "Great presentation. I'm interested in how these ideas apply in the context of higher ed." - [David] In the context of what, sorry? - [Mike] Of higher education. - [David] Yeah. I mean, a lot of the deceptive patterns are kind of commercially based, 'cause they're usually trying to sell you something. So I guess it's a slightly different approach in higher ed. I'll have to think about that. I'm not really sure where you'd see such things, 'cause the motivations are different really, with, for example, the urgency and the countdown timers and creating that hurried and harassed experience. Yeah, I'm not sure where that would apply really in higher ed, 'cause as I say, it's commercially driven, most of these things. Does the person have any ideas of where they might have encountered those? - [Mike] Well, one I can think of off the top of my head is maybe they only have capacity to accept X amount of students. - [David] Yeah. Right. - [Mike] Yeah. So that could be one scenario. - [David] Yeah. But I guess, as I was saying with that, there are limitations: if that's true and it's based on actual data, then knowing that would be more useful, surely, than hiding that information. - [Mike] Awesome. I don't see any other questions coming through the Q&A, so this might be a good time to wrap up. Actually, there's one question right here that just came through. Do you find a direct link to whether more accessible websites and platforms have any less anxiety triggers, or do these triggers stand independently of the marketing strategy used?
- [David] Sorry, could you just repeat that again? - [Mike] Yeah, no, sorry. I'm not sure if the quality is that great, but do you find a direct link to whether... Excuse me, let me start over. Do you find a direct link to whether more accessible websites/platforms have any less anxiety triggers? - [David] Oh, I see. Yeah. I don't have an answer to that, I'm afraid, no. I get the idea. I would assume that if the websites or the platforms are following those resources I mentioned, like WCAG, if they are genuinely accessible websites and platforms, then, as I said before, a lot of those criteria from WCAG do benefit anxiety sufferers, and they're less likely to be employing those deceptive tactics. I would imagine that that is the case, but I haven't noticed anything yet. - [Mike] Fair enough. Awesome. Well, really appreciate the time, David. Again, everyone, thank you for joining us today. We'll follow up with an email with the recording and any resources mentioned today. And if you do have any questions, feel free to reach out and we'll be happy to help. Have a great day. Thanks a lot, David. - [David] Thank you. - [Mike] All right, bye.