The Road to Accessible Drag and Drop (Part 2)

Previously, on The Road to Accessible Drag and Drop (Part 1), I described a wishlist of capabilities and features for accessible drag and drop: the must-haves, the should-haves, and the would-be-nices.

How much of that was achievable?

Must have

All done.

Should have

Done with one exception:

  • Non-standard state information (such as how many items were dropped) is conveyed through accessible descriptions, due to the lack of corresponding ARIA semantics.

Would be nice

Done with one exception, and a note:

  • Live regions are used for VoiceOver, because it doesn’t support dynamically updated accessible descriptions.
  • I did need a few sort-of-hacks here and there, such as using timers to control the order and redundancy of description announcements. Techniques like this are reasoned and well tested, I don’t consider them brittle, but they definitely are sort-of-hacks. (You can take your own view, once you’ve seen some examples.)

You’re welcome to use or adapt the script (by attribution-sharealike) and I’ll document all the required markup, custom attributes, styling, configuration and internationalization, in the concluding article.

But for today, I’m going to talk about how it works.

There’s a great deal to unpack here, so bear with me as I try to explain my approach, starting with the underlying concepts.

  1. First principles of drag and drop
  2. Core semantics
  3. Selection vs navigation
  4. Multiple selection models
  5. Drag and drop actions
  6. Non-standard state information
  7. Accessible container labels
  8. Top one, nice one, get sorted
  9. Exit pursued by a bear
  10. Wrapping up

First principles of drag and drop

To make drag and drop — or any complex widget — more widely accessible, it helps to step back and think of it in terms of what it achieves, rather than what it does. Start with the end-result, not the mechanics, especially when there aren’t any usable semantics to accessibly describe those mechanics.

ARIA 1.0 defined two attributes that were intended to support accessible drag and drop, namely aria-grabbed and aria-dropeffect. But these were deprecated in ARIA 1.1, and no assistive technologies ever fully implemented them anyway. There’s no point considering them any further.

But actually, that isn’t the massive stumbling-block it might appear to be, because drag and drop itself is not what it appears to be. The primary interaction here isn’t dragging and dropping, that’s only the last stage of a series of interactions, where the drag and drop itself is essentially arbitrary.

It’s really just about moving things, conceptually no different from cut and paste, for which the primary interaction is selecting things to be moved.

So when it comes to making drag and drop accessible to assistive technologies, the first and most significant question is not “How do we denote that items are draggable?” — the first question is “How do we denote that items can be selected?”

Table of contents

Core semantics

ARIA provides very strong and well-understood semantics for selectable items, using role="listbox". Each item in a listbox uses role="option", and its selected state can be conveyed with aria-selected or aria-checked.

That’s exactly what we need, so the widget’s basic role model is an ARIA listbox pattern:

<ol role="listbox" aria-roledescription="listbox drag and drop">
    <li role="option" aria-checked="true">Selected item</li>
    <li role="option" aria-checked="false">Unselected item</li>
</ol>

My initial design used aria-selected, and that’s still supported. But I changed the default to aria-checked, because I found it to have better overall support and consistency between different assistive technologies.

Note how the aria-roledescription includes the role itself.

JAWS and NVDA announce role descriptions instead of announcing the role, and I first assumed that was a good thing, so I just used "drag and drop".

But then I was advised by a colleague, who’s a regular screen reader user, that it’s better not to override the role, because they rely on that information to know how to interact with it.

(Native interaction hints, such as using arrow keys to navigate, are only provided at beginner verbosity levels, which many regular screen reader users will have turned off; they know how to navigate a listbox, they just need to know that it’s a listbox.)

Combining the two provides that cue, while also describing the extra functionality (and the syntax is exposed as an i18n string).

Another possibility was an ARIA grid pattern, creating a linear structure by having a single cell inside each draggable row (or a single row of draggable cells). That would work in JAWS and NVDA, but not in VoiceOver, because it doesn’t support interactive grids. It recognizes the role, and can navigate the cells with native keystrokes, but it doesn’t dispatch any keyboard events. Without keyboard events, scripted interactivity is impossible.

VoiceOver’s idiosyncratic support for ARIA widget roles severely limits the range of usable semantics. But the listbox pattern is well supported (on its own terms at least), as are radio groups (for limiting selection to single items):

<div role="radiogroup">
    <span role="radio" aria-checked="true">Selected item</span>
    <span role="radio" aria-checked="false">Unselected item</span>
</div>

To support navigation and selection, the container itself is focusable and handles all the keyboard events, while the current item is identified by aria-activedescendant:

<ol role="listbox" tabindex="0" aria-activedescendant="item1">
    <li role="option" aria-checked="false"
        id="item1" class="activedescendant">First item</li>
    <li role="option" aria-checked="false" id="item2">Second item</li>
</ol>

It was a 50/50 choice between this, or using tabindex="-1" on the options.

If only the listbox itself is focusable, it’s easier to handle the flow of events, especially co-dependent events such as the mousedown that triggers focus. There are many different events involved, and they have to be finely tuned. But mostly, it just felt conceptually right that only the listbox itself would be focusable, since the listbox is the widget.

However this approach has some complications of its own. There’s no native scroll-into-view behavior, as there is with focused elements, so it has to be manually implemented with viewport positions and scrollIntoView(). We also need to maintain an .activedescendant class, since there’s no CSS selector that means “the element whose ID matches a parent attribute value”. Even :has() can’t do that, short of writing a separate selector for each possible combination.
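That manual bookkeeping can be sketched roughly like this (the function name and `.activedescendant` class handling are illustrative, not the script’s real API):

```javascript
// Hypothetical sketch: update aria-activedescendant, maintain the
// class that CSS can't derive from the attribute, and emulate the
// scroll-into-view behavior that focused elements get for free.
function setActiveDescendant(listbox, item) {
  // point the container at the new active item
  listbox.setAttribute('aria-activedescendant', item.id);

  // move the styling hook from the old item to the new one
  const previous = listbox.querySelector('.activedescendant');
  if (previous) previous.classList.remove('activedescendant');
  item.classList.add('activedescendant');

  // bring the item into view, as focus would have done natively
  item.scrollIntoView({ block: 'nearest' });
}
```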

I wish we had regex-matching attribute selectors, then it would be trivial:

[aria-activedescendant%="([^\"]+)"] [id%="$1"] { ... }

Table of contents

Selection vs navigation

The difference between selection and navigation depends on the chosen semantics — I designed the script to adapt its behavior according to the roles and states that are used. The listbox pattern allows for either of two basic models:

  • Navigation and selection are separate actions, using ARROW KEYS and SPACE respectively.
  • Items are selected automatically as you navigate, using ARROW KEYS alone.

It seemed to me that auto-selection would be unnecessarily limiting, since it makes non-contiguous selection impossible, and that such a model should only be used for single-selection radio groups, where it’s the expected behavior.

But as it happened, VoiceOver didn’t give me the choice.

MacOS/VoiceOver (used with Safari) implements an auto-selection and navigation model for listboxes. When a listbox has focus, ARROW KEYS natively move and select the active descendant, ignoring any actual attribute values if they contradict it, and VoiceOver’s native descriptions are coded to this behavior. For example, manual selection using SPACE can still be implemented, but VoiceOver doesn’t announce the change, because it considers the item to be already selected. As far as I know, the only way to get VoiceOver’s announcements to match the interactions is to make the interactions match its pronouncements.

What happened to “ARIA doesn’t do anything”?

I wasn’t willing to impose those limitations on everyone, so I implemented both models, and then forked by vendor detection and/or explicit item roles:

  • For Safari only, follow VoiceOver’s auto-selection model.
  • For all other browsers, selection and navigation are separate actions;
  • Except with radio groups, which use auto-selection for everyone.

Since it isn’t possible to detect VoiceOver explicitly, I could only make this work by detecting Safari (which is reliably identifiable from its navigator.vendor string). This means that the auto-selection model applies to vanilla Safari as well, and only applies to VoiceOver if it’s being used with Safari.

I hate writing browser conditions, but I don’t see any alternative here, short of just not supporting VoiceOver, and that’s not an option.
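The fork itself is simple enough; a minimal sketch, assuming illustrative names for the function and return values:

```javascript
// Hypothetical sketch: choose the selection model from the item role
// and the browser vendor string (Safari reports "Apple Computer, Inc."
// in navigator.vendor, which is how the real script detects it).
function selectionModel(itemRole, vendor) {
  const isSafari = /^apple/i.test(vendor || '');

  // radio groups use auto-selection for everyone;
  // Safari follows VoiceOver's auto-selection model
  if (itemRole === 'radio' || isSafari) {
    return 'auto-selection';
  }

  // all other browsers keep selection and navigation separate
  return 'separate-selection';
}
```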

Table of contents

Multiple selection models

Multiple selection for mouse users typically works like this:

  • CLICK to select single items (which also resets any existing selections);
  • CTRL + CLICK to select multiple non-contiguous items;
  • SHIFT + CLICK to select contiguous items (between two clicks).

One possibility for keyboard selection is to follow the same basic model, replacing CLICK with SPACE. However that doesn’t translate well in practice, because of the difference in modifier keys between Windows and Mac.

The CONTROL key in MacOS is functionally different from the CTRL key in Windows (primarily used to trigger right-click with a single-button mouse or trackpad). The Mac equivalent of CTRL is COMMAND, and therefore the non-contiguous keystroke would be COMMAND + SPACE. However that keystroke can’t be used, because it’s bound to a system action and doesn’t even dispatch an event. The other two modifiers aren’t useful here either — SHIFT + SPACE implies contiguous selection, and CONTROL + SPACE isn’t a recognizable keystroke for Mac users.

So it seemed more practical for keyboard users to support non-contiguous selection by default (without a modifier), while mouse users retain their typical model. Touch interaction also needs special consideration, since touch users generally don’t have a keyboard. To reconcile these differences, I implemented a selection model that varies by modality:

  • Keyboard and touch use non-contiguous multiple selection by default;
  • Mouse uses single selection by default, and non-contiguous selection via CTRL/COMMAND.

This is the model that’s used when the state is aria-selected.
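The pointer side of that model can be sketched as follows (a hypothetical helper; the real script also handles SHIFT ranges, focus, and touch):

```javascript
// Rough sketch of the aria-selected pointer model: CTRL (Windows) or
// COMMAND (Mac) toggles non-contiguous items, a plain click resets.
function applyPointerClick(selection, itemId, modifiers) {
  if (modifiers.ctrlKey || modifiers.metaKey) {
    // modifier click toggles the item in or out of the selection
    if (selection.has(itemId)) {
      selection.delete(itemId);
    } else {
      selection.add(itemId);
    }
  } else {
    // a plain click resets any existing selections
    selection.clear();
    selection.add(itemId);
  }
  return selection;
}
```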

But the inconsistency never sat quite right with me, and this is one of the reasons why I prefer aria-checked. Checked semantics imply non-contiguous selection by default — you don’t need to hold down a modifier to click multiple checkboxes — so using this state means that mouse users should have the same selection model as keyboard and touch users.

This more-consistent model is used when the state is aria-checked, and I think it’s the best choice:

  • All interactions use non-contiguous multiple selection by default.

It also reduces the possibility of users accidentally losing their selections, and should be easier for those who struggle with combined keyboard + pointer actions (or can’t perform them at all). Combined actions are still required to make contiguous selections, but that functionality is not essential:

  • Contiguous selection with a keyboard uses SHIFT + ARROW KEYS;
  • Contiguous selection with a mouse uses SHIFT + CLICK;
  • But contiguous selection is not available for touch-only interaction.
  • Selection can be locked to single items using role="radio".

Then in all cases:

  • Pointer users can clear selections with CLICK outside the containers;
  • Keyboard users can do the same with ESCAPE.

I also added secondary keystrokes for users who might prefer them:

  • CTRL/COMMAND + X to select non-contiguous items;
  • CTRL/COMMAND + A to select all items within a container.

They were just an afterthought, added by suggestion, but then I got some really interesting feedback from a regular JAWS user. She told me that CTRL + X was the first thing she tried, and loved how she could conceptualize the widget as cut and paste between lists.

That’s how I grokked the analogy.

Table of contents

Drag and drop actions

Now that we have accessible selection, the final steps are the vaunted “drag” and “drop”.

For many mouse and touch users, the Draggable API provides this functionality, using familiar dragging or longpress actions. But for keyboard and screen reader users, and also for those who rely on single pointer interaction, we need something else.

It doesn’t have to be sophisticated — the simpler the better — and it doesn’t need to emulate visual movements. Remember, it’s the end-result that matters, not the mechanics; the mechanics just need to be something that’s easy to use.

We can get some pointers (aha) by referring to 2.5.7 Dragging Movements:

All functionality that uses a dragging movement for operation can be achieved by a single pointer without dragging […]

The simplest solution in this case is to implement point and click:

  • CLICK to select items;
  • (then) CLICK anywhere inside a target container to drop the items there.

This handles all single pointer input, including simulated clicks generated by voice control and other assistive tech. And it readily translates to keyboard input, with nicely straightforward keys:

  • SPACE to select items (or CTRL/COMMAND + X);
  • (then) TAB to a target container;
  • (then) ENTER to drop the items there (or CTRL/COMMAND + V).
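That keyboard mapping could be sketched like this (the function name, argument shape, and action strings are illustrative, not the script’s real API):

```javascript
// Hypothetical sketch: map container keystrokes to widget actions.
// TAB itself needs no handler, since it's native focus movement.
function keyAction(key, ctrlOrCmd, hasSelection) {
  // SPACE (or CTRL/COMMAND + X) selects items
  if (key === ' ' || (ctrlOrCmd && key.toLowerCase() === 'x')) {
    return 'select';
  }
  // ENTER (or CTRL/COMMAND + V) drops items into the focused target
  if (hasSelection && (key === 'Enter' || (ctrlOrCmd && key.toLowerCase() === 'v'))) {
    return 'drop';
  }
  // ESCAPE clears the current selection
  if (key === 'Escape') {
    return 'clear';
  }
  return null;
}
```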


Almost too easy … because I found this so much simpler and quicker to use than actual dragging and dropping. I was almost tempted not to bother with those mechanics, since we really don’t need them.

And we really don’t, which is a beautiful irony. But I never actually considered not including them, since many users will expect and require that familiarity.

Do you see what I mean though, about the arbitrariness of those mechanics? We now have three different terms we can use to describe the widget’s functionality — “drag and drop”, “cut and paste”, “point and click” — and they’re all basically the same thing.

Table of contents

Non-standard state information

The navigation and selection states are already described by screen readers, because they’re defined with known ARIA attributes. But to provide good usability for screen reader users, there are two additional states, and two interaction hints, that I felt should be conveyed:

Additional state information
  • When items are selected, or when navigating between them, the number of selected items should be announced.
  • When a drop action completes, the number of dropped items should be announced.
Interaction hints
  • When no selections have been made, focusing a container should provide an interaction hint, like To choose items press Space.
  • When selections have been made, focusing an available target container should also provide an interaction hint, like To drop items press Enter.

All that information can be conveyed with accessible descriptions, since dynamic description updates are announced, if they apply to the focused or active element. This behavior is similar to an ARIA status region, except that it’s contextual, and the order of announcement is more predictable, and it’s ultimately more reliable.

Updating accessible descriptions is a persistent change, and users can have them announced on-demand, for as long as they exist.

The announcement of a live region is a one-shot deal, and screen readers can discard it, if it’s superseded by higher-priority output at the moment of announcement. An example of this is typing feedback (the announcement of letters and words as you type), which always takes priority; a live region that updates while the user is typing, may never be announced.

Unfortunately, dynamic description announcements are not supported by VoiceOver, but we can handle both situations with mostly the same code. Each container has two hidden elements that are used as aria-describedby references:

<div role="listbox" id="container1" aria-describedby="container1-desc">

    <span id="container1-desc" hidden>To choose items press Space.</span>
    <span id="container1-items-desc" hidden>2 items checked.</span>

    <ol role="none">
        <li role="option" aria-checked="true"
            aria-describedby="container1-items-desc">First item</li>

        <li role="option" aria-checked="true"
            aria-describedby="container1-items-desc">Second item</li>
    </ol>
</div>

Elements referenced by aria-describedby don’t have to be present in the accessibility tree; they can be [hidden] or display: none. This is particularly useful for descriptions that only make sense in context, since the elements are not independently readable.

Note the structural change with <div role="listbox"> instead of <ol>, because HTML lists are generally expected to only have list-item children. We don’t specifically need to use an HTML list at all, but it provides some graceful degradation, and it’s neater.

Using role="none" ensures there are no contradictory semantics, but the fact that the options are no longer direct children of the listbox means the relationships have to be explicitly defined using aria-owns:

<div role="listbox" aria-owns="item1 item2 etc">
    <ol role="none">
        <li role="option" id="item1">First item</li>
        <li role="option" id="item2">Second item</li>
    </ol>
</div>

The script takes care of this during initialization or container update — all applicable elements are assigned a generated ID, if they don’t already have one, while aria-owns and aria-activedescendant are compiled on the fly.
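That compilation step could look something like this (a hypothetical sketch; the function name and ID format are illustrative):

```javascript
// Hypothetical sketch: assign generated IDs where needed, then
// compile aria-owns from the items' IDs in document order.
function compileOwnership(container, items) {
  items.forEach((item, i) => {
    if (!item.id) {
      // generated ID, derived from the container's own ID
      item.id = `${container.id}-item${i + 1}`;
    }
  });
  container.setAttribute('aria-owns', items.map((it) => it.id).join(' '));
}
```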

The x items checked message should be announced when it changes, or when navigating between selected items, but it shouldn’t be announced when navigating items that are not already selected. That was my conclusion during testing, when announcing the selections on every item became repetitive and annoying; and it isn’t really necessary. Announcing this only for selected items reduces the output verbosity, while still giving users an on-demand method for getting that information.

This is achieved by clearing the description element in advance of any navigation change, then updating it only for selected items, or when the selection state changes.

That also removes the possibility of stale descriptions being announced. For example, if you selected a third item, then navigated back to an already-selected item, the description would still say 2 items checked before the value was updated, resulting in a dual announcement of different values. Clearing the value before changing the activedescendant avoids that problem:

//when activedescendant changes

//(then) if the current element is already selected
setTimeout(() => describeItems(), 250);

//(then) if the selection state changes
setTimeout(() => describeItems(), 250);

That timeout is one of the sort-of-hacks I mentioned earlier. The typical announcement order for descriptions is after the accessible name and state, but if we update both the description and state at the same time, then it might be announced first. An asynchronous delay ensures that it’s added to the end of that announcement.

The timer length (250ms) comes from trial-and-error testing I did a couple of years ago, to approximate the update frequency of a screen reader’s virtual buffer (its snapshot of the accessibility tree). In my experience, the buffer always updates more frequently than that, which leaves a comfortable margin.

The same principle applies to dynamically-created live regions. If aria-live (or a corresponding role) isn’t already defined on the element, and only added immediately before the text is updated, then it won’t be announced. But if you delay that by 250ms, then it will be (at least, as reliably as it ever would be).
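A minimal sketch of that delayed live-region technique, with the document passed as a parameter and illustrative names throughout:

```javascript
// Hypothetical sketch: create the live region first, then delay the
// text update, so screen readers register the region before it speaks.
function announceViaLiveRegion(doc, message, delay = 250) {
  const region = doc.createElement('span');
  region.setAttribute('aria-live', 'polite');
  doc.body.appendChild(region);       // region exists before it has text

  setTimeout(() => {
    region.textContent = message;     // updated after the delay, so it's announced
  }, delay);

  return region;
}
```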

Tricks like this are only viable if you continually test how they actually behave in a range of different screen readers, and get testing feedback from regular screen reader users. I cannot stress this point enough.

Here’s an example of how that comes out in JAWS, which also illustrates the core semantics, states, and interaction hints:

Screen recording with JAWS

The colors in that demo come from the “Night sky” contrast theme in Windows 11, which is exposed to CSS via the forced-colors media query. Visual states and focus appearance are defined with system colors, such as Highlight and AccentColor.

Coming back to Safari then, the same basic process is used, except that the description elements are converted to live regions, which produce almost identical behavior, at least in VoiceOver, most of the time. Using "polite" not "assertive" ensures that it doesn’t interrupt the name and state announcement (which assertive would):

<span id="container1-desc" aria-live="polite">To choose items press Space.</span>
<span id="container1-items-desc" aria-live="polite">2 items checked.</span>

It’s not as good as description updates, but it’s satisfactory.

Note how the elements are no longer [hidden], since hidden live elements are not announced. Live regions must be present in the accessibility tree, so these are visually-hidden instead.

Accessible descriptions for the containers themselves work the same way. Each container has three possible states:

  1. No items are selected, and all items are enabled.
  2. Items are selected, and only items in the same container are enabled, while all the other containers become available targets with disabled items.
  3. Immediately following a drop action, all items are enabled again, and focus is now on the target container.

Each of those states has a corresponding description:

  1. To choose items press Space.
  2. To drop items press Enter.
  3. x items dropped.
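Mapping those states to their descriptions is trivially simple; a sketch with illustrative state names (the strings correspond to the i18n defaults listed above):

```javascript
// Hypothetical sketch: derive a container's description from its state.
function containerDescription(state, dropCount = 0) {
  switch (state) {
    case 'no-selection':  // no items selected anywhere
      return 'To choose items press Space.';
    case 'target':        // items selected in another container
      return 'To drop items press Enter.';
    case 'dropped':       // immediately following a drop action
      return `${dropCount} items dropped.`;
    default:
      return '';
  }
}
```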

When a container is an available drop target, its items are aria-disabled, because items can only be selected in one container at any one time, and the script doesn’t support specific insertion points. Originally I made them aria-hidden, but feedback from a screen reader user suggested that items should only be disabled, because that makes it possible to review the existing content, as part of deciding which target to drop items into.

Table of contents

Accessible container labels

The container elements must have accessible labels, and they should be visible.

Depending on how the widget is used, a visible label might be required by 3.3.2 Labels or Instructions. But even if it isn’t, visible labels provide better usability and accessibility.

To implement these labels I’ve used heading elements, which are associated with the container via aria-labelledby:

<div role="listbox" aria-labelledby="container2-label">
    <h3 id="container2-label">Bushes</h3>

It’s very important to note here that unassociated static markup will not be available to JAWS or NVDA users. Read-cursor navigation keys are not available inside a listbox, because a listbox isn’t expected to contain anything but interactive "option" elements. It’s only the programmatic association that makes the heading accessible.

The choice of heading elements is not entirely uncontroversial. It’s perfectly valid HTML, and it’s technically valid ARIA, but its heading role is not exposed in the context of the listbox.

Yet its role is exposed as part of the overall page structure, which means it can be used as a navigation shortcut. It also provides a pointer action: since all the events are handled at the container level, a click on the heading is treated the same as a click on the container itself. This all adds up to a number of features:

  • JAWS, NVDA and MacOS/VoiceOver users can navigate or list the containers using heading navigation keys (e.g., H or JAWS + F6).
  • iOS/VoiceOver and Android/TalkBack users can navigate with configurable down-swipe gestures (e.g., by setting the rotor in iOS/VO to “Headings”).
  • Voice control users can speak the heading text as a command (e.g., Click Bushes).

Voice commands will work with any element though, not just headings. Here’s an example of how that responds in MacOS/Voice Control, which also illustrates some other ways that spoken commands can be used:

Screen recording with Voice Control

For iOS/VoiceOver, the presence of a labelling element is crucial to its ability to navigate empty containers. This version of VoiceOver defaults to navigating by text content, not by containers, and in that navigation mode it can’t reach any element that has no visible content. Since users can empty a container by moving all the items out of it, this would leave VoiceOver users without an intuitive method for dropping new items back into it.

Considerations like this are why the labelling elements are required (though they’re not required to be headings), rather than allowing for the use of aria-label. The script will remove that if it’s used on a container, and will throw an exception if the labelling element is missing.
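The enforcement described there could be sketched like this (the function name and error message are illustrative, not the script’s actual code):

```javascript
// Hypothetical sketch: remove aria-label (an invisible label isn't
// enough) and require a reference to a visible labelling element.
function enforceContainerLabel(container) {
  container.removeAttribute('aria-label');

  if (!container.getAttribute('aria-labelledby')) {
    throw new Error('Containers must reference a visible labelling element');
  }
}
```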

Table of contents

Top one, nice one, get sorted

I didn’t think it would be possible to implement sort functionality, while still avoiding inner buttons or drag handles. As Darin Senneff comprehensively demonstrated, using up/down buttons is the most accessible way to make the individual items reorderable.

But then, as I watched the kettle boil one pensive evening, I was reflecting on how the script already implements internal sorting. When items are selected, they’re pushed onto an array, which ends up in selection order. But before the items are moved, they’re sorted back into their original node order, because that seemed to me like the most intuitive behavior.

If we sort before node insertion, and we have the order of selection, then maybe we could use that order of selection to provide a sorting function, simply by not sorting! All we need to do is expose a way for users to trigger it. We don’t really need to emulate the visual model at all.
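In code terms, the idea is as simple as it sounds; a hypothetical sketch (names and shape are illustrative):

```javascript
// Hypothetical sketch: the move routine sorts selections back into
// node order by default; a sort action simply skips that step and
// keeps the order in which the items were selected.
function orderForMove(selected, applySortOrder) {
  if (applySortOrder) {
    return selected.slice();   // selection order, i.e. "not sorting"
  }
  // default behavior: restore original document order
  return selected.slice().sort((a, b) => a.nodeIndex - b.nodeIndex);
}
```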

So to perform a sort:

  1. Select items in the order you want them;
  2. (then) Click a Sort button, or use a keystroke, to apply that order.

You could also do that one by one, progressively sorting by moving each one to the end.

This is not a familiar way of implementing sort, and maybe it only suits particular use-cases; I’m not really sure. I’ve personally found it really easy to use, once you get used to it, and it has the benefit of supporting any input mode, right out of the gate.

Though it is extra cognitive load, since it’s more of an abstraction than direct reordering. However that can be improved by displaying order numbers during the selection phase.

Here’s some example markup, showing the button and the order numbers:

<div role="listbox" tabindex="0">
    <span id="container1-items-desc" hidden>2 items checked.</span>

    <span role="button" aria-label="Sort by chosen order."
          tabindex="0" aria-describedby="container1-items-desc"></span>

    <ol role="none">
        <li role="option" aria-checked="false">First item</li>

        <li role="option" aria-checked="true"
            aria-describedby="container1-items-desc container1-item2-number">
            Second item
            <u aria-hidden="true" id="container1-item2-number">#1</u>
        </li>

        <li role="option" aria-checked="true"
            aria-describedby="container1-items-desc container1-item3-number">
            Third item
            <u aria-hidden="true" id="container1-item3-number">#2</u>
        </li>
    </ol>
</div>

Numbers are appended to selected items, counting in selection order (using <u> because they may constitute an unarticulated annotation, although that comic contrivance doesn’t convey any actual semantics).

Screen readers announce the # at the start as number, which helps to clarify its meaning, as well as providing visual affordance. (At least in English, for other languages the format is exposed as an i18n string.)

Those numbers are aria-hidden for a very specific workaround.

Adding accessible text would change the item’s accessible name, which triggers its re-announcement. However, that text is added at the same time as aria-checked is updated, which also triggers re-announcement, so the whole element would be announced twice.

I don’t think that’s a screen reader bug, it’s the correct behavior, though it’s rather counter-productive in this case. But if the element is aria-hidden then it doesn’t change the accessible name, and can still be associated as part of the accessible description. The number will then be announced at the end, for example, Sycamore, checked, three of three, 2 items checked, number 2.

Quite a lot of the item’s information is described in numbers, so adding that # provides a useful disambiguation, especially since it comes directly after the number of selections.

While there are no selections, the button is unavailable and visually dimmed. The dimmed effect is applied with opacity, rather than color changes, because that’s still effective in forced-color modes. The icon denotes numerical sorting, and to keep the markup cleaner, it’s encoded as a data URL and rendered with CSS masking. This makes it possible to effectively inherit the text color:

    background: currentColor;
    -webkit-mask-image: url('data:image/svg+xml;utf8,<svg aria-hidden="true" viewBox="0 0 24 28">...</svg>');
    mask-image: url('...');

Technically it should be base64 or URL-encoded, but in practice, that doesn’t seem to be necessary.

Using aria-hidden ensures that it’s not announced as a graphic, and the button already has an accessible name. I didn’t actually test without this, so it might not be necessary, but just in case.

Table of contents

Exit pursued by a bear

Allow me to draw your attention to a final contentious choice — the focusable button is nested inside another focusable element:

<div role="listbox" tabindex="0">
    <span role="button" tabindex="0"></span>

Having nested interactive elements is a shiny red-flag in accessibility testing, and is generally a bad idea because their actions or descriptions might conflict. But it seems to work okay in this case, and I think that’s for two particular reasons:

  • Neither of the elements has a native action or meaning, so there’s no inherent conflict (which is why it’s deliberately not a real <button>).
  • The scripted actions are evaluated by container and event target, so the script can always differentiate them, and ensure that they never conflict (e.g., triggering the button with SPACE doesn’t cause item selection).

But this was a pragmatic choice, because I couldn’t think of a better way.

If the button is outside the listbox, then that places constraints on the semantic structure around them. For example, there would need to be something like a "group" element to wrap and associate each button and listbox together, which increases the overall complexity for users and for authors.

It’s not strictly necessary to make the button focusable at all, though. The button has three different handlers, of which the last is not functionally essential:

  • Pointer-dragging users can DROP items directly onto it.
  • Single pointer users can CLICK it.
  • Keyboard users can TAB to the button and press it with ENTER or SPACE.

Keyboard users also have a direct keyboard shortcut — CTRL/COMMAND + S — so the button itself doesn’t actually need to be in the TAB order. But the keyboard shortcut is not particularly obvious or discoverable; users would need to already know it’s there. And while the button’s appearance provides a visual hint that sorting is available, it provides that in the form of what is clearly a button, and keyboard users will doubtless expect to be able to TAB to it.

But maybe there’s another approach that hasn’t occurred to me. I welcome suggestions and comments on this, or anything else that’s lurking in the woods.

Table of contents

Wrapping up

I could write another ten-thousand words, but that’s probably enough for today, except to offer particular thanks to Hans Hillen, Isabel Holdsworth, and Adrian Roselli, for their useful ideas and feedback during development.

Here are the demo links again:

The JavaScript class is extensively commented, every little thing is documented, so you can dive into that if you want (a lot) more detailed information.

Tomorrow’s concluding article is reference documentation, with everything you need to configure and use the script.

Table of contents

Notable Success Criteria

There are many WCAG Success Criteria (SC) that apply to drag and drop, including ubiquitous concerns like Use of Color, Reflow, and Focus Visible.

But the following SCs are of particular interest:

Further reading

Image credit: ucumari photography.


About James Edwards

I’m a web accessibility consultant with around 20 years experience. I develop, research and write about all aspects of accessible front-end development, with a particular specialism in accessible JavaScript. I can also turn my hand to PHP and MySQL when it’s needed. I started my career as an HTML coder, then as a JavaScript developer, but the more I learned, the more I realised just how important it is to consider accessibility. It’s the basic foundation of web development and a fundamental design principle of the web itself. If information is not accessible, then what’s the point of any of it? Coding is mechanics, but accessibility is people, and it’s people that actually matter.