HTML5 and alt: The editor's new clothes

The HTML5 editor has recently stated, in his defence of the alt attribute being optional:

“We truly do believe in research, hard data, and analysis, rather than hypotheticals; and we truly do believe that evidence suggests that what we are arguing for is going to improve the accessibility of the Web.”

The problem is, no “research, hard data, and analysis” has been provided.

If the editor has such detailed research, please provide it so that the members of the HTML working group, and those groups within the W3C WAI that have a stake in this issue, can use the “research, hard data, and analysis” to inform their decision.

Show us the goods

To put the matter in perspective:

What we don’t need from the editor is more Google code statistics and a bit of pseudo-scientific prose dressing the statistics up as facts to support his argument. What is required from the editor to back up his claims? A proper scientific study based on the scientific method: research with firm aims and objectives stated up front, and an agreed methodology.

Scientific Method

For the sake of clarity, I have reproduced some information about the steps involved in the scientific method:

The scientific method has four steps:

  1. Observation and description of a phenomenon or group of phenomena.
  2. Formulation of an hypothesis to explain the phenomena. In physics, the hypothesis often takes the form of a causal mechanism or a mathematical relation.
  3. Use of the hypothesis to predict the existence of other phenomena, or to predict quantitatively the results of new observations.
  4. Performance of experimental tests of the predictions by several independent experimenters and properly performed experiments.

If the experiments bear out the hypothesis it may come to be regarded as a theory or law of nature (more on the concepts of hypothesis, model, theory and law below). If the experiments do not bear out the hypothesis, it must be rejected or modified. What is key in the description of the scientific method just given is the predictive power (the ability to get more out of the theory than you put in; see Barrow, 1991) of the hypothesis or theory, as tested by experiment. It is often said in science that theories can never be proved, only disproved. There is always the possibility that a new observation or a new experiment will conflict with a long-standing theory.

Conclusion

If a scientific study with firm aims and objectives stated up front, and an agreed methodology, is not forthcoming, we are left relying on expert opinion, rational argument and the hope of consensus within the HTML WG or, failing that, a vote on the issue. Whatever route is taken, let’s get this issue sorted so we can move on to other important accessibility issues within HTML5.


About Steve Faulkner

Steve was the Chief Accessibility Officer at TPGi before he left in October 2023. He joined TPGi in 2006 and was previously a Senior Web Accessibility Consultant at Vision Australia. Steve is a member of several groups, including the W3C Web Platforms Working Group and the W3C ARIA Working Group. He is an editor of several specifications at the W3C, including ARIA in HTML and HTML Accessibility API Mappings 1.0. He also develops and maintains HTML5accessibility and the JAWS bug tracker/standards support.

Comments

an excellent post, steven…

one crucial defect of the methodologies so far proposed for researching the use of ALT “in the wild” is that they exclude a very important factor in the presence or absence of ALT text: namely, the capabilities of the document’s generator. Whether a page is driven by a server-side template or an authoring tool, a user may not have the option of providing alt text for an image, nor even be aware that such an option exists.

applying the criteria outlined in the proposed methodology will reveal more about the capacities of authoring tools, and the uses authors make of them, than it will about the use of alt text “in the wild”. therefore, it is imperative that any such survey check not only for the presence, absence and values defined for ALT, but also:

  • sniff for any declarative statement in the document source, such as meta name="generator"
  • identify the shortcomings of any template-driven pages (such as at Flickr, FaceBook, MySpace, MyOpera, etc.)

any survey of ALT usage must take into account the capabilities of the tool used to create the documents being analyzed; otherwise it is an exercise in “unnatural selection” which will inevitably skew the results towards a desired aim: “proof” that ALT is so poorly implemented and used by authors that those of us who cannot process images would be “better off” without ALT text being required for all images. that is akin to stating: “we don’t need an access ramp or accessible entrance to this building — if we’re lucky, no one will even notice the building as they pass, except those for whom the building has been endowed with a magical meaning through the sense of sight, so that only those who can perceive it can access it… and if they do demand access to the building? well, there’s always the delivery entrance out back — sorry about the lack of elevators once you’re inside; just be thankful you’re inside.”
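The checks the comment above calls for could be sketched roughly as follows — a minimal illustration using Python's standard-library HTML parser, not the methodology anyone in this thread has actually agreed to. It tallies, per page, images with a missing alt attribute, an empty alt="" (nominally decorative), and a non-empty alt, while also sniffing the `meta name="generator"` declaration so tool-generated pages can be separated from hand-authored ones. The class and field names are invented for illustration.

```python
from html.parser import HTMLParser

class AltSurvey(HTMLParser):
    """Tally alt usage on a page and note the generating tool, if declared."""

    def __init__(self):
        super().__init__()
        self.generator = None  # content of <meta name="generator">, if present
        self.missing = 0       # <img> with no alt attribute at all
        self.empty = 0         # alt="" — nominally decorative
        self.present = 0       # alt with non-empty text

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if tag == "meta" and (attrs.get("name") or "").lower() == "generator":
            self.generator = attrs.get("content")
        elif tag == "img":
            if "alt" not in attrs:
                self.missing += 1
            elif (attrs["alt"] or "").strip() == "":
                self.empty += 1
            else:
                self.present += 1

# Sample page: one image per category, generated by a hypothetical CMS.
page = """<html><head><meta name="generator" content="SomeCMS 1.0"></head>
<body>
  <img src="a.png">
  <img src="b.png" alt="">
  <img src="c.png" alt="A cat">
</body></html>"""

survey = AltSurvey()
survey.feed(page)
print(survey.generator, survey.missing, survey.empty, survey.present)
# prints: SomeCMS 1.0 1 1 1
```

A real survey would of course run this over a crawled corpus and cross-tabulate alt usage against the declared generator, which is precisely the distinction the comment argues the proposed methodology fails to make.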

Jim J. Jewett says:

What is the study supposed to determine?

The main argument against requiring it is

“Sometimes it isn’t available. Tools will cheat and make something up.”

There are times when a validity requirement will cause the tool to try harder at getting a good alt. (But note that alt is already required, and that doesn’t seem to happen often.)

There are also times when a validity requirement will cause the tool to cheat. I have certainly seen many sites consistent with that, though I can’t say whether it was intentional cheating (for validity) or general sloppiness. This makes the situation even worse, because now you can no longer trust even the nominally explicit alt="" claims that an image is just decorative.

Are you really looking for a study on whether tools are more likely to work harder at doing the right thing, vs cheating with a loophole?

Joe Clark says:

Yes, unlabelled or improperly-labelled images will of course “improve the accessibility of the Web.”

Curiously enough, it will also make it easier for automated tools like Google’s to pass equally automated validation tests.