I think I have soured on automated #accessibility checkers in the past few months (and especially linters, like the jsx-a11y one). I have done audits where there are telltale signs of blanket fixes applied just to satisfy a checker. This is not only a fault of the tools, of course, but also of the processes around them.
For example: tabindex="0" without an interactive role; null alt; or alt text written for machines rather than humans, such as alt="organisation-building-circle" (?); role="button" on elements that do not handle Enter or Space; and other similar patterns.
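To make that last one concrete, here is a rough sketch (in React/TSX, which I am assuming for illustration; the component names and icon path are made up) of the pattern next to a version that actually works for keyboard and screen reader users:

```tsx
import React from "react";

// Anti-pattern: attributes that can quiet a checker, but the div never
// reacts to Enter or Space, and the alt text is written for machines.
export function SaveCardBad({ onSave }: { onSave: () => void }) {
  return (
    <div role="button" tabIndex={0} onClick={onSave}>
      <img src="/icons/save.svg" alt="organisation-building-circle" />
      Save
    </div>
  );
}

// Better: a native <button> provides name, role, focusability and
// Enter/Space handling for free; the icon is decorative, so empty alt fits.
export function SaveCardGood({ onSave }: { onSave: () => void }) {
  return (
    <button type="button" onClick={onSave}>
      <img src="/icons/save.svg" alt="" />
      Save
    </button>
  );
}
```

The native <button> is the boring fix, and it is exactly the kind of fix a checker cannot tell you is missing once the attributes have been sprinkled on.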
What is frustrating is that this makes auditing harder! You can no longer run a checker to get a gut feeling of what patterns of failure exist, and give guidance based on that.

Also, explaining, say, a 4.1.2 Name, Role, Value failure is much more involved than explaining a 2.1.1 Keyboard failure, in my experience.
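As a rough illustration (again React/TSX, with a made-up component) of why 4.1.2 takes more explaining: a control has to expose a name, a role, and its current value or state, and keep them in sync, whereas 2.1.1 mostly boils down to "can I reach and operate this with Tab, Enter and Space?"

```tsx
import React, { useState } from "react";

export function MuteToggle() {
  const [muted, setMuted] = useState(false);
  // 4.1.2 means three things have to hold at once:
  // - Name: the visible text "Mute notifications".
  // - Role: comes for free from the native <button>.
  // - Value/state: aria-pressed, which must stay in sync with the UI.
  // 2.1.1, by contrast, is satisfied here simply because a <button> is
  // focusable and activates on Enter and Space.
  return (
    <button type="button" aria-pressed={muted} onClick={() => setMuted(!muted)}>
      Mute notifications
    </button>
  );
}
```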
So, what do you do? I think checkers that give context about what the failures are and why they matter have a better chance of succeeding. FWIW, axe and Accessibility Insights both include references and links in their reports. Linters, not so much.
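For instance, axe-core attaches a help string and a helpUrl to each violation. A minimal sketch (assuming axe-core is available in the page under test, e.g. injected by a test runner) of surfacing that context rather than just a pass/fail count:

```ts
import axe from "axe-core";

// Runs axe against the current document and prints not just the failures
// but the explanation and reference link that ship with each one.
async function reportViolations(): Promise<void> {
  const results = await axe.run(document);
  for (const violation of results.violations) {
    console.log(`${violation.impact ?? "unknown"}: ${violation.help}`);
    console.log(`  Learn more: ${violation.helpUrl}`);
    for (const node of violation.nodes) {
      console.log(`  Affected: ${node.html}`);
    }
  }
}
```

Those links are what let someone who is not an accessibility specialist understand and prioritise a failure, instead of blanket-fixing it.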
If you are working with accessibility in an organisation, it is not sufficient to throw a checker at each project; it might even be dangerous. I see this in presentations and meetings, where "make sure to test automatically" is left at that. Interpreting the results matters a lot!
I think it is useful to acknowledge that yes, accessibility is material work and takes active effort. I'm increasingly convinced that someone on a team should take charge of accessibility tooling, and that as experts we must help them interpret and prioritise checker results.
This might be obvious to some of you, that's fine :) I have had this conversation every week for the past month, so I thought it worth sharing.