A11y-CUA Dataset: Characterizing the Accessibility Gap in Computer Use Agents
New research quantifies how sharply AI computer-use agents underperform for blind and low-vision users.
Deep Dive
A new study reveals a critical accessibility gap in AI Computer Use Agents (CUAs). While these agents completed 78.3% of tasks for sighted users, success rates fell to just 28.3% for users relying on screen magnifiers and 41.67% for those using keyboard-only navigation. The research, based on a 40.4-hour dataset of 158,325 interaction events, shows that current agents fail to mirror the nuanced interaction styles of blind and low-vision users.
Why It Matters
This gap points to a major oversight in AI agent development, one that locks millions of blind and low-vision users out of the benefits of automation.