Towards a tool for keystroke level modeling of skilled screen reading
Abstract
Designers often have no access to individuals who use screen reading software, and may have little understanding of how their design choices impact these users. We explore here whether cognitive models of auditory interaction could provide insight into screen reader usability. By comparing human data with a tool-generated model of a practiced task performed using a screen reader, we identify several requirements for such models and tools. Most important is the need to represent parallel execution of hearing with thinking and acting. Rules for placement of cognitive operators that were developed for visual user interfaces may not be applicable in the auditory domain. Other mismatches between the data and the model were attributed to the extremely fast listening rate and to differences between screen reader users' typing patterns and the model's assumptions. This work informs the development of more accurate models of auditory interaction. Tools incorporating such models could help designers create user interfaces that are well tuned for screen reader users, without the need for modeling expertise.
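To make the central finding concrete, the sketch below contrasts a classic serial keystroke-level estimate with one in which hearing overlaps thinking and acting. This is a minimal illustration, not the paper's model: the operator names and durations are assumptions (the mental and keystroke values echo commonly cited KLM figures for visual interfaces; the hearing duration is invented for the example).

```python
# Minimal sketch of a KLM-style time estimate, contrasting strictly
# serial operators with hearing that runs in parallel with thinking
# and acting. All names and durations are illustrative assumptions.

HEAR = 0.40   # assumed time to listen to one spoken item (not from the paper)
THINK = 1.35  # mental operator M, a commonly cited KLM value for visual UIs
KEY = 0.28    # keystroke operator K, a commonly cited KLM value


def serial_time(steps: int) -> float:
    """Classic serial assumption: hear, think, then press, in sequence."""
    return steps * (HEAR + THINK + KEY)


def parallel_time(steps: int) -> float:
    """Hearing overlaps thinking and acting, so each step costs only
    the longer of the two concurrent streams."""
    return steps * max(HEAR, THINK + KEY)


if __name__ == "__main__":
    n = 10  # number of hear-decide-press steps in the task
    print(f"serial estimate:   {serial_time(n):.2f} s")
    print(f"parallel estimate: {parallel_time(n):.2f} s")
```

Under these assumed values, the serial model predicts a noticeably longer task time than the parallel one, which is the kind of mismatch the abstract reports between visual-interface operator placement rules and skilled screen reader use.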