Research Team: Mark S. Baldwin, Jennifer Mankoff, Gillian R. Hayes
The Graphical User Interface is inspired by the real-world physical affordances that we interact with every day. Components such as buttons, scroll bars, tabs, and windows have been visually encoded to provide familiarity and context, simplifying the computing experience.
For nonvisual computer users, those who have visual impairments such as low vision or blindness, this information is largely unavailable. While screen readers make access to computers possible, they do so at the cost of a double translation: information structured for visual display must first be processed and then re-translated into a stream of text that can be converted into speech or braille. During this process, the information encoded by the visual metaphors is either wrangled into the auditory channel or lost entirely. As a result, visually impaired users have been unable to benefit from the primary mechanism developed to make computers more usable.
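To make this double-translation cost concrete, the sketch below is a minimal, hypothetical illustration (not any particular screen reader's API): a small widget hierarchy is flattened into the linear stream of phrases a speech or braille engine would ultimately receive, and the spatial grouping and visual metaphors present in the tree do not survive the linearization.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Widget:
    """A node in a simplified GUI hierarchy (illustrative only)."""
    role: str                      # e.g. "window", "button", "scrollbar"
    label: str = ""
    children: List["Widget"] = field(default_factory=list)

def linearize(widget: Widget) -> List[str]:
    """Depth-first flattening of the widget tree into a stream of phrases.

    This mimics the second half of the 'double translation': structure
    built for visual display is reduced to a linear sequence for speech
    or braille output. Spatial layout, grouping, and visual affordances
    are not preserved.
    """
    phrase = f"{widget.role}, {widget.label}".rstrip(", ")
    stream = [phrase]
    for child in widget.children:
        stream.extend(linearize(child))
    return stream

# A toy document window built from familiar visual components.
window = Widget("window", "Untitled - Editor", [
    Widget("tab bar", "", [
        Widget("tab", "Document 1"),
        Widget("tab", "Document 2"),
    ]),
    Widget("button", "Save"),
    Widget("scrollbar", "vertical"),
])

for phrase in linearize(window):
    print(phrase)   # the linear text a speech or braille engine receives
```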
This work is a first step towards creating a direct manipulation experience in which auditory output, keyboard input, and tangible interaction work together to form a fluid multimodal environment that moves nonvisual computing beyond linear translation.