Nonvisual computer interaction has largely centered on a keyboard-input and audio-output paradigm. Dependence on audio as the primary channel of information consumption risks cognitive overload for the individuals who use these interfaces. This project explores novel ways of communicating these audio-only outputs through nonvisual, nonauditory modalities.
Establishing new multimodal ways of interacting with computers will lower the barrier to digital access for people with a wide range of visual impairments.
People: Mark Baldwin, Yuang Li (URA), Ziyu Yi (URA)