Abstract
Computer interfaces traditionally depend on visual feedback to provide information to users, with large, high-resolution screens being the norm. Other sensory modalities, such as haptics and audio, have great potential to enrich the interaction between user and device, enabling new types of interaction for new user groups in new contexts. This chapter provides an overview of research into the use of these non-visual modalities for interaction, showing how new output modalities can be used in the user interfaces of different devices. The modalities discussed include:
Haptics: tactons (vibrotactile feedback), thermal feedback (warming and cooling), force feedback, and deformable devices;
Non-Speech Audio: auditory icons, Earcons, musicons, sonification, and spatial audio output.
One motivation for using multiple modalities in a user interface is that interaction can be distributed across the different senses or control capabilities of the person using it. If one modality is fully utilized or unavailable (e.g., due to a sensory or situational impairment), another can be exploited to ensure the interaction succeeds. For example, when walking and using a mobile phone, a user needs to focus their visual attention on the environment to avoid bumping into other people. A complex visual interface on the phone may make this difficult, but haptic or audio feedback would allow them to use their phone and navigate the world at the same time.
This chapter does not present background on multisensory perception and multimodal action; for insights on that topic, see Chapter 2. Chapter 3 also specifically discusses multisensory haptic interaction and the process of designing for it. As a complement, this chapter presents a range of applications where multimodal feedback involving haptics or non-speech audio can provide usability benefits, motivated by Wickens' Multiple Resources Theory [Wickens 2002]. The premise of this theory is that tasks can be performed better and with fewer cognitive resources when they are distributed across modalities. For example, when driving, which is a largely visual task, route guidance is better presented through sound than through a visual display, which would compete with driving for visual cognitive resources. Similarly, making calls or texting by hand while driving is more difficult than voice dialing, because manual input competes with the manual control of the vehicle, whereas speech draws on a different modality. For user interface design, it is important to distribute tasks across modalities so that the user is not overloaded and interaction can succeed.