
Behavioral studies of multisensory integration and cross-modal spatial attention have identified many potential benefits of using interfaces that engage more than just a single sense in complex operating environments. Particularly relevant in terms of application, the latest research highlights that: 1) multimodal signals can be used to reorient spatial attention effectively under conditions of high operator workload in which unimodal signals may be ineffective; 2) multimodal signals are less likely to be masked in noisy environments; and 3) there are natural links between specific signals and particular behavioral responses (e.g., head turning). However, taking advantage of these potential benefits requires that interface designers take into account the limitations of the human operator. In particular, multimodal interfaces should normally be designed so as to minimize any spatial incongruence between component warning signals presented in different sensory modalities that relate to the same event. Building on this rapidly growing cognitive neuroscience knowledge base, the last decade has witnessed the development of a number of highly effective multimodal interfaces for driving, aviation, the military, medicine, and sports.

Original publication

DOI

10.1109/JPROC.2012.2225811

Type

Journal article

Journal

Proceedings of the IEEE

Publication Date

10/01/2013

Volume

101

Pages

2113 - 2122