Crossmodal attention and multisensory integration: Implications for multimodal interface design
Spence C.
One of the most important findings to emerge from the field of cognitive psychology in recent years has been the discovery that humans have a very limited ability to process incoming sensory information. In fact, contrary to many of the most influential human operator models, the latest research has shown that humans draw on the same limited pool of attentional resources to process the inputs arriving from each of their senses (hearing, vision, touch, smell, etc.). This research calls for a radically new way of examining and understanding the senses, one with major implications for the way we design everything from household products to multimodal user interfaces. Rather than assuming that stimulating more senses is always better, interface designers should recognize that the decision to stimulate additional senses reflects a trade-off between the benefits of engaging those senses and the costs associated with dividing attention between different sensory modalities. In this presentation, I will discuss some of the problems associated with dividing attention between eye and ear, as illustrated by talking on a mobile phone while driving. I hope to demonstrate that a better understanding of the senses, and especially of the links between the senses highlighted by recent cognitive neuroscience research, will enable interface designers to develop multimodal interfaces that more effectively stimulate the user's senses.