Searching for audiovisual correspondence in multiple speaker scenarios

Publication Type: Journal Article
Year of Publication: 2011
Authors: Alsius A, Soto-Faraco S
Journal: Experimental Brain Research
Date Published: 09/2011
Keywords: audiovisual speech perception, auditory search, multisensory integration, spatial attention, visual search

A critical question in multisensory processing is how the constant flow of information arriving at our different senses is organized into coherent representations. Some authors claim that pre-attentive detection of inter-sensory correlations supports crossmodal binding, whereas other findings indicate that attention plays a crucial role. We used visual and auditory search tasks for speaking faces to address the role of selective spatial attention in audiovisual binding. Search efficiency amongst faces for the match with a voice declined with the number of faces being monitored concurrently, consistent with an attentive search mechanism. In contrast, search amongst auditory speech streams for the match with a face was independent of the number of streams being monitored concurrently, as long as localization was not required. We suggest that fundamental differences in the way auditory and visual information is encoded play a limiting role in crossmodal binding. On the basis of these unisensory limitations, we provide a unified explanation for several previous, apparently contradictory, findings.