Rapid invisible frequency tagging reveals nonlinear integration of auditory and visual information
Authors: Linda Drijvers, Ole Jensen, Eelke Spaak
Affiliations:
1. Donders Institute for Brain, Cognition, and Behaviour, Centre for Cognition, Montessorilaan 3, Radboud University, Nijmegen, The Netherlands
2. Max Planck Institute for Psycholinguistics, Nijmegen, The Netherlands
3. School of Psychology, Centre for Human Brain Health, University of Birmingham, Birmingham, United Kingdom
4. Donders Institute for Brain, Cognition, and Behaviour, Centre for Cognitive Neuroimaging, Kapittelweg 29, Radboud University, Nijmegen, The Netherlands
Abstract: During communication in real-life settings, the brain integrates information from auditory and visual modalities to form a unified percept of our environment. In the current magnetoencephalography (MEG) study, we used rapid invisible frequency tagging (RIFT) to generate steady-state evoked fields and investigated the integration of audiovisual information in a semantic context. We presented participants with videos of an actress uttering action verbs (auditory; tagged at 61 Hz) accompanied by a gesture (visual; tagged at 68 Hz, using a projector with a 1,440 Hz refresh rate). Integration difficulty was manipulated by lower-order auditory factors (clear/degraded speech) and higher-order visual factors (congruent/incongruent gesture). We identified MEG spectral peaks at the individual tagging frequencies (61 and 68 Hz). We furthermore observed a peak at the intermodulation frequency of the auditory- and visual-tagged signals (f_visual − f_auditory = 7 Hz), specifically when lower-order integration was easiest because signal quality was optimal. This intermodulation peak is a signature of nonlinear audiovisual integration and was strongest in left inferior frontal gyrus and left temporal regions, areas known to be involved in speech-gesture integration. The enhanced power at the intermodulation frequency thus reflects the ease of lower-order audiovisual integration and demonstrates that speech-gesture information interacts in higher-order language areas. Furthermore, we provide a proof of principle for the use of RIFT to study the integration of audiovisual stimuli in relation to, for instance, semantic context.
Keywords: ASSR, audiovisual integration, frequency tagging, gesture, intermodulation frequency, magnetoencephalography, multimodal integration, oscillations, speech, SSVEP
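Note: the abstract's central claim, that a spectral peak at f_visual − f_auditory appears only when the two tagged inputs interact nonlinearly, follows from a basic trigonometric identity: sin(a)·sin(b) = 0.5·[cos(a − b) − cos(a + b)], so any multiplicative mixing of the 61 Hz and 68 Hz tags produces components at 7 Hz and 129 Hz. The minimal Python sketch below illustrates this. It is not the authors' analysis code; the tagging frequencies (61/68 Hz) and the 1,440 Hz rate come from the abstract, while the signal amplitudes, duration, and the specific multiplicative interaction are illustrative assumptions.

```python
# Minimal sketch: why nonlinear integration of two frequency-tagged signals
# produces a spectral peak at the intermodulation frequency (assumed model,
# not the authors' pipeline).
import numpy as np

FS = 1440.0                 # sampling rate (Hz), matching the projector refresh rate
DURATION = 10.0             # seconds of simulated signal
F_AUD, F_VIS = 61.0, 68.0   # auditory and visual tagging frequencies (Hz)

t = np.arange(0, DURATION, 1.0 / FS)
auditory = np.sin(2 * np.pi * F_AUD * t)   # simulated auditory steady-state response
visual = np.sin(2 * np.pi * F_VIS * t)     # simulated visual steady-state response

# A purely linear system only sums its inputs, so its spectrum contains
# peaks at 61 and 68 Hz alone.
linear = auditory + visual

# A nonlinear (here: multiplicative) interaction mixes the two inputs.
# sin(a)*sin(b) = 0.5*[cos(a-b) - cos(a+b)], so the product term adds
# intermodulation components at 68 - 61 = 7 Hz and 68 + 61 = 129 Hz.
nonlinear = linear + 0.5 * auditory * visual

def peak_power(signal, freq):
    """Return spectral power at `freq` from an FFT of `signal`."""
    spectrum = np.abs(np.fft.rfft(signal)) ** 2
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / FS)
    return spectrum[np.argmin(np.abs(freqs - freq))]

for label, sig in [("linear", linear), ("nonlinear", nonlinear)]:
    print(label, "power at 7 Hz:", round(peak_power(sig, 7.0), 2))
# Only the nonlinear signal shows power at the 7 Hz intermodulation frequency.
```

Running the sketch prints near-zero 7 Hz power for the linear sum and a large value for the nonlinear mixture, mirroring the paper's logic: an intermodulation peak in MEG can only arise if auditory and visual drive converge on a nonlinear stage.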