Abstract
Visual information (lip movements) contributes significantly to speech comprehension, raising the question of how audiovisual (AV) integration is implemented neurally during speech processing. To replicate and extend earlier neuroimaging findings, we compared two analysis approaches in a slow event-related fMRI study of healthy native speakers of German who were exposed to AV speech stimuli (disyllabic nouns) whose audio and visual signals were either congruent or incongruent. First, the data were subjected to whole-brain general linear model analysis after transformation of all individual data sets into standard space. Second, a region-of-interest (ROI) approach based on individual anatomy was used, with ROIs defined in areas previously identified as important for AV processing. The standard-space analysis revealed a widespread cortical network, including the posterior part of the left superior temporal sulcus, Broca's region, and its right-hemispheric counterpart, showing increased activity for incongruent stimuli. The ROI approach made it possible to identify differences in activity between Brodmann areas 44 and 45 within Broca's area for incongruent stimulation, and also allowed us to study activity in subdivisions of the superior temporal regions. The complementary strengths and weaknesses of the two analysis approaches are discussed.
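The two approaches described above can be illustrated with a minimal sketch using nilearn: a first-level GLM contrasting incongruent against congruent AV trials on spatially normalized data, followed by extraction of the same contrast within anatomically defined ROIs. All file paths, the TR, and the event timings below are hypothetical placeholders, not the authors' actual pipeline.

```python
# Illustrative sketch only: congruent-vs-incongruent GLM contrast plus ROI
# extraction with nilearn. Paths, TR, and onsets are placeholder assumptions.
import pandas as pd
from nilearn.glm.first_level import FirstLevelModel
from nilearn.maskers import NiftiLabelsMasker

# Hypothetical slow event-related design with two trial types.
events = pd.DataFrame({
    "onset":      [10.0, 40.0, 70.0, 100.0],   # seconds (placeholder values)
    "duration":   [2.0, 2.0, 2.0, 2.0],
    "trial_type": ["congruent", "incongruent",
                   "congruent", "incongruent"],
})

# Whole-brain GLM on a functional run already transformed to standard space.
glm = FirstLevelModel(t_r=2.5, hrf_model="spm", smoothing_fwhm=6)
glm = glm.fit("sub-01_task-av_bold_mni.nii.gz", events=events)

# Contrast of interest: incongruent > congruent AV speech.
z_map = glm.compute_contrast("incongruent - congruent", output_type="z_score")
z_map.to_filename("incongruent_gt_congruent_zmap.nii.gz")

# ROI approach: average the contrast within individually defined anatomical
# labels (e.g. BA 44/45 and superior temporal subdivisions).
roi_masker = NiftiLabelsMasker(labels_img="sub-01_anatomical_rois.nii.gz")
roi_values = roi_masker.fit_transform(z_map)   # shape: (1, n_rois)
print(roi_values)
```

In the whole-brain branch the contrast map is evaluated voxel-wise across subjects in standard space, whereas the ROI branch reduces each subject's map to a few anatomically motivated values, trading spatial coverage for sensitivity to individual anatomy.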
| Original language | English |
| --- | --- |
| Journal | Human Brain Mapping |
| Volume | 30 |
| Issue number | 7 |
| Pages (from-to) | 1990-1999 |
| Number of pages | 10 |
| ISSN | 1065-9471 |
| DOIs | |
| Publication status | Published - 01.07.2009 |
Research Areas and Centers
- Academic Focus: Center for Brain, Behavior and Metabolism (CBBM)