Audiovisual integration during speech comprehension: An fMRI study comparing ROI-based and whole brain analyses

Gregor R. Szycik, Henk Jansma, Thomas F. Münte



Visual information (lip movements) contributes significantly to speech comprehension, raising the question of how audiovisual (AV) integration is implemented neurally during speech processing. To replicate and extend earlier neuroimaging findings, we compared two analysis approaches in a slow event-related fMRI study of healthy native speakers of German who were exposed to AV speech stimuli (disyllabic nouns) whose audio and visual signals were either congruent or incongruent. First, the data were subjected to a whole brain general linear model analysis after transformation of all individual data sets into standard space. Second, a region of interest (ROI) approach based on individual anatomy was used, with ROIs defined in areas previously identified as important for AV processing. The standard space analysis revealed a widespread cortical network, including the posterior part of the left superior temporal sulcus, Broca's region, and its right hemispheric counterpart, showing increased activity for incongruent stimuli. The ROI approach allowed us to identify differences in activity between Brodmann areas 44 and 45 within Broca's area for incongruent stimulation, and also to study the activity of subdivisions of superior temporal regions. The complementary strengths and weaknesses of the two analysis approaches are discussed.

Original language: English
Journal: Human Brain Mapping
Issue number: 7
Pages (from-to): 1990-1999
Number of pages: 10
Publication status: Published - 01.07.2009

Research Areas and Centers

  • Academic Focus: Center for Brain, Behavior and Metabolism (CBBM)
