
Speech comprehension aided by multiple modalities: Behavioural and neural interactions

Carolyn McGettigan*, Andrew Faulkner, Irene Altarelli, Jonas Obleser, Harriet Baverstock, Sophie K. Scott

*Corresponding author for this work

Abstract

Speech comprehension is a complex human skill, the performance of which requires the perceiver to combine information from several sources - e.g. voice, face, gesture, linguistic context - to achieve an intelligible and interpretable percept. We describe a functional imaging investigation of how auditory, visual and linguistic information interact to facilitate comprehension. Our specific aims were to investigate the neural responses to these different information sources, alone and in interaction, and further to use behavioural speech comprehension scores to address sites of intelligibility-related activation in multifactorial speech comprehension. In fMRI, participants passively watched videos of spoken sentences, in which we varied Auditory Clarity (with noise-vocoding), Visual Clarity (with Gaussian blurring) and Linguistic Predictability. Main effects of enhanced signal with increased auditory and visual clarity were observed in overlapping regions of posterior STS. Two-way interactions of the factors (auditory × visual, auditory × predictability) in the neural data were observed outside temporal cortex, where positive signal change in response to clearer facial information and greater semantic predictability was greatest at intermediate levels of auditory clarity. Overall changes in stimulus intelligibility by condition (as determined using an independent behavioural experiment) were reflected in the neural data by increased activation predominantly in bilateral dorsolateral temporal cortex, as well as inferior frontal cortex and left fusiform gyrus. Specific investigation of intelligibility changes at intermediate auditory clarity revealed a set of regions, including posterior STS and fusiform gyrus, showing enhanced responses to both visual and linguistic information. 
Finally, an individual differences analysis showed that greater comprehension performance in the scanning participants (measured in a post-scan behavioural test) was associated with increased activation in left inferior frontal gyrus and left posterior STS. The current multimodal speech comprehension paradigm demonstrates recruitment of a wide comprehension network in the brain, in which posterior STS and fusiform gyrus form sites for convergence of auditory, visual and linguistic information, while left-dominant sites in temporal and frontal cortex support successful comprehension.

Original language: English
Journal: Neuropsychologia
Volume: 50
Issue number: 5
Pages (from - to): 762-776
Number of pages: 15
ISSN: 0028-3932
Publication status: Published - 04.2012

Funding

C.M. and S.K.S. are funded by Wellcome Trust Grant WT074414MA awarded to S.K.S. J.O. is funded by the Max Planck Society. The authors would like to thank Kate Wakeling for assistance in stimulus preparation and the staff at the Birkbeck-UCL Centre for Neuroimaging for technical advice and support.

UN SDGs

This output contributes to the following Sustainable Development Goal(s):

  1. SDG 3 – Good Health and Well-being
  2. SDG 5 – Gender Equality
  3. SDG 10 – Reduced Inequalities

Strategic research areas and centres

  • Research focus: Brain, Hormones, Behaviour - Center for Brain, Behavior and Metabolism (CBBM)
