A network analysis of audiovisual affective speech perception

H. Jansma, A. Roebroeck, T. F. Münte*

*Corresponding author for this work

Abstract

In this study we were interested in the neural system supporting the audiovisual (AV) integration of emotional facial expression and emotional prosody. To this end, normal participants were exposed to short videos of a computer-animated face voicing emotionally positive or negative words with the appropriate prosody; the facial expression was either neutral or emotionally congruent with the spoken word. To reveal the neural network involved in affective AV integration, standard univariate analysis of functional magnetic resonance imaging (fMRI) data was followed by random-effects Granger causality mapping (RFX-GCM), with the regions that distinguished emotional from neutral facial expressions in the univariate analysis serving as seed regions. Compared to neutral trials, trials showing emotional expressions activated primarily the bilateral amygdala, fusiform gyrus, middle temporal gyrus/superior temporal sulcus, and inferior occipital gyrus. With either the left or the right amygdala as seed region, RFX-GCM revealed connectivity with the right-hemispheric fusiform gyrus, with the direction of influence indicating that the fusiform gyrus sends information to the amygdala. These results led to a working model for face perception in general, and for AV affective integration in particular, that elaborates on and adapts existing models.
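The study itself uses BrainVoyager's RFX-GCM on fMRI seed regions; purely as an illustration of the underlying idea of Granger causality (directed influence inferred from temporal precedence), the following Python sketch tests whether one simulated time series predicts another using statsmodels. The variable names source and target are hypothetical stand-ins for a fusiform seed and an amygdala time course; this is a toy example, not the authors' analysis pipeline.

import numpy as np
from statsmodels.tsa.stattools import grangercausalitytests

rng = np.random.default_rng(0)
n = 500
source = np.zeros(n)  # stand-in for a fusiform-gyrus time course
target = np.zeros(n)  # stand-in for an amygdala time course
for t in range(1, n):
    source[t] = 0.6 * source[t - 1] + rng.normal()
    # target depends on the source's past, so source Granger-causes target
    target[t] = 0.5 * target[t - 1] + 0.4 * source[t - 1] + rng.normal()

# Column order is [effect, putative cause]: the test asks whether the
# second column's past improves prediction of the first column beyond
# the first column's own past.
data = np.column_stack([target, source])
results = grangercausalitytests(data, maxlag=2, verbose=False)
f_stat, p_value = results[1][0]["ssr_ftest"][:2]
print(f"lag-1 F = {f_stat:.2f}, p = {p_value:.4f}")

With the coupling coefficient set as above, the test should reject the null of no Granger causality in the source-to-target direction, mirroring the kind of directed fusiform-to-amygdala inference reported in the abstract.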

Original language: English
Journal: Neuroscience
Volume: 256
Pages (from-to): 230-241
Number of pages: 12
ISSN: 0306-4522
DOI
Publication status: Published - 03.01.2014
