Abstract

The ability to correctly interpret emotional signals from others is crucial for successful social interaction. Previous neuroimaging studies have shown that voice-sensitive auditory areas [1-3] respond more strongly to a broad spectrum of vocally expressed emotions than to neutral speech melody (prosody). However, this enhanced response occurs irrespective of the specific emotion category, making it impossible to distinguish different vocal emotions with conventional analyses [4-8]. Here, we presented pseudowords spoken in five prosodic categories (anger, sadness, neutral, relief, joy) during event-related functional magnetic resonance imaging (fMRI) and then employed multivariate pattern analysis [9, 10] to discriminate between these categories on the basis of the spatial response patterns within the auditory cortex. Our results demonstrate successful decoding of vocal emotions from fMRI responses in bilateral voice-sensitive areas, a result that could not be obtained from averaged response amplitudes alone. Pairwise comparisons showed that each category could be classified against every alternative, indicating for each emotion a specific spatial signature that generalized across speakers. These results demonstrate for the first time that emotional information is represented by distinct spatial patterns that can be decoded from brain activity in modality-specific cortical areas.
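To make the pairwise decoding logic concrete, the following is a minimal sketch in Python using scikit-learn. Everything in it is an assumption for illustration only: the synthetic voxel patterns, the trial and voxel counts, the linear support vector classifier, and the 5-fold cross-validation scheme stand in for the study's actual data and pipeline, which the abstract does not specify.

# Illustrative sketch: pairwise decoding of emotion categories from
# SIMULATED voxel response patterns. Data generation, classifier choice,
# and cross-validation scheme are assumptions, not the authors' pipeline.
from itertools import combinations

import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

rng = np.random.default_rng(0)

categories = ["anger", "sadness", "neutral", "relief", "joy"]
n_trials, n_voxels = 40, 200  # hypothetical trials per category / voxels in ROI

# Simulate a weak, category-specific spatial signature embedded in noise.
signatures = {c: rng.normal(0.0, 1.0, n_voxels) for c in categories}
X = np.vstack([signatures[c] * 0.3 + rng.normal(0.0, 1.0, (n_trials, n_voxels))
               for c in categories])
y = np.repeat(categories, n_trials)

# Pairwise classification: each emotion against each alternative,
# scored with stratified 5-fold cross-validation.
for a, b in combinations(categories, 2):
    mask = np.isin(y, [a, b])
    acc = cross_val_score(SVC(kernel="linear"), X[mask], y[mask], cv=5).mean()
    print(f"{a} vs {b}: mean accuracy {acc:.2f} (chance = 0.50)")

The key point the sketch captures is the inference pattern of the paper: if every pairwise classifier performs above chance on spatial patterns, each emotion must carry its own distinguishable spatial signature, something a single averaged response amplitude per category cannot reveal.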
