Emotion information recovery potential of wav2vec2 network fine-tuned for speech recognition task
Fine-tuning has become the norm for achieving state-of-the-art performance with pre-trained networks such as foundation models. These models are typically pre-trained on large-scale unannotated data using self-supervised learning (SSL) methods. SSL-based pre-training on large-scale data enables the network to learn the inherent structure and properties of the data, giving it strong generalization and knowledge-transfer capabilities for various downstream tasks. When fine-tuned for a specific task, however, these models become task-specific, and fine-tuning may distort the patterns learned during pre-training. In this work, we investigate these distortions by analyzing the network's information recovery capabilities, designing a study in which speech emotion recognition is the target task and automatic speech recognition is an intermediary task. We show that the network recovers the task-specific information, but with a shift in its decisions; through attention analysis, we further demonstrate that some layers do not recover the information fully.
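As an illustration of the kind of layer-wise analysis the abstract describes, the sketch below compares hidden representations of an SSL pre-trained wav2vec2 checkpoint against one fine-tuned for ASR. The checkpoint names and the dummy audio input are assumptions for demonstration; this is not the authors' experimental setup.

```python
"""Minimal sketch: layer-wise comparison of a pre-trained wav2vec2 model
and an ASR fine-tuned one, as a rough proxy for studying how fine-tuning
shifts the representations. Checkpoints and input are assumptions."""
import torch
from transformers import Wav2Vec2FeatureExtractor, Wav2Vec2Model

# Assumed checkpoints: SSL pre-trained vs. fine-tuned for speech recognition.
PRETRAINED = "facebook/wav2vec2-base"
ASR_FINETUNED = "facebook/wav2vec2-base-960h"

extractor = Wav2Vec2FeatureExtractor.from_pretrained(ASR_FINETUNED)
models = {
    name: Wav2Vec2Model.from_pretrained(name, output_hidden_states=True).eval()
    for name in (PRETRAINED, ASR_FINETUNED)
}

# One second of dummy 16 kHz audio stands in for a real emotion-labelled utterance.
waveform = torch.randn(16000)
inputs = extractor(waveform.numpy(), sampling_rate=16000, return_tensors="pt")

with torch.no_grad():
    hidden = {
        name: m(inputs.input_values).hidden_states  # tuple of (num_layers + 1) tensors [1, T, D]
        for name, m in models.items()
    }

# Cosine similarity of mean-pooled frame embeddings per layer: a crude view of
# how far ASR fine-tuning has moved each layer from its pre-trained state.
for layer, (h_pre, h_asr) in enumerate(zip(hidden[PRETRAINED], hidden[ASR_FINETUNED])):
    pre, asr = h_pre.mean(dim=1), h_asr.mean(dim=1)
    sim = torch.nn.functional.cosine_similarity(pre, asr).item()
    print(f"layer {layer:2d}: cosine similarity {sim:+.3f}")
```

In practice, one would replace the dummy waveform with utterances from an emotion corpus and probe each layer for emotion information, but the structure of the comparison would be the same.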
2-s2.0-105009695309
École Polytechnique Fédérale de Lausanne
Institut Dalle Molle d'Intelligence Artificielle Perceptive
2025
979-8-3503-6874-1
REVIEWED
EPFL
| Event name | Event acronym | Event place | Event date |
| IEEE International Conference on Acoustics, Speech and Signal Processing | ICASSP 2025 | Hyderabad, India | 2025-04-06 - 2025-04-11 |