Infoscience
EPFL, École polytechnique fédérale de Lausanne
Conference paper (not in proceedings)

On the Stability of Fine-tuning BERT: Misconceptions, Explanations, and Strong Baselines

Mosbach, Marius • Andriushchenko, Maksym • Klakow, Dietrich
2021
9th International Conference on Learning Representations (ICLR)

Fine-tuning pre-trained transformer-based language models such as BERT has become a common practice that dominates leaderboards across various NLP benchmarks. Despite the strong empirical performance of fine-tuned models, fine-tuning is an unstable process: training the same model with multiple random seeds can result in large variance in task performance. Previous literature (Devlin et al., 2019; Lee et al., 2020; Dodge et al., 2020) identified two potential reasons for the observed instability: catastrophic forgetting and the small size of the fine-tuning datasets. In this paper, we show that both hypotheses fail to explain the fine-tuning instability. We analyze BERT, RoBERTa, and ALBERT, fine-tuned on three commonly used datasets from the GLUE benchmark, and show that the observed instability is caused by optimization difficulties that lead to vanishing gradients. Additionally, we show that the remaining variance in downstream task performance can be attributed to differences in generalization: fine-tuned models with the same training loss exhibit noticeably different test performance. Based on our analysis, we present a simple but strong baseline that makes fine-tuning BERT-based models significantly more stable than previously proposed approaches. Code to reproduce our results is available online: this https URL.
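The optimization difficulty the abstract alludes to is often traced to the BERTAdam variant of Adam, which drops the bias-correction terms. As an illustration only (this is not the paper's released code; the function name `adam_step` and the scalar quadratic example are ours), here is a minimal pure-Python sketch showing that omitting bias correction removes Adam's implicit warmup and inflates the very first update:

```python
import math

def adam_step(theta, grad, m, v, t, lr=1e-3, beta1=0.9, beta2=0.999,
              eps=1e-8, bias_correction=True):
    """One Adam update on a scalar parameter.

    With bias_correction=False this mimics a BERTAdam-style optimizer that
    omits the correction terms; restoring them (together with a small
    learning rate and longer training) is the kind of simple, stable
    baseline the paper advocates.
    """
    m = beta1 * m + (1 - beta1) * grad           # first-moment estimate
    v = beta2 * v + (1 - beta2) * grad ** 2      # second-moment estimate
    if bias_correction:
        m_hat = m / (1 - beta1 ** t)             # debias toward true moments
        v_hat = v / (1 - beta2 ** t)
    else:
        m_hat, v_hat = m, v                      # no debiasing (BERTAdam-style)
    theta -= lr * m_hat / (math.sqrt(v_hat) + eps)
    return theta, m, v

# First update on f(theta) = theta^2 (gradient 2*theta), starting at theta = 1.0:
with_corr, _, _ = adam_step(1.0, 2.0, 0.0, 0.0, t=1)
without_corr, _, _ = adam_step(1.0, 2.0, 0.0, 0.0, t=1, bias_correction=False)
# Without correction, the first step is larger by the factor
# (1 - beta1) / sqrt(1 - beta2) ~= 3.16 at t = 1, i.e. no implicit warmup.
print(1.0 - with_corr, 1.0 - without_corr)
```

With the defaults above, the corrected first step has magnitude roughly `lr`, while the uncorrected one is about 3.16 times larger, which is one concrete way overly aggressive early updates can arise.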

Name: On the Stability of Fine-tuning BERT- Misconceptions, Explanations, and Strong Baselines.pdf
Type: Preprint
Version: Submitted version (Preprint)
Access type: Open access
License condition: MIT License
Size: 1.55 MB
Format: Adobe PDF
Checksum (MD5): 9bf527ce7c64303ddc2c642d3fcb50e2

Contact: infoscience@epfl.ch


Infoscience is a service managed and provided by the Library and IT Services of EPFL. © EPFL, all rights reserved.