Infoscience
EPFL, École polytechnique fédérale de Lausanne
conference paper

Leveraging Gradient Information for Out-of-Domain Performance Estimations

Khramtsova, Ekaterina
•
Baktashmotlagh, Mahsa
•
Zuccon, Guido
•
Wang, Xi
•
Salzmann, Mathieu
October 3, 2025
Machine Learning and Knowledge Discovery in Databases. Research Track - European Conference, ECML PKDD 2025, Proceedings
European Conference on Machine Learning and Principles and Practice of Knowledge Discovery in Databases

One of the limitations of applying machine learning methods in real-world scenarios is the existence of a domain shift between the source (i.e., training) and target (i.e., test) datasets, which typically entails a significant performance drop. This is further complicated by the lack of annotated data in the target domain, making it impossible to quantitatively assess the model's performance. As such, there is a pressing need for methods able to estimate a model's performance on unlabeled target data. Most existing approaches address this by training a linear performance predictor, taking as input either an activation-based or a performance-based metric. As we will show, however, the accuracy of such predictors strongly depends on the domain shift. Recent research highlights the significance of network weights in understanding model generalizability. The early work of [46] proposes a method to predict out-of-distribution error by comparing the weights of the original model and of a model fine-tuned on the target data. However, this process is computationally demanding, especially for large models and input sizes. To address this, we propose an efficient approach for assessing a model's performance on target datasets by leveraging the gradients and Hessian of a model as indicators of weight differences. Our approach builds on the idea that lower norms of the gradient and Hessian matrices signify a flatter training landscape and better adaptability to new data. Our extensive experiments on standard object recognition benchmarks, using diverse network architectures, demonstrate the benefits of our method, which outperforms both activation-based and performance-based baselines by a large margin. By avoiding parameter updates, it is also more efficient than the weight-based approach of [46], while still estimating out-of-domain performance effectively. Our code is available in the following repository: https://github.com/khramtsova/hessian_performance_estimator/.
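The abstract's core intuition — that smaller gradient and Hessian norms indicate a flatter loss landscape — can be illustrated on a toy problem. The sketch below is not the paper's actual estimator; it uses a hypothetical quadratic loss L(w) = ½ wᵀAw (so the gradient is Aw and the Hessian is A) and the standard Hutchinson trace estimator, which needs only Hessian-vector products and so avoids materializing the full Hessian, mirroring how such quantities are kept tractable for large models:

```python
import numpy as np

# Toy quadratic loss L(w) = 0.5 * w^T A w: gradient = A w, Hessian = A.
# Intuition from the paper: smaller gradient/Hessian norms -> flatter
# training landscape -> better expected adaptability to shifted data.
rng = np.random.default_rng(0)
d = 50
M = rng.normal(size=(d, d))
A = (M @ M.T) / d                  # symmetric PSD Hessian of the toy loss
w = rng.normal(size=d)

grad = A @ w
grad_norm = float(np.linalg.norm(grad))

def hutchinson_trace(hvp, dim, n_samples=2000, rng=rng):
    """Estimate trace(H) as E[v^T H v] over Rademacher vectors v,
    using only Hessian-vector products (hvp), never the full matrix."""
    total = 0.0
    for _ in range(n_samples):
        v = rng.choice([-1.0, 1.0], size=dim)
        total += float(v @ hvp(v))
    return total / n_samples

trace_est = hutchinson_trace(lambda v: A @ v, d)
print(f"gradient norm: {grad_norm:.3f}")
print(f"Hessian trace (Hutchinson estimate): {trace_est:.3f}")
print(f"Hessian trace (exact):               {np.trace(A):.3f}")
```

For a neural network, the same pattern applies with the Hessian-vector product obtained via automatic differentiation rather than an explicit matrix.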

Type
conference paper
DOI
10.1007/978-3-032-06106-5_18
Scopus ID

2-s2.0-105020013870

Author(s)
Khramtsova, Ekaterina

The University of Queensland

Baktashmotlagh, Mahsa

The University of Queensland

Zuccon, Guido

The University of Queensland

Wang, Xi

Neusoft Corporation

Salzmann, Mathieu  

EPFL

Editors
Ribeiro, Rita P.
•
Soares, Carlos
•
Gama, João
•
Pfahringer, Bernhard
•
Japkowicz, Nathalie
•
Larrañaga, Pedro
•
Jorge, Alípio M.
•
Abreu, Pedro H.
Date Issued

2025-10-03

Publisher

Springer Science and Business Media Deutschland GmbH

Published in
Machine Learning and Knowledge Discovery in Databases. Research Track - European Conference, ECML PKDD 2025, Proceedings
Series title/Series vol.

Lecture Notes in Computer Science; 16018 LNCS

ISSN (of the series)

1611-3349

0302-9743

Start page

306

End page

321

Subjects

Generalisability Estimation

•

Performance Prediction

Editorial or Peer reviewed

REVIEWED

Written at

EPFL

EPFL units
CVLAB  
Event name

European Conference on Machine Learning and Principles and Practice of Knowledge Discovery in Databases

Event acronym

ECML PKDD 2025

Event place

Porto, Portugal

Event date

2025-09-15 - 2025-09-19

Relation

IsSupplementedBy

Related work

[CODE] hessian_performance_estimator

URL/DOI

https://github.com/khramtsova/hessian_performance_estimator/
Available on Infoscience
November 12, 2025
Use this identifier to reference this record
https://infoscience.epfl.ch/handle/20.500.14299/255785
Infoscience is a service managed and provided by the Library and IT Services of EPFL. © EPFL, all rights reserved.