Infoscience
conference paper

3D Face Reconstruction Error Decomposed: A Modular Benchmark for Fair and Fast Method Evaluation

Sariyanidi, Evangelos • Ferrari, Claudio • Nocentini, Federico • Berretti, Stefano • Cavallaro, Andrea • Tunc, Birkan
2025
2025 IEEE 19th International Conference on Automatic Face and Gesture Recognition, FG 2025
The 19th IEEE International Conference on Automatic Face and Gesture Recognition

Computing the standard benchmark metric for 3D face reconstruction, namely geometric error, requires a number of steps, such as mesh cropping, rigid alignment, or point correspondence. Current benchmark tools are monolithic (they implement a specific combination of these steps), even though there is no consensus on the best way to measure error. We present a toolkit for a Modularized 3D Face reconstruction Benchmark (M3DFB), where the fundamental components of error computation are segregated and interchangeable, allowing one to quantify the effect of each. Furthermore, we propose a new component, namely correction, and present a computationally efficient approach that penalizes mesh topology inconsistency. Using this toolkit, we test 16 error estimators with 10 reconstruction methods on two real and two synthetic datasets. Critically, the widely used ICP-based estimator provides the worst benchmarking performance, as it significantly alters the true ranking of the top-5 reconstruction methods. Notably, the correlation of ICP with the true error can be as low as 0.41. Moreover, non-rigid alignment leads to significant improvement (correlation larger than 0.90), highlighting the importance of annotating 3D landmarks on datasets. Finally, the proposed correction scheme, together with non-rigid warping, achieves an accuracy on a par with the best non-rigid ICP-based estimators while running an order of magnitude faster. Our open-source codebase is designed for researchers to easily compare alternatives for each component, thus helping to accelerate progress in benchmarking for 3D face reconstruction and supporting the improvement of learned reconstruction methods, which depend on accurate error estimation for effective training.
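The abstract describes geometric error as the output of a pipeline of interchangeable steps (cropping, alignment, correspondence). The sketch below is only a minimal illustration of that modular idea on toy point clouds; it is not the M3DFB codebase, and all function names and interfaces are hypothetical assumptions. It composes a standard rigid (Kabsch) alignment step with a nearest-neighbour correspondence step, so either component could be swapped for an alternative; a real benchmark would additionally crop meshes and account for topology inconsistency.

import numpy as np

def rigid_align(source, target):
    """Least-squares rigid (Kabsch) alignment of `source` onto `target`;
    assumes the two (N, 3) point sets are already in correspondence."""
    mu_s, mu_t = source.mean(axis=0), target.mean(axis=0)
    h = (source - mu_s).T @ (target - mu_t)
    u, _, vt = np.linalg.svd(h)
    d = np.sign(np.linalg.det(vt.T @ u.T))          # guard against reflections
    r = vt.T @ np.diag([1.0, 1.0, d]) @ u.T
    return (source - mu_s) @ r.T + mu_t

def nearest_neighbor_distances(source, target):
    """Correspondence step: distance from each source vertex to its nearest
    target vertex (the building block of ICP-style estimators)."""
    d = np.linalg.norm(source[:, None, :] - target[None, :, :], axis=2)
    return d.min(axis=1)

def geometric_error(recon, scan, align=rigid_align,
                    correspond=nearest_neighbor_distances):
    """Compose interchangeable components (align, correspond) into one
    scalar error estimate, so alternatives for each step can be compared."""
    aligned = align(recon, scan)
    return correspond(aligned, scan).mean()

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    scan = rng.normal(size=(500, 3))                        # toy "ground-truth" scan
    recon = scan + rng.normal(scale=0.01, size=scan.shape)  # toy "reconstruction"
    print(f"mean geometric error: {geometric_error(recon, scan):.4f}")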

Type
conference paper
DOI
10.1109/FG61629.2025.11099357
Scopus ID

2-s2.0-105014517439

Author(s)
Sariyanidi, Evangelos

The Children's Hospital of Philadelphia

Ferrari, Claudio

Università di Parma

Nocentini, Federico

Università degli Studi di Firenze

Berretti, Stefano

Università degli Studi di Firenze

Cavallaro, Andrea  

École Polytechnique Fédérale de Lausanne

Tunc, Birkan

The Children's Hospital of Philadelphia

Date Issued

2025

Publisher

Institute of Electrical and Electronics Engineers Inc.

Published in
2025 IEEE 19th International Conference on Automatic Face and Gesture Recognition, FG 2025
ISBN of the book

979-8-3315-5341-8

Editorial or Peer reviewed

REVIEWED

Written at

EPFL

EPFL units
LIDIAP  
Event name
The 19th IEEE International Conference on Automatic Face and Gesture Recognition
Event acronym
FG 2025
Event place
Clearwater, US
Event date
2025-05-26 - 2025-05-30

Funder(s)

National Institute of Child Health and Human Development

Office of the Director (OD)
Available on Infoscience
September 8, 2025
Use this identifier to reference this record
https://infoscience.epfl.ch/handle/20.500.14299/253887