Infoscience
EPFL, École polytechnique fédérale de Lausanne
conference paper

Do We Always Need the Simplicity Bias? Looking for Optimal Inductive Biases in the Wild

Teney, Damien • Jiang, Liangze • Gogianu, Florin • Abbasnejad, Ehsan
June 10, 2025
2025 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR 2025). Proceedings
The IEEE/CVF Conference on Computer Vision and Pattern Recognition 2025

Neural architectures tend to fit their data with relatively simple functions. This "simplicity bias" is widely regarded as key to their success. This paper explores the limits of this principle. Building on recent findings that the simplicity bias stems from ReLU activations [96], we introduce a method to meta-learn new activation functions and inductive biases better suited to specific tasks.

Findings. We identify multiple tasks where the simplicity bias is inadequate and ReLUs suboptimal. In these cases, we learn new activation functions that perform better by inducing a prior of higher complexity. Interestingly, these cases correspond to domains where neural networks have historically struggled: tabular data, regression tasks, cases of shortcut learning, and algorithmic grokking tasks. In comparison, the simplicity bias induced by ReLUs proves adequate on image tasks, where the best learned activations are nearly identical to ReLUs and GELUs.

Implications. Contrary to popular belief, the simplicity bias of ReLU networks is not universally useful. It is near-optimal for image classification, but other inductive biases are sometimes preferable. We show that activation functions can control these inductive biases, but future tailored architectures might provide further benefits. Advances are still needed to characterize a model's inductive biases beyond "complexity" and their adequacy with the data.
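The core idea of the abstract — treating the activation function itself as a learnable object selected on held-out performance — can be sketched minimally. Everything below is illustrative, not the paper's method: the activation is a hypothetical 3-parameter blend of ReLU, identity, and sine, the outer loop is plain random search rather than gradient-based meta-learning, and the task is a toy sin(3x) regression where a prior of higher complexity can plausibly help.

```python
import numpy as np

# Hypothetical parameterized activation: a blend of ReLU, identity, and sine.
# The coefficients c are the "inductive bias" knobs the outer loop tunes.
def activation(x, c):
    return c[0] * np.maximum(x, 0.0) + c[1] * x + c[2] * np.sin(x)

def inner_fit(c, seed=0, steps=500, lr=0.005):
    """Inner loop: with the first layer frozen (random features), fit the
    output layer on a toy regression task y = sin(3x) by gradient descent
    on the MSE, and return the validation MSE."""
    rng = np.random.default_rng(seed)          # same data/weights for every c
    X = rng.uniform(-2, 2, size=(64, 1)); y = np.sin(3 * X)
    Xv = rng.uniform(-2, 2, size=(64, 1)); yv = np.sin(3 * Xv)
    W1 = rng.normal(0.0, 2.0, (1, 16)); b1 = rng.normal(0.0, 1.0, 16)
    W2 = np.zeros((16, 1))
    H = activation(X @ W1 + b1, c)             # hidden features, fixed per run
    for _ in range(steps):
        W2 -= lr * H.T @ (H @ W2 - y) / len(X) # gradient step on the MSE
    Hv = activation(Xv @ W1 + b1, c)
    return float(np.mean((Hv @ W2 - yv) ** 2))

def meta_learn(trials=30, seed=1):
    """Outer loop: random search over activation coefficients; plain ReLU
    (c = [1, 0, 0]) is included as candidate 0 for reference."""
    rng = np.random.default_rng(seed)
    candidates = [np.array([1.0, 0.0, 0.0])]
    candidates += [rng.uniform(-1.5, 1.5, size=3) for _ in range(trials)]
    losses = [inner_fit(c) for c in candidates]
    best = int(np.nanargmin(losses))           # ignore diverged candidates
    return candidates[best], losses[best], losses[0]

best_c, best_loss, relu_loss = meta_learn()
```

The selected coefficients can never do worse than the ReLU baseline here, since ReLU is in the candidate pool; whether a non-ReLU activation actually wins depends on the task, which is precisely the abstract's point.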

Type: conference paper
DOI: 10.1109/cvpr52734.2025.00017
Author(s): Teney, Damien; Jiang, Liangze (École Polytechnique Fédérale de Lausanne); Gogianu, Florin; Abbasnejad, Ehsan
Date Issued: 2025-06-10
Publisher: IEEE
Published in: 2025 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR 2025). Proceedings
DOI of the book: https://doi.org/10.1109/CVPR52734.2025
ISBN of the book: 979-8-3315-4364-8
Start page: 79
End page: 90
Editorial or Peer reviewed: REVIEWED
Written at: EPFL
EPFL units: LIDIAP
Event name: The IEEE/CVF Conference on Computer Vision and Pattern Recognition 2025
Event place: Nashville, Tennessee, US
Event date: 2025-06-11 - 2025-06-15

Available on Infoscience: August 20, 2025
Use this identifier to reference this record: https://infoscience.epfl.ch/handle/20.500.14299/253269