Generalized Gradient Norm Clipping & Non-Euclidean (L₀, L₁)-Smoothness
This work introduces a hybrid non-Euclidean optimization method that generalizes gradient norm clipping by combining steepest descent and conditional gradient approaches. The method achieves the best of both worlds by establishing a descent property under a generalized notion of (L₀, L₁)-smoothness. Weight decay is incorporated in a principled manner by identifying a connection to the Frank-Wolfe short step. In the stochastic case, we show an order-optimal O(n^{-1/4}) convergence rate by leveraging a momentum-based gradient estimator. We discuss how to instantiate the algorithms for deep learning, which we dub Clipped Scion, and demonstrate their properties on image classification and language modeling. The code is available at https://github.com/LIONS-EPFL/ClippedScion.

By conditional-gradient-based methods, we mean methods that leverage a linear minimization oracle lmo(d) = arg min_{x ∈ D} ⟨d, x⟩ when updating their parameters with an open-loop stepsize.
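For intuition only, the two building blocks the abstract refers to can be illustrated concretely: classical gradient norm clipping and a linear minimization oracle (lmo) as defined above. The sketch below is a minimal NumPy illustration, assuming the Euclidean ball as the constraint set D; the function names (clip_gradient, lmo_l2_ball) and the choice of norm are assumptions for illustration, not the paper's Clipped Scion implementation.

```python
import numpy as np

def clip_gradient(g, c):
    """Classical gradient norm clipping: rescale g so that ||g||_2 <= c."""
    norm = np.linalg.norm(g)
    return g if norm <= c else (c / norm) * g

def lmo_l2_ball(d, radius):
    """Linear minimization oracle over the Euclidean ball of given radius:
    lmo(d) = arg min_{||x||_2 <= radius} <d, x> = -radius * d / ||d||_2.
    (Illustrative choice of D; not the paper's geometry.)"""
    norm = np.linalg.norm(d)
    if norm == 0.0:
        return np.zeros_like(d)
    return -radius * d / norm

# Toy usage: a clipped-gradient direction vs. an lmo-based (conditional-gradient-style) direction.
g = np.array([3.0, 4.0])           # gradient with Euclidean norm 5
print(clip_gradient(g, c=1.0))     # -> [0.6, 0.8], rescaled to norm 1
print(lmo_l2_ball(g, radius=1.0))  # -> [-0.6, -0.8], extreme point minimizing <g, x>
```

Note that over the Euclidean ball the lmo output is, up to sign, the same direction as the clipped gradient once the norm exceeds the threshold, which is the sense in which clipping and conditional-gradient updates can be viewed through a common lens; the paper develops this connection in general non-Euclidean geometries.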