Towards Stable and Efficient Adversarial Training against $l_1$ Bounded Adversarial Attacks
We address the problem of stably and efficiently training a deep neural network that is robust to adversarial perturbations bounded in $l_1$ norm. We demonstrate that achieving robustness against $l_1$-bounded perturbations is more challenging than in the $l_2$ or $l_\infty$ cases, because adversarial training against $l_1$-bounded perturbations is more prone to catastrophic overfitting and training instabilities. Our analysis links these issues to the coordinate descent strategy used in existing methods. We address this by introducing Fast-EG-$l_1$, an efficient adversarial training algorithm based on Euclidean geometry and free of coordinate descent. Fast-EG-$l_1$ incurs no additional memory cost and introduces no extra hyper-parameters to tune. Our experimental results on various datasets demonstrate that Fast-EG-$l_1$ yields the best and most stable robustness against $l_1$-bounded adversarial attacks among methods of comparable computational complexity. Code and checkpoints are available at https://github.com/IVRL/FastAdvL.
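To make the "Euclidean geometry, free of coordinate descent" idea concrete, the sketch below shows one way such an attack step could look: instead of updating a few coordinates at a time (the steepest-descent direction in $l_1$ geometry), it takes an $l_2$-normalized gradient-ascent step and then projects the perturbation back onto the $l_1$ ball. This is a minimal illustration under our own assumptions, not the authors' implementation; the function names (`project_l1_ball`, `fast_l1_step`) and the step-size convention are hypothetical, and the projection follows the standard sorting-based algorithm of Duchi et al. (2008).

```python
# Minimal sketch of a Euclidean-geometry l1 attack step (assumed, not the
# paper's exact algorithm): l2-normalized gradient ascent + l1-ball projection.
import torch
import torch.nn.functional as F

def project_l1_ball(delta, eps):
    """Project each perturbation in the batch onto the l1 ball of radius eps,
    using the sorting-based algorithm of Duchi et al. (2008)."""
    flat = delta.view(delta.size(0), -1)
    abs_flat = flat.abs()
    needs_proj = abs_flat.sum(dim=1) > eps  # skip points already in the ball
    if needs_proj.any():
        v = abs_flat[needs_proj]
        mu, _ = torch.sort(v, dim=1, descending=True)
        cumsum = mu.cumsum(dim=1)
        arange = torch.arange(1, v.size(1) + 1, device=v.device, dtype=v.dtype)
        # rho = largest j such that mu_j > (cumsum_j - eps) / j
        rho = (mu * arange > cumsum - eps).sum(dim=1, keepdim=True)
        theta = (cumsum.gather(1, rho - 1) - eps) / rho.to(v.dtype)
        abs_flat[needs_proj] = torch.clamp(v - theta, min=0.0)
    return (abs_flat * flat.sign()).view_as(delta)

def fast_l1_step(model, x, y, delta, eps, alpha):
    """One Euclidean gradient-ascent step on the loss, then l1 projection."""
    delta = delta.detach().requires_grad_(True)
    loss = F.cross_entropy(model(x + delta), y)
    grad, = torch.autograd.grad(loss, delta)
    # Full-gradient (Euclidean) ascent direction, normalized in l2, instead
    # of coordinate-wise updates as in l1 steepest descent.
    g_flat = grad.view(grad.size(0), -1)
    g_unit = g_flat / (g_flat.norm(dim=1, keepdim=True) + 1e-12)
    delta = delta.detach() + alpha * g_unit.view_as(delta)
    delta = project_l1_ball(delta, eps)
    # Keep the adversarial example inside the valid pixel range [0, 1].
    return torch.clamp(x + delta, 0.0, 1.0) - x
```

In a single-step ("fast") adversarial training loop, one would initialize `delta` randomly inside the $l_1$ ball, call `fast_l1_step` once per batch, and train on `x + delta`; the projection keeps every update dense rather than confined to a few coordinates, which is the contrast with coordinate-descent-style $l_1$ attacks drawn in the abstract.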