Infoscience

EPFL, École polytechnique fédérale de Lausanne

Infoscience

  • English
  • French
Log In
  1. Home
  2. Academic and Research Output
  3. EPFL thesis
  4. Robust Training and Verification of Deep Neural Networks
 
doctoral thesis

Robust Training and Verification of Deep Neural Networks

Latorre Gomez, Fabian Ricardo  
2023

According to the Artificial Intelligence Act proposed by the European Commission (expected to pass at the end of 2023), the class of High-Risk AI Systems (Title III) comprises several important applications of Deep Learning, such as autonomous driving vehicles and robot-assisted surgery, which rely on supervised learning with image data. According to Article 15 of this legal framework, such systems must be resilient to errors, faults, or inconsistencies that may occur within the environment, and to attempts by unauthorised third parties to alter their performance by exploiting the system's vulnerabilities. Non-compliance can result in fines and a forced withdrawal from the market for infringing products and companies.

In this work, we develop theory and algorithms to train and certify robust Deep Neural Networks. Our theoretical results and proposed algorithms provide resilience in different scenarios, such as the presence of adversarial perturbations or the injection of random noise into the input features. In this way, our framework allows compliance with the requirements of the AI Act, and is a step towards a safe rollout of High-Risk AI systems based on Deep Learning.

To summarize, the main contributions of this Ph.D. thesis are:

(I) the first algorithm for certifying the robustness of Deep Neural Networks using Polynomial Optimization, by upper-bounding their Lipschitz constant;
(II) the first algorithm with guarantees for performing 1-path-norm regularization of Shallow Networks, and a proof of its relation to robustness against adversarial perturbations;
(III) an extension of 1-path-norm regularization methods to Deep Neural Networks;
(IV) the first generalization bounds and robustness analysis for Deep Polynomial Networks, and a novel regularization scheme to improve their robustness;
(V) the first theoretically correct descent method for Adversarial Training, the most common algorithm for training robust networks;
(VI) the first theoretically correct formulation of Adversarial Training as a bilevel optimization problem, which provides a solution to the robust overfitting phenomenon;
(VII) an ADMM algorithm with guarantees of fast convergence for the problem of denoising adversarial examples using Generative Adversarial Networks as a prior; and
(VIII) an explicit regularization scheme for Quadratic Neural Networks with guaranteed improvement in robustness to random noise, compared to SVMs.
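Contributions (V) and (VI) concern Adversarial Training, which is a min-max problem: an inner loop finds a worst-case perturbation of each input within a small ball, and an outer loop updates the model weights on the perturbed inputs. The record above does not include the thesis's own algorithms; as a generic illustration of that min-max structure only, here is a minimal NumPy sketch using a single FGSM-style inner step on a linear logistic model (all function names, data, and parameters here are illustrative assumptions, not taken from the thesis).

```python
import numpy as np

def fgsm_attack(w, x, y, eps):
    """Single-step inner maximization (FGSM-style) of the logistic loss
    log(1 + exp(-y * <w, x>)) with respect to the inputs x."""
    margin = y * (x @ w)
    # d/dx log(1 + exp(-m)) = -y * sigmoid(-m) * w, with m = y * <w, x>
    grad_x = -(y / (1.0 + np.exp(margin)))[:, None] * w[None, :]
    return x + eps * np.sign(grad_x)  # worst-case step in the inf-norm ball

def adversarial_train(x, y, eps=0.3, lr=0.5, steps=300):
    """Outer loop: gradient descent on w, evaluated at perturbed inputs."""
    rng = np.random.default_rng(0)
    w = 0.01 * rng.standard_normal(x.shape[1])
    for _ in range(steps):
        x_adv = fgsm_attack(w, x, y, eps)          # inner maximization
        margin = y * (x_adv @ w)
        grad_w = -((y / (1.0 + np.exp(margin)))[:, None] * x_adv).mean(axis=0)
        w -= lr * grad_w                           # outer minimization
    return w

# Toy data: two Gaussian blobs separated along the first coordinate.
rng = np.random.default_rng(1)
y = np.where(rng.random(200) < 0.5, 1.0, -1.0)
x = y[:, None] * np.array([2.0, 0.0]) + 0.3 * rng.standard_normal((200, 2))

w = adversarial_train(x, y, eps=0.3)
clean_acc = float(np.mean(np.sign(x @ w) == y))
robust_acc = float(np.mean(np.sign(fgsm_attack(w, x, y, 0.3) @ w) == y))
print(f"clean accuracy:  {clean_acc:.2f}")
print(f"robust accuracy: {robust_acc:.2f}")
```

On this well-separated toy problem both clean and robust accuracy end up near 1.0; the thesis's contributions address, among other things, when and why such alternating schemes are actually correct descent methods for the underlying min-max objective.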

Files
Name: EPFL_TH9223.pdf
Type: N/A
Access type: openaccess
License condition: copyright
Size: 5.69 MB
Format: Adobe PDF
Checksum (MD5): f60f57a5f6402524d073cf031638467b

Contact: infoscience@epfl.ch

Infoscience is a service managed and provided by the Library and IT Services of EPFL. © EPFL, all rights reserved.