doctoral thesis

Safe Deep Neural Networks

Matoba, Kyle Michael  
2024

The capabilities of deep learning systems have advanced much faster than our ability to understand them. Whilst the gains from deep neural networks (DNNs) are significant, they are accompanied by bad outcomes of growing likelihood and gravity. This is troubling because DNNs can perform well on a task most of the time, but can sometimes exhibit nonintuitive and nonsensical behavior for reasons that are not well understood.

I begin this thesis by arguing that closer alignment between human intuition and the operation of DNNs would be massively beneficial. Next, I identify a class of DNNs that is particularly tractable and which plays an important role in science and technology. Then I posit three dimensions on which alignment can be achieved: (1) philosophy, thought exercises to understand the fundamental considerations; (2) pedagogy, helping fallible humans interact effectively with neural networks; and (3) practice, methods to impose desired properties upon neural networks without degrading their performance.

Then I present my work along these lines. Chapter 2 gives a philosophical analysis of using penalty terms in criterion functions to avoid (negative) side effects, via a three-way decomposition into the choice of (1) baseline, (2) deviation measure, and (3) scale of the penalty. Chapter 3 attempts to understand whether a DNN maps a set of inputs to a given output class. I present two approaches to this problem, which can help users recognize unsafe behavior even if they cannot formulate a notion of safety beforehand. Chapter 4 examines whether max pooling can be written as a composition of ReLU activations, in order to investigate an open conjecture that max pooling is essentially redundant. These studies advance our pedagogical grasp of DNN modelling. Finally, Chapter 5 engages with practice by presenting a method for making DNNs more linear, and thereby more human-compatible.
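
The question studied in Chapter 4 rests on a classical identity: for scalars, max(a, b) = a + ReLU(b - a), so the maximum over a pooling window can be built from ReLU activations and affine operations alone. The NumPy sketch below is an illustrative reconstruction of this identity (not code from the thesis): it expresses 2x2 max pooling purely via pairwise ReLU reductions and checks the result against a direct computation.

    import numpy as np

    def relu(x):
        return np.maximum(x, 0.0)

    def max_via_relu(a, b):
        # Pairwise maximum from one ReLU: max(a, b) = a + ReLU(b - a).
        return a + relu(b - a)

    def maxpool2x2_via_relu(x):
        # 2x2 max pooling with stride 2, written only as affine maps and
        # ReLUs, by reducing the four entries of each window pairwise.
        a, b = x[0::2, 0::2], x[0::2, 1::2]
        c, d = x[1::2, 0::2], x[1::2, 1::2]
        return max_via_relu(max_via_relu(a, b), max_via_relu(c, d))

    x = np.random.randn(6, 8)
    direct = x.reshape(3, 2, 4, 2).max(axis=(1, 3))  # ordinary 2x2 max pooling
    assert np.allclose(maxpool2x2_via_relu(x), direct)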

Type
doctoral thesis
DOI
10.5075/epfl-thesis-10384
Author(s)
Matoba, Kyle Michael  
Advisors
  • Vandergheynst, Pierre
  • Fleuret, François
Jury

Prof. Alexandre Massoud Alahi (président) ; Prof. Pierre Vandergheynst, Prof. François Fleuret (directeurs) ; Prof. Martin Jaggi, Dr Timon Gehr, Prof. Alexandros Kalousis (rapporteurs)

Date Issued

2024

Publisher

EPFL

Publisher place

Lausanne

Public defense date

2024-02-07

Thesis number

10384

Number of pages

169

Subjects
  • AI Safety
  • Deep Neural Network Interpretability
  • Max pooling
  • Adversarial Robustness
  • Verification
  • Polytopes

EPFL units
LTS2  
Faculty
STI  
School
IEL  
Doctoral School
EDIC  
Available on Infoscience
February 5, 2024
Use this identifier to reference this record
https://infoscience.epfl.ch/handle/20.500.14299/203461