
On The Robustness of a Neural Network

El Mhamdi, El Mahdi • Guerraoui, Rachid • Rouault, Sébastien Louis Alexandre
2017
2017 IEEE 36th Symposium on Reliable Distributed Systems (SRDS)
36th IEEE International Symposium on Reliable Distributed Systems

With the development of neural-network-based machine learning and its use in mission-critical applications, voices are rising against the "black box" aspect of neural networks, as it becomes crucial to understand their limits and capabilities. With the rise of neuromorphic hardware, it is even more critical to understand how a neural network, as a distributed system, tolerates the failures of its computing nodes, the neurons, and of its communication channels, the synapses. Experimentally assessing the robustness of a neural network involves the quixotic venture of testing all possible failures on all possible inputs, which runs into a combinatorial explosion for the former and the impossibility of gathering all possible inputs for the latter. In this paper, we prove an upper bound on the expected error of the output when a subset of neurons crashes. This bound involves dependencies on the network parameters that can be seen as too pessimistic in the average case: a polynomial dependency on the Lipschitz coefficient of the neurons' activation function, and an exponential dependency on the depth of the layer where a failure occurs. We back up our theoretical results with experiments illustrating the extent to which our prediction matches the dependencies between the network parameters and robustness. Our results show that the robustness of a neural network to the average crash can be estimated without testing the network on all failure configurations or accessing the training set used to train it, both of which are practically impossible requirements.
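To make the quantity bounded by the paper concrete, the sketch below (illustrative only, not the authors' code) Monte-Carlo-estimates the expected output error of a small feed-forward network when a random subset of neurons crashes. The architecture, the sigmoid activation, and the crash-as-silent-neuron model are all assumptions of this sketch.

```python
# Minimal sketch (not the paper's implementation): empirically estimate the
# expected output error of a small feed-forward network when a random subset
# of neurons crashes. A "crash" is modeled here as a neuron whose output is
# stuck at zero; the sizes and weights below are arbitrary illustrations.
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical 3-layer network with sigmoid activations.
sizes = [8, 16, 16, 4]
weights = [rng.normal(scale=0.5, size=(m, n)) for n, m in zip(sizes[:-1], sizes[1:])]

def forward(x, crashed=()):
    """Run the network; neurons listed in `crashed` as (layer, index) emit 0."""
    for layer, W in enumerate(weights):
        x = 1.0 / (1.0 + np.exp(-W @ x))   # sigmoid activation
        for (l, i) in crashed:
            if l == layer:
                x[i] = 0.0                 # crashed neuron is silent
    return x

def expected_crash_error(layer, n_failures=2, n_trials=1000):
    """Monte-Carlo estimate of E[||y - y_crashed||] for crashes in one layer."""
    err = 0.0
    for _ in range(n_trials):
        x = rng.normal(size=sizes[0])
        idx = rng.choice(sizes[layer + 1], size=n_failures, replace=False)
        err += np.linalg.norm(forward(x) - forward(x, [(layer, i) for i in idx]))
    return err / n_trials

# Crashes in earlier layers propagate through more layers before reaching the
# output; this failure-depth dependency is what the paper's bound makes explicit.
for layer in range(len(weights)):
    print(f"layer {layer}: mean output error ≈ {expected_crash_error(layer):.4f}")
```

Whether the measured error grows or attenuates with the depth of the failing layer depends on the particular weights; the paper's bound captures the worst-case amplification through the remaining layers via the activation function's Lipschitz coefficient.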

Type
conference paper
DOI
10.1109/SRDS.2017.21
Author(s)
El Mhamdi, El Mahdi  
Guerraoui, Rachid  
Rouault, Sébastien Louis Alexandre  
Date Issued
2017
Published in
2017 IEEE 36th Symposium on Reliable Distributed Systems (SRDS)
Start page
84
End page
93

Subjects
Neural Networks • Fault Tolerance • Robustness • Error Propagation • Machine Learning • Neuromorphic Computing • ml-ai

Editorial or Peer reviewed
REVIEWED
Written at
EPFL
EPFL units
DCL
Event name
36th IEEE International Symposium on Reliable Distributed Systems
Event place
Hong Kong
Event date
September 26-29, 2017

Available on Infoscience
July 24, 2017
Use this identifier to reference this record
https://infoscience.epfl.ch/handle/20.500.14299/139440