Error Tolerance Analysis of Deep Learning Hardware Using a Restricted Boltzmann Machine Toward Low-Power Memory Implementation
The remarkable hardware robustness of deep learning (DL) is revealed by error injection analyses performed using a custom hardware model implementing parallelized restricted Boltzmann machines (RBMs). RBMs in deep belief networks demonstrate robustness against memory errors both during and after learning. Fine-tuning plays a significant role in recovering accuracy after static errors are injected into the structural data of the RBMs. This memory error tolerance is observable in our hardware networks with fine-grained memory distribution, enabling reliable DL hardware with low-voltage-driven memory suitable for low-power applications.
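The abstract describes injecting static memory errors into stored RBM weights and observing accuracy recovery. As a minimal sketch of what such an error injection experiment might look like, the following Python snippet flips random bits in a fixed-point view of an RBM weight matrix; the function name `inject_bit_flips`, the 16-bit word format with 8 fractional bits, and the example bit error rate are illustrative assumptions, not details from the paper.

import numpy as np

def inject_bit_flips(weights, bit_error_rate, total_bits=16, frac_bits=8, rng=None):
    """Flip random bits in a fixed-point view of a weight memory.

    Each weight is quantized to a signed fixed-point word of `total_bits`
    bits with `frac_bits` fractional bits (assumed format); every stored
    bit is flipped independently with probability `bit_error_rate`.
    """
    rng = np.random.default_rng() if rng is None else rng
    scale = 1 << frac_bits
    # Quantize to two's-complement fixed-point words.
    words = np.round(weights * scale).astype(np.int64)
    words &= (1 << total_bits) - 1  # keep only the stored bits
    # Draw an independent error mask for every bit position.
    for bit in range(total_bits):
        flips = rng.random(words.shape) < bit_error_rate
        words ^= flips.astype(np.int64) << bit
    # Decode two's complement back to floating point.
    sign_bit = 1 << (total_bits - 1)
    words = (words ^ sign_bit) - sign_bit
    return words.astype(np.float64) / scale

# Example: corrupt a visible-by-hidden RBM weight matrix at a 1e-3 bit error rate,
# then one would retrain (fine-tune) the network and measure accuracy recovery.
rng = np.random.default_rng(0)
W = rng.normal(0.0, 0.1, size=(784, 500))
W_faulty = inject_bit_flips(W, bit_error_rate=1e-3, rng=rng)
print("mean |dW| =", np.mean(np.abs(W_faulty - W)))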
WOS: WOS:000400566100022
Year: 2017
Volume: 64
Issue: 4
Pages: 462-466
Status: REVIEWED
Institution: EPFL