Abstract

Error injection analyses performed with a custom hardware model implementing parallelized restricted Boltzmann machines (RBMs) reveal the remarkable hardware robustness of deep learning (DL). RBMs in deep belief networks remain robust against memory errors both during and after learning. Fine-tuning significantly aids the recovery of accuracy when static errors are injected into the structural data of the RBMs. This memory error tolerance is observable in our hardware networks with fine-grained memory distribution, enabling reliable DL hardware with low-voltage-driven memory suitable for low-power applications.
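As an illustration of the style of experiment the abstract describes (a minimal sketch, not the authors' hardware model), the code below trains a small RBM with simplified one-step contrastive divergence, flips random bits in the stored weight memory to emulate static memory errors such as those from low-voltage operation, and compares reconstruction error before and after injection. All function names, sizes, and error rates here are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    # Clip to avoid overflow when corrupted weights become very large.
    return 1.0 / (1.0 + np.exp(-np.clip(x, -30.0, 30.0)))

def train_rbm(data, n_hidden=64, lr=0.1, epochs=20):
    """Train an RBM with simplified CD-1 (probabilities used in place of samples)."""
    n_visible = data.shape[1]
    W = 0.01 * rng.standard_normal((n_visible, n_hidden))
    for _ in range(epochs):
        v0 = data
        h0 = sigmoid(v0 @ W)       # positive phase
        v1 = sigmoid(h0 @ W.T)     # reconstruction
        h1 = sigmoid(v1 @ W)       # negative phase
        W += lr * (v0.T @ h0 - v1.T @ h1) / len(data)
    return W

def inject_bit_flips(W, bit_error_rate=1e-3, n_bits=32):
    """Flip random bits in the float32 representation of the weight memory,
    emulating static memory errors in the stored structural data."""
    flat = W.astype(np.float32).view(np.uint32).ravel()
    for i in range(flat.size):
        for b in range(n_bits):
            if rng.random() < bit_error_rate:
                flat[i] ^= np.uint32(1) << np.uint32(b)
    return flat.view(np.float32).reshape(W.shape).astype(W.dtype)

def reconstruction_error(W, data):
    h = sigmoid(data @ W)
    v = sigmoid(h @ W.T)
    return float(np.mean((data - v) ** 2))

# Toy binary data standing in for a real training set.
data = (rng.random((200, 16)) > 0.5).astype(float)
W = train_rbm(data)
W_faulty = inject_bit_flips(W, bit_error_rate=1e-3)
print("clean :", reconstruction_error(W, data))
print("faulty:", reconstruction_error(W_faulty, data))
```

In the paper's setting, the analogue of a "recovery" step would be to continue fine-tuning after injection and observe how much of the accuracy loss is regained; the sketch above only measures the immediate degradation.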
