Error Tolerance Analysis of Deep Learning Hardware Using a Restricted Boltzmann Machine Toward Low-Power Memory Implementation

The remarkable hardware robustness of deep learning (DL) is revealed by error-injection analyses performed on a custom hardware model implementing parallelized restricted Boltzmann machines (RBMs). RBMs in deep belief networks demonstrate robustness against memory errors both during and after learning. Fine-tuning significantly aids the recovery of accuracy after static errors are injected into the structural data of the RBMs. The memory error tolerance is observable in our hardware networks with fine-grained memory distribution, enabling reliable DL hardware with low-voltage-driven memory suitable for low-power applications.
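To make the error-injection idea concrete, the following is a minimal sketch of injecting static memory errors into RBM weight data. It assumes an 8-bit signed fixed-point weight encoding and an i.i.d. per-bit error rate; the function name, word format, and parameters are illustrative assumptions, not the paper's actual hardware model.

```python
import numpy as np

def inject_bit_errors(w, ber, n_bits=8, rng=None):
    """Return a copy of `w` after random bit flips in an n_bits fixed-point encoding.

    Hypothetical sketch: we assume signed n_bits two's-complement weight words
    and an independent bit error rate `ber` per stored bit.
    """
    rng = rng if rng is not None else np.random.default_rng()
    # Quantize to signed n-bit fixed point, then view words as unsigned integers.
    scale = (2 ** (n_bits - 1) - 1) / max(np.abs(w).max(), 1e-12)
    q = np.round(w * scale).astype(np.int64) % (1 << n_bits)
    # Flip each bit of each word independently with probability `ber`.
    flips = rng.random((w.size, n_bits)) < ber
    bitmask = (flips * (1 << np.arange(n_bits))).sum(axis=1).astype(np.int64)
    q = q.reshape(-1) ^ bitmask
    # Reinterpret as signed values and convert back to floating point.
    signed = np.where(q >= 1 << (n_bits - 1), q - (1 << n_bits), q)
    return signed.reshape(w.shape) / scale

# Example: corrupt a small visible-by-hidden RBM weight matrix at a 1% bit error rate.
rng = np.random.default_rng(0)
w = rng.normal(0.0, 0.1, size=(6, 4))
w_err = inject_bit_errors(w, ber=1e-2, rng=rng)
```

A study in this style would compare classification accuracy with `w` versus `w_err`, with and without a subsequent fine-tuning pass, to quantify how much accuracy fine-tuning recovers.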


Published in:
IEEE Transactions on Circuits and Systems II: Express Briefs, vol. 64, no. 4, pp. 462-466
Year:
2017
Publisher:
Piscataway, NJ: IEEE (Institute of Electrical and Electronics Engineers)
ISSN:
1549-7747
 Record created 2017-05-30, last modified 2018-09-13

