Memory-error tolerance of Scalable and Highly Parallel Architecture for Restricted Boltzmann Machines in Deep Belief Network

A key aspect of constructing highly scalable deep-learning microelectronic systems is implementing fault tolerance in the learning sequence. Error-injection analyses for memory are performed using a custom hardware model that implements parallelized restricted Boltzmann machines (RBMs). It is confirmed that RBMs in Deep Belief Networks (DBNs) provide remarkable robustness against memory errors. Fine-tuning significantly recovers accuracy degraded by static errors, at either cell level or block level, injected into the structural data of the RBMs during and after learning. The memory-error tolerance is observable in our hardware networks with fine-grained memory distribution.
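The paper itself does not include code, but the error-injection methodology it describes can be illustrated with a minimal Python/NumPy sketch. The function names (inject_cell_errors, inject_block_errors), the 32-bit float weight storage, the stuck-at-zero block-failure model, and all parameter values below are assumptions for illustration, not the authors' actual setup:

```python
import numpy as np

def inject_cell_errors(weights, error_rate, rng):
    """Cell-level static errors: flip one random bit in the 32-bit
    representation of each randomly selected weight cell."""
    w = weights.astype(np.float32).copy()
    bits = w.view(np.uint32)                  # reinterpret weights as raw bits
    faulty = rng.random(w.shape) < error_rate # which memory cells are faulty
    flip = np.uint32(1) << rng.integers(0, 32, w.shape, dtype=np.uint32)
    bits[faulty] ^= flip[faulty]              # single bit-flip per faulty cell
    return w

def inject_block_errors(weights, n_blocks, block_size, rng):
    """Block-level static errors: corrupt contiguous blocks of weight
    memory (assumed stuck-at-zero, e.g. a failed memory bank)."""
    w = weights.astype(np.float32).copy().ravel()
    for _ in range(n_blocks):
        start = rng.integers(0, max(1, w.size - block_size))
        w[start:start + block_size] = 0.0
    return w.reshape(weights.shape)

# Example: corrupt the weight matrix of one RBM layer (visible x hidden),
# then fine-tuning of the DBN would be run on the corrupted weights to
# measure accuracy recovery.
rng = np.random.default_rng(0)
W = rng.normal(0.0, 0.01, size=(784, 500))    # hypothetical RBM weight matrix
W_cell = inject_cell_errors(W, error_rate=1e-3, rng=rng)
W_block = inject_block_errors(W, n_blocks=4, block_size=256, rng=rng)
```

Under these assumptions, the robustness claim would be tested by comparing classification accuracy before injection, after injection, and after fine-tuning the DBN with the corrupted weights left in place.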


Published in:
2016 IEEE International Symposium on Circuits and Systems (ISCAS), 357-360
Presented at:
IEEE International Symposium on Circuits and Systems (ISCAS), Montreal, Canada, May 22-25, 2016
Year:
2016
Publisher:
New York, IEEE
ISSN:
0271-4302
ISBN:
978-1-4799-5341-7