Memory-error tolerance of Scalable and Highly Parallel Architecture for Restricted Boltzmann Machines in Deep Belief Network
A key aspect of constructing highly scalable deep-learning microelectronic systems is implementing fault tolerance in the learning sequence. Error-injection analysis of memory is performed using a custom hardware model implementing parallelized restricted Boltzmann machines (RBMs). It is confirmed that the RBMs in Deep Belief Networks (DBNs) provide remarkable robustness against memory errors. Fine-tuning has significant effects on recovering accuracy when static errors, at either the cell level or the block level, are injected into the structural data of RBMs during and after learning. The memory-error tolerance is observable using our hardware networks with fine-grained memory distribution.
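The error-injection methodology summarized above can be illustrated with a short sketch. The Python below is an assumption-laden illustration, not the authors' hardware model: it quantizes an RBM weight matrix to an assumed 16-bit fixed-point memory format, flips random bits in individual cells (cell-level static errors), and overwrites whole rows (block-level errors); the matrix size, block size, and error counts are all hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

def to_fixed(w, frac_bits=12):
    """Quantize float weights to 16-bit fixed point (format is an assumption)."""
    return np.round(w * (1 << frac_bits)).astype(np.int16)

def from_fixed(q, frac_bits=12):
    """Convert fixed-point weights back to float for evaluation."""
    return q.astype(np.float32) / (1 << frac_bits)

def inject_cell_errors(q, n_errors):
    """Flip one random bit in each of n_errors randomly chosen weight cells."""
    q = q.copy()
    flat = q.ravel()  # view into q, so writes below modify q
    idx = rng.choice(flat.size, size=n_errors, replace=False)
    bits = rng.integers(0, 16, size=n_errors).astype(np.int16)
    flat[idx] ^= np.int16(1) << bits
    return q

def inject_block_errors(q, block_rows, n_blocks):
    """Corrupt whole row blocks of the weight memory, modeling block-level faults."""
    q = q.copy()
    blocks = rng.choice(q.shape[0] // block_rows, size=n_blocks, replace=False)
    for b in blocks:
        q[b * block_rows:(b + 1) * block_rows, :] = rng.integers(
            np.iinfo(np.int16).min, np.iinfo(np.int16).max,
            size=(block_rows, q.shape[1]), dtype=np.int16)
    return q

# Example: an MNIST-scale 784x500 RBM weight matrix (dimensions assumed).
W = rng.normal(0.0, 0.01, size=(784, 500)).astype(np.float32)
Wq = to_fixed(W)
W_cell = from_fixed(inject_cell_errors(Wq, n_errors=100))
W_block = from_fixed(inject_block_errors(Wq, block_rows=16, n_blocks=2))
```

In an experiment along the lines the abstract describes, the corrupted weights would be loaded back into the DBN, classification accuracy measured, and then the network fine-tuned to observe how much accuracy is recovered.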
Accession number: WOS:000390094700090
Year: 2016
ISBN: 978-1-4799-5341-7
Place of publication: New York
Published in: IEEE International Symposium on Circuits and Systems
Pages: 357-360 (4 pp.)
Status: REVIEWED
Event name | Event place | Event date
IEEE International Symposium on Circuits and Systems | Montreal, CANADA | MAY 22-25, 2016