Abstract

A key aspect of constructing highly scalable deep-learning microelectronic systems is implementing fault tolerance in the learning sequence. Error-injection analyses for memory are performed using a custom hardware model that implements parallelized restricted Boltzmann machines (RBMs). The results confirm that RBMs in Deep Belief Networks (DBNs) provide remarkable robustness against memory errors. Fine-tuning substantially recovers accuracy after static errors, at either cell level or block level, are injected into the structural data of the RBMs during and after learning. This memory-error tolerance is observable in our hardware networks with fine-grained memory distribution.
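To illustrate the cell-level versus block-level error injection mentioned above, the following is a minimal sketch in Python, assuming the "structural data" is an RBM weight matrix held in on-chip memory. The stuck-at-zero fault model and all names (inject_cell_errors, inject_block_errors, cell_error_rate, block_shape) are illustrative assumptions, not the actual hardware model used in the paper.

import numpy as np

rng = np.random.default_rng(0)

def inject_cell_errors(W, cell_error_rate, stuck_value=0.0):
    # Cell-level static errors: individual weight cells, chosen at random,
    # are stuck at a fixed value (here 0), mimicking isolated faulty cells.
    W = W.copy()
    mask = rng.random(W.shape) < cell_error_rate
    W[mask] = stuck_value
    return W

def inject_block_errors(W, n_blocks, block_shape=(16, 16), stuck_value=0.0):
    # Block-level static errors: whole rectangular regions of the weight
    # memory are corrupted, mimicking a failed memory bank or macro.
    W = W.copy()
    rows, cols = W.shape
    bh, bw = block_shape
    for _ in range(n_blocks):
        r = rng.integers(0, rows - bh + 1)
        c = rng.integers(0, cols - bw + 1)
        W[r:r + bh, c:c + bw] = stuck_value
    return W

# Usage: corrupt a toy 784x500 RBM weight matrix both ways and report how
# much of the structural data was altered in each case.
W = rng.normal(0.0, 0.01, size=(784, 500))
W_cell = inject_cell_errors(W, cell_error_rate=1e-3)
W_block = inject_block_errors(W, n_blocks=2)
print("fraction changed (cell-level): ", np.mean(W != W_cell))
print("fraction changed (block-level):", np.mean(W != W_block))

In an experiment of this kind, the corrupted weights would then be used for further training or inference to measure how much accuracy fine-tuning recovers.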
