Large-scale neural networks implemented with nonvolatile memory as the synaptic weight element: comparative performance analysis (accuracy, speed, and power)

We review our work towards achieving competitive performance (classification accuracies) for on-chip machine learning (ML) of large-scale artificial neural networks (ANN) using Non-Volatile Memory (NVM) based synapses, despite the inherent random and deterministic imperfections of such devices. We then show that such systems could potentially offer faster (up to 25x) and lower-power (from 60-2000x) ML training than GPU-based hardware.
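To make the abstract's central idea concrete, the following sketch models how "random and deterministic imperfections" of an NVM synapse might affect training. The `nvm_update` function, its saturation nonlinearity (`alpha`), and its cycle-to-cycle noise (`noise`) are illustrative assumptions, not the device model used in the paper; the point is only that a bounded, nonlinear, noisy weight element can still support useful learning.

```python
import numpy as np

rng = np.random.default_rng(0)

def nvm_update(w, dw, alpha=0.5, noise=0.02):
    """Apply a weight update through a simplified NVM conductance model.

    alpha models a deterministic imperfection (updates shrink as the
    conductance saturates toward its bounds); noise models random
    cycle-to-cycle variation. Both parameters are hypothetical,
    chosen for illustration only.
    """
    saturation = 1.0 - alpha * np.abs(w)           # updates shrink near +/-1
    dw_eff = dw * np.clip(saturation, 0.0, 1.0)
    dw_eff += noise * rng.standard_normal(w.shape) * np.abs(dw_eff).mean()
    return np.clip(w + dw_eff, -1.0, 1.0)          # hard conductance bounds

# Toy logistic-regression training on a linearly separable problem,
# with every gradient step filtered through the imperfect device model.
X = rng.standard_normal((200, 2))
y = (X[:, 0] + X[:, 1] > 0).astype(float)
w = np.zeros(2)

for _ in range(200):
    p = 1.0 / (1.0 + np.exp(-X @ w))
    grad = X.T @ (y - p) / len(y)     # gradient of the log-likelihood
    w = nvm_update(w, 0.5 * grad)     # update passes through the NVM model

acc = ((1.0 / (1.0 + np.exp(-X @ w)) > 0.5) == (y > 0.5)).mean()
```

Despite the saturating, noisy updates, the classifier still converges to high accuracy on this easy task, mirroring the paper's claim that competitive accuracies are achievable with imperfect synaptic devices.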


Published in:
Proceedings of the International Electron Devices Meeting (IEDM 2015)
Presented at:
International Electron Devices Meeting (IEDM 2015), Washington, DC, 7-9 December, 2015
Year:
2015
Note:
Invited Paper