Abstract

We review our work toward achieving competitive performance (classification accuracy) for on-chip machine learning (ML) of large-scale artificial neural networks (ANNs) using Non-Volatile Memory (NVM)-based synapses, despite the inherent random and deterministic imperfections of such devices. We then show that such systems could potentially offer faster (up to 25x) and lower-power (60–2000x) ML training than GPU-based hardware.
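To make the device-imperfection challenge concrete, below is a minimal sketch (not code from this work) of training a linear classifier whose weights are stored as differential pairs of NVM conductances, w = G+ − G−. The saturating update curve, potentiation/depression asymmetry, and programming noise used here are illustrative assumptions, not measured device characteristics; the potentiate-only update scheme is likewise one common choice assumed for this example.

```python
import numpy as np

rng = np.random.default_rng(0)

G_MAX = 1.0        # assumed maximum conductance (normalized units)
ALPHA_POT = 0.05   # assumed potentiation step scale
NOISE = 0.02       # assumed cycle-to-cycle programming noise

def nvm_potentiate(G):
    """One programming pulse: saturating, noisy conductance increase.

    The step shrinks as G approaches G_MAX, modeling the nonlinear
    (deterministic) imperfection; the multiplicative noise term models
    the random imperfection.
    """
    dG = ALPHA_POT * (1.0 - G / G_MAX)
    dG *= 1.0 + NOISE * rng.standard_normal()
    return min(max(G + dG, 0.0), G_MAX)

# Toy two-class problem: linearly separable points in 2-D.
X = rng.standard_normal((200, 2))
y = (X[:, 0] + X[:, 1] > 0).astype(float)

Gp = rng.uniform(0.4, 0.6, size=2)  # G+ device per weight
Gn = rng.uniform(0.4, 0.6, size=2)  # G- device per weight

for epoch in range(20):
    for xi, yi in zip(X, y):
        w = Gp - Gn                     # differential weight readout
        err = yi - float(xi @ w > 0)    # perceptron-style error
        if err == 0:
            continue
        for j in range(2):
            # Sign of the desired weight change picks which device
            # of the pair receives a potentiating pulse.
            if err * xi[j] > 0:
                Gp[j] = nvm_potentiate(Gp[j])
            else:
                Gn[j] = nvm_potentiate(Gn[j])

acc = np.mean((X @ (Gp - Gn) > 0) == y)
print(f"training accuracy with imperfect NVM updates: {acc:.2f}")
```

Even with nonlinear, noisy updates, the differential-pair scheme lets simple training converge on this toy task, which is the intuition behind pursuing competitive accuracy on real NVM hardware.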
