Conference paper

Large-scale neural networks implemented with nonvolatile memory as the synaptic weight element: comparative performance analysis (accuracy, speed, and power)

We review our work toward achieving competitive performance (classification accuracies) for on-chip machine learning (ML) of large-scale artificial neural networks (ANNs) using Non-Volatile Memory (NVM)-based synapses, despite the inherent random and deterministic imperfections of such devices. We then show that such systems could potentially offer faster (up to 25x) and lower-power (60–2000x) ML training than GPU-based hardware.


    Invited Paper


    • EPFL-CONF-210993

    Record created on 2015-09-07, modified on 2017-05-10
