Reducing circuit design complexity for neuromorphic machine learning systems based on Non-Volatile Memory arrays

Machine Learning (ML) is an attractive application of Non-Volatile Memory (NVM) arrays [1,2]. However, achieving speedup over GPUs will require minimal neuron circuit sharing and thus highly area-efficient peripheral circuitry, so that ML reads and writes are massively parallel and time-multiplexing is minimized [2]. This means that neuron hardware offering full "software-equivalent" functionality is impractical. We analyze neuron circuit needs for implementing back-propagation in NVM arrays and introduce approximations to reduce design complexity and area. We discuss the interplay between circuits and NVM devices, such as the need for an occasional RESET step, the number of programming pulses to use, and the stochastic nature of NVM conductance change. In all cases we show that by leveraging the resilience of the algorithm to error, we can use practical circuit approaches yet maintain competitive test accuracies on ML benchmarks.
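The abstract names three device-level mechanics: an occasional RESET step, a budgeted number of programming pulses, and stochastic conductance change. As a rough illustration only (not the paper's code or results), the NumPy sketch below simulates these effects in software. It assumes the conductance-pair weight encoding (w proportional to G+ minus G-) common in this line of work; every function name and parameter value here (G_MAX, DG_MEAN, DG_STD, the pulse counts) is hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

G_MAX = 1.0     # hypothetical maximum normalized device conductance
DG_MEAN = 0.02  # hypothetical mean conductance increment per SET pulse
DG_STD = 0.006  # hypothetical pulse-to-pulse spread (stochastic update)

def crossbar_read(x, G_pos, G_neg):
    """Parallel read: current summation across the array computes the
    vector-matrix product x @ W in one step, with W = G_pos - G_neg."""
    return x @ (G_pos - G_neg)

def pulse_update(G, n_pulses):
    """Apply an integer number of partial-SET pulses per device. Each pulse
    adds a stochastic, non-negative increment; conductance saturates at
    G_MAX because SET pulses cannot decrement a device."""
    k = n_pulses.astype(int).clip(min=0)
    for _ in range(int(k.max())):
        dG = rng.normal(DG_MEAN, DG_STD, size=G.shape).clip(min=0.0)
        G = np.minimum(G + (k > 0) * dG, G_MAX)
        k = k - (k > 0)
    return G

def occasional_reset(G_pos, G_neg):
    """Occasional RESET: since SET pulses only increase conductance, both
    devices in a pair drift toward saturation; periodically erase and
    re-encode the current weight onto the freshly reset pair."""
    w = G_pos - G_neg
    return np.clip(w, 0.0, G_MAX), np.clip(-w, 0.0, G_MAX)

# Toy training step: a positive desired update pulses G+, a negative one
# pulses G- (signed pulse counts stand in for the real gradient signal).
G_pos = rng.uniform(0.0, 0.5, size=(4, 3))
G_neg = rng.uniform(0.0, 0.5, size=(4, 3))
pulses = rng.normal(0.0, 2.0, size=(4, 3))
G_pos = pulse_update(G_pos, np.where(pulses > 0, pulses, 0.0))
G_neg = pulse_update(G_neg, np.where(pulses < 0, -pulses, 0.0))
G_pos, G_neg = occasional_reset(G_pos, G_neg)
y = crossbar_read(rng.normal(size=4), G_pos, G_neg)  # one parallel read
```

The RESET step in this sketch reflects the constraint the abstract alludes to: if programming pulses can only move conductance in one direction, both devices of a pair eventually saturate, so the encoded weight must occasionally be transferred back onto erased devices to restore headroom.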


Published in:
Proceedings of the 2017 IEEE International Symposium on Circuits and Systems (ISCAS), 1-4
Presented at:
2017 IEEE International Symposium on Circuits and Systems (ISCAS), Baltimore, MD, USA, May 28-31, 2017
Year:
2017
Record created 2017-11-21, last modified 2018-03-17

