Infoscience

conference paper

Mixed-precision architecture based on computational memory for training deep neural networks

Nandakumar, S. R. • Le Gallo, Manuel • Boybat, Irem • Rajendran, Bipin • Sebastian, Abu • Eleftheriou, Evangelos
January 1, 2018
2018 IEEE International Symposium on Circuits and Systems (ISCAS)
IEEE International Symposium on Circuits and Systems (ISCAS)

Deep neural networks (DNNs) have revolutionized the field of machine learning by providing unprecedented human-like performance in solving many real-world problems such as image or speech recognition. Training large DNNs, however, is a computationally intensive task, and this necessitates the development of novel computing architectures targeting this application. A computational memory unit in which resistive memory devices are organized in crossbar arrays can be used to store the synaptic weights in their conductance states. The expensive multiply-accumulate operations can then be performed in place using Kirchhoff's circuit laws in a non-von Neumann manner. However, a key challenge remains: the inability to alter the conductance states of the devices reliably during the weight-update process. We propose a mixed-precision architecture that combines a computational memory unit storing the synaptic weights with a digital processing unit and an additional memory unit that stores the accumulated weight updates in high precision. The new architecture delivers classification accuracies comparable to those of floating-point implementations without being constrained by the non-ideal weight-update characteristics of emerging resistive memories. The computational memory unit in a two-layer neural network, realized using nonlinear stochastic models of phase-change memory, achieves a test accuracy of 97.40% on the MNIST digit classification problem.
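The update scheme described in the abstract can be sketched in a few lines: the exact gradient-based update is accumulated in a high-precision digital variable, and the resistive devices are programmed only when the accumulated amount exceeds the smallest conductance change they can apply reliably. The NumPy sketch below is a minimal illustration under stated assumptions: the accumulator chi, the granularity EPSILON, the noisy program_devices model, and all function names are hypothetical simplifications, not the paper's actual PCM model or code.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical device granularity: the smallest conductance change the
# resistive device can apply reliably (an assumption for illustration).
EPSILON = 0.01

def program_devices(W, num_pulses):
    """Nonlinear, stochastic in-place weight update.

    A simplified stand-in for a phase-change memory device model: each
    programming pulse changes the stored conductance by roughly EPSILON,
    with multiplicative noise modeling device variability.
    """
    noise = rng.normal(1.0, 0.3, size=W.shape)
    return W + num_pulses * EPSILON * noise

def mixed_precision_update(W, chi, grad, lr):
    """Accumulate the exact update in high precision; program the devices
    only when the accumulated amount exceeds the device granularity."""
    chi = chi - lr * grad                 # high-precision accumulation (digital unit)
    num_pulses = np.trunc(chi / EPSILON)  # whole device-granularity steps to apply
    chi -= num_pulses * EPSILON           # keep the sub-granularity residue
    W = program_devices(W, num_pulses)    # imprecise analog programming
    return W, chi

# Toy usage: a single linear layer trained on random data.
W = rng.normal(0.0, 0.1, size=(10, 784))  # conductance-encoded weights
chi = np.zeros_like(W)                    # high-precision accumulator
for _ in range(100):
    x = rng.normal(size=784)
    # Forward pass: the matrix-vector product W @ x is what the crossbar
    # computes in place via Kirchhoff's laws.
    y = W @ x
    target = rng.normal(size=10)
    err = y - target                      # error signal (e.g. from backprop)
    grad = np.outer(err, x)
    W, chi = mixed_precision_update(W, chi, grad, lr=1e-4)
```

Keeping the sub-granularity residue in chi preserves gradient information that the devices alone would lose, which is the mechanism behind the near-floating-point accuracies reported in the abstract.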

Details
Type
conference paper
DOI
10.1109/ISCAS.2018.8351656
Web of Science ID

WOS:000451218703105

Author(s)
Nandakumar, S. R.
Le Gallo, Manuel
Boybat, Irem  
Rajendran, Bipin
Sebastian, Abu
Eleftheriou, Evangelos
Date Issued

2018-01-01

Publisher

IEEE

Publisher place

New York

Published in
2018 IEEE International Symposium on Circuits and Systems (ISCAS)
ISBN of the book

978-1-5386-4881-0

Series title/Series vol.

IEEE International Symposium on Circuits and Systems

Subjects

Engineering, Electrical & Electronic • Engineering • deep learning • in-memory computing • mixed-precision computing

Editorial or Peer reviewed

REVIEWED

Written at

EPFL

EPFL units
LSM  
Event name
IEEE International Symposium on Circuits and Systems (ISCAS)
Event place
Florence, Italy
Event date
May 27-30, 2018

Available on Infoscience
December 13, 2018
Use this identifier to reference this record
https://infoscience.epfl.ch/handle/20.500.14299/152375
Contact: infoscience@epfl.ch

Infoscience is a service managed and provided by the Library and IT Services of EPFL. © EPFL, all rights reserved.