Infoscience

ESSOP: Efficient and Scalable Stochastic Outer Product Architecture for Deep Learning

Joshi, Vinay • Karunaratne, Geethan • Le Gallo, Manuel • et al.
January 1, 2020
2020 IEEE International Symposium on Circuits and Systems (ISCAS)

Deep neural networks (DNNs) have surpassed human-level accuracy in a variety of cognitive tasks, but at the cost of significant memory and time requirements for DNN training. This limits their deployment in energy- and memory-limited applications that require real-time learning. Matrix-vector multiplication (MVM) and the vector-vector outer product (VVOP) are the two most expensive operations in DNN training. Strategies to improve the efficiency of MVM computation in hardware have been demonstrated with minimal impact on training accuracy. However, even with these strategies, the VVOP computation remains a comparatively unexplored bottleneck. Stochastic computing (SC) has been proposed to improve the efficiency of VVOP computation, but only for relatively shallow networks with bounded activation functions and floating-point (FP) scaling of activation gradients. In this paper, we propose ESSOP, an efficient and scalable stochastic outer product architecture based on the SC paradigm. We introduce efficient techniques to generalize SC for weight-update computation in DNNs with unbounded activation functions (e.g., ReLU), as required by many state-of-the-art networks. Our architecture reduces the computational cost by re-using random numbers and by replacing certain FP multiplications with bit-shift scaling. We show that the ResNet-32 network, with 33 convolution layers and a fully connected layer, can be trained with ESSOP on the CIFAR-10 dataset to achieve baseline-comparable accuracy. A hardware design of ESSOP at the 14 nm technology node shows that, compared to a highly pipelined FP16 multiplier design, ESSOP is 82.2% and 93.7% better in energy and area efficiency, respectively, for outer product computation.
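The core SC idea the abstract describes — encoding each operand as a Bernoulli bit stream so that ANDing two streams multiplies their probabilities, with power-of-two (bit-shift) scaling instead of FP division and one re-used random tensor per operand — can be illustrated in software. The sketch below is a minimal NumPy illustration of that principle, not the ESSOP hardware design; the function name, stream length, and scaling choices are assumptions for the example.

```python
import numpy as np

def stochastic_outer(x, y, stream_len=256, rng=None):
    """Approximate the outer product x ⊗ y using stochastic bit streams.

    Each magnitude is encoded as a Bernoulli bit stream whose probability
    of a 1 equals the (scaled) magnitude; ANDing two streams multiplies
    the probabilities. Scaling uses powers of two, standing in for the
    bit-shift scaling that replaces FP multiplications in hardware.
    """
    rng = np.random.default_rng(rng)
    # Bit-shift scaling: divide by the next power of two >= max |value|
    # so all magnitudes fall in [0, 1]; rescaling is then a shift.
    sx = 1 << max(0, int(np.ceil(np.log2(np.abs(x).max()))))
    sy = 1 << max(0, int(np.ceil(np.log2(np.abs(y).max()))))
    px, py = np.abs(x) / sx, np.abs(y) / sy
    # Re-use one random tensor per operand across the whole outer product,
    # rather than drawing fresh numbers for every (i, j) pair.
    bits_x = (rng.random((stream_len, x.size)) < px).astype(np.uint8)  # (L, n)
    bits_y = (rng.random((stream_len, y.size)) < py).astype(np.uint8)  # (L, m)
    # AND of streams -> Monte-Carlo estimate of |x_i| * |y_j| / (sx * sy).
    est = (bits_x[:, :, None] & bits_y[:, None, :]).mean(axis=0)       # (n, m)
    # Restore signs and undo the power-of-two scaling.
    return np.sign(x)[:, None] * np.sign(y)[None, :] * est * sx * sy
```

Longer streams reduce the variance of the estimate (the standard error of each entry shrinks as 1/sqrt(stream_len)), which is the usual accuracy/cost trade-off in stochastic computing.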

  • Details
  • Metrics
Type
conference paper
DOI
10.1109/ISCAS45731.2020.9180872
Web of Science ID
WOS:000706854700055
Author(s)
Joshi, Vinay
Karunaratne, Geethan
Le Gallo, Manuel
Boybat, Irem
Piveteau, Christophe
Sebastian, Abu
Rajendran, Bipin
Eleftheriou, Evangelos
Date Issued
2020-01-01
Publisher
IEEE
Publisher place
New York
Published in
2020 IEEE International Symposium on Circuits and Systems (ISCAS)
ISBN of the book
978-1-7281-3320-1
Subjects
Engineering, Electrical & Electronic • Engineering
Editorial or Peer reviewed
REVIEWED

Written at
EPFL
EPFL units
LSM
Event name
IEEE International Symposium on Circuits and Systems (ISCAS)
Event place
ELECTR NETWORK
Event date
Oct 10-21, 2020
Available on Infoscience
November 20, 2021
Use this identifier to reference this record
https://infoscience.epfl.ch/handle/20.500.14299/183136
Infoscience is a service managed and provided by the Library and IT Services of EPFL. © EPFL, all rights reserved.