Infoscience

conference paper

Error Resilient In-Memory Computing Architecture for CNN Inference on the Edge

Rios, Marco Antonio • Ponzina, Flavio • Ansaloni, Giovanni • Levisse, Alexandre Sébastien Julien • Atienza Alonso, David
June 7, 2022
Proceedings of the Great Lakes Symposium on VLSI 2022
Great Lakes Symposium on VLSI 2022 (GLSVLSI ’22)

The growing popularity of edge computing has fostered the development of diverse solutions to support Artificial Intelligence (AI) in energy-constrained devices. Nonetheless, comparatively few efforts have focused on the resiliency exhibited by AI workloads (such as Convolutional Neural Networks, CNNs) as an avenue towards increasing their run-time efficiency, and even fewer have proposed strategies to increase such resiliency. We herein address this challenge in the context of Bit-line Computing architectures, an embodiment of the in-memory computing paradigm tailored towards CNN applications. We show that little additional hardware is required to add highly effective error detection and mitigation in such platforms. In turn, our proposed scheme can cope with high error rates when performing memory accesses with no impact on CNN accuracy, allowing for very aggressive voltage scaling. Complementarily, we also show that CNN resiliency can be increased by algorithmic optimizations in addition to architectural ones, adopting a combined ensembling and pruning strategy that increases robustness while not inflating workload requirements. Experiments on different quantized CNN models reveal that our combined hardware/software approach enables the supply voltage to be reduced to just 650mV, decreasing the energy per inference by up to 51.3% without affecting the baseline CNN classification accuracy.
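The kind of resiliency study the abstract describes is often evaluated by injecting bit-level read errors into quantized weights and observing the effect on accuracy. The following is a minimal illustrative sketch of such fault injection (not the paper's actual framework; the error rate, tensor shape, and function name are assumptions for the example):

```python
import numpy as np

rng = np.random.default_rng(0)

def inject_bit_flips(weights, bit_error_rate, rng):
    """Flip random bits in an int8 weight tensor to emulate memory read
    errors at aggressively scaled supply voltages."""
    w = weights.copy()
    # Reinterpret the int8 buffer as raw bytes so individual bits can be flipped.
    flat = w.view(np.uint8).reshape(-1)
    n_bits = flat.size * 8
    # Each bit fails independently with probability bit_error_rate.
    n_flips = rng.binomial(n_bits, bit_error_rate)
    positions = rng.integers(0, n_bits, size=n_flips)
    for p in positions:
        flat[p // 8] ^= np.uint8(1 << (p % 8))
    return w

# Hypothetical int8-quantized weight tensor.
weights = rng.integers(-128, 128, size=(64, 64), dtype=np.int8)
noisy = inject_bit_flips(weights, bit_error_rate=1e-3, rng=rng)
print("fraction of bytes changed:", float(np.mean(weights != noisy)))
```

In a full study, the perturbed weights would be loaded back into the model and inference accuracy measured across error rates, mapping each rate to a supply-voltage operating point.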

Type
conference paper
DOI
10.1145/3526241.3530351
Author(s)
Rios, Marco Antonio  
Ponzina, Flavio  
Ansaloni, Giovanni  
Levisse, Alexandre Sébastien Julien  
Atienza Alonso, David  
Date Issued

2022-06-07

Published in
Proceedings of the Great Lakes Symposium on VLSI 2022
ISBN of the book

978-1-4503-9322-5

Number of pages

6

Subjects

In-Memory Computing • Fault Tolerant Architectures • Deep Neural Networks

Editorial or Peer reviewed

REVIEWED

Written at

EPFL

EPFL units
ESL  
Event name: Great Lakes Symposium on VLSI 2022 (GLSVLSI ’22)
Event place: Irvine, California, USA
Event date: June 6-8, 2022

Available on Infoscience
April 12, 2022
Use this identifier to reference this record
https://infoscience.epfl.ch/handle/20.500.14299/187125

Infoscience is a service managed and provided by the Library and IT Services of EPFL. © EPFL, all rights reserved.