Mitigating the Impact of Faults in Unreliable Memories for Error-Resilient Applications

Inherently error-resilient applications in areas such as signal processing, machine learning, and data analytics offer opportunities to relax reliability requirements and thereby reduce the overheads incurred by conventional error-correction schemes. In this paper, we exploit the tolerable imprecision of such applications by designing an energy-efficient fault-mitigation scheme that allows unreliable memories to meet a target yield. The proposed approach uses a bit-shuffling mechanism to isolate faults in bit locations of lower significance. This skews the bit-error distribution towards the low-order bits, substantially limiting the output error magnitude. By controlling the granularity of the shuffling, the proposed technique enables trading off quality for power, area, and timing overhead. Compared to error-correction codes, it reduces overhead by as much as 83% in power, 89% in area, and 77% in access time when applied to various data-mining applications in a 28 nm process technology.
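To illustrate the core idea of the abstract, here is a minimal sketch, not the paper's implementation: it assumes a simple stuck-at fault model and an 8-bit word, and all function names and the cell/bit layout are illustrative assumptions. Mapping a word's least-significant bits onto known-faulty cells bounds the magnitude of the read-back error.

```python
# Hypothetical illustration of significance-aware bit-shuffling (not the
# paper's actual scheme): known-faulty memory cells receive the word's
# least-significant bits, so a fault corrupts only low-order data.

WORD_BITS = 8

def make_mapping(faulty_cells):
    """Assign bit significances to physical cells: faulty cells get the
    lowest significances, healthy cells the remaining higher ones."""
    faulty = sorted(c for c in range(WORD_BITS) if c in faulty_cells)
    healthy = sorted(c for c in range(WORD_BITS) if c not in faulty_cells)
    cells = faulty + healthy  # cell order by ascending bit significance
    return {cell: sig for sig, cell in enumerate(cells)}

def write(value, mapping):
    """Return the per-cell stored bits for `value` under `mapping`."""
    return {cell: (value >> sig) & 1 for cell, sig in mapping.items()}

def read(cells, mapping, stuck_at):
    """Read the word back, with faulty cells forced to their stuck-at value."""
    v = 0
    for cell, sig in mapping.items():
        bit = stuck_at.get(cell, cells[cell])  # fault overrides stored bit
        v |= bit << sig
    return v

stuck = {7: 0}                         # assume cell 7 is stuck at 0
ident = {c: c for c in range(WORD_BITS)}  # no shuffling: cell 7 holds bit 7
shuf = make_mapping(stuck)                # shuffled: cell 7 holds bit 0

value = 0b10110101  # 181; bit 7 is set, so the fault matters
plain_err = abs(value - read(write(value, ident), ident, stuck))
shuf_err = abs(value - read(write(value, shuf), shuf, stuck))
print(plain_err, shuf_err)  # worst-case error drops from 128 to 1
```

In this toy example the same stuck-at-0 cell costs 128 (the MSB) without shuffling but only 1 (the LSB) with it, which is the error-magnitude skew the abstract describes; the paper additionally controls the granularity at which such shuffling is applied.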


Published in:
Proceedings of the Design Automation Conference, 1-6
Presented at:
Design Automation Conference (DAC'15), San Francisco, California, USA, June 7-11, 2015
Year:
2015




 Record created 2015-02-17, last modified 2018-09-13
