Incremental Learning Meets Reduced Precision Networks

Hardware accelerators for Deep Neural Networks (DNNs) that use reduced precision parameters are more energy efficient than the equivalent full precision networks. While many studies have focused on reduced precision training methods for supervised networks trained on large datasets, less work has been reported on incremental learning algorithms that adapt a network to new classes, and on the consequences reduced precision has for these algorithms. This paper presents an empirical study of how reduced precision training methods impact the iCaRL incremental learning algorithm. Incremental accuracies on the CIFAR-100 image dataset show that weights can be quantized to 1 bit with only a 2.39% drop in accuracy, whereas quantizing activations to 1 bit causes a much larger drop of 12.75%. Quantizing gradients from 32 to 8 bits changes the accuracy of the trained network by less than 1%. These results are encouraging for hardware accelerators that support incremental learning algorithms.
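To make the quantization schemes mentioned above concrete, the sketch below shows two common reduced precision operators: scaled 1-bit (sign) binarization of weights and uniform fixed-point quantization, as might be applied to 8-bit gradients. This is a generic illustration in the style of BinaryConnect/XNOR-Net binarization; the paper's exact quantizers and training procedure may differ.

```python
import numpy as np

def binarize_weights(w):
    """1-bit weight quantization: keep only the sign of each weight,
    scaled by the mean absolute value so the dynamic range is roughly
    preserved (a common binarization scheme; hypothetical here, the
    paper's exact quantizer may differ)."""
    alpha = np.mean(np.abs(w))
    return alpha * np.sign(w)

def quantize_uniform(x, bits=8):
    """Uniform symmetric quantization of x to the given bit width,
    e.g. 8-bit gradients. Values are scaled to the max magnitude,
    rounded to the nearest level, and rescaled."""
    scale = np.max(np.abs(x)) + 1e-12  # avoid division by zero
    levels = 2 ** (bits - 1) - 1       # e.g. 127 levels for 8 bits
    return np.round(x / scale * levels) / levels * scale
```

With such operators, a quantized forward pass uses `binarize_weights(w)` in place of `w`, while the full precision copy of `w` is kept for the gradient update, which can itself be passed through `quantize_uniform(grad, bits=8)`.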

Presented at:
2019 IEEE International Symposium on Circuits and Systems (ISCAS), Sapporo, Japan, May 26-28, 2019
May 01 2019

