Abstract

Codebook-based optimizations are a class of algorithmic-level transformations that effectively reduce the computing and memory requirements of Convolutional Neural Networks (CNNs). This approach tightly limits the number of unique weights in each layer, allowing the employed values to be stored in codebooks containing a small number of floating-point entries. CNN models are then represented as low-bitwidth indexes into such codebooks. This work introduces a novel iterative methodology for finding highly beneficial schemes that trade off accuracy against model compression in codebook-based CNNs. Our strategy can retrieve non-uniform solutions driven by an accuracy constraint embedded in the optimization loop. Our results indicate that, for a 1% accuracy degradation, our methodology can compress baseline floating-point CNN models by up to 19x. Moreover, by reducing the number of memory accesses, our strategy increases energy efficiency and improves inference performance by up to 91%.
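To illustrate the core idea of codebook-based weight representation, the following is a minimal sketch (not the paper's methodology) that clusters a layer's weights into a small codebook with 1-D k-means and stores the layer as low-bitwidth indexes; the function name and parameters are illustrative assumptions:

```python
import numpy as np

def codebook_quantize(weights, n_entries=16, n_iters=20, seed=0):
    """Cluster layer weights into a small floating-point codebook via
    1-D k-means; the layer is then stored as indexes into the codebook."""
    flat = weights.ravel()
    rng = np.random.default_rng(seed)
    # initialize the codebook with randomly chosen weight values
    codebook = rng.choice(flat, size=n_entries, replace=False)
    for _ in range(n_iters):
        # assign each weight to its nearest codebook entry
        idx = np.abs(flat[:, None] - codebook[None, :]).argmin(axis=1)
        # move each entry to the mean of its assigned weights
        for k in range(n_entries):
            if np.any(idx == k):
                codebook[k] = flat[idx == k].mean()
    idx = np.abs(flat[:, None] - codebook[None, :]).argmin(axis=1)
    return codebook, idx.reshape(weights.shape).astype(np.uint8)

# Example: quantize a random "layer" to a 16-entry codebook,
# so each weight is replaced by a 4-bit index.
w = np.random.default_rng(1).normal(size=(64, 64)).astype(np.float32)
cb, idx = codebook_quantize(w, n_entries=16)
w_hat = cb[idx]  # reconstructed weights, at most 16 unique values
```

With 16 entries, each weight needs only 4 index bits instead of 32 float bits, which is the source of the compression the abstract describes; the paper's iterative methodology additionally searches for non-uniform per-layer schemes under an accuracy constraint.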
