Gradient Approximation of Approximate Multipliers for High-Accuracy Deep Neural Network Retraining
Approximate multipliers (AppMults) are widely employed in deep neural network (DNN) accelerators to reduce area, delay, and power consumption. However, the inaccuracies of AppMults degrade DNN accuracy, necessitating a retraining process to recover it. A critical step in retraining is computing the gradient of the AppMult, i.e., the partial derivative of the approximate product with respect to each input operand. Conventional methods approximate this gradient by that of the accurate multiplier (AccMult), often leading to suboptimal retraining results, especially for AppMults with relatively large errors. To address this issue, we propose a difference-based gradient approximation for AppMults that improves retraining accuracy. Experimental results show that, compared to state-of-the-art methods, our method improves DNN accuracy after retraining by 4.10% and 2.93% on average for the VGG and ResNet models, respectively. Moreover, after retraining a ResNet18 model with a 7-bit AppMult, the final DNN accuracy does not degrade relative to the quantized model using the 7-bit AccMult, while power consumption is reduced by 51%.
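The abstract only sketches the method; the full formulation is in the paper. As a minimal illustration, the Python sketch below contrasts the conventional surrogate gradient (that of the accurate multiplier, ∂(a·b)/∂a = b) with a difference-based surrogate that probes how the approximate product itself changes between neighboring operand values. Everything here is assumed for illustration: `appmult` is a hypothetical truncation-based approximate multiplier standing in for a real AppMult design (in practice the products would come from a lookup table of the circuit's outputs), and the central-difference step of 1 reflects the integer operand grid.

```python
import numpy as np

def appmult(a: np.ndarray, b: np.ndarray) -> np.ndarray:
    # Hypothetical approximate multiplier: the exact product with its
    # 4 least-significant bits truncated. A stand-in for a real AppMult;
    # a real flow would read the circuit's output from a lookup table.
    return (a * b) & ~0xF

def grad_conventional(a: np.ndarray, b: np.ndarray) -> np.ndarray:
    # Conventional surrogate: gradient of the ACCURATE multiplier,
    # d(a*b)/da = b, ignoring the AppMult's error behavior.
    return b

def grad_difference(a: np.ndarray, b: np.ndarray) -> np.ndarray:
    # Difference-based surrogate: central difference of the AppMult
    # over the integer operand grid,
    #   d appmult(a, b) / da  ~=  (appmult(a+1, b) - appmult(a-1, b)) / 2,
    # which tracks how the *approximate* product actually varies with a.
    return (appmult(a + 1, b) - appmult(a - 1, b)) / 2.0

if __name__ == "__main__":
    a = np.arange(0, 16)
    b = np.full_like(a, 11)
    print("conventional d/da:", grad_conventional(a, b))
    print("difference   d/da:", grad_difference(a, b))
```

In a retraining flow, such a surrogate would typically be attached to the approximate multiply through a custom autograd function whose forward pass returns the AppMult output and whose backward pass substitutes the difference-based gradient; the exact integration used by the authors is not specified in the abstract.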
Publication date: 2025-03-31
ISBN: 978-3-9826741-0-0
Published in: Proceedings. Design, Automation, and Test in Europe Conference and Exhibition
ISSN: 1558-1101
Pages: 1-7
Review status: REVIEWED
Institution: EPFL
| Event name | Event acronym | Event place | Event date |
| --- | --- | --- | --- |
| Design, Automation, and Test in Europe Conference and Exhibition | DATE 2025 | Lyon, France | 2025-03-31 - 2025-04-02 |