Constrained bit allocation for neural networks
The increasing complexity of deep neural networks (DNNs) necessitates effective model compression to reduce their computational and memory footprints for deployment on resource-constrained hardware. Layer-wise bit allocation is a prominent compression method shown to significantly reduce DNN footprints while preserving model accuracy. However, how best to incorporate hardware constraints into the allocation search remains an open question: many approaches tacitly assume constraints can be handled adequately via soft penalties or heuristics, which often fail to guarantee feasibility or optimality. In this paper, we explore a reformulation of the bit allocation problem as an explicit constrained optimization problem, solved using interior-point methods within a NAS-based framework that requires only minimal calibration data (as few as 128 samples). We corroborate this approach with experiments spanning transformer architectures (Llama, Gemma, Qwen; 500M-3B parameters), evaluating performance with MXFP formats. We show that the constrained formulation not only achieves significantly finer resolution in compression ratios than the discrete steps offered by uniform MXFP application (e.g., 4.25, 6.25, or 8.25 bits), but also that explicitly satisfying hardware budgets while optimizing for accuracy consistently outperforms uniform allocation, improving performance by up to several standard deviations in some cases, especially under strict resource limits. Our findings support the efficient deployment of large models on resource-constrained compute platforms, offering insights into best practices for applying bit allocation to maximize hardware resource efficiency without unduly compromising accuracy.
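To make the core idea concrete, the sketch below poses layer-wise bit allocation as an explicitly constrained optimization problem, in the spirit described above. It is not the authors' implementation: the per-layer sensitivities, the exponential error model, the continuous relaxation of bit-widths, and the use of SciPy's trust-region interior-point solver (`trust-constr`) in place of the paper's NAS-based framework are all illustrative assumptions.

```python
# Minimal sketch (assumed, not the paper's method): allocate a bit-width to each
# layer so that a hardware budget on the average bits per parameter is satisfied
# exactly, while minimizing a proxy for quantization-induced accuracy loss.
import numpy as np
from scipy.optimize import minimize, LinearConstraint, Bounds

# Hypothetical per-layer statistics: parameter counts and quantization
# sensitivities (e.g., estimated from a small calibration set, such as the
# 128 samples mentioned in the abstract).
params = np.array([2.0e6, 8.0e6, 8.0e6, 2.0e6])   # parameters per layer
sensitivity = np.array([1.0, 0.3, 0.5, 2.0])       # assumed accuracy impact per layer

avg_bit_budget = 6.0                                # hardware budget: mean bits/parameter

def proxy_loss(bits):
    # Assumed error model: quantization error decays exponentially with bit-width,
    # weighted by layer size and sensitivity. Stands in for the accuracy objective.
    return float(np.sum(sensitivity * params * 2.0 ** (-2.0 * bits)))

# Explicit budget constraint: the parameter-weighted average bit-width
# must not exceed the hardware budget.
weights = params / params.sum()
budget = LinearConstraint(weights, lb=0.0, ub=avg_bit_budget)

# Continuous relaxation of the supported range of bit-widths (e.g., 4 to 8 bits);
# a real pipeline would map the solution back to available MXFP formats.
bounds = Bounds(lb=4.0, ub=8.0)

x0 = np.full(len(params), avg_bit_budget)           # feasible starting point
result = minimize(proxy_loss, x0, method="trust-constr",
                  constraints=[budget], bounds=bounds)

print("allocated bits per layer:", np.round(result.x, 2))
print("average bits per parameter:", float(weights @ result.x))
```

Under this toy model, the solver trades bits away from large, insensitive layers toward small, sensitive ones while keeping the weighted average at or below the budget, which is the behaviour the constrained formulation is meant to guarantee by construction rather than via soft penalties.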