Training DNNs with Hybrid Block Floating Point

The wide adoption of DNNs has given birth to unrelenting computing requirements, forcing datacenter operators to adopt domain-specific accelerators to train them. These accelerators typically employ densely packed full-precision floating-point arithmetic to maximize performance per area. Ongoing research efforts seek to further increase that performance density by replacing floating-point with fixed-point arithmetic. However, a significant roadblock for these attempts has been fixed point's narrow dynamic range, which is insufficient for DNN training convergence. We identify block floating point (BFP) as a promising alternative representation since it exhibits wide dynamic range and enables the majority of DNN operations to be performed with fixed-point logic. Unfortunately, BFP alone introduces several limitations that preclude its direct applicability. In this work, we introduce HBFP, a hybrid BFP-FP approach, which performs all dot products in BFP and other operations in floating point. HBFP delivers the best of both worlds: the high accuracy of floating point at the superior hardware density of fixed point. For a wide variety of models, we show that HBFP matches floating point's accuracy while enabling hardware implementations that deliver up to 8.5x higher throughput.
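
As a rough illustration of the idea only (not the paper's implementation), the NumPy sketch below quantizes vectors into blocks that share a single power-of-two exponent and carries out the dot product as integer multiply-accumulates plus one rescale per block. The helper names to_bfp and bfp_dot, the 8-bit mantissas, and the block size of 64 are assumptions made for the sketch.

    import numpy as np

    def to_bfp(x, mantissa_bits=8, block_size=64):
        # Quantize a 1-D array to block floating point: each block of
        # `block_size` values shares one power-of-two exponent and keeps
        # signed fixed-point mantissas of `mantissa_bits` bits.
        # (Illustrative sketch; parameters are assumptions, not the paper's.)
        x = np.asarray(x, dtype=np.float64)
        pad = (-x.size) % block_size
        blocks = np.pad(x, (0, pad)).reshape(-1, block_size)
        max_mag = np.abs(blocks).max(axis=1, keepdims=True)
        safe_max = np.maximum(max_mag, np.finfo(np.float64).tiny)
        exp = np.floor(np.log2(safe_max))             # shared block exponent
        scale = 2.0 ** (exp - (mantissa_bits - 2))    # weight of one mantissa LSB
        limit = 2 ** (mantissa_bits - 1) - 1
        mant = np.clip(np.round(blocks / scale), -limit, limit)
        return mant, scale

    def bfp_dot(a, b, mantissa_bits=8, block_size=64):
        # Dot product on BFP operands: integer multiply-accumulate within each
        # block (fixed-point logic on hardware), one rescale per block in FP.
        ma, sa = to_bfp(a, mantissa_bits, block_size)
        mb, sb = to_bfp(b, mantissa_bits, block_size)
        per_block = (ma * mb).sum(axis=1, keepdims=True)  # exact integer sums
        return float((per_block * sa * sb).sum())

    rng = np.random.default_rng(0)
    a, b = rng.normal(size=512), rng.normal(size=512)
    print(np.dot(a, b), bfp_dot(a, b))  # BFP result closely tracks the FP result

On an accelerator, the per-block products and accumulations map onto dense fixed-point multiply-accumulate units, while the per-block rescale and all non-dot-product operations remain in floating point, mirroring the hybrid split described in the abstract.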


Published in:
Proceedings of the Thirty-second Conference on Neural Information Processing Systems (NeurIPS 2018)
Presented at:
Neural Information Processing Systems, Montréal, Canada, December 2-8, 2018
Year:
2018