Backpropagation-free Training of Analog AI Accelerators
Deep learning has achieved remarkable success across diverse fields in recent years. However, this growth presents significant challenges, particularly in terms of energy consumption during both training and inference. While there have been efforts to improve energy efficiency at inference time, efficient training of deep learning models remains a largely unaddressed challenge. Training of digital deep learning models typically relies on backpropagation, a process that is difficult to implement physically because it requires precise knowledge of the forward-pass computations in the network. To overcome this issue, we present a physics-compatible deep neural network architecture, augmented by a biologically inspired learning algorithm referred to as physical local learning (PhyLL). This framework enables direct training of deep physical neural networks composed of layers of physical nonlinear systems. Notably, our approach dispenses with the need for detailed knowledge of the specific properties of these nonlinear physical layers. It outperforms state-of-the-art hardware-aware training methods, improving training speed while reducing digital computation and power consumption in physical systems, particularly optical ones.
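The abstract does not spell out the algorithm; the sketch below only illustrates the general idea of backpropagation-free, layer-wise local training under explicit assumptions: each block is a black-box nonlinear transform (a random projection standing in for a physical system, evaluated without gradients) followed by a small trainable digital readout, and the local objective is a forward-forward-style contrastive "goodness" loss separating correctly labeled (positive) from mislabeled (negative) inputs. All names, dimensions, and the specific loss are illustrative assumptions, not the paper's exact PhyLL formulation.

```python
# Minimal sketch of layer-wise local training with a black-box "physical" stage.
# Assumptions: forward-forward-style contrastive loss; random-projection stand-in
# for the physical layer; hypothetical names throughout.
import torch
import torch.nn.functional as F

torch.manual_seed(0)
D_IN, D_PHYS, D_OUT, N = 20, 64, 32, 256

def physical_layer(x, seed):
    # Stand-in for an unknown physical nonlinearity (e.g. an optical medium):
    # a fixed random projection plus tanh, used strictly as a black box.
    g = torch.Generator().manual_seed(seed)
    w = torch.randn(x.shape[-1], D_PHYS, generator=g) / x.shape[-1] ** 0.5
    return torch.tanh(x @ w)

# Toy data: "positive" samples embed the true label, "negative" a wrong one.
x = torch.randn(N, D_IN - 2)
y = torch.randint(0, 2, (N,))
pos = torch.cat([x, F.one_hot(y, 2).float()], dim=1)
neg = torch.cat([x, F.one_hot(1 - y, 2).float()], dim=1)

# One trainable digital readout per block, each trained with a purely local loss.
readouts = [torch.nn.Linear(D_PHYS, D_OUT) for _ in range(2)]
opts = [torch.optim.Adam(r.parameters(), lr=1e-2) for r in readouts]

h_pos, h_neg = pos, neg
for depth, (readout, opt) in enumerate(zip(readouts, opts)):
    with torch.no_grad():  # the physical stage is never differentiated
        p = physical_layer(h_pos, seed=depth)
        n = physical_layer(h_neg, seed=depth)
    for step in range(200):
        zp, zn = readout(p), readout(n)
        # Local contrastive objective: high "goodness" (mean squared activation)
        # for positive data, low for negative data.
        goodness_p = zp.pow(2).mean(dim=1)
        goodness_n = zn.pow(2).mean(dim=1)
        loss = F.softplus(torch.cat([1.0 - goodness_p, goodness_n - 1.0])).mean()
        opt.zero_grad(); loss.backward(); opt.step()
    # Pass the trained block's output on to the next block (no global gradient).
    with torch.no_grad():
        h_pos = readout(physical_layer(h_pos, seed=depth))
        h_neg = readout(physical_layer(h_neg, seed=depth))
    print(f"block {depth}: final local loss {loss.item():.3f}")
```

Because the physical stage is only ever evaluated in the forward direction, its internal parameters never need to be known or modeled; each digital readout is updated from its own local loss, which is the property the abstract highlights.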
2-s2.0-85207820280
2024
9798350373493
REVIEWED
EPFL
Event name | Event acronym | Event place | Event date
 | | Chania, Greece | 2024-09-09 - 2024-09-14