Hardware Complexity Analysis of Deep Neural Networks and Decision Tree Ensembles for Real-time Neural Data Classification
A fast and low-power embedded classifier with a small footprint is essential for real-time applications such as brain-machine interfaces (BMIs) and closed-loop neuromodulation for neurological disorders. In most applications with large datasets of unstructured data, such as images, deep neural networks (DNNs) achieve remarkable classification accuracy. However, DNN models impose a high computational cost during inference and are not necessarily ideal for problems with limited training sets. The computationally intensive nature of deep models may also degrade the classification latency, which is critical for real-time closed-loop applications. Among other methods, ensembles of decision trees (DTs) have recently been very successful in neural data classification tasks. DTs can be designed to successively process a limited number of features during inference, and thus impose much lower computational and memory overhead. Here, we compare the hardware complexity of DNNs and gradient-boosted DTs for classification of real-time electrophysiological data in epilepsy. Our analysis shows that the strict energy-area-latency tradeoff can be relaxed using an ensemble of DTs, which can be significantly more efficient than alternative DNN models while achieving better classification accuracy in real-time neural data classification tasks.
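To illustrate why DT ensembles impose low inference cost, the following is a minimal sketch (hypothetical data structures and toy trees, not the paper's implementation): a depth-d tree touches at most d features per sample, so an ensemble of T depth-limited trees needs only about T·d comparisons and additions, in contrast to the dense multiply-accumulate workload of a DNN layer.

```python
# Sketch of gradient-boosted decision tree inference.
# Each internal node is a tuple (feature_index, threshold, left, right);
# each leaf is a raw score (these structures are illustrative assumptions).

def tree_predict(node, x):
    """Walk one tree: a depth-d tree compares at most d features per sample."""
    while isinstance(node, tuple):
        feat, thresh, left, right = node
        node = left if x[feat] <= thresh else right
    return node  # leaf score

def ensemble_predict(trees, x):
    """Gradient boosting sums the leaf scores of all trees; for binary
    classification, a sigmoid of this sum would give the class probability."""
    return sum(tree_predict(t, x) for t in trees)

# Two toy depth-2 trees over a 4-feature sample:
t1 = (0, 0.5, (1, 0.2, -1.0, 0.5), 1.2)
t2 = (2, 0.0, -0.3, (3, 1.0, 0.4, 0.9))
x = [0.7, 0.1, -0.2, 2.0]
score = ensemble_predict([t1, t2], x)
```

Only four feature comparisons at most are executed here regardless of the input dimensionality, which is the property that relaxes the energy-area-latency tradeoff in hardware.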