Constrained Zero-Shot Neural Architecture Search on Small Classification Dataset
The rapid evolution of Deep Learning (DL) has brought about significant transformations across scientific domains, marked by the development of increasingly intricate models demanding powerful GPU platforms. However, edge applications such as wearables and monitoring systems impose stringent constraints on memory, size, and energy, making on-device processing imperative. To address these constraints, we employ an efficient zero-shot, data-dependent Neural Architecture Search (NAS) strategy that accelerates the search through the use of proxy functions. Additionally, we integrate Knowledge Distillation (KD) during training, harnessing insights from pre-trained models to improve the performance and adaptability of our approach. This combined method not only achieves improved accuracy but also reduces the memory footprint of the model. Our validation on CUB-200-2011 demonstrates the feasibility of obtaining a competitive NAS-optimized architecture for small datasets, compared to models pre-trained on larger ones.
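As an illustration only (not the authors' implementation), the two ingredients named above can be sketched in PyTorch: a data-dependent zero-shot proxy used to rank candidate architectures without training, and a knowledge-distillation loss used during training of the selected network. The proxy shown here (a NASWOT-style activation-overlap score) and all function names and hyper-parameters are assumptions for illustration.

```python
# Minimal sketch: (1) score a candidate network with a data-dependent zero-shot
# proxy, (2) train it with a knowledge-distillation loss. Illustrative only; the
# proxy choice, names, and hyper-parameters are assumptions, not the paper's method.

import torch
import torch.nn as nn
import torch.nn.functional as F


def zero_shot_proxy_score(model: nn.Module, batch: torch.Tensor) -> float:
    """NASWOT-style proxy: rank an untrained network from how well its ReLU
    activation patterns separate the samples of a single data batch."""
    codes = []

    def hook(_module, _inp, out):
        # Binary activation pattern of this ReLU layer, one code per sample.
        codes.append((out > 0).flatten(1).float())

    handles = [m.register_forward_hook(hook) for m in model.modules()
               if isinstance(m, nn.ReLU)]
    with torch.no_grad():
        model(batch)
    for h in handles:
        h.remove()

    c = torch.cat(codes, dim=1)                    # (batch, total ReLU units)
    k = c @ c.t() + (1.0 - c) @ (1.0 - c).t()      # activation-overlap kernel
    k = k + 1e-3 * torch.eye(k.size(0), device=k.device)  # numerical stability
    return torch.logdet(k).item()                  # higher = better separation


def kd_loss(student_logits, teacher_logits, targets, T=4.0, alpha=0.7):
    """Standard Hinton-style distillation: soft targets from a pre-trained
    teacher blended with cross-entropy on the hard labels."""
    soft = F.kl_div(F.log_softmax(student_logits / T, dim=1),
                    F.softmax(teacher_logits / T, dim=1),
                    reduction="batchmean") * (T * T)
    hard = F.cross_entropy(student_logits, targets)
    return alpha * soft + (1 - alpha) * hard
```

In a search loop, `zero_shot_proxy_score` would be evaluated for each sampled architecture on one mini-batch, the highest-scoring candidate (subject to the memory constraint) would be kept, and only that candidate would then be trained with `kd_loss` against a teacher pre-trained on a larger dataset.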
EPFL
Event name | Event acronym | Event place | Event date
| SDS | Zürich | 2024-05-30 - 2024-05-31