Authors: Yu, Kaicheng; Ranftl, Rene; Salzmann, Mathieu
Dates: 2022-01-31; 2022-01-31; 2022-01-31; 2021-01-01 (issued)
DOI: 10.1109/CVPR46437.2021.01351
Handle: https://infoscience.epfl.ch/handle/20.500.14299/184902
Web of Science ID: WOS:000742075003091
Abstract: Weight sharing has become a de facto standard in neural architecture search because it enables the search to be done on commodity hardware. However, recent works have empirically shown a ranking disorder between the performance of stand-alone architectures and that of the corresponding shared-weight networks. This violates the main assumption of weight-sharing NAS algorithms, thus limiting their effectiveness. We tackle this issue by proposing a regularization term that aims to maximize the correlation between the performance rankings of the shared-weight network and those of the stand-alone architectures, using a small set of landmark architectures. We incorporate our regularization term into three different NAS algorithms and show that it consistently improves performance across algorithms, search spaces, and tasks.
Subjects: Computer Science, Artificial Intelligence; Imaging Science & Photographic Technology; Computer Science
Title: Landmark Regularization: Ranking Guided Super-Net Training in Neural Architecture Search
Type: text::conference output::conference proceedings::conference paper
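Note: The abstract describes a regularization term that encourages the super-net to rank a small set of landmark architectures consistently with their known stand-alone performance. The snippet below is a minimal sketch of one plausible instantiation, assuming a pairwise hinge formulation over landmark pairs; the exact loss used in the paper may differ, and names such as `proxy_scores` and `landmark_ranking_regularizer` are hypothetical placeholders.

```python
# A hedged sketch of a landmark-based ranking regularizer (assumption: pairwise
# hinge loss over landmark pairs, not necessarily the paper's exact formulation).
import torch


def landmark_ranking_regularizer(
    proxy_scores: torch.Tensor,      # shape (K,): super-net estimates for K landmark architectures
    standalone_accs: torch.Tensor,   # shape (K,): known stand-alone accuracies of the landmarks
    margin: float = 0.0,
) -> torch.Tensor:
    """Penalize landmark pairs whose super-net ranking disagrees with the stand-alone ranking."""
    # For each ordered pair (i, j) with standalone_accs[i] > standalone_accs[j],
    # the super-net estimate should also satisfy proxy_scores[i] > proxy_scores[j].
    diff_true = standalone_accs.unsqueeze(1) - standalone_accs.unsqueeze(0)  # (K, K)
    diff_pred = proxy_scores.unsqueeze(1) - proxy_scores.unsqueeze(0)        # (K, K)
    better_mask = (diff_true > 0).float()
    # Hinge: penalize pairs where the predicted gap falls below the margin.
    violations = torch.relu(margin - diff_pred) * better_mask
    num_pairs = better_mask.sum().clamp(min=1.0)
    return violations.sum() / num_pairs


if __name__ == "__main__":
    # Toy usage with 4 landmarks; in super-net training this term would be
    # added, with a weighting coefficient, to the usual training loss.
    torch.manual_seed(0)
    standalone = torch.tensor([0.92, 0.90, 0.88, 0.85])
    proxy = torch.randn(4, requires_grad=True)
    reg = landmark_ranking_regularizer(proxy, standalone, margin=0.1)
    reg.backward()
    print("ranking regularizer:", reg.item())
```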