Title: Continual Test-Time Domain Adaptation
Authors: Wang, Qin; Fink, Olga; Van Gool, Luc; Dai, Dengxin
Dates: 2022-12-19; 2022-01-01
DOI: 10.1109/CVPR52688.2022.00706
URL: https://infoscience.epfl.ch/handle/20.500.14299/193350
WOS: WOS:000870759100004
Subjects: Computer Science, Artificial Intelligence; Imaging Science & Photographic Technology; Computer Science
Type: Conference paper

Abstract: Test-time domain adaptation aims to adapt a source pre-trained model to a target domain without using any source data. Existing works mainly consider the case where the target domain is static. However, real-world machine perception systems run in non-stationary, continually changing environments where the target domain distribution can change over time. Existing methods, which are mostly based on self-training and entropy regularization, can suffer in these non-stationary environments: due to the distribution shift over time in the target domain, pseudo-labels become unreliable, and the noisy pseudo-labels can further lead to error accumulation and catastrophic forgetting. To tackle these issues, we propose a continual test-time adaptation approach (CoTTA) which comprises two parts. First, we reduce error accumulation by using weight-averaged and augmentation-averaged predictions, which are often more accurate. Second, to avoid catastrophic forgetting, we stochastically restore a small fraction of the neurons to the source pre-trained weights during each iteration, helping to preserve source knowledge in the long term. The proposed method enables long-term adaptation of all parameters in the network. CoTTA is easy to implement and can be readily incorporated into off-the-shelf pre-trained models. We demonstrate the effectiveness of our approach on four classification tasks and a segmentation task for continual test-time adaptation, on which we outperform existing methods. Our code is available at https://qin.ee/cotta.
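The two mechanisms named in the abstract (weight-averaged and augmentation-averaged predictions, plus stochastic restoration of weights to their source values) can be illustrated with a minimal PyTorch sketch. This is based only on the abstract, not on the authors' released implementation (see https://qin.ee/cotta for the official code); the function names `ema_update`, `augmentation_averaged_prediction`, and `stochastic_restore` and the hyperparameter values `alpha` and `p` are assumptions for illustration.

```python
import torch

@torch.no_grad()
def ema_update(teacher, student, alpha=0.999):
    # Weight-averaged predictions: the teacher is an exponential
    # moving average (EMA) of the student's weights, so its
    # pseudo-labels are more stable under distribution shift.
    # (alpha is an assumed smoothing factor.)
    for t, s in zip(teacher.parameters(), student.parameters()):
        t.mul_(alpha).add_(s, alpha=1.0 - alpha)

@torch.no_grad()
def augmentation_averaged_prediction(teacher, x, augmentations):
    # Augmentation-averaged predictions: average the teacher's
    # softmax outputs over several augmented views of the input
    # to obtain a more reliable pseudo-label.
    probs = torch.stack([teacher(aug(x)).softmax(dim=1) for aug in augmentations])
    return probs.mean(dim=0)

@torch.no_grad()
def stochastic_restore(model, source_state, p=0.01):
    # Stochastic restoration: reset a random fraction p of the
    # weights to their source pre-trained values each iteration,
    # preserving source knowledge over long-term adaptation.
    # source_state is a copy of the source model's state_dict().
    for name, param in model.named_parameters():
        mask = (torch.rand_like(param) < p).to(param.dtype)
        param.copy_(mask * source_state[name] + (1.0 - mask) * param)
```

In an adaptation loop under these assumptions, one would use the augmentation-averaged teacher output as the pseudo-label to update the student on each test batch, then call `ema_update` and `stochastic_restore` after every optimization step.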