Modern neuroscience research is generating increasingly large datasets, from recordings of thousands of neurons over long timescales to behavioral recordings of animals spanning weeks, months, or even years. Despite the great variety of recording setups and experiments, analysis goals are often shared. When studying biological systems, we want to probe and infer the "hidden causes" underlying a phenomenon and their dynamics, even though such dynamics can have different underlying structures and unfold on different time scales. Towards this goal, we need robust methods for processing and analyzing data, and for interpreting our findings to inform subsequent experiments. In this thesis, I study the problem of supporting the scientific discovery process by applying machine learning and statistical tools for data processing (Ch.2-5), analysis (Ch.6-7), and informing subsequent experiments through interpretability (Ch.8).

For processing, in Ch.2 I introduce new evaluation paradigms for testing the performance of a computer vision model under distribution shift at deployment time. In many realistic scenarios, a few unlabeled samples from the target distribution are available. I leverage this assumption to propose batch norm adaptation, which considerably improves the error rates of current machine vision models on the ImageNet-C and ImageNet-R datasets. In Ch.3, I extend this methodology to test-time adaptation and empirically study the performance of self-learning techniques, showing that self-learning methods are effective at adapting models of all kinds on a range of adaptation benchmarks. While more powerful than batch norm adaptation, self-learning techniques are prone to collapse during long adaptation spans. In Ch.4 I study this problem in depth, and show through a simple baseline that the only effective solution currently available is to periodically reset the model. In Ch.5, I study the robustness problem in the context of pose estimation and show that pre-training is crucial for out-of-distribution performance.

For analysis, in Ch.6 I study the effectiveness of current self-supervised learning approaches for representation learning, and show that by designing specialized loss functions, we can use contrastive learning to solve non-linear independent component analysis under different assumptions on the latent distribution of a dataset. In Ch.7, I design such a loss function, a generalized variant of the InfoNCE loss, and apply the algorithm to several open neuroscience datasets. The resulting method, CEBRA, can perform scientific discovery and hypothesis testing within a single algorithmic framework to jointly model behavioral and neural data.

Finally, in Ch.8 I extend this model to allow interpretability and propose an identifiable approach to generating attribution maps. This method attributes latent and observable factors back to the original signal space. Such methods can close the loop and inform data collection for the next iteration of experiments by proposing worthwhile interventions. This work is a step towards more reliable use of machine learning methods for science, where reproducibility and robustness are of even greater interest than in engineering applications.
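To make the batch norm adaptation idea from Ch.2 concrete, the following is a minimal sketch in PyTorch (an assumed framework, not necessarily the thesis implementation): the running statistics of every BatchNorm layer are re-estimated from a few unlabeled target-domain batches before evaluation. The `target_batches` placeholder stands in for real unlabeled target data such as corrupted ImageNet images.

```python
import torch
import torch.nn as nn
from torchvision import models

model = models.resnet50(weights="IMAGENET1K_V1")

# Stand-in for a few unlabeled target-domain batches (hypothetical data).
target_batches = [torch.randn(32, 3, 224, 224) for _ in range(4)]

# Re-estimate BatchNorm statistics from the target data.
for m in model.modules():
    if isinstance(m, nn.BatchNorm2d):
        m.reset_running_stats()
        m.momentum = None  # None -> cumulative average over all adaptation batches

model.train()          # BatchNorm updates its running stats only in train mode
with torch.no_grad():  # no parameter updates, only statistics
    for x in target_batches:
        model(x)

model.eval()  # predictions now use the adapted statistics
```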
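Ch.3 and Ch.4 study self-learning at test time and its tendency to collapse over long adaptation spans. The sketch below, again in PyTorch and assuming a hard pseudo-labeling variant of self-learning, illustrates the idea together with the periodic-reset baseline; it is an illustration of the general recipe, not the exact procedure used in the thesis.

```python
import copy
import torch
import torch.nn.functional as F

def self_learning_adapt(model, target_batches, lr=1e-4, reset_every=100):
    """Adapt `model` on unlabeled target batches using hard pseudo-labels,
    resetting to the source weights every `reset_every` steps."""
    source_state = copy.deepcopy(model.state_dict())
    optimizer = torch.optim.SGD(model.parameters(), lr=lr)
    model.train()
    for step, x in enumerate(target_batches):
        logits = model(x)
        pseudo_labels = logits.argmax(dim=1)           # model's own predictions as targets
        loss = F.cross_entropy(logits, pseudo_labels)  # self-training objective
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
        if (step + 1) % reset_every == 0:
            model.load_state_dict(source_state)        # periodic reset guards against collapse
    return model
```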
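Ch.6 and Ch.7 build on contrastive learning with the InfoNCE objective. As a reference point, here is a minimal PyTorch sketch of the standard InfoNCE loss for paired embeddings; CEBRA uses a generalized variant of this objective, so the snippet should be read only as an illustration of the loss family.

```python
import torch
import torch.nn.functional as F

def info_nce(anchor, positive, temperature=0.1):
    """Standard InfoNCE loss for (anchor, positive) embeddings of shape (batch, dim).
    Each anchor's positive is the matching row; all other rows act as negatives."""
    anchor = F.normalize(anchor, dim=1)
    positive = F.normalize(positive, dim=1)
    logits = anchor @ positive.t() / temperature
    targets = torch.arange(anchor.size(0), device=anchor.device)
    return F.cross_entropy(logits, targets)

# Toy usage with random embeddings:
z_ref, z_pos = torch.randn(64, 8), torch.randn(64, 8)
loss = info_nce(z_ref, z_pos)
```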