Abstract

Imaging modalities such as Electron Microscopy (EM) and Light Microscopy (LM) can now deliver high-quality, high-resolution image stacks of neural structures. Though these imaging modalities can be used to analyze a variety of components that are critical to understanding brain function, the amount of human annotation effort required to analyze them remains a major bottleneck. This has triggered great interest in automating the annotation process, with most state-of-the-art algorithms now relying on machine learning. However, such methods still require significant amounts of labeled training examples, which are time-consuming and arduous to produce, stressing the need for new approaches that require less human effort. In light of this, we present two efficient machine learning algorithms that incorporate expert knowledge to maximize prediction performance while speeding up analysis by reducing the amount of labeled data required.

First, we present a new approach for the automated segmentation of synapses in image stacks acquired using EM that relies on image features specifically designed to take spatial context into account. These features are used to train a classifier that can effectively learn cues such as the presence of a nearby post-synaptic region. Our algorithm successfully distinguishes synapses from the numerous other organelles that appear within an EM volume, including those whose local textural properties are relatively similar. We evaluate our approach on three different datasets and demonstrate our ability to reliably collect shape, density, and orientation statistics over hundreds of synapses.

Second, we focus on reducing the required annotation effort. Because experimental conditions change during image acquisition, successive stacks often exhibit differences severe enough that a classifier trained on one volume cannot be reliably applied to another. The tedious annotation process therefore has to be repeated for each new stack, creating a major bottleneck. We present a domain adaptation algorithm that addresses this issue by effectively leveraging labeled examples across different acquisitions, significantly reducing the annotation requirements. Our approach can handle complex, non-linear image feature transformations and scales to large microscopy datasets and high-dimensional feature spaces. We evaluate it on four EM and LM applications where annotation is very costly, achieve a significant improvement over state-of-the-art methods, and demonstrate our ability to greatly reduce human annotation effort.

Third, we apply our synapse segmentation approach to analyze and compare the structure and shape of synaptic densities in adult and aged mice, including their area and number of perforations. This detailed analysis requires labeling each voxel within every synapse, making manual annotation infeasible for large volumes. We show that our approach bridges this gap and demonstrate its effectiveness on six large FIB/SEM brain stacks: it generates segmentations that agree with expert annotations while requiring very little annotation effort. To our knowledge, we are the first to analyze synapse shape in such detail on large stacks, as previous work has relied heavily on manual annotations, restricting analysis to small volumes.
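
The first contribution hinges on features that pool image statistics not only at a voxel but also at fixed spatial offsets around it, so that a classifier can pick up cues such as a nearby post-synaptic region. The Python sketch below illustrates that idea in miniature; the feature choices, offsets, box sizes, and the random-forest classifier are illustrative assumptions, not the thesis implementation.

```python
# Illustrative sketch only: context features pooled at spatial offsets,
# fed to a voxel-wise random forest. Not the thesis algorithm.
import numpy as np
from scipy import ndimage
from sklearn.ensemble import RandomForestClassifier

def context_features(volume, offsets, box=5):
    """Per-voxel features: local mean intensity and gradient magnitude,
    each pooled over a box and sampled at several spatial offsets."""
    smoothed = ndimage.uniform_filter(volume, size=box)
    grad = ndimage.gaussian_gradient_magnitude(volume, sigma=1.0)
    grad_pooled = ndimage.uniform_filter(grad, size=box)
    channels = []
    for dz, dy, dx in offsets:
        # Shifting a pooled channel lets the classifier "look" at a
        # neighbouring region, e.g. a putative post-synaptic site.
        channels.append(np.roll(smoothed, (dz, dy, dx), axis=(0, 1, 2)))
        channels.append(np.roll(grad_pooled, (dz, dy, dx), axis=(0, 1, 2)))
    return np.stack(channels, axis=-1)  # shape (Z, Y, X, n_features)

# Toy volume and sparse voxel labels stand in for an annotated EM stack.
rng = np.random.default_rng(0)
volume = rng.random((32, 64, 64)).astype(np.float32)
labels = (rng.random((32, 64, 64)) > 0.995).astype(np.uint8)

offsets = [(0, 0, 0), (0, 8, 0), (0, -8, 0), (0, 0, 8), (0, 0, -8)]
feats = context_features(volume, offsets)

X = feats.reshape(-1, feats.shape[-1])
y = labels.ravel()
clf = RandomForestClassifier(n_estimators=50, n_jobs=-1).fit(X, y)
proba = clf.predict_proba(X)[:, 1].reshape(volume.shape)  # synapse probability map
```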
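
The second contribution addresses the distribution shift between acquisitions. For intuition only, the sketch below applies CORAL (correlation alignment), a standard linear baseline that matches second-order feature statistics between a labeled source stack and an unlabeled target stack. It is not the thesis method: the thesis algorithm handles non-linear feature transformations, which this linear baseline cannot.

```python
# Minimal CORAL baseline for intuition: align source features to the
# target's second-order statistics, then reuse the source labels.
# The thesis method is different and handles non-linear shifts.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def matpow(A, p):
    """Power of a symmetric positive-definite matrix via eigendecomposition."""
    vals, vecs = np.linalg.eigh(A)
    return vecs @ np.diag(vals ** p) @ vecs.T

def coral(source, target, eps=1e-6):
    """Whiten source features, then re-colour them with target covariance."""
    cs = np.cov(source, rowvar=False) + eps * np.eye(source.shape[1])
    ct = np.cov(target, rowvar=False) + eps * np.eye(target.shape[1])
    aligned = (source - source.mean(0)) @ matpow(cs, -0.5) @ matpow(ct, 0.5)
    return aligned + target.mean(0)

rng = np.random.default_rng(1)
# Labeled features from the source stack; unlabeled features from a new
# stack whose acquisition conditions shifted the feature distribution.
Xs = rng.normal(size=(500, 16))
ys = (Xs[:, 0] > 0).astype(int)
Xt = 1.5 * rng.normal(size=(500, 16)) + 0.3

Xs_aligned = coral(Xs, Xt)
clf = RandomForestClassifier(n_estimators=50).fit(Xs_aligned, ys)
preds = clf.predict(Xt)  # source labels reused on the new acquisition
```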
