The multimodality cell segmentation challenge: toward universal solutions
Cell segmentation is a critical step for quantitative single-cell analysis in microscopy images. Existing cell segmentation methods are often tailored to specific modalities or require manual intervention to specify hyper-parameters in different experimental settings. Here, we present a multimodality cell segmentation benchmark comprising more than 1,500 labeled images derived from more than 50 diverse biological experiments. The top participants developed a Transformer-based deep-learning algorithm that not only exceeds existing methods but can also be applied to diverse microscopy images across imaging platforms and tissue types without manual parameter adjustments. This benchmark and the improved algorithm offer promising avenues for more accurate and versatile cell analysis in microscopy imaging.

Cell segmentation is crucial in many image analysis pipelines. This analysis compares many tools on a multimodal cell segmentation benchmark. A Transformer-based model performed best in terms of both accuracy and general applicability.
WOS:001191084900002
2024-03-26
REVIEWED
Funder | Grant Number
Natural Sciences and Engineering Research Council of Canada | RGPIN-2020-06189
CIFAR AI Chair programs |
British Heart Foundation/NC3Rs grant | NC/S001441/1
Department of Science and Technology, Government of India | SPF/2021/000209
Infosys Centre for AI, IIIT-Delhi |
SNSF | CRSK-3_190526
German Research Foundation | SPP 2041
Helmholtz Association's Initiative and Networking Fund through the Helmholtz International BigBrain Analytics and Learning Laboratory under the Helmholtz International Laboratory | InterLabs-0015