A cross-sensor approach for marine litter detection with self-supervised learning
Marine litter is a growing ecological, economic, and societal concern that must be addressed at a global scale. Floating material aggregates under the effect of oceanic processes to form so-called windrows, which are used as proxies for marine litter. Windrows reach sizes that make them visible to high-resolution optical satellites. Recently, the availability of labeled datasets of Sentinel-2 images (MARIDA, FloatingObjects) has enabled the use of deep learning for large-scale marine litter monitoring: a segmentation model can be trained in a supervised manner to predict the presence of floating objects. However, the temporal resolution of Sentinel-2 (up to 6 days between consecutive acquisitions) limits the operational impact of such tools. In this context, PlanetScope images can fill the temporal gaps of Sentinel-2: they are acquired daily and at a higher spatial resolution (3 m vs. 10 m). Nevertheless, labeled PlanetScope images for the specific purpose of marine debris detection are lacking. To address this gap, we propose a cross-sensor training strategy that allows a model to transfer knowledge from Sentinel-2 to PlanetScope without extra supervision. In particular, we leverage self-supervised learning to pre-train a model that learns a common latent space between the two sensors. Sensor-specific embedding layers project their features into a common U-Net model, itself trained to remove noise from the input images as a self-supervised learning task. Through this task, the model learns the semantics of the data without requiring any labels. Next, the model is fine-tuned on labeled Sentinel-2 images, as in recent deep learning solutions.
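The core idea of the sensor-specific embedding layers can be sketched as a per-pixel projection of each sensor's bands into a shared channel space. The following is a minimal NumPy illustration, not the authors' implementation: the band counts (13 for Sentinel-2, 4 for PlanetScope), the latent width, and the use of a plain linear map (standing in for a learned 1x1-convolution embedding) are all assumptions made for the sake of the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed band counts and latent width (illustrative, not from the abstract).
N_S2, N_PS, N_COMMON = 13, 4, 16

# Sensor-specific embeddings: one projection matrix per sensor. In the
# described model these are learned layers; here they are random stand-ins.
W_s2 = rng.normal(size=(N_S2, N_COMMON)) * 0.1
W_ps = rng.normal(size=(N_PS, N_COMMON)) * 0.1

def embed(image, W):
    """Project an (H, W, bands) image into the shared latent space.

    A per-pixel linear map, equivalent to a 1x1 convolution.
    """
    return image @ W

# Toy patches: the PlanetScope grid is denser (3 m vs. 10 m resolution).
s2_patch = rng.normal(size=(32, 32, N_S2))
ps_patch = rng.normal(size=(96, 96, N_PS))

z_s2 = embed(s2_patch, W_s2)
z_ps = embed(ps_patch, W_ps)

# Both sensors now share the same channel dimension, so a single
# U-Net-style backbone can consume either one.
print(z_s2.shape, z_ps.shape)  # (32, 32, 16) (96, 96, 16)

# The self-supervised denoising task pairs a noise-corrupted input with
# the clean patch as target (only the data pairing is sketched here).
noisy_s2 = s2_patch + rng.normal(scale=0.1, size=s2_patch.shape)
target_s2 = s2_patch
```

Once both sensors map into the same latent space, fine-tuning the shared backbone on labeled Sentinel-2 data can transfer to PlanetScope inputs at prediction time, which is the mechanism the abstract describes.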
Since self-supervised cross-sensor pre-training has forced the model to learn a common representation between the two satellite sources, the model co-learns to segment PlanetScope data while learning to identify marine litter on Sentinel-2 images. Thus, at prediction time, the model can be applied directly to PlanetScope images with excellent results. We evaluate the performance of the developed model on a manually annotated validation set of PlanetScope images: both visual inspection and quantitative assessment highlight the significant improvement of the proposed model over a fully supervised model trained on Sentinel-2 only. This demonstrates the effectiveness of the proposed pre-training strategy as a promising solution for continuous large-scale mapping of marine litter with optical satellites.
EPFL
2025-03-18
| Event name | Event acronym | Event place | Event date |
| --- | --- | --- | --- |
| EGU General Assembly 2025 | EGU25 | Vienna, Austria | 2025-04-27 - 2025-05-02 |