Abstract

In recent years, emerging immersive imaging modalities such as light fields have received growing attention and become increasingly widespread. Light fields are often captured with multi-camera arrays or plenoptic cameras, with the goal of measuring the light arriving from every direction at every point in space. Light field cameras are often sensitive to noise, making light field denoising a crucial pre- and post-processing step. Several conventional light field denoising methods have been proposed in the state of the art; they exploit the redundant information across the different views to remove noise. While learning-based denoising has demonstrated good performance in the context of image denoising, only preliminary works have studied the benefit of using neural networks to denoise light fields. In this paper, a learning-based light field denoising technique based on a convolutional neural network is investigated by extending a state-of-the-art image denoising method and taking advantage of the redundant information provided by the different views of the same scene. The performance of the proposed approach is compared, in terms of accuracy and scalability, to state-of-the-art methods for image and light field denoising, both conventional and learning-based. Moreover, the robustness of the proposed method to different noise types and strengths is evaluated. To facilitate further research on this topic, the code is made publicly available at https://github.com/mmspg/Light-Field-Denoising
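The core idea exploited by the methods above is that the views of a light field are highly redundant, while the sensor noise is independent across views. A minimal NumPy sketch of this principle (not the paper's CNN pipeline) is given below; it assumes a toy Lambertian scene with zero disparity, so every view observes the same underlying image, and all names are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)

def make_noisy_light_field(clean, n_views, sigma):
    """Replicate a clean view n_views times and add i.i.d. Gaussian noise,
    mimicking a (zero-disparity) light field captured by a noisy camera."""
    lf = np.repeat(clean[None, ...], n_views, axis=0)
    return lf + rng.normal(0.0, sigma, size=lf.shape)

def denoise_by_view_averaging(lf):
    """Average the aligned views: since the noise is independent across views,
    its standard deviation drops roughly by a factor of sqrt(n_views)."""
    return lf.mean(axis=0)

clean = rng.random((32, 32))                       # toy "scene"
noisy_lf = make_noisy_light_field(clean, n_views=25, sigma=0.1)

single_view_rmse = np.sqrt(np.mean((noisy_lf[0] - clean) ** 2))
averaged_rmse = np.sqrt(np.mean((denoise_by_view_averaging(noisy_lf) - clean) ** 2))
# With 25 redundant views, the RMSE should shrink by roughly a factor of 5.
```

In practice the views are not identical (they differ by disparity), which is why learning-based methods such as the one investigated in the paper are needed to exploit this redundancy without assuming perfect alignment.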
