BeNeRF: Neural Radiance Fields from a Single Blurry Image and Event Stream
Implicit scene representation has attracted much attention in recent computer vision and graphics research. Most prior methods focus on reconstructing a 3D scene representation from a set of images. In this work, we demonstrate the possibility of recovering a neural radiance field (NeRF) from a single blurry image and its corresponding event stream. To eliminate motion blur, we introduce the event stream to regularize the learning of the NeRF by accumulating it into an image. We model the camera motion with a cubic B-spline in SE(3) space. Both the blurry image and the brightness change within a time interval can then be synthesized from the NeRF, given the 6-DoF poses interpolated from the cubic B-spline. Our method jointly learns both the implicit scene representation and the camera motion by minimizing the difference between the synthesized data and the real measurements, without any prior knowledge of camera poses. We evaluate the proposed method on both synthetic and real datasets. The experimental results demonstrate that we are able to render view-consistent latent sharp images from the learned NeRF and bring a blurry image back to life in high quality.
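The pose interpolation described above can be sketched with the standard cumulative formulation of a uniform cubic B-spline on SE(3). The sketch below is an illustration only, not the paper's implementation: it uses SciPy's generic matrix exponential/logarithm in place of closed-form SE(3) maps, and the function name `spline_pose` is hypothetical.

```python
import numpy as np
from scipy.linalg import expm, logm

# Cumulative basis matrix for a uniform cubic B-spline
# (cumulative formulation commonly used for continuous-time trajectories).
C = (1.0 / 6.0) * np.array([
    [6, 0, 0, 0],
    [5, 3, -3, 1],
    [1, 3, 3, -2],
    [0, 0, 0, 1],
], dtype=float)

def spline_pose(ctrl, u):
    """Interpolate an SE(3) pose at normalized time u in [0, 1)
    from four consecutive 4x4 control poses (hypothetical helper)."""
    b = C @ np.array([1.0, u, u * u, u ** 3])  # cumulative basis weights
    T = ctrl[0].copy()
    for j in range(1, 4):
        # Relative twist between neighboring control poses, in se(3).
        omega = logm(np.linalg.inv(ctrl[j - 1]) @ ctrl[j]).real
        T = T @ expm(b[j] * omega)
    return T
```

With four control poses translated uniformly along x by 0, 1, 2, 3, evaluating at u = 0.5 yields a translation of 1.5, as expected for the segment midpoint; each event timestamp or exposure sample can be mapped to such a u to obtain its 6-DoF pose.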
2-s2.0-85213861083
Westlake University
École Polytechnique Fédérale de Lausanne
Westlake University
Hunan University
Hunan University
Westlake University
2024-10-26
978-3-031-72751-1
Part XXXI
Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics); 15089 LNCS
1611-3349
0302-9743
416-434
REVIEWED
EPFL
| Event name | Event acronym | Event place | Event date |
| European Conference on Computer Vision | ECCV 2024 | Milan, Italy | 2024-09-29 - 2024-10-04 |