Scene Decomposition and Relighting from Image Collections in Neural Rendering
The focus of our research is generating controllable, photo-realistic images of real-world scenes from existing observations, i.e., the inverse rendering problem. We focus on approaches based on neural rendering, which utilize neural networks to decompose the scene, learn its physical properties, and render it under novel lighting conditions. In this proposal, we discuss three papers and how they relate to our research topic. We first look at a simple framework that represents 3D scenes as a volumetric radiance field for view synthesis; we then look at a modification of the first paper that decomposes the scene into illumination, geometry, surface reflectance, etc., enabling relighting; we lastly present a method that uses signed distance functions (SDFs) for scene geometry, addressing drawbacks of the previous methods. Finally, we discuss our proposed solution to the problem and possible future research directions.
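To make the first framework concrete: a volumetric radiance field assigns a density and color to each point along a camera ray, and an image pixel is obtained by alpha-compositing samples along that ray. The sketch below, assuming NumPy and the standard volume-rendering quadrature used by radiance-field methods (function and variable names are illustrative, not from the discussed papers), composites per-sample densities and colors for a single ray:

```python
import numpy as np

def composite_ray(densities, colors, deltas):
    """Composite per-sample densities and RGB colors along one ray
    via the standard volume-rendering quadrature:
      alpha_i = 1 - exp(-sigma_i * delta_i)
      T_i     = prod_{j < i} (1 - alpha_j)
      C       = sum_i T_i * alpha_i * c_i
    densities: (N,) non-negative sigma values
    colors:    (N, 3) RGB per sample
    deltas:    (N,) distances between adjacent samples
    """
    alphas = 1.0 - np.exp(-densities * deltas)          # per-sample opacity
    # Transmittance: probability the ray reaches sample i unoccluded.
    trans = np.cumprod(np.concatenate(([1.0], 1.0 - alphas)))[:-1]
    weights = trans * alphas                            # compositing weights
    return weights @ colors, weights

# A ray with one nearly opaque sample in the middle: the composited
# color is dominated by that sample's color (green).
densities = np.array([0.0, 50.0, 0.0])
colors = np.array([[1.0, 0.0, 0.0],
                   [0.0, 1.0, 0.0],
                   [0.0, 0.0, 1.0]])
deltas = np.full(3, 0.1)
rgb, weights = composite_ray(densities, colors, deltas)
```

In a full system the densities and colors come from a neural network queried at the sample positions; the decomposition-based variants replace the directly predicted color with one computed from estimated lighting and reflectance.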