
Abstract

The focus of our research is generating controllable, photo-realistic images of real-world scenes from existing observations, i.e., the inverse rendering problem. We concentrate on neural rendering approaches, which use neural networks to decompose a scene, learn its physical properties, and render it under novel lighting conditions. In this proposal, we discuss three papers and how they relate to our research topic. We first look at a simple framework that represents a 3D scene as a volumetric radiance field for view synthesis; we then look at a modification of the first work that decomposes the scene into illumination, geometry, surface reflectance, etc., for relighting; we lastly present a method that represents scene geometry with signed distance functions (SDFs), addressing drawbacks of the previous methods. Finally, we discuss our proposed solution to the problem and possible future research directions.
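At the core of the volumetric radiance-field framework mentioned above, a pixel color is obtained by numerically integrating color and density along a camera ray (alpha compositing with accumulated transmittance). The following is a minimal sketch of that quadrature, assuming per-sample densities, colors, and segment lengths are already given; the function and variable names are ours, not from any specific paper.

```python
import numpy as np

def composite_ray(sigmas, colors, deltas):
    """Alpha-composite samples along one ray.

    sigmas: (N,) volume densities at the sample points
    colors: (N, 3) RGB color at each sample point
    deltas: (N,) distance between adjacent samples
    Returns the composited RGB color and per-sample weights.
    """
    # Opacity of each segment: alpha_i = 1 - exp(-sigma_i * delta_i)
    alphas = 1.0 - np.exp(-sigmas * deltas)
    # Transmittance up to sample i: product of (1 - alpha_j) for j < i
    trans = np.cumprod(np.concatenate([[1.0], 1.0 - alphas[:-1]]))
    # Compositing weight of each sample
    weights = trans * alphas
    rgb = (weights[:, None] * colors).sum(axis=0)
    return rgb, weights
```

In a full pipeline, `sigmas` and `colors` would come from a neural network queried at sample positions along the ray, and the weights are reused for depth estimation and hierarchical sampling.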
