Get 3D scene with fewer images by adapting scene priors trained on large datasets with NFP
3D Reconstruction with Generalizable Neural Fields using Scene Priors
arXiv paper abstract https://arxiv.org/abs/2309.15164
arXiv PDF paper https://arxiv.org/pdf/2309.15164.pdf
Project page https://oasisyang.github.io/neural-prior
High-fidelity 3D scene reconstruction has been substantially advanced by recent progress in neural fields.
However, most ... methods train a separate network from scratch for each individual scene. This is not scalable, is inefficient, and cannot yield good results given limited views.
... introduce training of generalizable Neural Fields incorporating scene Priors (NFPs) ... which maps any single-view RGB-D image into signed distance and radiance values.
A complete scene can be reconstructed by merging individual frames in the volumetric space WITHOUT a fusion module, which provides better flexibility.
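The sketch below illustrates the idea of a per-frame mapping and fusion-free volumetric merging. It is a minimal, hypothetical PyTorch example: PriorEncoder, FieldDecoder, and merge_frames are illustrative names and simplified designs, not the authors' actual NFP modules.

```python
import torch
import torch.nn as nn

class PriorEncoder(nn.Module):
    """Encodes a single RGB-D frame into a per-pixel feature map (assumed design)."""
    def __init__(self, feat_dim=64):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(4, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, feat_dim, 3, padding=1),
        )

    def forward(self, rgbd):              # rgbd: (B, 4, H, W) = RGB + depth
        return self.conv(rgbd)            # (B, feat_dim, H, W)

class FieldDecoder(nn.Module):
    """Maps a 3D query point plus a frame feature to (signed distance, RGB radiance)."""
    def __init__(self, feat_dim=64):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(3 + feat_dim, 128), nn.ReLU(),
            nn.Linear(128, 128), nn.ReLU(),
            nn.Linear(128, 4),             # 1 SDF value + 3 radiance channels
        )

    def forward(self, xyz, feat):          # xyz: (N, 3), feat: (N, feat_dim)
        out = self.mlp(torch.cat([xyz, feat], dim=-1))
        sdf, radiance = out[:, :1], torch.sigmoid(out[:, 1:])
        return sdf, radiance

def merge_frames(sdf_per_frame, weights):
    """Fuse per-frame SDF predictions directly in volume space by weighted
    averaging, as a stand-in for merging frames without a learned fusion module.
    sdf_per_frame, weights: (num_frames, N, 1)."""
    w = weights / weights.sum(dim=0, keepdim=True).clamp(min=1e-8)
    return (w * sdf_per_frame).sum(dim=0)  # (N, 1)
```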
The scene priors can be trained on large-scale datasets, allowing for fast adaptation to the reconstruction of a new scene with fewer views.
NFP ... demonstrates SOTA scene reconstruction performance and efficiency ... also supports single-image novel-view synthesis ...
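Continuing the hypothetical example above, one way fast adaptation could look in code is fine-tuning only the lightweight decoder of a pretrained prior on a handful of posed RGB-D views. The pooling, loss, and optimizer settings here are illustrative assumptions, not the paper's exact recipe.

```python
import torch

def adapt_to_new_scene(encoder, decoder, few_views, query_points, gt_sdf, steps=500):
    """few_views: (K, 4, H, W) RGB-D frames of the new scene;
    query_points: (K, N, 3) sampled 3D points; gt_sdf: (K, N, 1) supervision
    derived from depth (assumed available)."""
    # Keep the large pretrained prior (encoder) frozen; adapt only the decoder.
    for p in encoder.parameters():
        p.requires_grad_(False)
    optim = torch.optim.Adam(decoder.parameters(), lr=1e-4)

    for _ in range(steps):
        feats = encoder(few_views)                        # (K, C, H, W)
        # Illustrative pooling: reuse the global frame feature for every query point.
        frame_feat = feats.mean(dim=(2, 3))               # (K, C)
        feat = frame_feat[:, None, :].expand(-1, query_points.shape[1], -1)
        sdf, _ = decoder(query_points.reshape(-1, 3),
                         feat.reshape(-1, feat.shape[-1]))
        loss = (sdf - gt_sdf.reshape(-1, 1)).abs().mean()
        optim.zero_grad()
        loss.backward()
        optim.step()
    return decoder
```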
Please like and share this post if you enjoyed it using the buttons at the bottom!
Stay up to date. Subscribe to my posts https://morrislee1234.wixsite.com/website/contact
Web site with my other posts by category https://morrislee1234.wixsite.com/website