Get 3D object geometry and novel views from two images by distilling consistent scenes with SparseFusion
SparseFusion: Distilling View-conditioned Diffusion for 3D Reconstruction
arXiv paper abstract https://arxiv.org/abs/2212.00792
arXiv PDF paper https://arxiv.org/pdf/2212.00792.pdf
Project page https://sparsefusion.github.io
... propose SparseFusion, a sparse-view 3D reconstruction approach that unifies recent advances in neural rendering and probabilistic image generation.
Existing approaches typically build on neural rendering with re-projected features but fail to generate unseen regions or handle uncertainty under large viewpoint changes.
Alternative methods treat this as a (probabilistic) 2D synthesis task, and while they can generate plausible 2D images, they do not infer a consistent underlying 3D representation.
... show that geometric consistency and generative inference can be complementary in a mode-seeking behavior.
By distilling a 3D consistent scene representation from a view-conditioned latent diffusion model, ... are able to recover a plausible 3D representation whose renderings are both accurate and realistic.
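To make the distillation idea concrete, below is a minimal, score-distillation-style sketch in PyTorch of optimizing a 3D scene representation against a frozen view-conditioned diffusion model. All names here (`scene`, `encoder`, `diffusion` and their methods) are hypothetical stand-ins, not the SparseFusion API, and the surrogate loss is one common way to implement this kind of mode-seeking objective; the paper's exact distillation loss may differ.

```python
import torch
import torch.nn.functional as F

# Hypothetical stand-ins (not the SparseFusion API):
#   scene.render(pose)                 -> differentiable rendering, (1, 3, H, W)
#   encoder(image)                     -> frozen encoder to latent, (1, C, h, w)
#   diffusion.q_sample(z, t, noise)    -> forward-noised latent z at timestep t
#   diffusion.eps(z_t, t, views, pose) -> predicted noise, conditioned on the
#                                         sparse input views and target pose

def distill_step(scene, encoder, diffusion, input_views, pose, optimizer):
    """One mode-seeking distillation step (score-distillation-style surrogate)."""
    optimizer.zero_grad()

    # Render the current 3D scene from a sampled target viewpoint.
    rendered = scene.render(pose)

    # Move the rendering into the diffusion model's latent space.
    latent = encoder(rendered)

    # Perturb the latent at a random diffusion timestep.
    t = torch.randint(0, diffusion.num_timesteps, (1,), device=latent.device)
    noise = torch.randn_like(latent)
    noisy_latent = diffusion.q_sample(latent, t, noise)

    # Score the noisy latent with the frozen view-conditioned model.
    with torch.no_grad():
        pred_noise = diffusion.eps(noisy_latent, t, input_views, pose)

    # Surrogate loss whose gradient w.r.t. the latent is (pred_noise - noise):
    # it pulls renderings toward high-probability modes of the conditional
    # distribution, while the shared 3D representation keeps all rendered
    # views geometrically consistent.
    grad = pred_noise - noise
    loss = 0.5 * F.mse_loss(latent, (latent - grad).detach(), reduction="sum")

    loss.backward()
    optimizer.step()
    return loss.item()
```

In this setup the diffusion model stays frozen and only the 3D scene parameters are updated, which is what lets the generative prior fill in unseen regions while the single shared scene resolves the uncertainty into one consistent 3D interpretation.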
... show that it outperforms existing methods, in both distortion and perception metrics, for sparse-view novel view synthesis.
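For reference, "distortion" metrics compare pixels directly (e.g. PSNR), while "perception" metrics compare deep features (e.g. LPIPS). A small sketch of both, assuming the widely used `lpips` package and hypothetical image tensors; this is not the paper's evaluation code:

```python
import torch
import lpips  # pip install lpips (Zhang et al.'s standard LPIPS implementation)

# `pred` and `gt` are hypothetical (1, 3, H, W) tensors in [0, 1]; this only
# illustrates the two metric families reported above.

def psnr(pred: torch.Tensor, gt: torch.Tensor) -> float:
    """Distortion metric: peak signal-to-noise ratio in dB (higher is better)."""
    mse = torch.mean((pred - gt) ** 2)
    return (10.0 * torch.log10(1.0 / mse)).item()

lpips_fn = lpips.LPIPS(net="vgg")  # expects inputs scaled to [-1, 1]

def perceptual_distance(pred: torch.Tensor, gt: torch.Tensor) -> float:
    """Perception metric: LPIPS distance (lower is better)."""
    return lpips_fn(pred * 2 - 1, gt * 2 - 1).item()
```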
If you enjoyed this post, please like and share it using the buttons at the bottom!
Stay up to date. Subscribe to my posts https://morrislee1234.wixsite.com/website/contact
Web site with my other posts by category https://morrislee1234.wixsite.com/website