Get editable 3D objects and novel views from monocular RGB-D video using per-object models with FactoredNeRF
Factored Neural Representation for Scene Understanding
arXiv paper abstract https://arxiv.org/abs/2304.10950
arXiv paper PDF https://arxiv.org/pdf/2304.10950.pdf
Project page https://yushiangw.github.io/factorednerf
A long-standing goal in scene understanding is to obtain interpretable and editable representations that can be directly constructed from a raw monocular RGB-D video, without requiring a specialized hardware setup or priors.
The problem becomes even more challenging in the presence of multiple moving and/or deforming objects.
The advance of neural representations, especially neural implicit representations and radiance fields, opens the possibility of end-to-end optimization to collectively capture geometry, appearance, and object motion.
However, current approaches produce a global scene encoding, assume multiview capture with limited or no motion in the scene, and do not facilitate easy manipulation beyond novel view synthesis.
The authors introduce a factored neural scene representation that can be learned directly from a monocular RGB-D video to produce object-level neural representations with an explicit encoding of object movement (e.g., rigid trajectory) and/or deformation (e.g., nonrigid movement).
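To make the factoring concrete, below is a minimal PyTorch sketch of what such a representation could look like: one small implicit field per object plus explicit per-object, per-frame rigid poses, so motion lives in a separate, editable parameter rather than being baked into one global scene encoding. All names here (ObjectField, FactoredScene, axis_angle_to_matrix) are hypothetical illustrations, not the paper's actual architecture, and the per-point density argmax is a stand-in for full volume rendering along rays.

import torch
import torch.nn as nn

def axis_angle_to_matrix(w: torch.Tensor) -> torch.Tensor:
    # Rodrigues' formula: axis-angle vector (3,) -> rotation matrix (3, 3).
    theta = w.norm().clamp(min=1e-8)
    k = w / theta
    zero = torch.zeros((), dtype=w.dtype)
    K = torch.stack([
        torch.stack([zero, -k[2], k[1]]),
        torch.stack([k[2], zero, -k[0]]),
        torch.stack([-k[1], k[0], zero]),
    ])
    return torch.eye(3, dtype=w.dtype) + torch.sin(theta) * K + (1 - torch.cos(theta)) * (K @ K)

class ObjectField(nn.Module):
    # One object's implicit field: canonical-frame points (N, 3) -> density + RGB (N, 4).
    def __init__(self, hidden: int = 64):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(3, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 4),
        )

    def forward(self, x_canonical: torch.Tensor) -> torch.Tensor:
        out = self.mlp(x_canonical)
        sigma = torch.relu(out[..., :1])    # non-negative density
        rgb = torch.sigmoid(out[..., 1:])   # colors in [0, 1]
        return torch.cat([sigma, rgb], dim=-1)

class FactoredScene(nn.Module):
    # Scene = one field per object + explicit per-object, per-frame rigid motion.
    # A deforming object would add a per-frame warp network instead of a single pose.
    def __init__(self, num_objects: int, num_frames: int):
        super().__init__()
        self.fields = nn.ModuleList(ObjectField() for _ in range(num_objects))
        self.rot = nn.Parameter(torch.zeros(num_objects, num_frames, 3))    # axis-angle
        self.trans = nn.Parameter(torch.zeros(num_objects, num_frames, 3))  # translation

    def query(self, x_world: torch.Tensor, frame: int) -> torch.Tensor:
        # Query every object at world points (N, 3) for one frame, then keep
        # the densest object's prediction at each point.
        outputs = []
        for i, field in enumerate(self.fields):
            R = axis_angle_to_matrix(self.rot[i, frame])
            t = self.trans[i, frame]
            x_canonical = (x_world - t) @ R  # world -> canonical; right-multiply applies R^T
            outputs.append(field(x_canonical))
        stacked = torch.stack(outputs)                 # (num_objects, N, 4)
        winner = stacked[..., 0].argmax(dim=0)         # densest object per point
        return stacked[winner, torch.arange(x_world.shape[0])]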
Evaluations against a set of neural approaches on both synthetic and real data demonstrate that the representation is efficient, interpretable, and editable (e.g., by changing an object's trajectory).
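For instance, once such a factored model is fitted, editing an object's trajectory amounts to overwriting its explicit motion parameters and re-querying; the shape and appearance fields need no retraining. This continues the hypothetical FactoredScene sketch above.

import torch

# A fitted scene with 2 objects over 30 frames (continuing the sketch above).
scene = FactoredScene(num_objects=2, num_frames=30)

with torch.no_grad():
    # Replace object 0's translation track with a straight-line path along x.
    # Motion is an explicit parameter, so the object's field is untouched.
    steps = torch.linspace(0.0, 1.0, steps=30).unsqueeze(-1)   # (30, 1)
    scene.trans[0] = steps * torch.tensor([1.0, 0.0, 0.0])     # (30, 3) trajectory

# Re-query any frame under the edited trajectory.
points = torch.rand(1024, 3) * 2.0 - 1.0       # sample points in [-1, 1]^3
edited = scene.query(points, frame=15)         # (1024, 4): density + RGB per point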
If you enjoyed this post, please like and share it using the buttons at the bottom!
Stay up to date. Subscribe to my posts https://morrislee1234.wixsite.com/website/contact
Web site with my other posts by category https://morrislee1234.wixsite.com/website