Get a 3D scene from a few monocular images using CLIP with varying depth bins, by Hu et al.
Learning to Adapt CLIP for Few-Shot Monocular Depth Estimation
arXiv paper abstract https://arxiv.org/abs/2311.01034
arXiv PDF paper https://arxiv.org/pdf/2311.01034.pdf
Pre-trained Vision-Language Models (VLMs), such as CLIP, have shown enhanced performance across ... tasks that involve the integration of visual and linguistic modalities.
When CLIP is used for depth estimation, image features are compared with text embeddings of semantic descriptions of depth; the depth estimate is then obtained by weighting and summing a set of predefined depth values, called depth bins.
However, this approach relies on fixed depth bins and may not generalize well, since images from different scenes can exhibit distinct depth distributions.
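To make the weighted-sum scheme concrete, here is a minimal sketch of depth estimation with fixed depth bins. The similarity scores, bin values, and tensor shapes are made up for illustration and are not taken from the paper.

```python
import torch

# Hypothetical similarity scores between one pixel's CLIP image feature
# and text embeddings of seven semantic depth descriptions.
similarity = torch.tensor([0.2, 0.5, 1.3, 2.1, 0.9, 0.4, 0.1])

# Fixed depth value (in meters) assigned to each description;
# these numbers are illustrative, not the paper's.
depth_bins = torch.tensor([1.0, 1.5, 2.0, 2.5, 3.0, 3.5, 4.0])

weights = torch.softmax(similarity, dim=0)  # similarities -> weights
depth = (weights * depth_bins).sum()        # weighted sum over depth bins
print(f"estimated depth: {depth.item():.2f} m")
```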
The authors propose a few-shot method that learns to adapt the VLM for monocular depth estimation, balancing training cost against generalization capability.
Specifically, it assigns different depth bins to different scenes, and the model selects among them during inference (sketched below).
With only one image per scene for training, the method outperforms the previous state-of-the-art method by up to 10.6% in terms of MARE (mean absolute relative error).
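The per-scene bin selection could look roughly like the sketch below. The codebook of scene keys and bin sets, and the similarity-based selection rule, are assumptions made for illustration; the paper's actual adaptation mechanism may differ.

```python
import torch

torch.manual_seed(0)
num_scenes, num_bins, feat_dim = 3, 7, 8

scene_keys = torch.randn(num_scenes, feat_dim)        # hypothetical learned scene prototypes
scene_bins = torch.rand(num_scenes, num_bins) * 10.0  # hypothetical per-scene depth bins (meters)

def predict_depth(image_feat, text_sims):
    # Pick the bin set whose scene key best matches this image,
    # then reuse the weighted-sum scheme from the sketch above.
    scene_idx = (scene_keys @ image_feat).argmax()
    weights = torch.softmax(text_sims, dim=0)
    return (weights * scene_bins[scene_idx]).sum()

depth = predict_depth(torch.randn(feat_dim), torch.randn(num_bins))
print(f"estimated depth: {depth.item():.2f} m")
```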
If you enjoyed this post, please like and share it using the buttons at the bottom!
Stay up to date. Subscribe to my posts https://morrislee1234.wixsite.com/website/contact
Web site with my other posts by category https://morrislee1234.wixsite.com/website