Segment objects in videos using only bounding boxes and temporal consistency with MaskFreeVIS
Mask-Free Video Instance Segmentation
arXiv abstract: https://arxiv.org/abs/2303.15904
arXiv PDF: https://arxiv.org/pdf/2303.15904.pdf
... recent advancement in Video Instance Segmentation (VIS) ... driven by the use of deeper and increasingly data-hungry transformer-based models.
However, video masks are ... expensive to annotate ... In this work ... aim to remove the mask-annotation requirement.
... propose MaskFreeVIS, achieving highly competitive VIS performance, while only using bounding box annotations for the object ...
... leverage the rich temporal mask consistency constraints in videos by introducing the Temporal KNN-patch Loss (TK-Loss), providing strong mask supervision without any labels.
... TK-Loss finds one-to-many matches across frames, through an efficient patch-matching step followed by a K-nearest neighbor selection.
... mask-free objective is simple to implement, has no trainable parameters, is computationally efficient, yet outperforms baselines employing, e.g., state-of-the-art optical flow to enforce temporal mask consistency ...
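The abstract describes the TK-Loss only at a high level. As a rough illustration of the idea, here is a minimal PyTorch-style sketch, not the authors' implementation: for every pixel in frame t, patches within a local search window of frame t+1 are compared, the K most similar candidates are kept (one-to-many matching), and the matched locations are encouraged to agree on foreground/background. The function name tk_loss_sketch, the parameters (patch_size, radius, k), the cyclic-shift border handling, and the exact form of the consistency term are assumptions made for this sketch.

```python
import torch
import torch.nn.functional as F

def tk_loss_sketch(img_t, img_t1, mask_t, mask_t1, patch_size=3, radius=4, k=5):
    """Illustrative temporal KNN-patch consistency loss (simplified sketch).

    img_t, img_t1  : (C, H, W) adjacent frames (or per-frame features)
    mask_t, mask_t1: (H, W) predicted mask probabilities for one instance
    """
    C, H, W = img_t.shape
    pad = patch_size // 2

    def per_pixel_patches(x):
        # (1, C*patch_size**2, H*W) from unfold -> reshape to (H, W, C*patch_size**2)
        return F.unfold(x[None], patch_size, padding=pad)[0].T.reshape(H, W, -1)

    p_t, p_t1 = per_pixel_patches(img_t), per_pixel_patches(img_t1)

    dists, cand_masks = [], []
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            # Cyclic shift is a border simplification; conceptually this scans a
            # (2*radius+1)^2 search window around each pixel in frame t+1.
            p_shift = torch.roll(p_t1, shifts=(dy, dx), dims=(0, 1))
            m_shift = torch.roll(mask_t1, shifts=(dy, dx), dims=(0, 1))
            dists.append(((p_t - p_shift) ** 2).sum(dim=-1))  # (H, W) patch distance
            cand_masks.append(m_shift)

    dists = torch.stack(dists)           # (S, H, W), S candidate offsets
    cand_masks = torch.stack(cand_masks)

    # One-to-many matching: keep the K most similar candidate patches per pixel.
    knn_idx = dists.topk(k, dim=0, largest=False).indices    # (k, H, W)
    m_match = cand_masks.gather(0, knn_idx)                   # (k, H, W)

    # Matched points should agree on foreground/background membership.
    m_t = mask_t[None].expand_as(m_match)
    agree = m_t * m_match + (1 - m_t) * (1 - m_match)
    return -torch.log(agree.clamp(min=1e-6)).mean()

# Example call on random tensors (for shape checking only):
# loss = tk_loss_sketch(torch.rand(3, 64, 64), torch.rand(3, 64, 64),
#                       torch.rand(64, 64), torch.rand(64, 64))
```

The sketch has no trainable parameters and operates directly on frame patches, which matches the paper's claim that the objective is simple and computationally efficient; details of the published method (e.g., how unreliable matches are filtered) are omitted here.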
If you enjoyed this post, please like and share it using the buttons at the bottom!
Stay up to date. Subscribe to my posts: https://morrislee1234.wixsite.com/website/contact
Website with my other posts by category: https://morrislee1234.wixsite.com/website