Segment scene with RGB-D images by efficiently fusing RGB and depth features with MIPANet
Optimizing RGB-D semantic segmentation through multi-modal interaction and pooling attention
arXiv paper abstract https://arxiv.org/abs/2311.11312
arXiv PDF paper https://arxiv.org/pdf/2311.11312.pdf
Semantic segmentation of RGB-D images involves understanding the appearance and spatial relationships of objects ... However ... RGB and depth images often results in ... suboptimal segmentation
... propose the Multi-modal Interaction and Pooling Attention Network (MIPANet) ... to harness the interactive synergy between RGB and depth modalities, optimizing the utilization of complementary information.
... incorporate a Multi-modal Interaction Fusion Module (MIM) into the deepest layers of the network.
This module is engineered to facilitate the fusion of RGB and depth information, allowing for mutual enhancement and correction.
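To make the mutual enhancement and correction idea concrete, here is a minimal PyTorch sketch of a cross-modal fusion block: each modality produces a channel-attention gate that re-weights the other before the two streams are fused. The gating design, channel counts, and fusion by summation are illustrative assumptions, not the paper's exact MIM.

```python
import torch
import torch.nn as nn

class CrossModalFusion(nn.Module):
    """Illustrative cross-modal fusion: each modality derives channel
    attention that re-weights (and thereby corrects) the other, then the
    two enhanced feature maps are summed. A generic sketch, not the
    exact MIM design from the paper."""
    def __init__(self, channels: int, reduction: int = 4):
        super().__init__()
        hidden = channels // reduction
        self.rgb_gate = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(channels, hidden, 1), nn.ReLU(inplace=True),
            nn.Conv2d(hidden, channels, 1), nn.Sigmoid(),
        )
        self.depth_gate = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(channels, hidden, 1), nn.ReLU(inplace=True),
            nn.Conv2d(hidden, channels, 1), nn.Sigmoid(),
        )

    def forward(self, rgb_feat: torch.Tensor, depth_feat: torch.Tensor) -> torch.Tensor:
        # Depth-derived attention corrects the RGB features, and vice versa.
        rgb_enhanced = rgb_feat * self.depth_gate(depth_feat)
        depth_enhanced = depth_feat * self.rgb_gate(rgb_feat)
        # Fuse the mutually enhanced streams for the decoder.
        return rgb_enhanced + depth_enhanced


# Example on deepest-layer features (channel count and spatial size are assumptions).
fusion = CrossModalFusion(channels=512)
rgb = torch.randn(1, 512, 15, 20)
depth = torch.randn(1, 512, 15, 20)
fused = fusion(rgb, depth)  # -> torch.Size([1, 512, 15, 20])
```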
... introduce a Pooling Attention Module (PAM) ... amplifies the features extracted by the network and integrates the module's output into the decoder in a targeted manner ... improving semantic segmentation
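The pooling-attention idea can be sketched as a squeeze-and-excitation-style block: global average pooling summarizes each channel, a small bottleneck produces attention weights, and the re-weighted (amplified) features are handed to the decoder as an extra skip input. The reduction ratio and exact wiring below are assumptions for illustration, not the paper's PAM implementation.

```python
import torch
import torch.nn as nn

class PoolingAttention(nn.Module):
    """Illustrative pooling-attention block: global average pooling
    summarizes each channel, a small MLP produces per-channel attention
    weights, and the input is re-weighted before being passed to the
    decoder. A generic SE-style sketch, not the paper's exact PAM."""
    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.attn = nn.Sequential(
            nn.Linear(channels, channels // reduction), nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels), nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, _, _ = x.shape
        weights = self.attn(self.pool(x).view(b, c)).view(b, c, 1, 1)
        return x * weights  # amplified features, forwarded to the decoder


# Example at an intermediate encoder stage (shapes are assumptions).
pam = PoolingAttention(channels=256)
feat = torch.randn(1, 256, 30, 40)
out = pam(feat)  # -> torch.Size([1, 256, 30, 40])
```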
... MIPANet outperforms existing methods on two indoor scene datasets ...
Please like and share this post if you enjoyed it using the buttons at the bottom!
Stay up to date. Subscribe to my posts https://morrislee1234.wixsite.com/website/contact
Web site with my other posts by category https://morrislee1234.wixsite.com/website