Better image classification in new domain by focusing on foreground with RobustViT
Optimizing Relevance Maps of Vision Transformers Improves Robustness
arXiv paper abstract https://arxiv.org/abs/2206.01161
arXiv paper PDF https://arxiv.org/pdf/2206.01161.pdf
Online demo https://huggingface.co/spaces/Hila/RobustViT
... visual classification models often rely mostly on the image background, neglecting the foreground, which hurts their robustness to distribution changes.
... propose to monitor the model's relevancy signal and manipulate it such that the model is focused on the foreground object.
This is done as a fine-tuning step that uses relatively few samples, each consisting of an image paired with its foreground mask.
... encourage the model's relevancy map (i) to assign lower relevance to background regions, (ii) to consider as much information as possible from the foreground, and (iii) ... encourage the decisions to have high confidence.
When applied to Vision Transformer (ViT) models, a marked improvement in robustness to domain shifts is observed.
... foreground masks can be obtained automatically, from a self-supervised variant of the ViT model itself; therefore no additional supervision is required.
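The three objectives above can be sketched as a combined training loss. The code below is a minimal illustrative sketch, not the paper's actual implementation: the function name `relevance_loss`, the loss weights, and the specific formulations (mean background relevance, negative mean foreground relevance, and prediction entropy as a confidence proxy) are all assumptions made for illustration.

```python
import torch
import torch.nn.functional as F

def relevance_loss(relevance, fg_mask, logits,
                   lambda_bg=2.0, lambda_fg=0.3, lambda_cls=0.2):
    """Illustrative sketch of the three relevance objectives.

    relevance: (B, H, W) non-negative relevance map, assumed scaled to [0, 1]
    fg_mask:   (B, H, W) binary foreground mask (1 = foreground)
    logits:    (B, num_classes) classifier outputs
    """
    bg_mask = 1.0 - fg_mask

    # (i) assign lower relevance to background: mean relevance over background pixels
    loss_bg = (relevance * bg_mask).sum(dim=(1, 2)) / bg_mask.sum(dim=(1, 2)).clamp(min=1)

    # (ii) use as much foreground information as possible:
    # reward (negate) mean relevance over foreground pixels
    loss_fg = -(relevance * fg_mask).sum(dim=(1, 2)) / fg_mask.sum(dim=(1, 2)).clamp(min=1)

    # (iii) encourage high-confidence decisions: penalize softmax entropy
    probs = F.softmax(logits, dim=-1)
    entropy = -(probs * torch.log(probs.clamp(min=1e-8))).sum(dim=-1)

    return (lambda_bg * loss_bg + lambda_fg * loss_fg + lambda_cls * entropy).mean()
```

In use, this loss would be added to the usual classification loss during fine-tuning, with `relevance` computed by the model's relevancy-map method and `fg_mask` coming from manual annotation or, as the abstract notes, from a self-supervised ViT variant.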
Please like and share this post if you enjoyed it using the buttons at the bottom!
Stay up to date. Subscribe to my posts https://morrislee1234.wixsite.com/website/contact
Web site with my other posts by category https://morrislee1234.wixsite.com/website