Adapt visual tasks to a new domain without source-domain data with DistillAdapt
DistillAdapt: Source-Free Active Visual Domain Adaptation
arXiv paper abstract https://arxiv.org/abs/2205.12840v1
arXiv paper PDF https://arxiv.org/pdf/2205.12840v1.pdf
... present a novel method, DistillAdapt, for the challenging problem of Source-Free Active Domain Adaptation (SF-ADA).
The problem requires adapting a pretrained source domain network to a target domain, within a ... budget for acquiring labels in the target domain, while assuming that the source data is not available for adaptation due to privacy concerns or otherwise.
... selective distillation of features from the pre-trained network to the target network using a small subset of annotated target samples mined by H_AL.
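The excerpt above does not spell out how the selective distillation is implemented, so here is a minimal hedged sketch of one plausible form: a supervised loss on the small annotated target subset plus a feature-distillation term that pulls the target network's features toward those of the frozen pre-trained source network. All names (source_net, target_net, feat_weight, the assumption that each network returns a (logits, features) pair) are illustrative and not taken from the paper.

```python
import torch
import torch.nn.functional as F

def adaptation_losses(source_net, target_net, labeled_batch, unlabeled_batch,
                      feat_weight=0.1):
    """Supervised + feature-distillation losses for one adaptation step (sketch)."""
    x_lab, y_lab = labeled_batch      # target samples annotated via the acquisition step
    x_unlab = unlabeled_batch         # remaining unlabeled target samples

    # Supervised loss on the small annotated target subset.
    logits_lab, _ = target_net(x_lab)             # assumed (logits, features) output
    sup_loss = F.cross_entropy(logits_lab, y_lab)

    # Distill features of the frozen pre-trained source network into the target network.
    with torch.no_grad():
        _, src_feat = source_net(x_unlab)
    _, tgt_feat = target_net(x_unlab)
    distill_loss = F.mse_loss(tgt_feat, src_feat)

    return sup_loss + feat_weight * distill_loss
```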
... balances transferability from the pre-trained network and uncertainty of the target network.
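The exact form of H_AL is not given in this excerpt; the sketch below only illustrates the idea of scoring unlabeled target samples by a weighted mix of transferability (approximated here by the frozen source network's confidence) and the target network's uncertainty (approximated by predictive entropy), then labeling the top-scoring samples within the budget. The weighting alpha and both terms are assumptions, not the paper's definition.

```python
import torch
import torch.nn.functional as F

@torch.no_grad()
def select_for_annotation(source_net, target_net, unlabeled_x, budget, alpha=0.5):
    """Pick `budget` target samples to annotate, balancing transferability and uncertainty (sketch)."""
    src_logits, _ = source_net(unlabeled_x)       # assumed (logits, features) output
    tgt_logits, _ = target_net(unlabeled_x)

    src_prob = F.softmax(src_logits, dim=1)
    tgt_prob = F.softmax(tgt_logits, dim=1)

    transferability = src_prob.max(dim=1).values                              # high = source prediction looks reliable
    uncertainty = -(tgt_prob * tgt_prob.clamp_min(1e-8).log()).sum(dim=1)     # target-network entropy

    score = alpha * transferability + (1.0 - alpha) * uncertainty
    return score.topk(budget).indices                                         # indices of samples to annotate
```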
... is task-agnostic ... can be applied across visual tasks such as classification, segmentation and detection. ... handle shifts in output label space.
... improvement of 0.5% - 31.3% (across datasets and tasks) over prior adaptation methods that assume access to large amounts of annotated source data for adaptation.
If you enjoyed this post, please like and share it using the buttons at the bottom!
Stay up to date. Subscribe to my posts https://morrislee1234.wixsite.com/website/contact
Web site with my other posts by category https://morrislee1234.wixsite.com/website