Improve vision models with SEER by pretraining on uncurated images without supervision
Vision Models Are More Robust And Fair When Pretrained On Uncurated Images Without Supervision
arXiv paper abstract https://arxiv.org/abs/2202.08360
arXiv PDF paper https://arxiv.org/pdf/2202.08360.pdf
Discriminative self-supervised learning allows training models on any random group of internet images
... using this ability, ... learn ... salient and more representative information present in a diverse, unbounded set of images from across the globe.
... train models on billions of random images without any data pre-processing or prior assumptions about what we want the model to learn.
... validate ... model performance on over 50 benchmarks including fairness, robustness to distribution shift, geographical diversity, fine-grained recognition, image copy detection, and many image classification datasets.
... resulting model ... captures ... semantic information, ... also captures information about artistic style and learns salient information such as geolocations and multilingual word embeddings based on visual content only.
... importantly, ... model is more robust, more fair, less harmful and less biased than supervised models or models trained on object-centric datasets such as ImageNet.
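The "discriminative self-supervised learning" the paper relies on trains a model to tell different images apart without labels: embeddings of two augmented views of the same image should agree, while embeddings of different images should not. SEER itself uses a SwAV-style clustering objective, but as a simpler illustration of the idea, here is a minimal sketch of an InfoNCE-style contrastive loss in pure Python (all names and the toy 2-D "embeddings" are hypothetical, not from the paper):

```python
import math

def cosine(u, v):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def info_nce_loss(anchors, positives, temperature=0.1):
    """Average InfoNCE loss over a batch: each anchor embedding should
    match its own positive (another augmented view of the same image)
    rather than the positives belonging to other images in the batch."""
    losses = []
    for i, a in enumerate(anchors):
        # similarity of this anchor to every candidate, sharpened by temperature
        logits = [cosine(a, p) / temperature for p in positives]
        # numerically stable softmax cross-entropy with index i as the label
        m = max(logits)
        log_z = m + math.log(sum(math.exp(l - m) for l in logits))
        losses.append(log_z - logits[i])
    return sum(losses) / len(losses)

# toy batch: two "images", each represented by two embedded augmented views
anchors = [[1.0, 0.0], [0.0, 1.0]]
positives = [[0.9, 0.1], [0.1, 0.9]]
print(info_nce_loss(anchors, positives))  # small: views already agree
```

Because no labels appear anywhere in the objective, the same loss can be applied to any random pool of internet images, which is what makes training on billions of uncurated images feasible.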
Please like and share this post using the buttons at the bottom if you enjoyed it!
Stay up to date. Subscribe to my posts https://morrislee1234.wixsite.com/website/contact
Web site with my other posts by category https://morrislee1234.wixsite.com/website