Survey of transformers for vision language
Vision Language Transformers: A Survey
arXiv paper abstract https://arxiv.org/abs/2307.03254
arXiv paper PDF https://arxiv.org/pdf/2307.03254.pdf
Vision language tasks, such as answering questions about an image or generating captions that describe it, are difficult for computers to perform.
Much recent research has adapted pretrained transformers to vision language modeling. Transformer models have greatly improved performance and versatility over previous vision language models.
They do so by pretraining models on large, generic datasets and transferring their learning to new tasks with minor changes in architecture and parameter values.
This type of transfer learning has become the standard modeling practice in both natural language processing and computer vision.
Vision language transformers offer the promise of producing similar advancements in tasks which require both vision and language.
In this paper, the authors provide a broad synthesis of the currently available research on vision language transformer models and offer analysis of their strengths, limitations, and open questions that remain.
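The pretrain-then-transfer pattern described above can be sketched in a few lines of PyTorch. This is a minimal illustration, not code from the paper: the "pretrained" backbone here is a randomly initialized stand-in, and the head size and task are hypothetical. The idea shown is freezing the backbone's learned features and attaching a small task-specific head, which is the kind of minor architectural change the abstract refers to.

```python
import torch
from torch import nn

# Stand-in for a pretrained transformer backbone (in practice this would be
# loaded from a checkpoint pretrained on a large, generic dataset).
backbone = nn.TransformerEncoder(
    nn.TransformerEncoderLayer(d_model=64, nhead=4, batch_first=True),
    num_layers=2,
)

# Transfer learning: freeze the backbone so its pretrained features are kept.
for p in backbone.parameters():
    p.requires_grad = False

# New task-specific head, e.g. classifying over 10 hypothetical answer labels
# for a visual question answering task. Only this part would be trained.
head = nn.Linear(64, 10)

x = torch.randn(1, 5, 64)               # (batch, tokens, features)
logits = head(backbone(x).mean(dim=1))  # mean-pool tokens, then classify
```

During fine-tuning, only `head.parameters()` would be passed to the optimizer; a variant that also unfreezes the top backbone layers is common when more task data is available.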
If you enjoyed this post, please like and share it using the buttons at the bottom!
Stay up to date. Subscribe to my posts https://morrislee1234.wixsite.com/website/contact
Web site with my other posts by category https://morrislee1234.wixsite.com/website
LinkedIn https://www.linkedin.com/in/morris-lee-47877b7b
#ComputerVision #Transformers #AINewsClips #AI #ML #ArtificialIntelligence #MachineLearning