Multi-layer perceptrons for vision competitive with transformers and CNNs
MLP is all you need... again? ... MLP-Mixer: An all-MLP Architecture for Vision
Michal Chromiak's blog https://mchromiak.github.io/articles/2021/May/05/MLP-Mixer
MLP-Mixer: An all-MLP Architecture for Vision
arXiv abstract https://arxiv.org/abs/2105.01601v1
arXiv PDF https://arxiv.org/pdf/2105.01601v1.pdf
A Useful New Image Classification Method That Uses Neither CNNs nor Attention
Is MLP Better Than CNN & Transformers For Computer Vision?
Analytics India Magazine https://analyticsindiamag.com/is-mlp-better-than-cnn-transformers-for-computer-vision
GitHub
rwightman / pytorch-image-models https://github.com/rwightman/pytorch-image-models/blob/master/timm/models/mlp_mixer.py
lucidrains / mlp-mixer-pytorch https://github.com/lucidrains/mlp-mixer-pytorch
... show that while convolutions and attention are both sufficient for good performance, neither of them are necessary.
We present MLP-Mixer, an architecture based exclusively on multi-layer perceptrons (MLPs).
MLP-Mixer contains two types of layers: one with MLPs applied independently to image patches (i.e. "mixing" the per-location features), and one with MLPs applied across patches (i.e. "mixing" spatial information).
When trained on large datasets, or with modern regularization schemes, MLP-Mixer attains competitive scores on image classification benchmarks, with pre-training and inference cost comparable to state-of-the-art models. ...
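The two layer types in the abstract are easy to sketch in code. Below is a minimal, illustrative Mixer block in PyTorch (the GitHub links above point to full implementations); the class and parameter names here are my own assumptions for the sketch, not the authors' reference code.

```python
# Minimal sketch of one MLP-Mixer block, assuming PyTorch.
# Names (MlpBlock, MixerBlock, tokens_hidden, channels_hidden) are illustrative.
import torch
import torch.nn as nn
import torch.nn.functional as F


class MlpBlock(nn.Module):
    """Two-layer MLP with a GELU nonlinearity."""
    def __init__(self, dim, hidden_dim):
        super().__init__()
        self.fc1 = nn.Linear(dim, hidden_dim)
        self.fc2 = nn.Linear(hidden_dim, dim)

    def forward(self, x):
        return self.fc2(F.gelu(self.fc1(x)))


class MixerBlock(nn.Module):
    """One Mixer layer: a token-mixing MLP applied across patches,
    then a channel-mixing MLP applied per patch, each preceded by
    LayerNorm and wrapped in a skip connection."""
    def __init__(self, num_patches, channels, tokens_hidden, channels_hidden):
        super().__init__()
        self.norm1 = nn.LayerNorm(channels)
        self.token_mlp = MlpBlock(num_patches, tokens_hidden)
        self.norm2 = nn.LayerNorm(channels)
        self.channel_mlp = MlpBlock(channels, channels_hidden)

    def forward(self, x):                         # x: (batch, patches, channels)
        y = self.norm1(x).transpose(1, 2)         # (batch, channels, patches)
        x = x + self.token_mlp(y).transpose(1, 2)   # mix spatial info across patches
        return x + self.channel_mlp(self.norm2(x))  # mix per-location features


# Example: 196 patches (14x14 grid from a 224px image with 16px patches), 512 channels
block = MixerBlock(num_patches=196, channels=512,
                   tokens_hidden=256, channels_hidden=2048)
out = block(torch.randn(2, 196, 512))             # -> torch.Size([2, 196, 512])
```

The transpose before the token-mixing MLP is the whole trick: the same shared MLP then operates along the patch dimension, which replaces both convolution and self-attention for moving information between spatial locations.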
If you enjoyed this post, please like and share it using the buttons at the bottom!
Stay up to date. Subscribe to my posts https://morrislee1234.wixsite.com/website/contact
Web site with my other posts by category https://morrislee1234.wixsite.com/website