Author: 琰琰
1
MLPs Rival Transformers: Is Inductive Bias Redundant?
The multilayer perceptron (MLP), also known as a feedforward neural network, is one of the earliest artificial neural networks. Its architecture is simple, consisting only of an input layer, one or more hidden (intermediate) layers, and an output layer.
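The layered structure described above can be sketched as a minimal NumPy forward pass. The layer sizes, weight initialization, and ReLU activation below are illustrative choices, not details from the article:

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    return np.maximum(0.0, x)

def mlp_forward(x, W1, b1, W2, b2):
    h = relu(x @ W1 + b1)   # hidden (intermediate) layer
    return h @ W2 + b2      # output layer

x = rng.normal(size=(4, 8))               # batch of 4 inputs, 8 features each
W1 = rng.normal(size=(8, 16)); b1 = np.zeros(16)
W2 = rng.normal(size=(16, 3)); b2 = np.zeros(3)
y = mlp_forward(x, W1, b1, W2, b2)
print(y.shape)  # (4, 3)
```

Note that nothing here encodes spatial locality or translation equivariance; every input feature connects to every hidden unit, which is exactly the absence of inductive bias the article's title questions.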
2
Seven Papers Strike Hard: Can the Transformer Hold Up?
Following Google, several research institutions published seven related papers in quick succession, attempting to challenge the Transformer from multiple angles.
- 《Beyond Self-attention: External Attention using Two Linear Layers for Visual Tasks》 – Tsinghua University
- 《RepMLP: Re-parameterizing Convolutions into Fully-connected Layers for Image Recognition》 – School of Software, Tsinghua University
- 《Do You Even Need Attention? A Stack of Feed-Forward Layers Does Surprisingly Well on ImageNet》 – University of Oxford
- 《ResMLP: Feedforward networks for image classification with data-efficient training》 – Facebook AI
- 《Are Pre-trained Convolutions Better than Pre-trained Transformers?》 – Google Research
- 《FNet: Mixing Tokens with Fourier Transforms》 – Google Research
- 《Pay Attention to MLPs》 – Google Research
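As one example of the attention replacements listed above, the token-mixing idea named in the FNet title can be sketched as follows. This is a simplified assumption based on the paper's title (self-attention swapped for a 2D Fourier transform over the sequence and hidden dimensions, keeping the real part); the shapes and function name are hypothetical:

```python
import numpy as np

def fourier_mix(x):
    """FNet-style mixing sketch: 2D FFT over (seq_len, hidden_dim), real part only.

    Unlike self-attention, this has no learned parameters at all.
    """
    return np.fft.fft2(x).real

x = np.random.default_rng(1).normal(size=(6, 8))  # 6 tokens, hidden dim 8
mixed = fourier_mix(x)
print(mixed.shape)  # (6, 8)
```

The appeal of such a layer is that it mixes information across all token positions in O(n log n) without computing any pairwise attention scores.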
CNN Papers
3
What Research Questions Do These Papers Reflect?
A contributed article for Leiphone (雷锋网); reproduction without authorization is prohibited.
Original article by ItWorker; if reproducing, please credit the source: https://blog.ytso.com/139124.html