
EfficientBERT: Progressively Searching Multilayer Perceptron via Warm-up Knowledge Distillation

Chenhe Dong, Guangrun Wang, Hang Xu, Jiefeng Peng, Xiaozhe Ren, Xiaodan Liang


Abstract
Pre-trained language models have shown remarkable results on various NLP tasks. Nevertheless, due to their bulky size and slow inference speed, it is hard to deploy them on edge devices. In this paper, we have a critical insight that improving the feed-forward network (FFN) in BERT yields a higher gain than improving the multi-head attention (MHA), since the computational cost of the FFN is 2–3 times larger than that of the MHA. Hence, to compact BERT, we focus on designing an efficient FFN, as opposed to previous works that concentrate on the MHA. Since the FFN comprises a multilayer perceptron (MLP) that is essential to BERT optimization, we further design a thorough search space for an advanced MLP and apply a coarse-to-fine mechanism to search for an efficient BERT architecture. Moreover, to accelerate searching and enhance model transferability, we employ a novel warm-up knowledge distillation strategy at each search stage. Extensive experiments show our searched EfficientBERT is 6.9× smaller and 4.4× faster than BERTBASE, and achieves competitive performance on the GLUE and SQuAD benchmarks. Concretely, EfficientBERT attains a 77.7 average score on the GLUE test set, 0.7 higher than MobileBERTTINY, and achieves an 85.3/74.5 F1 score on the SQuAD v1.1/v2.0 dev sets, 3.2/2.7 higher than TinyBERT4, even without data augmentation. The code is released at https://github.com/cheneydon/efficient-bert.
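
To make the FFN-versus-MHA cost comparison concrete, below is a minimal back-of-the-envelope sketch, not the authors' profiling code. It assumes BERT-base hyperparameters (hidden size 768, FFN inner size 3072, sequence length 128) and counts multiply-accumulate operations per token for one Transformer layer. Under this simple counting the FFN's two linear layers already cost roughly twice the MHA projections plus attention; the exact 2–3x ratio reported in the abstract depends on the sequence length and on which operations are included.

# Back-of-the-envelope multiply-accumulate (MAC) count per token for one
# Transformer layer. Illustrative sketch only; hyperparameters assumed to
# follow BERT-base.

def mha_macs_per_token(d_model: int, seq_len: int) -> int:
    # Q, K, V and output projections: 4 matmuls of size d_model x d_model,
    # plus attention scores (Q K^T) and the weighted sum over V.
    projections = 4 * d_model * d_model
    attention = 2 * seq_len * d_model
    return projections + attention

def ffn_macs_per_token(d_model: int, d_ff: int) -> int:
    # Two linear layers: d_model -> d_ff and d_ff -> d_model.
    return 2 * d_model * d_ff

d_model, d_ff, seq_len = 768, 3072, 128  # assumed BERT-base settings
mha = mha_macs_per_token(d_model, seq_len)
ffn = ffn_macs_per_token(d_model, d_ff)
print(f"MHA: {mha:,} MACs/token  FFN: {ffn:,} MACs/token  ratio: {ffn / mha:.2f}x")

This sketch is why the paper targets the FFN/MLP in its search space rather than the MHA: shrinking the component that dominates the per-layer cost gives the larger efficiency gain.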
Anthology ID:
2021.findings-emnlp.123
Volume:
Findings of the Association for Computational Linguistics: EMNLP 2021
Month:
November
Year:
2021
Address:
Punta Cana, Dominican Republic
Editors:
Marie-Francine Moens, Xuanjing Huang, Lucia Specia, Scott Wen-tau Yih
Venue:
Findings
SIG:
SIGDAT
Publisher:
Association for Computational Linguistics
Pages:
1424–1437
URL:
https://aclanthology.org/2021.findings-emnlp.123
DOI:
10.18653/v1/2021.findings-emnlp.123
Cite (ACL):
Chenhe Dong, Guangrun Wang, Hang Xu, Jiefeng Peng, Xiaozhe Ren, and Xiaodan Liang. 2021. EfficientBERT: Progressively Searching Multilayer Perceptron via Warm-up Knowledge Distillation. In Findings of the Association for Computational Linguistics: EMNLP 2021, pages 1424–1437, Punta Cana, Dominican Republic. Association for Computational Linguistics.
Cite (Informal):
EfficientBERT: Progressively Searching Multilayer Perceptron via Warm-up Knowledge Distillation (Dong et al., Findings 2021)
PDF:
https://aclanthology.org/2021.findings-emnlp.123.pdf
Video:
https://aclanthology.org/2021.findings-emnlp.123.mp4
Code:
cheneydon/efficient-bert
Data:
CoLA, GLUE, MRPC, MultiNLI, QNLI, SQuAD, SST, SST-2