
ERNIE at SemEval-2020 Task 10: Learning Word Emphasis Selection by Pre-trained Language Model

Zhengjie Huang, Shikun Feng, Weiyue Su, Xuyi Chen, Shuohuan Wang, Jiaxiang Liu, Xuan Ouyang, Yu Sun


Abstract
This paper describes the system designed by the ERNIE Team, which achieved first place in SemEval-2020 Task 10: Emphasis Selection For Written Text in Visual Media. Given a sentence, the task is to identify the most important words as suggestions for automated design. We leverage unsupervised pre-trained language models and fine-tune them on this task. Our investigation found that the following models achieve excellent performance on this task: ERNIE 2.0, XLM-RoBERTa, RoBERTa, and ALBERT. To fine-tune our models, we combine a pointwise regression loss with a pairwise ranking loss, which is closer to the final Match_m metric. We also find that additional feature engineering and data augmentation help improve performance. Our best model achieves the highest score of 0.823 and ranks first on all metrics.
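
The combined objective described above, a pointwise regression term plus a pairwise ranking term, can be sketched as follows. This is a minimal PyTorch illustration, not the paper's implementation; the helper name combined_emphasis_loss, the margin and lambda_rank hyperparameters, and the exact pairwise formulation are assumptions made for illustration.

import torch
import torch.nn.functional as F

def combined_emphasis_loss(scores, targets, margin=0.1, lambda_rank=1.0):
    """Hypothetical combined loss for word emphasis selection.

    scores, targets: (num_words,) predicted and gold emphasis probabilities.
    margin, lambda_rank: illustrative hyperparameters, not from the paper.
    """
    # Pointwise regression: push each word's score toward its gold probability.
    point_loss = F.mse_loss(scores, targets)

    # Pairwise ranking: for every word pair (i, j) with target_i > target_j,
    # require score_i to exceed score_j by at least `margin`, mirroring the
    # ranking behavior that the Match_m metric rewards.
    diff_t = targets.unsqueeze(1) - targets.unsqueeze(0)   # (n, n) gold gaps
    diff_s = scores.unsqueeze(1) - scores.unsqueeze(0)     # (n, n) score gaps
    mask = (diff_t > 0).float()                            # valid ordered pairs
    rank_loss = (F.relu(margin - diff_s) * mask).sum() / mask.sum().clamp(min=1)

    return point_loss + lambda_rank * rank_loss

# Toy usage with random scores for a 5-word sentence.
scores = torch.rand(5, requires_grad=True)
targets = torch.tensor([0.9, 0.1, 0.4, 0.7, 0.2])
loss = combined_emphasis_loss(scores, targets)
loss.backward()

The ranking term compares every word pair whose gold emphasis probabilities differ, encouraging the predicted ordering to match the gold ordering, which is what a ranking-based metric like Match_m ultimately measures.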
Anthology ID:
2020.semeval-1.190
Volume:
Proceedings of the Fourteenth Workshop on Semantic Evaluation
Month:
December
Year:
2020
Address:
Barcelona (online)
Editors:
Aurelie Herbelot, Xiaodan Zhu, Alexis Palmer, Nathan Schneider, Jonathan May, Ekaterina Shutova
Venue:
SemEval
SIG:
SIGLEX
Publisher:
International Committee for Computational Linguistics
Pages:
1456–1461
URL:
https://aclanthology.org/2020.semeval-1.190
DOI:
10.18653/v1/2020.semeval-1.190
Cite (ACL):
Zhengjie Huang, Shikun Feng, Weiyue Su, Xuyi Chen, Shuohuan Wang, Jiaxiang Liu, Xuan Ouyang, and Yu Sun. 2020. ERNIE at SemEval-2020 Task 10: Learning Word Emphasis Selection by Pre-trained Language Model. In Proceedings of the Fourteenth Workshop on Semantic Evaluation, pages 1456–1461, Barcelona (online). International Committee for Computational Linguistics.
Cite (Informal):
ERNIE at SemEval-2020 Task 10: Learning Word Emphasis Selection by Pre-trained Language Model (Huang et al., SemEval 2020)
PDF:
https://aclanthology.org/2020.semeval-1.190.pdf