
The Unreasonable Effectiveness of Transformer Language Models in Grammatical Error Correction

Dimitris Alikaniotis, Vipul Raheja


Abstract
Recent work on Grammatical Error Correction (GEC) has highlighted the importance of language modeling, showing that good performance can be achieved simply by comparing the probabilities of proposed edits. At the same time, advances in language modeling have produced linguistic output that is almost indistinguishable from human-generated text. In this paper, we up the ante by exploring the potential of more sophisticated language models in GEC and offer key insights into their strengths and weaknesses. We show that, in line with recent results in other NLP tasks, Transformer architectures achieve consistently high performance and provide a competitive baseline for future machine learning models.
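
The abstract's central technique, ranking proposed edits by their probability under a Transformer language model, can be sketched in a few lines. The snippet below is a minimal illustration using GPT-2 via the Hugging Face transformers library; the model choice and the hand-written candidate edits are assumptions for demonstration, not necessarily the authors' exact setup.

import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def sentence_log_prob(sentence: str) -> float:
    """Total log-probability of a sentence under the LM (higher is better)."""
    inputs = tokenizer(sentence, return_tensors="pt")
    with torch.no_grad():
        # With labels=input_ids, the model returns the mean cross-entropy
        # over the shifted next-token predictions; multiply by the number
        # of predicted tokens to recover the total log-probability.
        loss = model(**inputs, labels=inputs["input_ids"]).loss
    n_predictions = inputs["input_ids"].size(1) - 1
    return -loss.item() * n_predictions

# Hypothetical candidate set: the original sentence plus two proposed edits.
candidates = [
    "He go to school every day.",    # original (erroneous)
    "He goes to school every day.",  # subject-verb agreement edit
    "He went to school every day.",  # tense edit
]
print(max(candidates, key=sentence_log_prob))  # LM picks the most fluent variant

In a full GEC system, candidates would typically come from confusion sets or an edit-generation component rather than being written by hand.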
Anthology ID:
W19-4412
Volume:
Proceedings of the Fourteenth Workshop on Innovative Use of NLP for Building Educational Applications
Month:
August
Year:
2019
Address:
Florence, Italy
Editors:
Helen Yannakoudakis, Ekaterina Kochmar, Claudia Leacock, Nitin Madnani, Ildikó Pilán, Torsten Zesch
Venue:
BEA
SIG:
SIGEDU
Publisher:
Association for Computational Linguistics
Pages:
127–133
URL:
https://aclanthology.org/W19-4412
DOI:
10.18653/v1/W19-4412
Cite (ACL):
Dimitris Alikaniotis and Vipul Raheja. 2019. The Unreasonable Effectiveness of Transformer Language Models in Grammatical Error Correction. In Proceedings of the Fourteenth Workshop on Innovative Use of NLP for Building Educational Applications, pages 127–133, Florence, Italy. Association for Computational Linguistics.
Cite (Informal):
The Unreasonable Effectiveness of Transformer Language Models in Grammatical Error Correction (Alikaniotis & Raheja, BEA 2019)
PDF:
https://aclanthology.org/W19-4412.pdf
Code:
additional community code
Data:
Billion Word Benchmark, FCE, One Billion Word Benchmark