
Sentence-Level Fluency Evaluation: References Help, But Can Be Spared!

Katharina Kann, Sascha Rothe, Katja Filippova


Abstract
Motivated by recent findings on the probabilistic modeling of acceptability judgments, we propose syntactic log-odds ratio (SLOR), a normalized language model score, as a metric for referenceless fluency evaluation of natural language generation output at the sentence level. We further introduce WPSLOR, a novel WordPiece-based version, which harnesses a more compact language model. Even though word-overlap metrics like ROUGE are computed with the help of hand-written references, our referenceless methods obtain a significantly higher correlation with human fluency scores on a benchmark dataset of compressed sentences. Finally, we present ROUGE-LM, a reference-based metric which is a natural extension of WPSLOR to the case of available references. We show that ROUGE-LM yields a significantly higher correlation with human judgments than all baseline metrics, including WPSLOR on its own.
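The SLOR metric described in the abstract is commonly defined as the sentence's language model log-probability minus its unigram log-probability, normalized by sentence length, so that fluency scores are comparable across sentences of different lengths and word rarities. A minimal sketch, assuming the log-probabilities have already been obtained from a trained language model and a unigram model (the function name and inputs here are illustrative, not from the paper):

```python
def slor(lm_logprob, unigram_logprobs, length):
    """Syntactic log-odds ratio (SLOR) for one sentence.

    lm_logprob: total log-probability of the sentence under the LM.
    unigram_logprobs: per-token log-probabilities under a unigram model.
    length: number of tokens in the sentence.

    Subtracting the unigram term corrects for rare words; dividing by
    length corrects for sentence length.
    """
    return (lm_logprob - sum(unigram_logprobs)) / length


# Toy example: a 3-token sentence whose LM log-probability (-10.0)
# exceeds its unigram log-probability (-12.0), yielding a positive score.
score = slor(-10.0, [-3.0, -4.0, -5.0], 3)
```

WPSLOR, per the abstract, applies the same formula over WordPiece units instead of words, which allows a smaller vocabulary and a more compact language model.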
Anthology ID:
K18-1031
Volume:
Proceedings of the 22nd Conference on Computational Natural Language Learning
Month:
October
Year:
2018
Address:
Brussels, Belgium
Editors:
Anna Korhonen, Ivan Titov
Venue:
CoNLL
SIG:
SIGNLL
Publisher:
Association for Computational Linguistics
Pages:
313–323
URL:
https://aclanthology.org/K18-1031
DOI:
10.18653/v1/K18-1031
Cite (ACL):
Katharina Kann, Sascha Rothe, and Katja Filippova. 2018. Sentence-Level Fluency Evaluation: References Help, But Can Be Spared!. In Proceedings of the 22nd Conference on Computational Natural Language Learning, pages 313–323, Brussels, Belgium. Association for Computational Linguistics.
Cite (Informal):
Sentence-Level Fluency Evaluation: References Help, But Can Be Spared! (Kann et al., CoNLL 2018)
PDF:
https://aclanthology.org/K18-1031.pdf