
SSAS: Semantic Similarity for Abstractive Summarization

Raghuram Vadapalli, Litton J Kurisinkel, Manish Gupta, Vasudeva Varma


Abstract
Ideally, a metric evaluating an abstractive system summary should represent the extent to which the system-generated summary approximates the semantic inference conceived by the reader using a human-written reference summary. Most previous approaches relied upon word or syntactic sub-sequence overlap to evaluate system-generated summaries. Such metrics cannot evaluate the summary at the semantic inference level. Through this work, we introduce the metric of Semantic Similarity for Abstractive Summarization (SSAS), which leverages natural language inference and paraphrasing techniques to frame a novel approach to evaluating system summaries at the semantic inference level. SSAS is based upon a weighted composition of quantities representing the level of agreement, contradiction, independence, paraphrasing, and optionally ROUGE score between a system-generated and a human-written summary.
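As the abstract describes, SSAS combines agreement (entailment), contradiction, independence, paraphrasing, and optionally ROUGE into a single weighted score. The Python sketch below illustrates that kind of weighted composition; the feature names, example scores, and weights are hypothetical placeholders, not the paper's learned parameters or exact formulation.

def ssas_score(features, weights):
    """Combine semantic-agreement quantities into one SSAS-style value.

    features: dict of quantities between the system and reference summary,
        e.g. 'entailment', 'contradiction', 'independence', 'paraphrase',
        and optionally 'rouge' (all names here are illustrative).
    weights: dict mapping the same keys to real-valued weights; in the paper
        such weights are learned against human judgments, here they are made up.
    """
    return sum(weights[k] * v for k, v in features.items() if k in weights)


# Example usage with made-up numbers.
features = {
    "entailment": 0.72,
    "contradiction": 0.05,
    "independence": 0.23,
    "paraphrase": 0.61,
    "rouge": 0.38,
}
weights = {
    "entailment": 1.0,
    "contradiction": -1.0,  # contradiction should pull the score down
    "independence": -0.3,
    "paraphrase": 0.8,
    "rouge": 0.5,
}
print(f"SSAS (illustrative): {ssas_score(features, weights):.3f}")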
Anthology ID:
I17-2034
Volume:
Proceedings of the Eighth International Joint Conference on Natural Language Processing (Volume 2: Short Papers)
Month:
November
Year:
2017
Address:
Taipei, Taiwan
Editors:
Greg Kondrak, Taro Watanabe
Venue:
IJCNLP
Publisher:
Asian Federation of Natural Language Processing
Pages:
198–203
URL:
https://aclanthology.org/I17-2034
Cite (ACL):
Raghuram Vadapalli, Litton J Kurisinkel, Manish Gupta, and Vasudeva Varma. 2017. SSAS: Semantic Similarity for Abstractive Summarization. In Proceedings of the Eighth International Joint Conference on Natural Language Processing (Volume 2: Short Papers), pages 198–203, Taipei, Taiwan. Asian Federation of Natural Language Processing.
Cite (Informal):
SSAS: Semantic Similarity for Abstractive Summarization (Vadapalli et al., IJCNLP 2017)
PDF:
https://aclanthology.org/I17-2034.pdf
Dataset:
 I17-2034.Datasets.txt