Entity Relations Based Pointer-Generator Network for Abstractive Text Summarization

  • Conference paper
  • First Online:
Advanced Data Mining and Applications (ADMA 2022)

Part of the book series: Lecture Notes in Computer Science (LNAI, volume 13088)


Abstract

The goal of automatic text summarization is to generate a shorter text that contains the main ideas and key information of the original text. In recent years, sequence-to-sequence (Seq2Seq) models have made great progress on the text summarization task. Many derived models have appeared and successfully addressed challenges such as fluency and readability, and they also alleviate the repetition and out-of-vocabulary (OOV) word problems. However, an important issue remains to be solved: factual consistency (also called factual coherence). Since important information is carried by the entities and their relations that appear in the original sentences, this paper investigates the value of entity relations for boosting the performance of Seq2Seq abstractive text summarization models. To this end, we present the Entity relations based Pointer-Generator Network (ERPG), which comprises 1) an Informative OpenIE Relation Triples Selection Algorithm that generates non-redundant open-domain relation triples from plain text using Stanford OpenIE (Open Information Extraction); 2) the Entity Relations Graph Attention network (ERGAT), a new graph attention neural network designed to obtain structural features from the entity relation triples in the text; and 3) entity-focused attention, a modified calculation of the attention distribution that guides the Seq2Seq model to focus on the salient words of the text. Experimental results show that ERPG boosts the performance of the Pointer-Generator network, that ERGAT is the main factor in this improvement, and that the entity-focused attention enhances the basic attention mechanism. Relation triples thus have high potential to improve abstractive text summarization models.
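As an illustration of component 1), the sketch below extracts open-domain relation triples with Stanford OpenIE via stanza's CoreNLPClient (assuming a local CoreNLP installation with CORENLP_HOME set) and applies a simple containment-based redundancy filter. The filter is only an illustrative assumption; the abstract does not specify the paper's Informative OpenIE Relation Triples Selection Algorithm, which may differ.

```python
# pip install stanza; requires a local Stanford CoreNLP install (CORENLP_HOME set).
from stanza.server import CoreNLPClient


def extract_triples(text, min_confidence=0.5):
    """Return (subject, relation, object) triples found by Stanford OpenIE."""
    triples = []
    with CoreNLPClient(annotators=["openie"], be_quiet=True) as client:
        ann = client.annotate(text)
        for sentence in ann.sentence:
            for t in sentence.openieTriple:
                if t.confidence >= min_confidence:
                    triples.append((t.subject, t.relation, t.object))
    return triples


def select_informative(triples):
    """Illustrative redundancy filter (an assumption, not the paper's algorithm):
    keep a triple only if its words are not fully contained in an already-kept,
    more specific triple."""
    kept, kept_word_sets = [], []
    # Visit longer (more specific) triples first.
    for triple in sorted(triples, key=lambda t: -len(" ".join(t).split())):
        words = set(" ".join(triple).lower().split())
        if not any(words <= seen for seen in kept_word_sets):
            kept.append(triple)
            kept_word_sets.append(words)
    return kept


if __name__ == "__main__":
    text = "Barack Obama was born in Hawaii. Obama was born in Hawaii in 1961."
    print(select_informative(extract_triples(text)))
```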
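For component 2), the minimal single-head graph attention layer below (in the spirit of Veličković et al.'s GAT) shows how structural features could be aggregated over a graph whose nodes are built from the selected triples. It is a sketch under that assumption, not the paper's ERGAT; both the node features and the co-occurrence adjacency are illustrative choices.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class SimpleGATLayer(nn.Module):
    """Minimal single-head graph attention layer (illustrative, not ERGAT)."""

    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.W = nn.Linear(in_dim, out_dim, bias=False)   # shared node projection
        self.a = nn.Linear(2 * out_dim, 1, bias=False)    # pairwise attention scorer

    def forward(self, h, adj):
        # h:   (N, in_dim)  node features, e.g. embeddings of triple subjects,
        #      relations and objects.
        # adj: (N, N) binary adjacency; 1 where two nodes occur in the same
        #      triple (include self-loops so every row has at least one edge).
        z = self.W(h)
        n = z.size(0)
        pairs = torch.cat([z.unsqueeze(1).expand(n, n, -1),
                           z.unsqueeze(0).expand(n, n, -1)], dim=-1)
        e = F.leaky_relu(self.a(pairs).squeeze(-1))        # raw pairwise scores
        e = e.masked_fill(adj == 0, float("-inf"))         # keep edges only
        alpha = F.softmax(e, dim=-1)                       # per-node attention
        return F.elu(alpha @ z)                            # aggregated node features


# Toy usage: three nodes from one triple, fully connected with self-loops.
layer = SimpleGATLayer(in_dim=8, out_dim=4)
h = torch.randn(3, 8)
adj = torch.ones(3, 3)
print(layer(h, adj).shape)  # torch.Size([3, 4])
```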
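For component 3), one plausible way to modify the attention calculation is to add a bonus to the logits of source positions covered by the selected entities or triples before the softmax, so the decoder attends more to salient words. The function below is a hedged sketch of this idea, not necessarily the paper's exact formulation; the bias value is an assumed hyperparameter.

```python
import torch
import torch.nn.functional as F


def entity_focused_attention(scores, entity_mask, bias=2.0):
    """Bias an attention distribution toward entity positions (illustrative).

    scores:      (batch, src_len) raw attention logits for one decoder step.
    entity_mask: (batch, src_len) 1.0 where the source token belongs to a
                 selected entity/relation triple, else 0.0.
    bias:        assumed scalar (could also be learned) controlling the boost.
    """
    # Adding a constant to entity logits shifts probability mass toward them,
    # while the distribution remains properly normalised after the softmax.
    return F.softmax(scores + bias * entity_mask, dim=-1)


# Toy usage: the second source token is part of an extracted triple.
scores = torch.tensor([[0.2, 0.1, 0.3]])
mask = torch.tensor([[0.0, 1.0, 0.0]])
print(entity_focused_attention(scores, mask))
```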

This work is partially supported by the Research Fund of Guangxi Key Lab of Multi-source Information Mining & Security (Nos. 20-A-01-01, 20-A-01-02, and MIMS20-M-01) and the Project of Guangxi Science and Technology (GuiKeAD20159041).



Author information

Corresponding author

Correspondence to Guangquan Lu.

Copyright information

© 2022 Springer Nature Switzerland AG

About this paper

Cite this paper

Huang, T., Lu, G., Li, Z., Song, J., Wu, L. (2022). Entity Relations Based Pointer-Generator Network for Abstractive Text Summarization. In: Li, B., et al. (eds.) Advanced Data Mining and Applications. ADMA 2022. Lecture Notes in Computer Science, vol. 13088. Springer, Cham. https://doi.org/10.1007/978-3-030-95408-6_17

  • DOI: https://doi.org/10.1007/978-3-030-95408-6_17

  • Published:

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-030-95407-9

  • Online ISBN: 978-3-030-95408-6

  • eBook Packages: Computer Science, Computer Science (R0)
