Topic-sensitive neural headline generation

  • Research Paper
  • Published in Science China Information Sciences

Abstract

Neural models are widely applied to text summarization, including headline generation, and are typically trained on a set of document-headline pairs. In a large document collection, documents can usually be grouped into various topics, and documents within a given topic may exhibit topic-specific summarization patterns. Most existing neural models, however, do not take document topic information into consideration. This paper categorizes documents into multiple topics, on the premise that documents within the same topic have similar content and share similar summarization patterns. Taking advantage of document topic information, this study proposes a topic-sensitive neural headline generation model (TopicNHG). It is evaluated on LCSTS, a real-world, large-scale Chinese short text summarization dataset. Experimental results show that TopicNHG outperforms several baseline systems on each topic and achieves performance comparable to the state-of-the-art system, indicating that it can generate more accurate headlines when guided by document topics.
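The full paper is behind a paywall, so the exact TopicNHG architecture cannot be reproduced from this page. As a rough illustration of the idea stated in the abstract, the PyTorch sketch below conditions an otherwise standard encoder-decoder headline generator on a document-level topic label. Every name here (TopicNHGSketch, the GRU sizes, feeding the topic embedding to the decoder by concatenation) is an assumption made for illustration, not the authors' actual design.

    import torch
    import torch.nn as nn

    class TopicNHGSketch(nn.Module):
        # Hypothetical sketch only: the paper's actual TopicNHG architecture
        # is not available from this page. This class illustrates the
        # abstract's core idea of conditioning a seq2seq headline generator
        # on a document topic label.
        def __init__(self, vocab_size, num_topics, emb_dim=256, hid_dim=512):
            super().__init__()
            self.word_emb = nn.Embedding(vocab_size, emb_dim)
            # One learned embedding per document topic (topic ids could come
            # from any off-the-shelf document clustering, e.g., LDA).
            self.topic_emb = nn.Embedding(num_topics, emb_dim)
            self.encoder = nn.GRU(emb_dim, hid_dim, batch_first=True,
                                  bidirectional=True)
            # The decoder reads the previous word embedding concatenated
            # with the topic embedding at every step.
            self.decoder = nn.GRU(emb_dim * 2, hid_dim, batch_first=True)
            self.bridge = nn.Linear(hid_dim * 2, hid_dim)
            self.proj = nn.Linear(hid_dim, vocab_size)

        def forward(self, src_ids, topic_ids, tgt_ids):
            # Encode the source document with a bidirectional GRU.
            _, enc_h = self.encoder(self.word_emb(src_ids))
            # Merge the forward/backward final states into the decoder's
            # initial hidden state.
            h0 = torch.tanh(self.bridge(torch.cat([enc_h[0], enc_h[1]], -1)))
            # Broadcast the topic embedding across all decoder time steps.
            topic = self.topic_emb(topic_ids).unsqueeze(1)
            topic = topic.expand(-1, tgt_ids.size(1), -1)
            dec_in = torch.cat([self.word_emb(tgt_ids), topic], dim=-1)
            dec_out, _ = self.decoder(dec_in, h0.unsqueeze(0))
            return self.proj(dec_out)  # per-step vocabulary logits

A model of this shape would be trained with teacher forcing and cross-entropy loss over LCSTS-style document-headline pairs. Concatenation is only one way to inject the topic signal; alternatives such as topic-specific decoders in a mixture-of-experts arrangement would fit the same interface.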



Author information

Corresponding author: Zhiyuan Liu


About this article


Cite this article

Ayana, Wang, Z., Xu, L. et al. Topic-sensitive neural headline generation. Sci. China Inf. Sci. 63, 182103 (2020). https://doi.org/10.1007/s11432-019-2657-8

