DOI: 10.1007/11735106_10
Article

Evaluating web search result summaries

Published: 10 April 2006

Abstract

The aim of our research is to produce and assess short summaries that aid users' relevance judgements, for example on a search engine result page. In this paper we present our new metric for measuring summary quality, based on representativeness and judgeability, and compare the summary quality of our system with that of Google. We discuss the basis for constructing our evaluation methodology in contrast to previous relevant open evaluations, arguing that the elements which make up an evaluation methodology (the tasks, the data and the metrics) are interdependent, and that the way in which they are combined is critical to the methodology's effectiveness. The paper discusses the relationship between these three factors as implemented in our own work, as well as in SUMMAC, MUC and DUC.
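The abstract does not spell out how representativeness is scored; the paper's own metric relies on human judgements. The automatic counterpart cited in the references, ROUGE [8], instead measures how much of a reference text's content a summary retains via n-gram overlap. The sketch below is a minimal illustration of that ROUGE-N recall idea, not the authors' metric; the function name and whitespace tokenisation are assumptions for the example.

```python
from collections import Counter

def rouge_n_recall(summary: str, reference: str, n: int = 1) -> float:
    """Fraction of the reference's n-grams that also appear in the summary
    (ROUGE-N recall, with clipped counts for repeated n-grams)."""
    def ngrams(text: str, n: int) -> Counter:
        tokens = text.lower().split()
        return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))

    summ_counts = ngrams(summary, n)
    ref_counts = ngrams(reference, n)
    if not ref_counts:
        return 0.0
    # Clip each n-gram's match count at its frequency in the summary.
    overlap = sum(min(count, summ_counts[gram]) for gram, count in ref_counts.items())
    return overlap / sum(ref_counts.values())

score = rouge_n_recall("the cat sat", "the cat sat on the mat")
print(f"{score:.2f}")  # 0.50: half of the reference's unigrams appear in the summary
```

Full ROUGE additionally supports stemming, stopword removal and multiple reference summaries, and reports precision and F-measure alongside recall.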

References

[1]
Afantenos, S., Karkaletsis, V. and Stamatopoulos, P.: Summarization from medical documents: a survey. Artificial Intelligence in Medicine, 33, 2, February (2005), 157-177.
[2]
Berger, A. and Mittal, V.O.: Query-relevant summarisation using FAQs. Proceedings of ACL (2000) 294-301.
[3]
Borko, H. and Bernier, C.L.: Abstracting concepts and methods. Academic Press, San Diego, CA, USA (1975).
[4]
Chinchor, N., Hirschman, L. and Lewis, D.D.: Evaluating message understanding systems: An analysis of the third message understanding conference. Computational Linguistics, 19, 3 (1993) 409-449.
[5]
Chinchor, N.: MUC-3 evaluation metrics. Proceedings of the Third Message Understanding Conference (MUC-3) (1991) 17-24.
[6]
Harman, D. and Over, P.: The effects of human variation in DUC summarisation evaluation. Proceedings of the ACL-04 Workshop on Text Summarization Branches Out, Barcelona, Spain, July (2004) 10-17.
[7]
Liang, S.F., Devlin, S. and Tait, J.: Poster: Using query term order for result summarisation. SIGIR'05, Brazil (2005) 629-630.
[8]
Lin, C.Y.: ROUGE: a package for automatic evaluation of summaries. Proceedings of the Workshop on Text Summarization Branches Out, Barcelona, Spain, July (2004) 25-26.
[9]
Mani, I., Firmin, T. and Sundheim, B.: The TIPSTER SUMMAC text summarisation evaluation. Proceedings of the Ninth Conference of the European Chapter of the Association for Computational Linguistics, Bergen, Norway (1999) 77-85.
[10]
Mani, I.: Automatic Summarization. John Benjamins, Amsterdam (2001).
[11]
Pagano, R.R.: Understanding statistics in the behavioural sciences. Wadsworth/Thomson Learning, USA (2001).
[12]
Sparck Jones, C. and Galliers, J.R.: Evaluating natural language processing systems: an analysis and review. Springer, New York (1996).
[13]
TIPSTER Text Phase III 18-Month Workshop Notes, May (1998), Fairfax, VA.
[14]
Voorhees, E.M.: Variations in relevance judgements and the measurement of retrieval effectiveness. Information Processing & Management, 36, 5, September (2000) 697-716.

Cited By

  • (2020) Extractive Snippet Generation for Arguments. Proceedings of the 43rd International ACM SIGIR Conference on Research and Development in Information Retrieval, 10.1145/3397271.3401186, 1969-1972. Online publication date: 25-Jul-2020.
  • (2013) Improving search result summaries by using searcher behavior data. Proceedings of the 36th international ACM SIGIR conference on Research and development in information retrieval, 10.1145/2484028.2484093, 13-22. Online publication date: 28-Jul-2013.
  • (2009) Predicting the readability of short web summaries. Proceedings of the Second ACM International Conference on Web Search and Data Mining, 10.1145/1498759.1498827, 202-211. Online publication date: 9-Feb-2009.
  • (2006) Progress in information retrieval. Proceedings of the 28th European conference on Advances in Information Retrieval, 10.1007/11735106_1, 1-11. Online publication date: 10-Apr-2006.


Published In

ECIR'06: Proceedings of the 28th European conference on Advances in Information Retrieval
April 2006
582 pages
ISBN: 3540333479
  • Editors:
  • Mounia Lalmas,
  • Andy MacFarlane,
  • Stefan Rüger,
  • Anastasios Tombros,
  • Theodora Tsikrika

Sponsors

  • EPSRC: Engineering and Physical Sciences Research Council
  • Yahoo! Research
  • Microsoft Research Cambridge (UK)
  • Google Inc.
  • The Council of European Professional Informatics Societies: CEPIS

Publisher

Springer-Verlag

Berlin, Heidelberg
