
The Effects of Relevance Feedback Quality and Quantity in Interactive Relevance Feedback: A Simulation Based on User Modeling

  • Conference paper
Advances in Information Retrieval (ECIR 2006)

Part of the book series: Lecture Notes in Computer Science (LNCS, volume 3936)


Abstract

Experiments on the effectiveness of relevance feedback with real users are time-consuming and expensive, which makes simulation for rapid testing desirable. We define a user model that quantifies some of the interaction decisions involved in simulated relevance feedback. First, the relevance criterion defines the relevance threshold at which the user accepts documents as relevant to his or her needs. Second, the browsing effort refers to the patience of the user in browsing through the initial list of retrieved documents in order to give feedback. Third, the feedback effort refers to the effort and ability of the user to collect feedback documents. We use the model to construct several simulated relevance feedback scenarios in a laboratory setting. Using TREC data with graded relevance assessments, we study the effect of the quality and quantity of the feedback documents on the effectiveness of relevance feedback and compare it to pseudo-relevance feedback. Our results indicate that small amounts of highly relevant feedback can compensate for large amounts of relevant but low-quality feedback.
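The three model parameters above can be illustrated with a minimal sketch of a simulated user selecting feedback documents from a ranked list. The function name, parameter names, and the 0–3 graded relevance scale are assumptions for illustration, not the authors' actual implementation.

```python
# Hypothetical sketch of the three-parameter simulated user described in
# the abstract. Relevance grades are assumed to be integers 0-3 (TREC-style
# graded assessments); all names here are illustrative assumptions.

def simulate_feedback(ranking, relevance_threshold, browsing_effort, feedback_effort):
    """Select feedback documents the way a simulated user would.

    ranking             -- list of (doc_id, graded_relevance) pairs, best first
    relevance_threshold -- minimum grade the user accepts as relevant
    browsing_effort     -- how many top-ranked documents the user will browse
    feedback_effort     -- how many relevant documents the user will collect
    """
    feedback = []
    # The user browses at most `browsing_effort` documents from the top...
    for doc_id, grade in ranking[:browsing_effort]:
        # ...accepts a document only if it meets the relevance criterion...
        if grade >= relevance_threshold:
            feedback.append(doc_id)
        # ...and stops once enough feedback documents are collected.
        if len(feedback) >= feedback_effort:
            break
    return feedback

# A strict but patient user: accepts only highly relevant documents
# (grade >= 2), browses the top 10, and collects at most 2 documents.
ranking = [("d1", 1), ("d2", 3), ("d3", 0), ("d4", 2), ("d5", 2)]
print(simulate_feedback(ranking, relevance_threshold=2,
                        browsing_effort=10, feedback_effort=2))
```

Varying the three parameters independently yields the different feedback scenarios studied in the paper, e.g. lowering the threshold simulates a liberal user who supplies many but marginally relevant feedback documents.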





Copyright information

© 2006 Springer-Verlag Berlin Heidelberg

About this paper

Cite this paper

Keskustalo, H., Järvelin, K., Pirkola, A. (2006). The Effects of Relevance Feedback Quality and Quantity in Interactive Relevance Feedback: A Simulation Based on User Modeling. In: Lalmas, M., MacFarlane, A., Rüger, S., Tombros, A., Tsikrika, T., Yavlinsky, A. (eds) Advances in Information Retrieval. ECIR 2006. Lecture Notes in Computer Science, vol 3936. Springer, Berlin, Heidelberg. https://doi.org/10.1007/11735106_18


  • DOI: https://doi.org/10.1007/11735106_18

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-540-33347-0

  • Online ISBN: 978-3-540-33348-7

