DOI: 10.1145/3209542.3209567

As Stable As You Are: Re-ranking Search Results using Query-Drift Analysis

Published: 03 July 2018

Abstract

This work studies the merits of using query-drift analysis for search re-ranking. It establishes a relationship between the ability to predict the quality of a result list retrieved by an arbitrary method, as manifested by the list's estimated query drift, and the ability to improve that method's initial retrieval by re-ranking the documents in the list according to such a prediction. A novel document property, termed "aspect-stability", is identified as the main enabler for transforming the output of an aspect-level query-drift analysis into concrete document scores for re-ranking. An evaluation over various TREC corpora, using common baseline retrieval methods, demonstrates the potential of the proposed approach.
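The paper itself defines the aspect-level analysis and the "aspect-stability" property; as a rough, hypothetical illustration of the general query-drift idea only (not the authors' actual algorithm), a minimal re-ranker can treat the centroid of the retrieved documents as the "drifted query" induced by the list, and score each document by a mix of its similarity to the original query and to that centroid. The vector representation (e.g. TF-IDF or embeddings), the `lam` interpolation weight, and all function names below are assumptions for the sketch:

```python
import numpy as np

def cosine(a, b):
    """Cosine similarity with a small epsilon to avoid division by zero."""
    return float(a @ b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12)

def query_drift_rerank(query_vec, doc_vecs, lam=0.5):
    """Re-rank a result list with a simple query-drift heuristic.

    The centroid of the retrieved documents stands in for the "drifted
    query" induced by the list. Each document is scored by interpolating
    its similarity to the original query with its similarity to that
    centroid: documents that are stable with respect to both rise, while
    outliers that pull the list away from the query sink.
    Returns indices sorted best-to-worst, plus the new scores.
    """
    centroid = doc_vecs.mean(axis=0)
    scores = np.array([
        lam * cosine(query_vec, d) + (1.0 - lam) * cosine(centroid, d)
        for d in doc_vecs
    ])
    order = np.argsort(-scores)  # descending by score
    return order, scores
```

Here `lam` trades off fidelity to the initial ranking against coherence with the list as a whole; setting it per query based on a quality prediction would be closer in spirit to the prediction-driven re-ranking the abstract describes.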


Cited By

  • (2024) No Query Left Behind: Query Refinement via Backtranslation. Proceedings of the 33rd ACM International Conference on Information and Knowledge Management, pp. 1961–1972. DOI: 10.1145/3627673.3679729. Online publication date: 21 Oct 2024.


    Published In

HT '18: Proceedings of the 29th ACM Conference on Hypertext and Social Media
    July 2018
    266 pages
    ISBN:9781450354271
    DOI:10.1145/3209542

    Publisher

Association for Computing Machinery, New York, NY, United States


    Author Tags

    1. query drift analysis
    2. query performance prediction
    3. search re-ranking

    Qualifiers

    • Short-paper


    Acceptance Rates

HT '18 Paper Acceptance Rate: 19 of 69 submissions (28%)
Overall Acceptance Rate: 378 of 1,158 submissions (33%)

    Article Metrics

• Downloads (last 12 months): 11
• Downloads (last 6 weeks): 3
Reflects downloads up to 18 Nov 2024

