About learning models with multiple query-dependent features

Published: 05 August 2013

Abstract

Several questions concerning the deployment of query-dependent features within learning to rank remain unanswered by the existing literature. In this work, we investigate three research questions in order to empirically ascertain best practices for learning-to-rank deployments. (i) Previous work in data fusion that pre-dates learning to rank showed that while different retrieval systems could be effectively combined, the combination of multiple models within the same system was not as effective. In contrast, existing learning-to-rank datasets (e.g., LETOR) often deploy multiple weighting models as query-dependent features within a single system, raising the question of whether such a combination is needed. (ii) Next, we investigate whether the training of weighting model parameters, traditionally required for effective retrieval, is necessary within a learning-to-rank context. (iii) Finally, we note that existing learning-to-rank datasets use weighting model features calculated separately on different fields (e.g., title, content, or anchor text), even though such single-field weighting models have been criticized in the literature. Experiments addressing these three questions are conducted on Web search datasets, using various weighting models as query-dependent features and typical query-independent features, which are combined using three learning-to-rank techniques. In particular, we show and explain why multiple weighting models should be deployed as features. Moreover, we unexpectedly find that training the weighting models' parameters degrades the effectiveness of the learned models. Finally, we show that computing a weighting model separately for each field is less effective than using more theoretically-sound field-based weighting models.
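To make the setup concrete, a minimal sketch of the idea in the abstract follows: several weighting models computed for the same query-document pair are deployed together as query-dependent features, alongside a query-independent feature, and a learned model combines them. This is not the paper's implementation; the formulas are the standard BM25 and Dirichlet-smoothed language model scores for a single query term, and the feature weights are purely hypothetical stand-ins for what a learning-to-rank technique would produce.

```python
import math

def bm25(tf, doc_len, avg_doc_len, df, num_docs, k1=1.2, b=0.75):
    """BM25 score contribution for a single query term."""
    idf = math.log((num_docs - df + 0.5) / (df + 0.5) + 1.0)
    norm = tf / (tf + k1 * (1 - b + b * doc_len / avg_doc_len))
    return idf * norm

def dirichlet_lm(tf, doc_len, coll_term_prob, mu=2500.0):
    """Dirichlet-smoothed query-likelihood score (log scale) for one term."""
    return math.log((tf + mu * coll_term_prob) / (doc_len + mu))

def feature_vector(tf, doc_len, avg_doc_len, df, num_docs,
                   coll_term_prob, pagerank):
    """Multiple weighting models deployed as query-dependent features,
    alongside one query-independent feature (here, a PageRank score)."""
    return [
        bm25(tf, doc_len, avg_doc_len, df, num_docs),
        dirichlet_lm(tf, doc_len, coll_term_prob),
        pagerank,
    ]

# Hypothetical learned weights: a linear combination stands in for the
# learning-to-rank techniques evaluated in the article.
weights = [0.6, 0.3, 0.1]

def score(features):
    return sum(w * f for w, f in zip(weights, features))
```

In practice the feature vector would contain many more entries (per-field scores, proximity features, spam scores, etc.), and the combination function would be learned from labelled training data rather than fixed by hand.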



    Published In

    ACM Transactions on Information Systems, Volume 31, Issue 3
    July 2013, 202 pages
    ISSN: 1046-8188
    EISSN: 1558-2868
    DOI: 10.1145/2493175

    Publisher

    Association for Computing Machinery

    New York, NY, United States

    Publication History

    Published: 05 August 2013
    Accepted: 01 February 2013
    Revised: 01 September 2012
    Received: 01 April 2012
    Published in TOIS Volume 31, Issue 3


    Author Tags

    1. learning to rank
    2. field-based weighting models
    3. samples
