DOI: 10.1145/2970398.2970435
Learning to Rank with Labeled Features

Published: 12 September 2016

Abstract

Classic learning to rank algorithms are trained using a set of labeled documents, pairs of documents, or rankings of documents. Unfortunately, in many situations, gathering such labels requires significant overhead in terms of time and money. We present an algorithm for training a learning to rank model using a set of labeled features elicited from system designers or domain experts. Labeled features incorporate a system designer's belief about the correlation between certain features and relative relevance. We demonstrate the efficacy of our model on a public learning to rank dataset. Our results show that we outperform our baselines even when using as little as a single feature label.
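The paper's actual algorithm is not reproduced on this page. As a hypothetical sketch of the general idea only, the snippet below shows how a single labeled feature (imagine an expert asserting "feature 0, e.g. a BM25 score, correlates positively with relevance") could be used to elicit pairwise preferences between documents, which then train a linear scoring function with a RankNet-style pairwise logistic loss. All names and thresholds here are illustrative assumptions, not the authors' method.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: 100 documents, 5 features. Feature 0 plays the role of the
# expert-labeled feature (e.g., a BM25 score believed to track relevance).
X = rng.normal(size=(100, 5))

# Elicit preferences from the labeled feature: document i is preferred over
# document j whenever its labeled-feature value is clearly higher.
# (The margin of 1.0 is an arbitrary illustrative choice.)
pairs = [(i, j) for i in range(100) for j in range(100)
         if X[i, 0] - X[j, 0] > 1.0]
D = X[[i for i, _ in pairs]] - X[[j for _, j in pairs]]  # per-pair feature diffs

# Fit a linear scorer w by gradient descent on the pairwise logistic loss
# -log sigma(w . (x_i - x_j)), i.e. maximize P(preferred doc ranked higher).
w = np.zeros(5)
lr = 0.1
for _ in range(200):
    p = 1.0 / (1.0 + np.exp(-(D @ w)))     # P(doc i ranked above doc j)
    w -= lr * ((p - 1.0) @ D) / len(pairs)  # gradient of the summed loss

print("learned weight on the labeled feature:", round(float(w[0]), 3))
```

Because every training preference was generated from feature 0, the learned model assigns it a strongly positive weight; the remaining weights pick up only whatever correlation the random features happen to share with it.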





Published In

ICTIR '16: Proceedings of the 2016 ACM International Conference on the Theory of Information Retrieval
September 2016, 318 pages
ISBN: 9781450344975
DOI: 10.1145/2970398

Publisher

Association for Computing Machinery, New York, NY, United States

Author Tags

  • learning to rank

Qualifiers

  • Short-paper

Conference

ICTIR '16

Acceptance Rates

ICTIR '16 Paper Acceptance Rate: 41 of 79 submissions, 52%
Overall Acceptance Rate: 235 of 527 submissions, 45%

Article Metrics

  • Downloads (last 12 months): 14
  • Downloads (last 6 weeks): 1
Reflects downloads up to 18 Feb 2025


Cited By

  • (2023) Field features: The impact in learning to rank approaches. Applied Soft Computing, 138:110183, May 2023. DOI: 10.1016/j.asoc.2023.110183
  • (2022) An Empirical Study of the Impact of Field Features in Learning-to-rank Method. Intelligent Data Engineering and Automated Learning – IDEAL 2021, pp. 176–187, January 2022. DOI: 10.1007/978-3-030-91608-4_18
  • (2021) Good to the Last Bit: Data-Driven Encoding with CodecDB. Proceedings of the 2021 International Conference on Management of Data, pp. 843–856, June 2021. DOI: 10.1145/3448016.3457283
  • (2018) Ranking Distillation. Proceedings of the 24th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining, pp. 2289–2298, July 2018. DOI: 10.1145/3219819.3220021
  • (2018) SIGIR 2018 Workshop on Learning from Limited or Noisy Data for Information Retrieval. The 41st International ACM SIGIR Conference on Research & Development in Information Retrieval, pp. 1439–1440, June 2018. DOI: 10.1145/3209978.3210200
  • (2017) Neural Ranking Models with Weak Supervision. Proceedings of the 40th International ACM SIGIR Conference on Research and Development in Information Retrieval, pp. 65–74, August 2017. DOI: 10.1145/3077136.3080832
  • (2016) Boosting Titles does not Generally Improve Retrieval Effectiveness. Proceedings of the 21st Australasian Document Computing Symposium, pp. 25–32, December 2016. DOI: 10.1145/3015022.3015028
