
DOI: 10.1145/3471158.3472241
Research article, ICTIR '21 Conference Proceedings

Extracting per Query Valid Explanations for Blackbox Learning-to-Rank Models

Published: 31 August 2021

Abstract

Learning-to-rank (LTR) is a class of supervised learning techniques for ranking problems that involve a large number of features. The popularity and widespread application of LTR models for prioritizing information across a variety of domains make their scrutability vital in today's landscape of fair and transparent learning systems. However, little work exists on interpreting the decisions of learning systems that output rankings. In this paper we propose a model-agnostic local explanation method that identifies a small subset of input features as an explanation for the ranked output of a given query. We introduce new notions of validity and completeness of explanations, defined specifically for rankings in terms of the presence or absence of selected features, as a way of measuring explanation quality. We devise a novel optimization problem that maximizes validity directly and propose greedy algorithms as solutions. In extensive quantitative experiments we show that our approach outperforms other model-agnostic explanation approaches in validity across pointwise, pairwise, and listwise LTR models, without compromising completeness.
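The greedy selection described in the abstract can be sketched roughly as follows. This is an illustrative reconstruction under stated assumptions, not the paper's exact algorithm: the validity measure here is a simple Kendall-tau-style agreement between the ranking induced by the selected features alone and the full ranking, and all names (`greedy_explanation`, `kendall_tau`, the toy linear ranker) are hypothetical.

```python
import itertools
import numpy as np

def kendall_tau(rank_a, rank_b):
    """Fraction of concordant item pairs between two rankings (a simple validity proxy)."""
    concordant, pairs = 0, 0
    for i, j in itertools.combinations(range(len(rank_a)), 2):
        pairs += 1
        if (rank_a[i] - rank_a[j]) * (rank_b[i] - rank_b[j]) > 0:
            concordant += 1
    return concordant / pairs

def rank(scores):
    """Rank position of each item (0 = highest score)."""
    order = np.argsort(-scores)
    ranks = np.empty_like(order)
    ranks[order] = np.arange(len(scores))
    return ranks

def greedy_explanation(score_fn, docs, k):
    """Greedily pick up to k features whose presence best reproduces the full ranking."""
    full_ranking = rank(score_fn(docs))
    selected = []
    for _ in range(k):
        best_f, best_val = None, -1.0
        for f in range(docs.shape[1]):
            if f in selected:
                continue
            masked = np.zeros_like(docs)
            keep = selected + [f]
            masked[:, keep] = docs[:, keep]  # zero out all features except the candidate subset
            val = kendall_tau(rank(score_fn(masked)), full_ranking)
            if val > best_val:
                best_f, best_val = f, val
        selected.append(best_f)
    return selected

# Toy "black-box" ranker: a linear model whose weights are hidden from the explainer.
weights = np.array([0.1, 2.0, 0.05, 1.0])
ranker = lambda X: X @ weights
docs = np.array([[1.0, 0.2, 3.0, 0.1],
                 [0.5, 1.5, 0.2, 0.3],
                 [2.0, 0.1, 1.0, 2.0]])
print(greedy_explanation(ranker, docs, 2))  # → [1, 3]
```

On this toy query, the greedy procedure recovers the two features with the largest hidden weights, since keeping only those features already reproduces the full ranking exactly (validity of 1).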




Published In

ICTIR '21: Proceedings of the 2021 ACM SIGIR International Conference on Theory of Information Retrieval
July 2021, 334 pages
ISBN: 9781450386111
DOI: 10.1145/3471158

Publisher

Association for Computing Machinery, New York, NY, United States


    Author Tags

    1. LTR
    2. explainability
    3. interpretability
    4. learning-to-rank

    Qualifiers

    • Research-article

    Funding Sources

    • BMBF

Conference

ICTIR '21

    Acceptance Rates

    Overall Acceptance Rate 235 of 527 submissions, 45%

Article Metrics

• Downloads (last 12 months): 37
• Downloads (last 6 weeks): 2

Reflects downloads up to 09 Nov 2024

Cited By

• (2024) Local List-Wise Explanations of LambdaMART. Explainable Artificial Intelligence, pp. 369-392. DOI: 10.1007/978-3-031-63797-1_19. Online publication date: 10-Jul-2024.
• (2023) Explainability of Text Processing and Retrieval Methods. Proceedings of the 15th Annual Meeting of the Forum for Information Retrieval Evaluation, pp. 153-157. DOI: 10.1145/3632754.3632944. Online publication date: 15-Dec-2023.
• (2023) Extractive Explanations for Interpretable Text Ranking. ACM Transactions on Information Systems 41(4), pp. 1-31. DOI: 10.1145/3576924. Online publication date: 23-Mar-2023.
• (2023) Explainable Information Retrieval. Proceedings of the 46th International ACM SIGIR Conference on Research and Development in Information Retrieval, pp. 3448-3451. DOI: 10.1145/3539618.3594249. Online publication date: 19-Jul-2023.
• (2023) Zorro: Valid, Sparse, and Stable Explanations in Graph Neural Networks. IEEE Transactions on Knowledge and Data Engineering 35(8), pp. 8687-8698. DOI: 10.1109/TKDE.2022.3201170. Online publication date: 1-Aug-2023.
• (2023) A Trustworthy View on Explainable Artificial Intelligence Method Evaluation. Computer 56(4), pp. 50-60. DOI: 10.1109/MC.2022.3233806. Online publication date: 1-Apr-2023.
• (2022) SparCAssist. Proceedings of the 45th International ACM SIGIR Conference on Research and Development in Information Retrieval, pp. 3219-3223. DOI: 10.1145/3477495.3531677. Online publication date: 6-Jul-2022.
