DOI: 10.1145/3450614.3462238
Short Paper

Interactivity, Fairness and Explanations in Recommendations

Published: 22 June 2021

Abstract

More and more aspects of our everyday lives are influenced by automated decisions made by systems that statistically analyze traces of our activities. It is thus natural to question whether such systems are trustworthy, particularly given the opaqueness and complexity of their internal workings. In this paper, we present our ongoing work towards a framework that aims to increase trust in machine-generated recommendations by combining ideas from three separate recent research directions, namely explainability, fairness and user interactive visualization. The goal is to enable different stakeholders, with potentially varying levels of background and diverse needs, to query, understand, and fix sources of distrust.
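One ingredient of such a framework is a fairness diagnostic that a stakeholder could query interactively. As a hypothetical illustration only (the function, metric name, and toy data below are invented for this sketch and are not taken from the paper), a simple popularity-bias check compares how popular recommended items are, on average, against the catalog as a whole:

```python
from typing import Dict, List

def popularity_lift(recommendations: Dict[str, List[str]],
                    catalog_popularity: Dict[str, int]) -> float:
    """Ratio of the average popularity of recommended items to the
    average popularity of all catalog items. A lift well above 1.0
    suggests the recommender over-exposes already-popular items,
    one commonly discussed source of provider-side unfairness."""
    # Flatten all users' recommendation lists into one pool of items.
    rec_items = [item for recs in recommendations.values() for item in recs]
    avg_rec_pop = sum(catalog_popularity[i] for i in rec_items) / len(rec_items)
    avg_cat_pop = sum(catalog_popularity.values()) / len(catalog_popularity)
    return avg_rec_pop / avg_cat_pop

# Toy data: item -> number of historical interactions.
catalog = {"a": 100, "b": 10, "c": 5, "d": 1}
recs = {"u1": ["a", "b"], "u2": ["a", "c"]}
print(round(popularity_lift(recs, catalog), 2))  # → 1.85
```

In an interactive visualization, a value like this could be recomputed live as the user filters by item category or user group, which is the kind of query-and-inspect loop the abstract envisions.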

Cited By

  • (2024) Counterfactual Explanations and Algorithmic Recourses for Machine Learning: A Review. ACM Computing Surveys 56, 12 (2024), 1–42. https://doi.org/10.1145/3677119. Online publication date: 9 July 2024.
  • (2024) Ethics-based AI auditing. Information and Management 61, 5 (2024). https://doi.org/10.1016/j.im.2024.103969. Online publication date: 1 July 2024.
  • (2023) Towards adaptive and transparent tourism recommendations: A survey. Expert Systems (2023). https://doi.org/10.1111/exsy.13400. Online publication date: 18 July 2023.

      Published In

      UMAP '21: Adjunct Proceedings of the 29th ACM Conference on User Modeling, Adaptation and Personalization
      June 2021
      431 pages
      ISBN:9781450383677
      DOI:10.1145/3450614

      Publisher

      Association for Computing Machinery

      New York, NY, United States

      Author Tags

      1. Recommender systems
      2. explainability
      3. fairness
      4. interactive visualization

      Qualifiers

      • Short-paper
      • Research
      • Refereed limited

      Conference

      UMAP '21

      Acceptance Rates

      Overall Acceptance Rate 162 of 633 submissions, 26%
