DOI: 10.1145/3397271.3401292

Multi-grouping Robust Fair Ranking

Published: 25 July 2020

Abstract

Rankings are at the core of countless modern applications and thus play a major role in many decision-making scenarios. When such rankings are produced by data-informed, machine learning-based algorithms, potentially harmful biases contained in the data and the algorithms are likely to be reproduced and even exacerbated. This has motivated recent research on fair ranking methods that aim to correct these biases. Current approaches to fair ranking assume that the protected groups, i.e., the partition of the population potentially impacted by the biases, are known. In realistic scenarios, however, this assumption might not hold, as different biases may lead to different partitions into protected groups. Accounting for only one such partition (i.e., grouping) can still leave the ranking unfair with respect to the other possible groupings. In this paper, we therefore study the problem of designing fair ranking algorithms without knowing in advance the groupings that will later be used to assess their fairness. Our approach relies on a carefully chosen set of groupings when deriving the ranked lists, and we empirically investigate which selection strategies are the most effective. We also propose an efficient two-step greedy brute-force method to implement our strategy. As a benchmark for this study, we adopt the dataset and setting of the TREC 2019 Fair Ranking track.
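To make the multi-grouping setting more concrete, the following is a minimal Python sketch of one way such a robust objective could look: per-group exposure (here with a logarithmic position discount, an assumption rather than the paper's definition) is compared with each group's share of relevance, the unfairness of a ranking is taken as the worst case over all candidate groupings, and a naive greedy loop builds the ranking by trading off relevance against this worst-case unfairness. All function names, the discount, and the trade-off parameter alpha are illustrative choices; this is not the two-step greedy brute-force method proposed in the paper.

```python
import math
from typing import Dict, List, Sequence


def position_exposure(rank: int) -> float:
    """Logarithmic position discount (an illustrative assumption)."""
    return 1.0 / math.log2(rank + 1)


def grouping_unfairness(ranking: Sequence[str],
                        relevance: Dict[str, float],
                        grouping: Dict[str, str]) -> float:
    """Largest gap between a group's share of exposure and its share of relevance,
    for one given grouping (partition of the documents into protected groups)."""
    exposure: Dict[str, float] = {}
    rel_mass: Dict[str, float] = {}
    for pos, doc in enumerate(ranking, start=1):
        g = grouping[doc]
        exposure[g] = exposure.get(g, 0.0) + position_exposure(pos)
        rel_mass[g] = rel_mass.get(g, 0.0) + relevance[doc]
    total_exp = sum(exposure.values())
    total_rel = sum(rel_mass.values()) or 1.0
    return max(abs(exposure[g] / total_exp - rel_mass.get(g, 0.0) / total_rel)
               for g in exposure)


def worst_case_unfairness(ranking: Sequence[str],
                          relevance: Dict[str, float],
                          groupings: List[Dict[str, str]]) -> float:
    """Robust objective: the largest unfairness over all candidate groupings."""
    return max(grouping_unfairness(ranking, relevance, g) for g in groupings)


def greedy_robust_ranking(docs: Sequence[str],
                          relevance: Dict[str, float],
                          groupings: List[Dict[str, str]],
                          alpha: float = 0.5) -> List[str]:
    """Naive greedy construction: at each position, pick the document that best
    trades off its relevance against the worst-case unfairness of the partial
    ranking.  Only a sketch of the general idea, not the paper's method."""
    ranking: List[str] = []
    remaining: List[str] = list(docs)
    while remaining:
        best_doc, best_score = None, float("inf")
        for d in remaining:
            candidate = ranking + [d]
            score = (alpha * worst_case_unfairness(candidate, relevance, groupings)
                     - (1 - alpha) * relevance[d])
            if score < best_score:
                best_doc, best_score = d, score
        ranking.append(best_doc)
        remaining.remove(best_doc)
    return ranking


# Toy usage with four documents and two hypothetical groupings.
docs = ["d1", "d2", "d3", "d4"]
relevance = {"d1": 0.9, "d2": 0.7, "d3": 0.4, "d4": 0.2}
by_attribute_a = {"d1": "A", "d2": "B", "d3": "A", "d4": "B"}
by_attribute_b = {"d1": "X", "d2": "X", "d3": "Y", "d4": "Y"}
print(greedy_robust_ranking(docs, relevance, [by_attribute_a, by_attribute_b]))
```

In an actual evaluation, the relevance scores, the exposure model, and the set of candidate groupings would come from the TREC 2019 Fair Ranking track's setting rather than from the toy values above.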

Supplementary Material

MP4 File (3397271.3401292.mp4)

Published In

SIGIR '20: Proceedings of the 43rd International ACM SIGIR Conference on Research and Development in Information Retrieval
July 2020
2548 pages
ISBN:9781450380164
DOI:10.1145/3397271

Publisher

Association for Computing Machinery

New York, NY, United States

Author Tags

  1. fair ranking
  2. grouping robustness
  3. multi-grouping fair ranking

Qualifiers

  • Short-paper

Conference

SIGIR '20

Acceptance Rates

Overall Acceptance Rate 792 of 3,983 submissions, 20%
