DOI: 10.1145/3600211.3604676
Research article

On the Connection between Game-Theoretic Feature Attributions and Counterfactual Explanations

Published: 29 August 2023

Abstract

Explainable Artificial Intelligence (XAI) has received widespread interest in recent years, and two of the most popular types of explanations are feature attributions and counterfactual explanations. These classes of approaches have largely been studied independently, and the few attempts at reconciling them have been primarily empirical. This work establishes a clear theoretical connection between game-theoretic feature attributions, focusing on but not limited to SHAP, and counterfactual explanations. After motivating operative changes to Shapley-value-based feature attributions and to counterfactual explanations, we prove that, under certain conditions, they are in fact equivalent. We then extend the equivalence result to game-theoretic solution concepts beyond Shapley values. Moreover, through the analysis of the conditions of this equivalence, we shed light on the limitations of naively using counterfactual explanations to provide feature importances. Experiments on three datasets quantitatively show the difference in explanations at every stage of the connection between the two approaches and corroborate the theoretical findings.
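To make the two explanation families concrete, here is a minimal illustrative sketch (not the paper's construction): it computes exact Shapley-value feature attributions with an interventional value function and a brute-force nearest counterfactual for the same instance of a toy three-feature logistic model. The model, its weights, and the background sample are all made-up assumptions for illustration only.

```python
# Illustrative sketch only: exact Shapley attributions vs. a nearest counterfactual
# for a toy logistic model. Weights, background data, and the instance are invented.
from itertools import combinations
from math import factorial

import numpy as np

WEIGHTS = np.array([1.5, -2.0, 0.5])  # made-up weights for a three-feature model


def model(x: np.ndarray) -> float:
    """Probability of the positive class for a single instance x."""
    return 1.0 / (1.0 + np.exp(-(x @ WEIGHTS)))


def shapley_attributions(x: np.ndarray, background: np.ndarray) -> np.ndarray:
    """Exact Shapley values with an interventional value function:
    features outside the coalition are drawn from the background sample."""
    n = len(x)
    phi = np.zeros(n)

    def value(coalition: tuple) -> float:
        # Fix coalition features at x, average the model over background rows.
        z = background.copy()
        z[:, list(coalition)] = x[list(coalition)]
        return float(np.mean([model(row) for row in z]))

    for i in range(n):
        others = [j for j in range(n) if j != i]
        for size in range(n):
            for s in combinations(others, size):
                weight = factorial(len(s)) * factorial(n - len(s) - 1) / factorial(n)
                phi[i] += weight * (value(s + (i,)) - value(s))
    return phi


def nearest_counterfactual(x: np.ndarray, candidates: np.ndarray,
                           threshold: float = 0.5) -> np.ndarray:
    """Closest candidate (L1 distance) whose prediction falls on the other side
    of the decision threshold; raises ValueError if no candidate flips the label."""
    flipped = [c for c in candidates
               if (model(c) >= threshold) != (model(x) >= threshold)]
    return min(flipped, key=lambda c: np.sum(np.abs(c - x)))


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    background = rng.normal(size=(200, 3))   # reference data for the value function
    x = np.array([0.2, 1.0, -0.5])           # instance to explain

    print("prediction:", round(model(x), 3))
    print("Shapley attributions:", np.round(shapley_attributions(x, background), 3))
    print("nearest counterfactual:", np.round(nearest_counterfactual(x, background), 3))
```

The attributions distribute the gap between the instance's prediction and the background average across features, while the counterfactual names a concrete nearby point with the opposite prediction; the paper's contribution is a theoretical equivalence between suitably refined versions of these two objects.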

Cited By

  • (2024) Counterfactual Explanation at Will, with Zero Privacy Leakage. Proceedings of the ACM on Management of Data 2(3), 1–29. https://doi.org/10.1145/3654933. Online publication date: 30 May 2024.
  • (2024) Drawing Attributions From Evolved Counterfactuals. Proceedings of the Genetic and Evolutionary Computation Conference Companion, 1582–1589. https://doi.org/10.1145/3638530.3664122. Online publication date: 14 July 2024.
  • (2024) Reasoning With and About Bias. Perspectives on Logics for Data-driven Reasoning, 127–154. https://doi.org/10.1007/978-3-031-77892-6_7. Online publication date: 6 November 2024.

Information & Contributors

Published In

AIES '23: Proceedings of the 2023 AAAI/ACM Conference on AI, Ethics, and Society
August 2023
1026 pages
ISBN:9798400702310
DOI:10.1145/3600211
This work is licensed under a Creative Commons Attribution 4.0 International License.

Publisher

Association for Computing Machinery

New York, NY, United States

Publication History

Published: 29 August 2023

Author Tags

  1. SHAP
  2. Shapley values
  3. XAI
  4. counterfactuals
  5. feature attribution

Qualifiers

  • Research-article
  • Research
  • Refereed limited

Conference

AIES '23
AIES '23: AAAI/ACM Conference on AI, Ethics, and Society
August 8 - 10, 2023
Montréal, QC, Canada

Acceptance Rates

Overall acceptance rate: 61 of 162 submissions, 38%

Bibliometrics & Citations

Article Metrics

  • Downloads (last 12 months): 64
  • Downloads (last 6 weeks): 3
Reflects downloads up to 10 Feb 2025
