Algorithmic Recourse: from Counterfactual Explanations to Interventions

Published: 01 March 2021
DOI: 10.1145/3442188.3445899

Abstract

As machine learning is increasingly used to inform consequential decision-making (e.g., pre-trial bail and loan approval), it becomes important to explain how the system arrived at its decision, and also suggest actions to achieve a favorable decision. Counterfactual explanations ("how the world would have (had) to be different for a desirable outcome to occur") aim to satisfy these criteria. Existing works have primarily focused on designing algorithms to obtain counterfactual explanations for a wide range of settings. However, it has largely been overlooked that ultimately, one of the main objectives is to allow people to act rather than just understand. In layman's terms, counterfactual explanations inform an individual where they need to get to, but not how to get there. In this work, we rely on causal reasoning to caution against the use of counterfactual explanations as a recommendable set of actions for recourse. Instead, we propose a paradigm shift from recourse via nearest counterfactual explanations to recourse through minimal interventions, moving the focus from explanations to interventions.
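The contrast the abstract draws (where to get to vs. how to get there) can be made concrete with a structural causal model: acting on one feature propagates to its causal descendants, which a nearest counterfactual explanation ignores. The following is a minimal sketch, not the authors' code, assuming a hypothetical two-variable linear SCM (X1 := U1, X2 := 0.5 * X1 + U2) and a toy decision rule h(x1, x2) = [x1 + x2 >= 2]; the equations, thresholds, and variable names are invented for illustration only.

    def h(x1, x2):
        """Toy decision rule (hypothetical): favorable iff x1 + x2 >= 2."""
        return x1 + x2 >= 2

    # Factual individual, rejected under h (1.0 + 0.25 < 2).
    x1_f, x2_f = 1.0, 0.25
    u2 = x2_f - 0.5 * x1_f           # abduction: recover the noise term U2

    # (a) Nearest counterfactual explanation: features treated as
    # independently manipulable; raise x2 to 1.0 so that h flips.
    cfe = (x1_f, 1.0)
    print("CFE point:", cfe, "favorable:", h(*cfe))

    # (b) Recourse through a minimal intervention do(X1 := 1.5):
    # act on X1, then predict X2 via the structural equation, so the
    # downstream effect on X2 comes "for free".
    theta = 1.5
    x2_scf = 0.5 * theta + u2        # counterfactual X2 under do(X1 := theta)
    print("Intervention point:", (theta, x2_scf), "favorable:", h(theta, x2_scf))

In this toy setup both routes flip the decision, but the counterfactual explanation implicitly asks the individual to change X2 by 0.75 while holding X1 fixed, whereas the intervention changes only X1 by 0.5 and X2 updates through the structural equation. This is the sense in which counterfactual explanations say where to get to, while minimal interventions say how to get there.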



Published In

FAccT '21: Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency
March 2021, 899 pages
ISBN: 9781450383097
DOI: 10.1145/3442188
Publisher

Association for Computing Machinery, New York, NY, United States


Author Tags

  1. algorithmic recourse
  2. causal inference
  3. consequential recommendations
  4. contrastive explanations
  5. counterfactual explanations
  6. explainable artificial intelligence
  7. minimal interventions

