DOI: 10.1145/3448016.3458455

Explaining Black-Box Algorithms Using Probabilistic Contrastive Counterfactuals

Published: 18 June 2021

Abstract

There has been a recent resurgence of interest in explainable artificial intelligence (XAI) that aims to reduce the opaqueness of AI-based decision-making systems, allowing humans to scrutinize and trust them. Prior work in this context has focused on the attribution of responsibility for an algorithm's decisions to its inputs, wherein responsibility is typically approached as a purely associational concept. In this paper, we propose a principled causality-based approach for explaining black-box decision-making systems that addresses limitations of existing methods in XAI. At the core of our framework lie probabilistic contrastive counterfactuals, a concept that can be traced back to philosophical, cognitive, and social foundations of theories on how humans generate and select explanations. We show how such counterfactuals can quantify the direct and indirect influences of a variable on decisions made by an algorithm, and provide actionable recourse for individuals negatively affected by the algorithm's decision. Unlike prior work, our system, LEWIS: (1) can compute provably effective explanations and recourse at local, global, and contextual levels; (2) is designed to work with users with varying levels of background knowledge of the underlying causal model; and (3) makes no assumptions about the internals of an algorithmic system except for the availability of its input-output data. We empirically evaluate LEWIS on four real-world datasets and show that it generates human-understandable explanations that improve upon state-of-the-art approaches in XAI, including the popular LIME and SHAP. Experiments on synthetic data further demonstrate the correctness of LEWIS's explanations and the scalability of its recourse algorithm.
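To make the counterfactual notion concrete, the two standard causal quantities below, written in Pearl's counterfactual notation, illustrate the kind of probabilistic contrastive statements involved. This is a sketch based on the classical probabilities of necessity and sufficiency, not the paper's exact scoring functions, which are defined in the full text. Here O is the algorithm's decision, o+ and o- denote the favorable and unfavorable outcomes, and x, x' are two values of an input attribute X:

\[
\mathrm{PN} = \Pr\big(O_{X \leftarrow x'} = o^{-} \mid X = x,\; O = o^{+}\big), \qquad
\mathrm{PS} = \Pr\big(O_{X \leftarrow x} = o^{+} \mid X = x',\; O = o^{-}\big)
\]

PN is the probability that an individual with X = x who received the favorable decision would have received the unfavorable one had X been x' (how necessary X = x was for the outcome); PS is the probability that an individual with X = x' who received the unfavorable decision would have received the favorable one had X been x (how sufficient changing X is), which is what makes such counterfactuals natural candidates for recourse.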

Supplementary Material

MP4 File (3448016.3458455.mp4)
Algorithmic systems are increasingly used to aid in decision-making, with potentially significant consequences for individuals, institutions, and society. This has led to much interest in explainable artificial intelligence (XAI), which aims to reduce the opaqueness of AI-based decision-making systems, allowing humans to better understand and trust these systems. Much work in this context has focused on the attribution of responsibility for an algorithm's decisions to its inputs; responsibility is typically approached as a purely associational concept. In this paper, we propose a principled causality-based approach for explaining black-box decision-making systems that unifies existing methods in XAI and addresses their limitations. At the core of our framework lie probabilistic contrastive counterfactuals, a concept that can be traced back to philosophical, cognitive, and social foundations of theories on how humans generate and select explanations. We show that such counterfactuals can serve as a theoretical foundation both for quantifying the direct and indirect influences of a variable on decisions made by an algorithm and for generating actionable recourse for individuals negatively affected by the algorithm's decision. Unlike previous proposals, our system (1) can compute provably correct explanations and recourse at local, global, and contextual levels, (2) is designed to work with users with any level of knowledge of the underlying causal model, and (3) makes no assumptions about the internals of an algorithmic system except for the availability of its input-output data. We evaluate our proposal on real and synthetic data and demonstrate improvement over state-of-the-art approaches in XAI, including the popular LIME, SHAP, and actionable recourse methods.
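The "black-box" requirement in the summary above amounts to needing only input-output access to the decision-making system, i.e., a prediction function that can be queried on (possibly intervened) inputs. The Python sketch below is a hypothetical illustration of that interface; the synthetic data, the random-forest model, and the explain_via_counterfactuals placeholder are assumptions for illustration only, not LEWIS's actual API.

# Minimal sketch of black-box access: the explainer only ever calls predict().
# The dataset, the model choice, and explain_via_counterfactuals() are
# hypothetical placeholders for illustration; they are not LEWIS's actual API.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 4))                    # four input attributes
y = (X[:, 0] + 0.5 * X[:, 2] > 0).astype(int)     # synthetic favorable/unfavorable label

model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

def black_box(inputs):
    # Input-output access only: no weights, gradients, or internals are exposed.
    return model.predict(inputs)

# A counterfactual-based explainer would repeatedly probe black_box() on inputs
# drawn from a causal model over the attributes, e.g. (hypothetical call):
# explanation = explain_via_counterfactuals(black_box, data=X, causal_graph=G)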





Published In

SIGMOD '21: Proceedings of the 2021 International Conference on Management of Data
June 2021
2969 pages
ISBN: 9781450383431
DOI: 10.1145/3448016
Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than ACM must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from [email protected]

Publisher

Association for Computing Machinery

New York, NY, United States

Publication History

Published: 18 June 2021


Author Tags

  1. causality
  2. explainable AI
  3. machine learning
  4. recourse

Qualifiers

  • Research-article

Conference

SIGMOD/PODS '21

Acceptance Rates

Overall acceptance rate: 785 of 4,003 submissions (20%)

Bibliometrics & Citations

Article Metrics

  • Downloads (last 12 months): 325
  • Downloads (last 6 weeks): 44
Reflects downloads up to 13 Nov 2024

Cited By
  • (2024)Interpretability of Causal Discovery in Tracking Deterioration in a Highly Dynamic ProcessSensors10.3390/s2412372824:12(3728)Online publication date: 8-Jun-2024
  • (2024)Using ML to Predict User Satisfaction with ICT Technology for Educational Institution AdministrationInformation10.3390/info1504021815:4(218)Online publication date: 12-Apr-2024
  • (2024)Counterfactual Explanations and Algorithmic Recourses for Machine Learning: A ReviewACM Computing Surveys10.1145/367711956:12(1-42)Online publication date: 9-Jul-2024
  • (2024)OTClean: Data Cleaning for Conditional Independence Violations using Optimal TransportProceedings of the ACM on Management of Data10.1145/36549632:3(1-26)Online publication date: 30-May-2024
  • (2024)Counterfactual Explanation at Will, with Zero Privacy LeakageProceedings of the ACM on Management of Data10.1145/36549332:3(1-29)Online publication date: 30-May-2024
  • (2024)Drawing Attributions From Evolved CounterfactualsProceedings of the Genetic and Evolutionary Computation Conference Companion10.1145/3638530.3664122(1582-1589)Online publication date: 14-Jul-2024
  • (2024)Actionable Recourse for Automated Decisions: Examining the Effects of Counterfactual Explanation Type and Presentation on Lay User UnderstandingProceedings of the 2024 ACM Conference on Fairness, Accountability, and Transparency10.1145/3630106.3658997(1682-1700)Online publication date: 3-Jun-2024
  • (2024)A Critical Survey on Fairness Benefits of Explainable AIProceedings of the 2024 ACM Conference on Fairness, Accountability, and Transparency10.1145/3630106.3658990(1579-1595)Online publication date: 3-Jun-2024
  • (2024)Counterfactual Explanations of Black-box Machine Learning Models using Causal Discovery with Applications to Credit Rating2024 International Joint Conference on Neural Networks (IJCNN)10.1109/IJCNN60899.2024.10650130(1-8)Online publication date: 30-Jun-2024
  • (2024)Are Objective Explanatory Evaluation Metrics Trustworthy? An Adversarial Analysis2024 IEEE International Conference on Image Processing (ICIP)10.1109/ICIP51287.2024.10647779(3938-3944)Online publication date: 27-Oct-2024
