
Research article | Free access | Just Accepted

Categorical and Continuous Features in Counterfactual Explanations of AI Systems

Online AM: 20 June 2024

Abstract

Recently, eXplainable AI (XAI) research has focused on the use of counterfactual explanations to address interpretability, algorithmic recourse, and bias in AI system decision-making. The developers of these algorithms claim they meet user requirements in generating counterfactual explanations with “plausible”, “actionable” or “causally important” features. However, few of these claims have been tested in controlled psychological studies. Hence, we know very little about which aspects of counterfactual explanations really help users understand the decisions of AI systems. Nor do we know whether counterfactual explanations are an advance on more traditional causal explanations that have a longer history in AI (e.g., in expert systems). Accordingly, we carried out three user studies to (i) test a fundamental distinction in feature-types, between categorical and continuous features, and (ii) compare the relative effectiveness of counterfactual and causal explanations. The studies used a simulated, automated decision-making app that determined safe driving limits after drinking alcohol, based on predicted blood alcohol content, where users’ responses were measured objectively (using predictive accuracy) and subjectively (using satisfaction and trust judgments). Study 1 (N = 127) showed that users understand explanations referring to categorical features more readily than those referring to continuous features. It also discovered a dissociation between objective and subjective measures: counterfactual explanations elicited higher accuracy than no-explanation controls but elicited no more accuracy than causal explanations, yet counterfactual explanations elicited greater satisfaction and trust than causal explanations. In Study 2 (N = 136) we transformed the continuous features of presented items to be categorical (i.e., binary) and found that these converted features led to highly accurate responding. Study 3 (N = 211) explicitly compared matched items involving either mixed features (i.e., a mix of categorical and continuous features) or categorical features (i.e., categorical and categorically-transformed continuous features), and found that users were more accurate when categorically-transformed features were used instead of continuous ones. It also replicated the dissociation between objective and subjective effects of explanations. The findings delineate important boundary conditions for current and future counterfactual explanation methods in XAI.
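
To make the study setup concrete, the following is a minimal, illustrative sketch of the kind of decision-and-explanation pipeline the abstract describes: a Widmark-style blood alcohol content (BAC) estimate over a mix of continuous features (units drunk, body weight, hours since drinking) and categorical features (sex, empty stomach), plus a counterfactual explanation that either flips a categorical feature or minimally reduces a continuous one. This is not the authors' app or stimuli: the feature names, constants, legal limit, and explanation wording are assumptions chosen only to illustrate the categorical/continuous contrast.

```python
# Minimal sketch of a BAC-based decision app with counterfactual explanations.
# NOT the authors' app: all constants, feature names, and thresholds below are
# illustrative assumptions.

from dataclasses import dataclass, replace

LEGAL_LIMIT = 0.05          # assumed legal BAC limit (% by volume)
GRAMS_PER_UNIT = 10.0       # assumed grams of alcohol per drink unit
ELIMINATION_RATE = 0.015    # assumed BAC eliminated per hour (%/h)

@dataclass(frozen=True)
class Drinker:
    units: float            # continuous feature: drink units consumed
    weight_kg: float        # continuous feature: body weight
    hours_since: float      # continuous feature: hours since drinking started
    sex: str                # categorical feature: "male" or "female"
    empty_stomach: bool     # categorical (binary) feature

def predicted_bac(d: Drinker) -> float:
    """Widmark-style estimate of blood alcohol content (%)."""
    r = 0.68 if d.sex == "male" else 0.55          # Widmark distribution factor
    absorbed = 1.0 if d.empty_stomach else 0.9     # crude absorption adjustment (assumption)
    grams = d.units * GRAMS_PER_UNIT * absorbed
    bac = grams / (d.weight_kg * 1000.0 * r) * 100.0 - ELIMINATION_RATE * d.hours_since
    return max(bac, 0.0)

def over_limit(d: Drinker) -> bool:
    return predicted_bac(d) > LEGAL_LIMIT

def counterfactual(d: Drinker) -> str:
    """Explain an 'over the limit' decision: prefer flipping a categorical feature,
    otherwise find the smallest reduction of a continuous feature that changes the outcome."""
    if d.empty_stomach and not over_limit(replace(d, empty_stomach=False)):
        # categorical counterfactual: flip one binary feature
        return "If you had not drunk on an empty stomach, you would have been under the limit."
    # continuous counterfactual: step the units down until the decision flips
    units = d.units
    while units > 0 and over_limit(replace(d, units=units)):
        units -= 0.5
    units = max(units, 0.0)
    return (f"If you had drunk {units:.1f} units instead of {d.units:.1f}, "
            "you would have been under the limit.")

if __name__ == "__main__":
    d = Drinker(units=4, weight_kg=70, hours_since=1, sex="female", empty_stomach=True)
    print(f"Predicted BAC: {predicted_bac(d):.3f}%  over limit: {over_limit(d)}")
    if over_limit(d):
        print(counterfactual(d))
```

In this sketch the categorical flip is tried first, loosely reflecting Study 1's finding that explanations over categorical features are easier to understand; an actual counterfactual method would search over all features and minimise the size of the change.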


Cited By

  • (2024) POTDAI: A Tool to Evaluate the Perceived Operational Trust Degree in Artificial Intelligence Systems. IEEE Access 12, 133097–133109. https://doi.org/10.1109/ACCESS.2024.3454061


Information

Published In

ACM Transactions on Interactive Intelligent Systems (Just Accepted)
EISSN: 2160-6463
Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than the author(s) must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from [email protected].

Publisher

Association for Computing Machinery

New York, NY, United States

Publication History

Online AM: 20 June 2024
Accepted: 10 May 2024
Revised: 05 March 2024
Received: 21 October 2023


Author Tags

  1. XAI
  2. explanation
  3. counterfactual
  4. user study

Qualifiers

  • Research-article


Article Metrics

  • Downloads (Last 12 months): 121
  • Downloads (Last 6 weeks): 27

Reflects downloads up to 18 Nov 2024
