A Cognitive Load Theory (CLT) Analysis of Machine Learning Explainability, Transparency, Interpretability, and Shared Interpretability
Figure 1. High cognitive load from ML can increase potential cognitive effort.
Figure 2. Example of one atom connected to constants: full view.
Figure 3. Example of one atom connected to constants: enlarged partial view.
Figure 4. Example of AML Description Language for gait analysis.
Figure 5. Several AML chains of inputs–atoms–outputs: full view.
Figure 6. Several AML chains of inputs–atoms–outputs: enlarged partial view.
Figure 7. Tree diagram representation of results of AML-enabled gait analysis.
Figure 8. Illustrative visual comparison of three explanation methods, (a) SHAP, (b) LIME, (c) CIU, which shows that CIU entails less split-attention effect and redundancy effect than SHAP and LIME.
Figure 9. Importance of applying CLT.
Abstract
1. Introduction
2. Cognitive Load
2.1. Cognitive Load
2.2. Explainability, Transparency, and Interpretability
3. CLT Assessment of a Machine Learning That Could Have High Potential for ETISI
4. Broad Relevance of CLT to Machine Learning ETISI
5. Conclusions
Author Contributions
Funding
Data Availability Statement
Acknowledgments
Conflicts of Interest
References
© 2024 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
Share and Cite
Fox, S.; Rey, V.F. A Cognitive Load Theory (CLT) Analysis of Machine Learning Explainability, Transparency, Interpretability, and Shared Interpretability. Mach. Learn. Knowl. Extr. 2024, 6, 1494-1509. https://doi.org/10.3390/make6030071