Abstract
Owing to recent developments and improvements, methods from the field of machine learning (ML) are increasingly adopted in various domains, including historical research. However, state-of-the-art ML models are usually black boxes that lack transparency and interpretability. Explainable AI (XAI) methods therefore aim to make black-box models more transparent and thereby foster user trust. Despite numerous opportunities to apply XAI in digital history, it has not been adopted widely, and most XAI methods used to generate historical insights are static rather than user-centric. In this paper, we propose an architecture for applying XAI in digital history that can serve various tasks such as optical character recognition (OCR), text embeddings, and ink detection. Instead of providing one-shot explanations, the proposed architecture produces interactive explanations that incrementally co-construct the user's understanding of the AI system's output. Because many tasks in digital history research lack ground truth, verifying model outputs is difficult for historical researchers. We therefore propose a user-centric framework that enhances user trust in the system, which is also crucial for verifying the outputs of a black-box model.
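The incremental co-construction of understanding described above can be illustrated with a minimal sketch of an explanation dialogue loop. All names here (`ExplanationState`, `explain`, `refine`, the `"more"` feedback signal) are hypothetical and stand in for the components of the proposed architecture; the sketch only shows the shape of the interaction, not an actual XAI method.

```python
# Hypothetical sketch: instead of a one-shot explanation, the system keeps a
# dialogue state and refines its explanation in response to user follow-ups.

from dataclasses import dataclass, field


@dataclass
class ExplanationState:
    """Accumulated common ground between the user and the XAI system."""
    focus: str                              # e.g. the OCR token being explained
    detail_level: int = 1                   # grows as the user asks follow-ups
    history: list = field(default_factory=list)


def explain(state: ExplanationState) -> str:
    """Produce an explanation at the current level of detail (stub)."""
    return f"Explanation of '{state.focus}' at detail level {state.detail_level}"


def refine(state: ExplanationState, feedback: str) -> ExplanationState:
    """Update the dialogue state from user feedback ('more' asks for depth)."""
    state.history.append(feedback)
    if feedback == "more":
        state.detail_level += 1
    return state


# One round of interaction: initial explanation, follow-up, refined explanation.
state = ExplanationState(focus="transcribed word 'Herculaneum'")
print(explain(state))
state = refine(state, "more")
print(explain(state))
```

In a full system, `explain` would wrap a concrete XAI method (e.g. a saliency map for OCR) and `refine` would interpret richer user feedback; the point of the loop is that the explanation is negotiated over several turns rather than delivered once.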
Notes
1. Media Monitoring of the past, https://impresso-project.ch/.
2. See for example letters in the State Archives of Belgium, https://arch.be.
3. Vesuvius Challenge: https://scrollprize.org/.
4. E.g. the Vesuvius Challenge: www.scrollprize.org.
Acknowledgements
This work is supported by the EXPECTATION project (CHIST-ERA-19-XAI-005) and by Deep Data Science of Digital History (D4H).
Copyright information
© 2024 The Author(s), under exclusive license to Springer Nature Switzerland AG
Cite this paper
Albrecht, R., Hulstijn, J., Tchappi, I., Najjar, A. (2024). Towards Interactive and Social Explainable Artificial Intelligence for Digital History. In: Calvaresi, D., et al. Explainable and Transparent AI and Multi-Agent Systems. EXTRAAMAS 2024. Lecture Notes in Computer Science(), vol 14847. Springer, Cham. https://doi.org/10.1007/978-3-031-70074-3_11
Publisher Name: Springer, Cham
Print ISBN: 978-3-031-70073-6
Online ISBN: 978-3-031-70074-3
eBook Packages: Computer Science (R0)