DOI: 10.1145/3529190.3535693

Towards FAIR Explainable AI: a standardized ontology for mapping XAI solutions to use cases, explanations, and AI systems

Published: 11 July 2022

Abstract

Several useful taxonomies have been published that survey the eXplainable AI (XAI) research field. However, these taxonomies typically do not show the relation between XAI solutions and several use case aspects, such as the explanation goal or the task context. In order to better connect the field of XAI research with concrete use cases and user needs, we designed the ASCENT (Ai System use Case Explanation oNTology) framework: a new ontology and corresponding metadata standard with three complementary modules, one for aspects of AI systems, another for use case aspects, and a third for explanation properties. The description of an XAI solution in this framework includes whether the solution has a positive, negative, inconclusive, or unresearched relation with use case elements. Descriptions in ASCENT thus emphasize the (user) evaluation of XAI solutions, both to support finding validated practices for application in industry and to help identify research gaps. Describing XAI solutions according to the proposed common metadata standard is an important step towards the FAIR (Findable, Accessible, Interoperable, Reusable) usage of XAI solutions.
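
To make the abstract concrete: the sketch below illustrates, in Python, what an ASCENT-style metadata record with the three modules and the four relation types could look like. This is a minimal sketch under our own assumptions; all class names, fields, and example values (XAISolutionRecord, AISystemAspects, the "LIME" entry, and so on) are hypothetical illustrations, not the ontology terms defined in the paper.

```python
# Hypothetical sketch of an ASCENT-style record; all names and fields are
# illustrative assumptions, not the paper's actual ontology vocabulary.
from dataclasses import dataclass, field
from enum import Enum


class Relation(Enum):
    """The four relation types named in the abstract: how an XAI solution
    relates to a use case element according to available evaluations."""
    POSITIVE = "positive"
    NEGATIVE = "negative"
    INCONCLUSIVE = "inconclusive"
    UNRESEARCHED = "unresearched"


@dataclass
class AISystemAspects:
    """Module 1: aspects of the AI system (illustrative fields)."""
    model_family: str  # e.g. "any black-box classifier"
    task: str          # e.g. "classification"


@dataclass
class UseCaseAspects:
    """Module 2: aspects of the use case (illustrative fields)."""
    explanation_goal: str  # e.g. "build appropriate trust"
    task_context: str      # e.g. "clinical decision support"


@dataclass
class ExplanationProperties:
    """Module 3: properties of the explanation (illustrative fields)."""
    explanation_type: str  # e.g. "feature importance", "counterfactual"
    modality: str          # e.g. "visual", "textual"


@dataclass
class XAISolutionRecord:
    """One ASCENT-style description of an XAI solution, recording per
    use case element how well-evaluated the mapping is."""
    name: str
    ai_system: AISystemAspects
    use_case: UseCaseAspects
    explanation: ExplanationProperties
    relations: dict[str, Relation] = field(default_factory=dict)


# Example: a (hypothetical) record for a feature-importance method.
record = XAISolutionRecord(
    name="LIME",
    ai_system=AISystemAspects("any black-box classifier", "classification"),
    use_case=UseCaseAspects("build appropriate trust", "decision support"),
    explanation=ExplanationProperties("feature importance", "visual"),
    relations={
        "explanation_goal": Relation.POSITIVE,   # user studies exist
        "task_context": Relation.UNRESEARCHED,   # no evaluation found
    },
)
print(record.name, {k: v.value for k, v in record.relations.items()})
```

A record of this shape makes the evaluation status of each mapping explicit and machine-readable, which is what supports the findability and reusability goals named in the abstract.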


Cited By

  • (2024) "You Can either Blame Technology or Blame a Person..." --- A Conceptual Model of Users' AI-Risk Perception as a Tool for HCI. Proceedings of the ACM on Human-Computer Interaction 8(CSCW2), 1–25. https://doi.org/10.1145/3686996. Online publication date: 8-Nov-2024.
  • (2024) Unpacking Human-AI interactions: From Interaction Primitives to a Design Space. ACM Transactions on Interactive Intelligent Systems 14(3), 1–51. https://doi.org/10.1145/3664522. Online publication date: 8-Jun-2024.
  • (2024) Optimal Neighborhood Contexts in Explainable AI: An Explanandum-Based Evaluation. IEEE Open Journal of the Computer Society 5, 181–194. https://doi.org/10.1109/OJCS.2024.3389781. Online publication date: 2024.
  • (2024) Ethical and preventive legal technology. AI and Ethics. https://doi.org/10.1007/s43681-023-00413-2. Online publication date: 18-Mar-2024.
  • (2024) Digitale Verantwortung [Digital Responsibility]. Verbraucherinformatik, 203–260. https://doi.org/10.1007/978-3-662-68706-2_5. Online publication date: 25-Mar-2024.



    Published In

    PETRA '22: Proceedings of the 15th International Conference on PErvasive Technologies Related to Assistive Environments
    June 2022
    704 pages
    ISBN: 9781450396318
    DOI: 10.1145/3529190

    Publisher

    Association for Computing Machinery

    New York, NY, United States



    Author Tags

    1. ASCENT
    2. FAIR
    3. XAI ontology
    4. user-centered

    Qualifiers

    • Research-article
    • Research
    • Refereed limited

    Conference

    PETRA '22

