DOI: 10.1145/3450337.3483487
Work in Progress

Explainability via Interactivity? Supporting Nonexperts’ Sensemaking of pre-trained CNN by Interacting with Their Daily Surroundings

Published: 15 October 2021

Abstract

Current research on Explainable AI (XAI) targets mainly expert users (data scientists or AI developers). However, there is a growing argument for making AI more understandable to nonexperts, who are expected to leverage AI techniques but have limited knowledge about AI. We present a mobile application that supports nonexperts in interactively making sense of Convolutional Neural Networks (CNNs); it allows users to play with a pre-trained CNN by taking pictures of objects in their surroundings. We use a recent XAI technique, the Class Activation Map (CAM), to intuitively visualize the model’s decisions by highlighting the image regions that contribute most to a given prediction. Deployed in a university course, this playful learning tool helped design students gain a vivid understanding of the capabilities and limitations of pre-trained CNNs in real-world environments. We report concrete examples of students’ playful explorations to characterize their sensemaking processes at different depths of thought.
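To make the abstract’s description of Class Activation Maps concrete: CAM (Zhou et al., 2016) weights the last convolutional feature maps by the classification weights of the predicted class and sums them into a heatmap over the input image. The sketch below is only an illustration, not the paper’s implementation (the tool described above runs a pre-trained MobileNet on a mobile device); it assumes a Keras ResNet50, whose global-average-pooling plus dense head matches the classic CAM formulation, and the layer names and image path are assumptions made for the example.

# Minimal CAM sketch (illustrative; assumes Keras ResNet50, not the paper's mobile MobileNet setup).
import numpy as np
import tensorflow as tf
from tensorflow.keras.applications.resnet50 import ResNet50, preprocess_input

model = ResNet50(weights="imagenet")

# Expose both the last convolutional feature maps and the class scores.
cam_model = tf.keras.Model(
    inputs=model.input,
    outputs=[model.get_layer("conv5_block3_out").output, model.output],
)

def class_activation_map(img_path):
    """Return a [0, 1] heatmap of the regions that drove the top-1 prediction."""
    img = tf.keras.utils.load_img(img_path, target_size=(224, 224))
    x = preprocess_input(np.expand_dims(tf.keras.utils.img_to_array(img), 0))

    conv_maps, preds = cam_model.predict(x, verbose=0)  # shapes (1, 7, 7, 2048), (1, 1000)
    class_idx = int(np.argmax(preds[0]))

    # Classic CAM: weight each feature map by the dense-layer weight of the
    # predicted class, then sum over channels.
    w = model.get_layer("predictions").get_weights()[0]  # shape (2048, 1000)
    heatmap = conv_maps[0] @ w[:, class_idx]             # shape (7, 7)

    heatmap = np.maximum(heatmap, 0)                     # keep positive evidence only
    return heatmap / (heatmap.max() + 1e-8)

# Usage: resize class_activation_map("photo.jpg") to the input size and overlay it
# on the photo to see which regions led to the predicted label.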

Supplementary Material

VTT File (p274-Video.vtt)
MP4 File (p274-Video.mp4)
Supplemental video and captions


Cited By

  • (2024) User-Centered Evaluation of Explainable Artificial Intelligence (XAI): A Systematic Literature Review. Human Behavior and Emerging Technologies 2024:1. https://doi.org/10.1155/2024/4628855. Online publication date: 15-Jul-2024.
  • (2024) Predicting and Presenting Task Difficulty for Crowdsourcing Food Rescue Platforms. Proceedings of the ACM Web Conference 2024, 4686-4696. https://doi.org/10.1145/3589334.3648155. Online publication date: 13-May-2024.



      Published In

      CHI PLAY '21: Extended Abstracts of the 2021 Annual Symposium on Computer-Human Interaction in Play
      October 2021
      414 pages
      ISBN:9781450383561
      DOI:10.1145/3450337
      Permission to make digital or hard copies of part or all of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for third-party components of this work must be honored. For all other uses, contact the Owner/Author.


      Publisher

      Association for Computing Machinery

      New York, NY, United States

      Publication History

      Published: 15 October 2021


      Author Tags

      1. Class Activation Map
      2. Convolutional Neural Networks
      3. Explainable AI
      4. Mobile Application

      Qualifiers

      • Work in progress
      • Research
      • Refereed limited

      Conference

      CHI PLAY '21

      Acceptance Rates

      Overall Acceptance Rate 421 of 1,386 submissions, 30%


Bibliometrics

• Downloads (last 12 months): 34
• Downloads (last 6 weeks): 2

Reflects downloads up to 19 Nov 2024

