DOI: 10.1145/3411764.3445188
Research Article · Public Access

Expanding Explainability: Towards Social Transparency in AI systems

Published: 07 May 2021

Abstract

As AI-powered systems increasingly mediate consequential decision-making, their explainability is critical for end-users to take informed and accountable actions. Explanations in human-human interactions are socially-situated. AI systems are often socio-organizationally embedded. However, Explainable AI (XAI) approaches have been predominantly algorithm-centered. We take a developmental step towards socially-situated XAI by introducing and exploring Social Transparency (ST), a sociotechnically informed perspective that incorporates the socio-organizational context into explaining AI-mediated decision-making. To explore ST conceptually, we conducted interviews with 29 AI users and practitioners grounded in a speculative design scenario. We suggested constitutive design elements of ST and developed a conceptual framework to unpack ST’s effect and implications at the technical, decision-making, and organizational level. The framework showcases how ST can potentially calibrate trust in AI, improve decision-making, facilitate organizational collective actions, and cultivate holistic explainability. Our work contributes to the discourse of Human-Centered XAI by expanding the design space of XAI.

Published In
    CHI '21: Proceedings of the 2021 CHI Conference on Human Factors in Computing Systems
May 2021
10,862 pages
ISBN: 9781450380966
DOI: 10.1145/3411764

Publisher

Association for Computing Machinery, New York, NY, United States

    Badges

    • Honorable Mention

    Author Tags

    1. Artificial Intelligence
    2. Explainable AI
    3. explanations
    4. human-AI interaction
    5. social transparency
    6. socio-organizational context
    7. sociotechnical

    Qualifiers

    • Research-article
    • Research
    • Refereed limited

Conference

CHI '21

    Acceptance Rates

Overall Acceptance Rate: 6,199 of 26,314 submissions, 24%
