
Improving Human-AI Collaboration With Descriptions of AI Behavior

Published: 16 April 2023

Abstract

People work with AI systems to improve their decision making, but often under- or over-rely on AI predictions and perform worse than they would have unassisted. To help people appropriately rely on AI aids, we propose showing them behavior descriptions, details of how AI systems perform on subgroups of instances. We tested the efficacy of behavior descriptions through user studies with 225 participants in three distinct domains: fake review detection, satellite image classification, and bird classification. We found that behavior descriptions can increase human-AI accuracy through two mechanisms: helping people identify AI failures and increasing people's reliance on the AI when it is more accurate. These findings highlight the importance of people's mental models in human-AI collaboration and show that informing people of high-level AI behaviors can significantly improve AI-assisted decision making.
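To make the idea of a behavior description concrete, here is a minimal sketch of the kind of subgroup performance summary the abstract describes: per-subgroup accuracy of a model on a labeled validation set. The function name, field names, and toy data below are hypothetical illustrations, not taken from the paper or its study materials.

```python
from collections import defaultdict

def subgroup_accuracies(examples):
    """Summarize model behavior as per-subgroup accuracy.

    `examples` is an iterable of dicts with (hypothetical) fields:
      - "subgroup": a human-readable slice name
      - "label": the ground-truth class
      - "prediction": the model's predicted class
    Returns {subgroup: (accuracy, n_examples)}.
    """
    correct = defaultdict(int)
    total = defaultdict(int)
    for ex in examples:
        total[ex["subgroup"]] += 1
        if ex["prediction"] == ex["label"]:
            correct[ex["subgroup"]] += 1
    return {g: (correct[g] / total[g], total[g]) for g in total}

if __name__ == "__main__":
    # Toy validation set for a fake-review detector (made-up data).
    examples = [
        {"subgroup": "mentions hotel name", "label": "fake", "prediction": "fake"},
        {"subgroup": "mentions hotel name", "label": "real", "prediction": "fake"},
        {"subgroup": "first-person narrative", "label": "fake", "prediction": "fake"},
        {"subgroup": "first-person narrative", "label": "real", "prediction": "real"},
    ]
    for group, (acc, n) in subgroup_accuracies(examples).items():
        print(f"{group}: {acc:.0%} accuracy over {n} examples")
```

Each subgroup's accuracy, paired with a short natural-language note, is one way to produce the high-level behavior summaries that the study showed to participants alongside AI predictions.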



Information

Published In

Proceedings of the ACM on Human-Computer Interaction, Volume 7, Issue CSCW1 (CSCW)
April 2023
3836 pages
EISSN: 2573-0142
DOI: 10.1145/3593053
This work is licensed under a Creative Commons Attribution 4.0 International License.

Publisher

Association for Computing Machinery

New York, NY, United States

Publication History

Published: 16 April 2023
Published in PACMHCI Volume 7, Issue CSCW1


Author Tags

  1. human-AI collaboration
  2. machine learning

Qualifiers

  • Research-article

Bibliometrics & Citations

Article Metrics

  • Downloads (Last 12 months): 2,011
  • Downloads (Last 6 weeks): 325
Reflects downloads up to 18 Nov 2024

Cited By

  • (2024) The Era of Artificial Intelligence Deception: Unraveling the Complexities of False Realities and Emerging Threats of Misinformation. Information, 15(6), 299. https://doi.org/10.3390/info15060299. Online publication date: 23-May-2024.
  • (2024) AI Chat GPT technologies in health sciences: A study with mathematical statistic correlation and probabilistic mission induction. Journal of Education Technology in Health Sciences, 11(2), 55-60. https://doi.org/10.18231/j.jeths.2024.012. Online publication date: 15-Jul-2024.
  • (2024) Effect of Explanation Conceptualisations on Trust in AI-assisted Credibility Assessment. Proceedings of the ACM on Human-Computer Interaction, 8(CSCW2), 1-31. https://doi.org/10.1145/3686922. Online publication date: 8-Nov-2024.
  • (2024) The Algorithm and the Org Chart: How Algorithms Can Conflict with Organizational Structures. Proceedings of the ACM on Human-Computer Interaction, 8(CSCW2), 1-31. https://doi.org/10.1145/3686903. Online publication date: 8-Nov-2024.
  • (2024) When Should I Lead or Follow: Understanding Initiative Levels in Human-AI Collaborative Gameplay. Proceedings of the 2024 ACM Designing Interactive Systems Conference, 2037-2056. https://doi.org/10.1145/3643834.3661583. Online publication date: 1-Jul-2024.
  • (2024) The Impact of Imperfect XAI on Human-AI Decision-Making. Proceedings of the ACM on Human-Computer Interaction, 8(CSCW1), 1-39. https://doi.org/10.1145/3641022. Online publication date: 26-Apr-2024.
  • (2024) Transparency in the Wild: Navigating Transparency in a Deployed AI System to Broaden Need-Finding Approaches. Proceedings of the 2024 ACM Conference on Fairness, Accountability, and Transparency, 1494-1514. https://doi.org/10.1145/3630106.3658985. Online publication date: 3-Jun-2024.
  • (2024) "Are You Really Sure?" Understanding the Effects of Human Self-Confidence Calibration in AI-Assisted Decision Making. Proceedings of the 2024 CHI Conference on Human Factors in Computing Systems, 1-20. https://doi.org/10.1145/3613904.3642671. Online publication date: 11-May-2024.
  • (2024) When combinations of humans and AI are useful: A systematic review and meta-analysis. Nature Human Behaviour. https://doi.org/10.1038/s41562-024-02024-1. Online publication date: 28-Oct-2024.
  • (2024) A risk-based model for human-artificial intelligence conflict resolution in process systems. Digital Chemical Engineering, 13, 100194. https://doi.org/10.1016/j.dche.2024.100194. Online publication date: Dec-2024.
