DOI: 10.1145/3544548.3581340 · CHI Conference Proceedings · Research Article

One AI Does Not Fit All: A Cluster Analysis of the Laypeople’s Perception of AI Roles

Published: 19 April 2023

Abstract

Artificial intelligence (AI) applications have become an integral part of our society. However, studying AI as a single entity and studying idiosyncratic applications separately both have limitations. Thus, this study used computational methods to categorize ten AI roles prevalent in everyday life and compared laypeople's perceptions of them using online survey data (N = 727). Based on theoretical factors related to the fundamental nature of AI, principal component analysis revealed two dimensions along which AI can be categorized: human involvement and AI autonomy. K-means clustering identified four AI role clusters: tools (low on both dimensions), servants (high human involvement, low AI autonomy), assistants (low human involvement, high AI autonomy), and mediators (high on both dimensions). Multivariate analyses of covariance revealed that people assessed AI mediators most favorably and AI tools least favorably. Demographics also influenced laypeople's assessments of AI. The implications of these results are discussed.
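The two-step pipeline the abstract describes (dimensionality reduction, then clustering of roles in the reduced space) can be sketched as follows. This is a minimal illustration with synthetic data, not the paper's actual instrument: the number of rating items and the random ratings are assumptions, and only the shape of the analysis (PCA to two components, k-means with k = 4) follows the abstract.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)

# Hypothetical input: mean ratings of 10 AI roles on 6 theoretical
# factors (the items themselves are placeholders, not the survey's).
ratings = rng.normal(size=(10, 6))

# Step 1: PCA reduces the factor ratings to two components, which the
# paper interprets as human involvement and AI autonomy.
scores = PCA(n_components=2).fit_transform(ratings)

# Step 2: k-means on the component scores groups the roles into four
# clusters (the paper's tools, servants, assistants, and mediators).
labels = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(scores)

print(labels)  # one cluster id per AI role
```

Running PCA before k-means is a common pairing: clustering in the two-component space makes the resulting groups directly interpretable in terms of the named dimensions.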

Supplementary Material

MP4 File (3544548.3581340-talk-video.mp4)
Pre-recorded Video Presentation

References

[1]
Hussein A Abbass. 2019. Social integration of artificial intelligence: functions, automation allocation logic and human-autonomy trust. Cognitive Computation 11, 2 (2019), 159–171.
[2]
Gavin Abercrombie, Amanda Cercas Curry, Mugdha Pandya, and Verena Rieser. 2021. Alexa, Google, Siri: What are Your Pronouns? Gender and Anthropomorphism in the Design and Perception of Conversational Assistants. In Proceedings of the 3rd Workshop on Gender Bias in Natural Language Processing. Association for Computational Linguistics, Online, 24–33. https://doi.org/10.18653/v1/2021.gebnlp-1.4
[3]
Joison Adam. 2003. Understanding the Psychology of Internet Behavior.
[4]
Ritu Agarwal and Jayesh Prasad. 1999. Are individual differences germane to the acceptance of new information technologies?Decision sciences 30, 2 (1999), 361–391.
[5]
Icek Ajzen. 2018. Consumer attitudes and behavior. In Handbook of consumer psychology. Routledge, Oxfordshire, England, UK, 529–552.
[6]
Shahriar Akter, Grace McCarthy, Shahriar Sajib, Katina Michael, Yogesh K Dwivedi, John D’Ambra, and Kathy Ning Shen. 2021. Algorithmic bias in data-driven innovation in the age of AI., 102387 pages.
[7]
Dwain D Allan, Andrew J Vonasch, and Christoph Bartneck. 2022. The doors of social robot perception: The influence of implicit self-theories. International Journal of Social Robotics 14, 1 (2022), 127–140.
[8]
Neil Anderson, Colin Lankshear, Carolyn Timms, and Lyn Courtney. 2008. ‘Because it’s boring, irrelevant and I don’t like computers’: Why high school girls avoid professionally-oriented ICT subjects. Computers & Education 50, 4 (2008), 1304–1318.
[9]
Markus Appel, Birgit Lugrin, Mayla Kühle, and Corinna Heindl. 2021. The emotional robotic storyteller: On the influence of affect congruency on narrative transportation, robot perception, and persuasion. Computers in Human Behavior 120 (2021), 106749.
[10]
Theo Araujo, Natali Helberger, Sanne Kruikemeier, and Claes H De Vreese. 2020. In AI we trust? Perceptions about automated decision-making by artificial intelligence. AI & SOCIETY 35, 3 (2020), 611–623.
[11]
Mohammad Babamiri, Rashid Heidarimoghadam, Fakhradin Ghasemi, Leili Tapak, and Alireza Mortezapour. 2022. Insights into the relationship between usability and willingness to use a robot in the future workplaces: Studying the mediating role of trust and the moderating roles of age and STARA. PloS one 17, 6 (2022), e0268942.
[12]
Richard P Bagozzi and Robert E Burnkrant. 1979. Attitude organization and the attitude–behavior relationship.Journal of personality and social psychology 37, 6(1979), 913.
[13]
Richard P Bagozzi and Robert E Burnkrant. 1985. Attitude organization and the attitude-behavior relation: A reply to Dillon and Kumar. Journal of Personality and Social Psychology 49, 1(1985), 47–57.
[14]
Chang Seok Bang, Jae Jun Lee, and Gwang Ho Baik. 2020. Artificial intelligence for the prediction of Helicobacter pylori infection in endoscopic images: systematic review and meta-analysis of diagnostic test accuracy. Journal of medical Internet research 22, 9 (2020), e21983.
[15]
Jaime Banks. 2019. A perceived moral agency scale: development and validation of a metric for humans and social machines. Computers in Human Behavior 90 (2019), 363–371.
[16]
Russell Beale and Chris Creed. 2009. Affective interaction: How emotional agents affect users. International journal of human-computer studies 67, 9 (2009), 755–776.
[17]
Sandra Bedaf, Patrizia Marti, Farshid Amirabdollahian, and Luc de Witte. 2018. A multi-perspective evaluation of a service robot for seniors: the voice of different stakeholders. Disability and rehabilitation: assistive technology 13, 6(2018), 592–599.
[18]
Sylvia Beyer. 2008. Gender differences and intra-gender differences amongst management information systems students. Journal of Information Systems Education 19, 3 (2008), 301.
[19]
Yochanan E Bigman and Kurt Gray. 2018. People are averse to machines making moral decisions. Cognition 181(2018), 21–34.
[20]
Mriganka Biswas, Marta Romeo, Angelo Cangelosi, and Ray B Jones. 2020. Are older people any different from younger people in the way they want to interact with robots? Scenario based survey. Journal on Multimodal User Interfaces 14, 1 (2020), 61–72.
[21]
Selmer Bringsjord, Konstantine Arkoudas, and Paul Bello. 2006. Toward a general logicist methodology for engineering ethically correct robots. IEEE Intelligent Systems 21, 4 (2006), 38–44.
[22]
Kieran Browne. 2022. Who (or what) is an AI Artist?Leonardo 55, 2 (2022), 130–134.
[23]
Bengisu Cagiltay, Hui-Ru Ho, Joseph E Michaelis, and Bilge Mutlu. 2020. Investigating Family Perceptions and Design Preferences for an In-Home Robot. In Proceedings of the Interaction Design and Children Conference (London, United Kingdom) (IDC ’20). Association for Computing Machinery, New York, NY, USA, 229–242. https://doi.org/10.1145/3392063.3394411
[24]
Matt Carlson. 2019. News algorithms, photojournalism and the assumption of mechanical objectivity in journalism. Digital Journalism 7, 8 (2019), 1117–1133.
[25]
Giuseppe Casalicchio, Christoph Molnar, and Bernd Bischl. 2019. Visualizing the Feature Importance for Black Box Models. In Machine Learning and Knowledge Discovery in Databases, Michele Berlingerio, Francesco Bonchi, Thomas Gärtner, Neil Hurley, and Georgiana Ifrim (Eds.). Springer International Publishing, Cham, 655–670.
[26]
Simone Casiraghim, James Peter Burgess, and Kristoffer Lidén. 2021. Social acceptance and border control technologies. In Border Control and New Technologies: Addressing Integrated Impact Assessment. ASP Academic and Scientific Publishers, Brussels, 99––115.
[27]
Ka Yin Chau, Michael Huen Sum Lam, Man Lai Cheung, Ejoe Kar Ho Tso, Stuart W Flint, David R Broom, Gary Tse, and Ka Yiu Lee. 2019. Smart technology for healthcare: Exploring the antecedents of adoption intention of healthcare wearable technology. Health psychology research 7, 1 (2019), 8099.
[28]
Cheng Chen, Carlina DiRusso, Hyun Yang, Rousi Shao, Michael Krieger, and S. Shyam Sundar. 2019. ALEXA, NETFLIX, AND SIRI: User Perceptions of AI-Driven Technologies. In AEJMC 2019 Conference. Association for Education in Journalism and Mass Communication, Toronto, Canada, 0.
[29]
Mengyuan Chen, Morris Siu-Yung, Ching Sing Chai, Chunping Zheng, and Moon-Young Park. 2021. A Pilot Study of Students’ Behavioral Intention to Use AI for Language Learning in Higher Education. In 2021 International Symposium on Educational Technology (ISET). IEEE, Tokai, Nagoya, Japan, 182–184.
[30]
Tsai-Wei Chen and S Shyam Sundar. 2018. This app would like to use your current location to better serve you: Importance of user assent and system transparency in personalized mobile services. In Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems. ACM, Montrèal, Canada, 1–13.
[31]
Zhi Cheng, Angelika Dimoka, and Paul A Pavlou. 2016. Context may be King, but generalizability is the Emperor!Journal of Information Technology 31, 3 (2016), 257–264.
[32]
Fanny Cheung and Wing-Fai Leung. 2021. Virtual influencer as celebrity endosers. University of South Florida M3 Center Publishing 5, 2021(2021), 44.
[33]
Sung-En Chien, Li Chu, Hsing-Hao Lee, Chien-Chun Yang, Fo-Hui Lin, Pei-Ling Yang, Te-Mei Wang, and Su-Ling Yeh. 2019. Age difference in perceived ease of use, curiosity, and implicit negative attitude toward robots. ACM Transactions on Human-Robot Interaction (THRI) 8, 2 (2019), 1–19.
[34]
Michael Chmielewski and Sarah C Kucker. 2020. An MTurk crisis? Shifts in data quality and the impact on study results. Social Psychological and Personality Science 11, 4 (2020), 464–473.
[35]
Hyesun Choung, Prabu David, and Arun Ross. 2022. Trust in AI and Its Role in the Acceptance of AI Technologies. International Journal of Human–Computer Interaction 0, 0(2022), 1–13. https://doi.org/10.1080/10447318.2022.2050543
[36]
Lee J Cronbach. 1951. Coefficient alpha and the internal structure of tests. psychometrika 16, 3 (1951), 297–334.
[37]
Kerstin Dautenhahn, Sarah Woods, Christina Kaouri, Michael L Walters, Kheng Lee Koay, and Iain Werry. 2005. What is a robot companion-friend, assistant or butler?. In 2005 IEEE/RSJ international conference on intelligent robots and systems. IEEE, Alberta, Canada, 1192–1197.
[38]
Fred D Davis, Richard P Bagozzi, and Paul R Warshaw. 1989. User acceptance of computer technology: A comparison of two theoretical models. Management science 35, 8 (1989), 982–1003.
[39]
Tiago de Carvalho Delgado Marques, Eline Noels, Marlies Wakkee, Andreea Udrea, and Tamar Nijsten. 2019. Development of smartphone apps for skin cancer risk assessment: Progress and promise. Journal of Medical Internet Research 21, 7 (2019), e13376.
[40]
Nick Deschacht. 2021. The digital revolution and the labour economics of automation: A review. ROBONOMICS: The Journal of the Automated Economy 1 (2021), 8–8.
[41]
Chris Ding and Xiaofeng He. 2004. K-means clustering via principal component analysis. In Proceedings of the twenty-first international conference on Machine learning. ACM, Alberta, Canada, 29.
[42]
Hyo Jin Do and Wai-Tat Fu. 2016. Empathic virual assistant for healthcare information with positive emotional experience. In 2016 IEEE International Conference on Healthcare Informatics (ICHI). IEEE, Chicago, IL, USA, 318–318.
[43]
Nagat Drawel, Hongyang Qu, Jamal Bentahar, and Elhadi Shakshuki. 2020. Specification and automatic verification of trust-based multi-agent systems. Future Generation Computer Systems 107 (2020), 1047–1060.
[44]
Louise Dulude. 2002. Automated telephone answering systems and aging. Behaviour & Information Technology 21, 3 (2002), 171–184.
[45]
Alan Durndell, Zsolt Haag, and Heather Laithwaite. 2000. Computer self efficacy and gender: a cross cultural study of Scotland and Romania. Personality and individual differences 28, 6 (2000), 1037–1044.
[46]
Karen T D’Alonzo. 2004. The Johnson-Neyman procedure as an alternative to ANCOVA. Western journal of nursing research 26, 7 (2004), 804–812.
[47]
Alice H Eagly and Shelly Chaiken. 1993. The psychology of attitudes.Harcourt brace Jovanovich college publishers, San Diego, CA, USA.
[48]
Motahhare Eslami, Kristen Vaccaro, Min Kyung Lee, Amit Elazari Bar On, Eric Gilbert, and Karrie Karahalios. 2019. User attitudes towards algorithmic opacity and transparency in online reviewing platforms. In Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems. ACM, Glasgow, Scotland, UK, 1–14.
[49]
Michael Eysenck. 2012. Attention and arousal: Cognition and performance. Springer Science & Business Media, Berlin, Germany.
[50]
Ethan Fast and Eric Horvitz. 2017. Long-term trends in the public perception of artificial intelligence. In Proceedings of the AAAI conference on artificial intelligence, Vol. 31. AAAI, San Francisco, CA, USA, 0.
[51]
Andrew J Flanagin and Miriam J Metzger. 2000. Perceptions of Internet information credibility. Journalism & mass communication quarterly 77, 3 (2000), 515–540.
[52]
Peter Fleming. 2019. Robots and organization studies: Why robots might not want to steal your job. Organization Studies 40, 1 (2019), 23–38.
[53]
Brian J Fogg, Jonathan Marshall, Othman Laraki, Alex Osipovich, Chris Varma, Nicholas Fang, Jyoti Paul, Akshay Rangnekar, John Shon, Preeti Swani, 2001. What makes web sites credible? A report on a large quantitative study. In Proceedings of the SIGCHI conference on Human factors in computing systems. ACM, Seattle, WA, USA, 61–68.
[54]
Brian J Fogg and Hsiang Tseng. 1999. The elements of computer credibility. In Proceedings of the SIGCHI conference on Human Factors in Computing Systems. ACM, Pittsburgh, PA, USA, 80–87.
[55]
Asbjørn Følstad, Cecilie Bertinussen Nordheim, and Cato Alexander Bjørkli. 2018. What makes users trust a chatbot for customer service? An exploratory interview study. In International conference on internet science. Springer, Berlin, Germany, 194–208.
[56]
Catarina Fontes, Ellen Hohma, Caitlin C Corrigan, and Christoph Lütge. 2022. AI-powered public surveillance systems: why we (might) need them and how we want them. Technology in Society 71(2022), 102137.
[57]
Changzeng Fu, Yuichiro Yoshikawa, Takamasa Iio, and Hiroshi Ishiguro. 2021. Sharing experiences to help a robot present its mind and sociability. International Journal of Social Robotics 13, 2 (2021), 341–352.
[58]
Janet Fulk, Joseph Schmitz, Charles W Steinfield, 1990. A social influence model of technology use. Organizations and communication technology 117 (1990), 140.
[59]
A Gailums 2010. Office automation in rural area origins, development, and current situation. In Applied information and communication technologies. Proceedings of the International Scientific Conference. Atlantis Press, Jelgava, Latvia, 116–123.
[60]
Esperanza García-Sancho, JM Salguero, and P Fernández-Berrocal. 2014. Relationship between emotional intelligence and aggression: A systematic review. Aggression and violent behavior 19, 5 (2014), 584–591.
[61]
David Gefen, Elena Karahanna, and Detmar W Straub. 2003. Trust and TAM in online shopping: An integrated model. MIS Quarterly 27, 1 (2003), 51–90. https://doi.org/10.5555/2017181.2017185
[62]
Alessandra Schirin Gessl, Stephan Schlögl, and Nils Mevenkamp. 2019. On the perceptions and acceptance of artificially intelligent robotics and the psychology of the future elderly. Behaviour & Information Technology 38, 11 (2019), 1068–1087.
[63]
Ella Glikson and Anita Williams Woolley. 2020. Human trust in artificial intelligence: Review of empirical research. Academy of Management Annals 14, 2 (2020), 627–660.
[64]
Carlos A Gomez-Uribe and Neil Hunt. 2016. The netflix recommender system: Algorithms, business value, and innovation. ACM Transactions on Management Information Systems (TMIS) 6, 4(2016), 13.
[65]
Margot J Goot, Laura Hafkamp, and Zoë Dankfort. 2020. Customer service chatbots: A qualitative interview study into the communication journey of customers. In International Workshop on Chatbot Research and Design. Springer, Virtual Event, 190–204.
[66]
Roberto Gozalo-Brizuela and Eduardo C Garrido-Merchan. 2023. ChatGPT is not all you need. A State of the Art Review of large Generative AI models. arXiv preprint arXiv:2301.04655(2023).
[67]
Andreas Graefe and Nina Bohlken. 2020. Automated journalism: A meta-analysis of readers’ perceptions of human-written in comparison to automated news. Media and Communication 8, 3 (2020), 50–59.
[68]
Heather M Gray, Kurt Gray, and Daniel M Wegner. 2007. Dimensions of mind perception. science 315, 5812 (2007), 619–619.
[69]
Kurt Gray, Adrianna C Jenkins, Andrea S Heberlein, and Daniel M Wegner. 2011. Distortions of mind perception in psychopathology. Proceedings of the National Academy of Sciences 108, 2 (2011), 477–479.
[70]
Kurt Gray and Daniel M Wegner. 2012. Feeling robots and human zombies: Mind perception and the uncanny valley. Cognition 125, 1 (2012), 125–130.
[71]
Jeffrey T Hancock, Mor Naaman, and Karen Levy. 2020. AI-mediated communication: Definition, research agenda, and ethical considerations. Journal of Computer-Mediated Communication 25, 1 (2020), 89–100.
[72]
Rajibul Hasan, Riad Shams, and Mizan Rahman. 2021. Consumer trust and perceived risk for voice-controlled artificial intelligence: The case of Siri. Journal of Business Research 131 (2021), 591–597.
[73]
David J Hauser, Aaron J Moss, Cheskie Rosenzweig, Shalom N Jaffe, Jonathan Robinson, and Leib Litman. 2022. Evaluating CloudResearch’s Approved Group as a solution for problematic data quality on MTurk. Behavior Research Methods 0, 0 (2022), 1–12.
[74]
Denise Hebesberger, Tobias Koertner, Christoph Gisinger, and Jürgen Pripfl. 2017. A long-term autonomous robot at a care hospital: A mixed methods study on social acceptance and experiences of staff and older adults. International Journal of Social Robotics 9, 3 (2017), 417–429.
[75]
Bert Heinrichs. 2022. Discrimination in the age of artificial intelligence. AI & society 37, 1 (2022), 143–154.
[76]
Charlie Hewitt, Ioannis Politis, Theocharis Amanatidis, and Advait Sarkar. 2019. Assessing public perception of self-driving cars: The autonomous vehicle acceptance model. In Proceedings of the 24th international conference on intelligent user interfaces. ACM, Marina del Ray, CA, USA, 518–527.
[77]
Ting Hin Ho, Dewi Tojib, and Yelena Tsarenko. 2020. Human staff vs. service robot vs. fellow customer: Does it matter who helps your customer following a service failure incident?International Journal of Hospitality Management 87 (2020), 102501.
[78]
Katsuhiro Honda, Akira Notsu, and Hidetomo Ichihashi. 2009. Fuzzy PCA-guided robust k-means clustering. IEEE Transactions on Fuzzy Systems 18, 1 (2009), 67–79.
[79]
Joo-Wha Hong, Yunwen Wang, and Paulina Lanz. 2020. Why is artificial intelligence blamed more? Analysis of faulting artificial intelligence for self-driving car accidents in experimental settings. International Journal of Human–Computer Interaction 36, 18(2020), 1768–1774.
[80]
Weiyin Hong, Frank KY Chan, James YL Thong, Lewis C Chasalow, and Gurpreet Dhillon. 2014. A framework and guidelines for context-specific theorizing in information systems research. Information systems research 25, 1 (2014), 111–136.
[81]
Adrian A Hopgood. 2005. The state of artificial intelligence. Advances in computers 65(2005), 1–75.
[82]
Bradley E Huitema. 1980. The Analysis of Covariance and Alternatives. John Wiley & Sons, Hoboken, NJ, USA.
[83]
Magid Igbaria and Saroj Parasuraman. 1989. A path analytic study of individual characteristics, computer anxiety and attitudes toward microcomputers. Journal of Management 15, 3 (1989), 373–388.
[84]
Ryan Blake Jackson and Tom Williams. 2019. On perceived social and moral agency in natural language capable robots. In 2019 HRI workshop on the dark side of human-robot interaction. Jackson, RB, and Williams. ACM, Daegu, South Korea, 401–410.
[85]
Maurice Jakesch, Megan French, Xiao Ma, Jeffrey T Hancock, and Mor Naaman. 2019. AI-mediated communication: How the perception that profile text was written by AI affects trustworthiness. In Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems. ACM, Glasgow, Scotland, UK.
[86]
Chenyan Jia. 2020. Chinese automated journalism: A comparison between expectations and perceived quality. International Journal of Communication 14 (2020), 22.
[87]
Ian T Jolliffe and Jorge Cadima. 2016. Principal component analysis: a review and recent developments. Philosophical Transactions of the Royal Society A: Mathematical, Physical and Engineering Sciences 374, 2065 (2016), 20150202.
[88]
Peter H Kahn, Nathan G Freier, Batya Friedman, Rachel L Severson, and Erika N Feldman. 2004. Social and moral relationships with robotic others?. In RO-MAN 2004. 13th IEEE International Workshop on Robot and Human Interactive Communication (IEEE Catalog No. 04TH8759). IEEE, Kurashiki, Okayama, Japan, 545–550.
[89]
Masao Kanamori, Mizue Suzuki, Hajime Oshiro, Misayo Tanaka, Tomoko Inoguchi, Hidenari Takasugi, Yoshiko Saito, and Tomoji Yokoyama. 2003. Pilot study on improvement of quality of life among elderly using a pet-type robot. In Proceedings 2003 IEEE international symposium on computational intelligence in robotics and automation. computational intelligence in robotics and automation for the new millennium (Cat. No. 03EX694), Vol. 1. IEEE, Kobe, Japan, 107–112.
[90]
Alexandra D Kaplan, Theresa T Kessler, J Christopher Brill, and PA Hancock. 2021. Trust in artificial intelligence: Meta-analytic findings. Human Factors 0, 0 (2021), 00187208211013988.
[91]
Davinder Kaur, Suleyman Uslu, Kaley J Rittichier, and Arjan Durresi. 2022. Trustworthy artificial intelligence: a review. ACM Computing Surveys (CSUR) 55, 2 (2022), 1–38.
[92]
Boyoung Kim, Ruchen Wen, Qin Zhu, Tom Williams, and Elizabeth Phillips. 2021. Robots as moral advisors: The effects of deontological, virtue, and confucian role ethics on encouraging honest behavior. In Companion of the 2021 ACM/IEEE International Conference on Human-Robot Interaction. ACM/IEEE, Virtual Event, 10–18.
[93]
Jihyun Kim, Kelly Merrill Jr, and Chad Collins. 2021. AI as a friend or assistant: The mediating role of perceived usefulness in social AI vs. functional AI. Telematics and Informatics 64 (2021), 101694.
[94]
Jihyun Kim, Kelly Merrill Jr, Kun Xu, and Stephanie Kelly. 2022. Perceived credibility of an AI instructor in online education: The role of social presence and voice features. Computers in Human Behavior 136 (2022), 107383.
[95]
Taenyun Kim and Hayeon Song. 2021. How should intelligent agents apologize to restore trust? Interaction effects between anthropomorphism and apology attribution on trust repair. Telematics and Informatics 61 (2021), 101595.
[96]
Taenyun Kim and Hayeon Song. 2022. Communicating the Limitations of AI: The Effect of Message Framing and Ownership on Trust in Artificial Intelligence. International Journal of Human–Computer Interaction 0, 0(2022), 1–11.
[97]
Nils Köbis, Jean-François Bonnefon, and Iyad Rahwan. 2021. Bad machines corrupt good morals. Nature Human Behaviour 5, 6 (2021), 679–685.
[98]
Nils Köbis and Luca D Mossink. 2021. Artificial intelligence versus Maya Angelou: Experimental evidence that people cannot differentiate AI-generated from human-written poetry. Computers in human behavior 114 (2021), 106553.
[99]
Joost N Kok, Egbert J Boers, Walter A Kosters, Peter Van der Putten, and Mannes Poel. 2009. Artificial intelligence: definition, trends, techniques, and cases. Artificial intelligence 1 (2009), 270–299.
[100]
Enikő Kubinyi, Ádám Miklósi, Frédéric Kaplan, Márta Gácsi, József Topál, and Vilmos Csányi. 2004. Social behaviour of dogs encountering AIBO, an animal-like robot in a neutral and in a feeding situation. Behavioural processes 65, 3 (2004), 231–239.
[101]
Lynette Kvasny, KD Joshi, and Eileen Trauth. 2011. The influence of self-efficacy, gender stereotypes and the importance of it skills on college students’ intentions to pursue IT careers. In Proceedings of the 2011 iConference. ACM, Seattle, WA, USA, 508–513.
[102]
Markus Langer, Cornelius J König, Diana Ruth-Pelipez Sanchez, and Sören Samadi. 2020. Highly automated interviews: Applicant reactions and the organizational context. Journal of Managerial Psychology 35, 4 (2020), 301–314.
[103]
John D Lee and Katrina A See. 2004. Trust in automation: Designing for appropriate reliance. Human factors 46, 1 (2004), 50–80.
[104]
Jae-Gil Lee, Ki Joon Kim, Sangwon Lee, and Dong-Hee Shin. 2015. Can autonomous vehicles be safe and trustworthy? Effects of appearance and autonomy of unmanned driving systems. International Journal of Human-Computer Interaction 31, 10(2015), 682–691.
[105]
Min Kyung Lee. 2018. Understanding perception of algorithmic decisions: Fairness, trust, and emotion in response to algorithmic management. Big Data & Society 5, 1 (2018), 2053951718756684.
[106]
Min Kyung Lee and Su Baykal. 2017. Algorithmic mediation in group decisions: Fairness perceptions of algorithmically mediated vs. discussion-based social division. In Proceedings of the 2017 ACM conference on computer supported cooperative work and social computing. ACM, Portland, OR, USA, 1035–1048.
[107]
Shane Legg, Marcus Hutter, 2007. A collection of definitions of intelligence. Frontiers in Artificial Intelligence and applications 157 (2007), 17.
[108]
Xin Li, Traci J Hess, and Joseph S Valacich. 2008. Why do we trust new technology? A study of initial trust formation with organizational information systems. The Journal of Strategic Information Systems 17, 1 (2008), 39–71.
[109]
Yuli Liang, Seung-Hee Lee, and Jane E Workman. 2020. Implementation of artificial intelligence in fashion: Are consumers ready?Clothing and Textiles Research Journal 38, 1 (2020), 3–18.
[110]
Bingjie Liu and S Shyam Sundar. 2018. Should machines express sympathy and empathy? Experiments with a health advice chatbot. Cyberpsychology, Behavior, and Social Networking 21, 10(2018), 625–636.
[111]
Jennifer Marie Logg. 2017. Theory of machine: When do people rely on algorithms?Harvard Business School working paper series# 17-086 0, 0(2017).
[112]
Wing-Yue Geoffrey Louie, Derek McColl, and Goldie Nejat. 2014. Acceptance and attitudes toward a human-like socially assistive robot by older adults. Assistive Technology 26, 3 (2014), 140–150.
[113]
Vinh Nhat Lu, Jochen Wirtz, Werner H Kunz, Stefanie Paluch, Thorsten Gruber, Antje Martins, and Paul G Patterson. 2020. Service robots, customers and service employees: what can we learn from the academic literature and where are the gaps?Journal of Service Theory and Practice 30, 3 (2020), 361–391.
[114]
Alison Lui and George William Lamb. 2018. Artificial intelligence and augmented intelligence collaboration: regaining trust and confidence in the financial sector. Information & Communication Technology Law 27, 3 (2018), 267–283. https://doi.org/10.1080/13600834.2018.1488659
[115]
Ingo Lütkebohle, Frank Hegel, Simon Schulz, Matthias Hackel, Britta Wrede, Sven Wachsmuth, and Gerhard Sagerer. 2010. The Bielefeld anthropomorphic robot head “Flobi”. In 2010 IEEE International Conference on Robotics and Automation. IEEE, Singapore, 3384–3391.
[116]
Xiao Ma, Jeffrey T Hancock, Kenneth Lim Mingjie, and Mor Naaman. 2017. Self-disclosure and perceived trustworthiness of Airbnb host profiles. In Proceedings of the 2017 ACM conference on computer supported cooperative work and social computing. ACM, Portland, OR, USA, 2397–2409.
[117]
Carol Ann Maher, Courtney Rose Davis, Rachel Grace Curtis, Camille Elizabeth Short, and Karen Joy Murphy. 2020. A physical activity and diet program delivered by artificially intelligent virtual health coach: proof-of-concept study. JMIR mHealth and uHealth 8, 7 (2020), e17558.
[118]
Bertram Malle. 2019. How many dimensions of mind perception really are there?. In CogSci. Montreal, Canada, 2268–2274.
[119]
James Manyika, Michael Chui, Jacques Bughin, Richard Dobbs, Peter Bisson, and Alex Marrs. 2013. Disruptive technologies: Advances that will transform life, business, and the global economy. Vol. 180. McKinsey Global Institute, San Francisco, CA, USA.
[120]
Jane Margolis and Allan Fisher. 2002. Unlocking the clubhouse: Women in computing. MIT press, Cambridge, MA, USA.
[121]
Kirsten Martin. 2019. Ethical implications and accountability of algorithms. Journal of business ethics 160, 4 (2019), 835–850.
[122]
Thomas H Martin. 1983. Office automation technology and functions: an overview. Journal of the American Society for Information Science 34, 3(1983), 210–214.
[123]
David M Marx and Jasmin S Roman. 2002. Female role models: Protecting women’s math test performance. Personality and Social Psychology Bulletin 28, 9 (2002), 1183–1193.
[124]
John D Mayer. 1997. What is emotional intelligence? P Salovey, DJ Sluyter,(Eds.), Emotional Development and Emotional Intelligence. Basic Books, New York 3(1997), 34.
[125]
Kate K Mays, Yiming Lei, Rebecca Giovanetti, and James E Katz. 2021. AI as a boss? A national US survey of predispositions governing comfort with expanded AI roles in society. AI & SOCIETY 37(2021), 1–14.
[126]
Niall McCarthy. 2019. America’s Most & Least Trusted Professions [Infographic]. https://www.forbes.com/sites/niallmccarthy/2019/01/11/americas-most-least-trusted-professions-infographic/?sh=57ff925b7e94
[127]
James C McCroskey and Jason J Teven. 1999. Goodwill: A reexamination of the construct and its measurement. Communications Monographs 66, 1 (1999), 90–103.
[128]
James C McCroskey and Thomas J Young. 1981. Ethos and credibility: The construct and its measurement after three decades. Communication Studies 32, 1 (1981), 24–34.
[129]
William J McGuire. 1985. Attitudes and attitude change. Random House, New York, NY, USA. 233–346 pages.
[130]
Scott Mayer McKinney, Marcin Sieniek, Varun Godbole, Jonathan Godwin, Natasha Antropova, Hutan Ashrafian, Trevor Back, Mary Chesus, Greg C Corrado, Ara Darzi, 2020. International evaluation of an AI system for breast cancer screening. Nature 577, 7788 (2020), 89–94.
[131]
Peter F Merenda. 1997. A guide to the proper use of factor analysis in the conduct and reporting of research: Pitfalls to avoid. Measurement and Evaluation in counseling and Development 30, 3(1997), 156–164.
[132]
RC Meyer, JH Davis, and F David Schoorman. 1995. An integrative model of organizational trust. Academy of management review 20, 3 (1995), 709–734.
[133]
D Douglas Miller and Eric W Brown. 2018. Artificial intelligence in medical practice: the question to the answer?The American journal of medicine 131, 2 (2018), 129–133.
[134]
Delyana Ivanova Miller, France Aube, Vincent Talbot, Michèle Gagnon, and Claude Messier. 2014. Older people’s attitudes toward interactive voice response systems. Telemedicine and e-Health 20, 2 (2014), 152–156.
[135]
Delyana Ivanova Miller, Halina Bruce, Michele Gagnon, Vincent Talbot, and Claude Messier. 2011. Improving older adults’ experience with interactive voice response systems. Telemedicine and e-Health 17, 6 (2011), 452–455.
[136]
Agata Mirowska and Laura Mesnet. 2022. Preferring the devil you know: Potential applicant reactions to artificial intelligence evaluation of interviews. Human Resource Management Journal 32, 2 (2022), 364–383.
[137]
Maria D Molina and S Shyam Sundar. 2022. When AI moderates online content: effects of human collaboration and interactive transparency on user trust. Journal of Computer-Mediated Communication 27, 4 (2022), zmac010.
[138]
Michael Montemerlo, Joelle Pineau, Nicholas Roy, Sebastian Thrun, and Vandi Verma. 2002. Experiences with a mobile robotic guide for the elderly. AAAI/IAAI 2002(2002), 587–592.
[139]
Ramaravind K Mothilal, Amit Sharma, and Chenhao Tan. 2020. Explaining machine learning classifiers throuagh diverse counterfactual explanations. In Proceedings of the 2020 conference on fairness, accountability, and transparency. ACM, Barcelona, Spain, 607–617.
[140]
Alissar Nasser, Denis Hamad, and Chaiban Nasr. 2006. K-means clustering algorithm in projected spaces. In 2006 9th International Conference on Information Fusion. IEEE, Florence, Italy, 1–6.
[141]
Nabi Nazari, Muhammad Salman Shabbir, and Roy Setiawan. 2021. Application of Artificial Intelligence powered digital writing assistant in higher education: randomized controlled trial. Heliyon 7, 5 (2021), e07014.
[142]
Verena Nitsch and Michael Popp. 2014. Emotions in robot psychology. Biological cybernetics 108, 5 (2014), 621–629.
[143]
Tatsuya Nomura, Takayuki Kanda, and Tomohiro Suzuki. 2006. Experimental investigation into influence of negative attitudes toward robots on human–robot interaction. Ai & Society 20, 2 (2006), 138–150.
[144]
Tatsuya Nomura, Tomohiro Suzuki, Takayuki Kanda, and Kensuke Kato. 2006. Measurement of negative attitudes toward robots. Interaction Studies 7, 3 (2006), 437–454.
[145]
Jum C Nunnally. 1994. Psychometric Theory (third ed.). McGraw-Hill, New York, NY.
[146]
Roobina Ohanian. 1990. Construction and Validation of a Scale to Measure Celebrity Endorsers’ Perceived Expertise, Trustworthiness, and Attractiveness. Journal of Advertising 19, 3 (1990), 39–52. https://doi.org/10.1080/00913367.1990.10673191
[147]
Tim O’Reilly. 2017. WTF?: What’s the Future and Why It’s Up to Us. Random House, New York, NY, USA.
[148]
Maike Paetzel-Prüsmann, Giulia Perugia, and Ginevra Castellano. 2021. The influence of robot personality on the development of uncanny feelings. Computers in Human Behavior 120 (2021), 106756.
[149]
Ellie Pavlick and Joel Tetreault. 2016. An empirical analysis of formality in online communication. Transactions of the Association for Computational Linguistics 4 (2016), 61–74.
[150]
Corina Pelau, Dan-Cristian Dabija, and Irina Ene. 2021. What makes an AI device human-like? The role of interaction quality, empathy and perceived psychological anthropomorphic characteristics in the acceptance of artificial intelligence in the service industry. Computers in Human Behavior 122 (2021), 106855.
[151]
Dorian Peters, Rafael A Calvo, and Richard M Ryan. 2018. Designing for motivation, engagement and wellbeing in digital experience. Frontiers in psychology 9 (2018), 797.
[152]
Robert A Peterson. 2000. A meta-analysis of variance accounted for and factor loadings in exploratory factor analysis. Marketing letters 11, 3 (2000), 261–275.
[153]
Florian Pethig and Julia Kroenung. 2022. Biased humans, (un)biased algorithms? Journal of Business Ethics 0 (2022), 1–16.
[154]
Erwin Prassler, Mario E Munich, Paolo Pirjanian, and Kazuhiro Kosuge. 2016. Domestic robotics. In Springer handbook of robotics. Springer, Berlin, Germany, 1729–1758.
[155]
Sarvapali D Ramchurn, Feng Wu, Wenchao Jiang, Joel E Fischer, Steve Reece, Stephen Roberts, Tom Rodden, Chris Greenhalgh, and Nicholas R Jennings. 2016. Human–agent collaboration for disaster response. Autonomous Agents and Multi-Agent Systems 30, 1 (2016), 82–111.
[156]
Werner Rammert. 2008. Where the Action is: Distributed Agency between Humans, Machines, and Programs. In Paradoxes of Interactivity. transcript Verlag, Bielefeld, Germany, 62–91. https://doi.org/10.14361/9783839408421-004
[157]
PL Patrick Rau, Ye Li, and Dingjun Li. 2009. Effects of communication style and culture on ability to accept recommendations from robots. Computers in Human Behavior 25, 2 (2009), 587–595.
[158]
Céline Ray, Francesco Mondada, and Roland Siegwart. 2008. What do people expect from robots?. In 2008 IEEE/RSJ International Conference on Intelligent Robots and Systems. IEEE, Nice, France, 3816–3821.
[159]
Ulrich Reiser, Theo Jacobs, Georg Arbeiter, Christopher Parlitz, and Kerstin Dautenhahn. 2013. Care-O-bot® 3–Vision of a robot butler. In Your virtual butler. Springer, Berlin, Germany, 97–116.
[160]
Astrid M Rosenthal-von der Pütten, Nicole C Krämer, and Jonathan Herrmann. 2018. The effects of humanlike and robot-specific affective nonverbal behavior on perception, emotion, and behavior. International Journal of Social Robotics 10, 5 (2018), 569–582.
[161]
Denise M Rousseau and Yitzhak Fried. 2001. Location, location, location: Contextualizing organizational research. Journal of Organizational Behavior (2001), 1–13.
[162]
Peter AM Ruijten, Jacques MB Terken, and Sanjeev N Chandramouli. 2018. Enhancing trust in autonomous vehicles through intelligent user interfaces that mimic human behavior. Multimodal Technologies and Interaction 2, 4 (2018), 62.
[163]
Nina Savela, Tuuli Turja, and Atte Oksanen. 2018. Social acceptance of robots in different occupational fields: A systematic literature review. International Journal of Social Robotics 10, 4 (2018), 493–502.
[164]
Anna-Maria Seeger and Armin Heinzl. 2021. Chatbots often Fail! Can Anthropomorphic Design Mitigate Trust Loss in Conversational Agents for Customer Service?. In European Conference on Information Systems. AIS, Virtual Event.
[165]
Daniel B Shank, Christopher Graves, Alexander Gott, Patrick Gamez, and Sophia Rodriguez. 2019. Feeling our way to machine minds: People’s emotions when perceiving mind in artificial intelligence. Computers in Human Behavior 98 (2019), 256–266.
[166]
Shared and Digital Mobility Committee. 2018. Taxonomy and Definitions for Terms Related to Shared Mobility and Enabling Technologies. SAE International, Warrendale, PA, USA.
[167]
Steven J Sherman and Eric Corty. 1984. Cognitive heuristics. In Handbook of social cognition, R. S. Jr. Wyer and T. K. Srull (Eds.). Lawrence Erlbaum Associates Publishers, Berlin, Germany, 189–286.
[168]
Lauralee Sherwood. 2015. Human physiology: from cells to systems. Cengage Learning, Boston, MA, USA. 157–162 pages.
[169]
Henry Shevlin, Karina Vold, Matthew Crosby, and Marta Halina. 2019. The limits of machine intelligence: Despite progress in machine intelligence, artificial general intelligence is still a major challenge. EMBO reports 20, 10 (2019), e49177.
[170]
Donghee Shin. 2021. The effects of explainability and causability on perception, trust, and acceptance: Implications for explainable AI. International Journal of Human-Computer Studies 146 (2021), 102551.
[171]
Donghee Shin, Joon Soo Lim, Norita Ahmad, and Mohammed Ibahrine. 2022. Understanding user sensemaking in fairness and transparency in algorithms: algorithmic sensemaking in over-the-top platform. AI & Society 0 (2022), 1–14.
[172]
Jake Silva. 2019. Increasing Perceived Agency in Human-AI Interactions: Learnings from Piloting a Voice User Interface with Drivers on Uber. In Ethnographic Praxis in Industry Conference Proceedings, Vol. 2019. Wiley Online Library, Hoboken, NJ, USA, 441–456.
[173]
Cory-Ann Smarr, Akanksha Prakash, Jenay M Beer, Tracy L Mitzner, Charles C Kemp, and Wendy A Rogers. 2012. Older adults’ preferences for and acceptance of robot assistance for everyday living tasks. In Proceedings of the human factors and ergonomics society annual meeting, Vol. 56. Sage Publications, Los Angeles, CA, USA, 153–157.
[174]
Aaron Smith. 2018. Public attitudes toward computer algorithms. Pew Research Center, Washington, DC, USA.
[175]
Hyeonjin Soh, Leonard N Reid, and Karen Whitehill King. 2009. Measuring trust in advertising. Journal of advertising 38, 2 (2009), 83–104.
[176]
Kwonsang Sohn and Ohbyung Kwon. 2020. Technology acceptance theories and factors influencing artificial Intelligence-based intelligent products. Telematics and Informatics 47 (2020), 101324.
[177]
Mengmeng Song, Xinyu Xing, Yucong Duan, Jason Cohen, and Jian Mou. 2022. Will artificial intelligence replace human customer service? The impact of communication quality and privacy risks on adoption intention. Journal of Retailing and Consumer Services 66 (2022), 102900.
[178]
Nicolas Spatola and Olga A Wudarczyk. 2021. Ascribing emotions to robots: Explicit and implicit attribution of emotions and perceived robot anthropomorphism. Computers in Human Behavior 124 (2021), 106934.
[179]
Nancy Spears and Surendra N Singh. 2004. Measuring attitude toward the brand and purchase intentions. Journal of Current Issues & Research in Advertising 26, 2 (2004), 53–66.
[180]
Robert J Sternberg. 2000. Handbook of intelligence. Cambridge University Press, Cambridge, UK.
[181]
Arwin Datumaya Wahyudi Sumari and Adang Suwandi Ahmad. 2016. Cognitive artificial intelligence: The fusion of artificial intelligence and information fusion. In 2016 International Symposium on Electronics and Smart Devices (ISESD). IEEE, Bandung, Indonesia, 1–6.
[182]
S Shyam Sundar. 2008. The MAIN model: A heuristic approach to understanding technology effects on credibility. MacArthur Foundation Digital Media and Learning Initiative, Cambridge, MA, USA.
[183]
S Shyam Sundar. 2020. Rise of machine agency: A framework for studying the psychology of human–AI interaction (HAII). Journal of Computer-Mediated Communication 25, 1 (2020), 74–88.
[184]
S Shyam Sundar, Haiyan Jia, T Franklin Waddell, and Yan Huang. 2015. Toward a theory of interactive media effects (TIME): Four models for explaining how interface features affect user psychology. Wiley Online Library, Hoboken, NJ, USA. 47–86 pages.
[185]
S Shyam Sundar and Jinyoung Kim. 2019. Machine heuristic: When we trust computers more than humans with our personal information. In Proceedings of the 2019 CHI Conference on human factors in computing systems. ACM, Glasgow, Scotland, UK, 1–9.
[186]
S Shyam Sundar and Eun-Ju Lee. 2022. Rethinking Communication in the Era of Artificial Intelligence. Human Communication Research 48, 3 (2022), 379–385.
[187]
S Shyam Sundar and Sampada S Marathe. 2010. Personalization versus customization: The importance of agency, privacy, and power usage. Human communication research 36, 3 (2010), 298–322.
[188]
Leila Takayama. 2015. Telepresence and Apparent Agency in Human–Robot Interaction. John Wiley & Sons, Ltd, Hoboken, NJ, USA, Chapter 7, 160–175. https://doi.org/10.1002/9781118426456.ch7
[189]
Leila Takayama, Wendy Ju, and Clifford Nass. 2008. Beyond dirty, dangerous and dull: what everyday people think robots should do. In 2008 3rd ACM/IEEE international conference on human-robot interaction (HRI). IEEE, Amsterdam, The Netherlands, 25–32.
[190]
Carmen Tanner and Markus Christen. 2014. Moral intelligence–A framework for understanding moral competences. In Empirically informed ethics: Morality between facts and norms. Springer, Berlin, Germany, 119–136.
[191]
Neil Thurman, Judith Moeller, Natali Helberger, and Damian Trilling. 2019. My friends, editors, algorithms, and I: Examining audience attitudes to news selection. Digital journalism 7, 4 (2019), 447–469.
[192]
Lachlan Urquhart and Diana Miranda. 2022. Policing faces: the present and future of intelligent facial surveillance. Information & Communications Technology Law 31, 2 (2022), 194–219.
[193]
Jerry J Vaske, Jay Beaman, and Carly C Sponarski. 2017. Rethinking internal consistency in Cronbach’s alpha. Leisure sciences 39, 2 (2017), 163–173.
[194]
Viswanath Venkatesh, James YL Thong, and Xin Xu. 2012. Consumer acceptance and use of information technology: extending the unified theory of acceptance and use of technology. MIS quarterly 36, 1 (2012), 157–178.
[195]
Jordan Joseph Wadden. 2021. Defining the undefinable: the black box problem in healthcare artificial intelligence. Journal of Medical Ethics 48 (2021), 764–768.
[196]
Judy Wajcman. 1991. Feminism confronts technology. Penn State Press, University Park, PA, USA.
[197]
Shijun Wang and Ronald M Summers. 2012. Machine learning and radiology. Medical image analysis 16, 5 (2012), 933–951.
[198]
Anne L Washington. 2018. How to argue with an algorithm: Lessons from the COMPAS-ProPublica debate. Colo. Tech. LJ 17 (2018), 131.
[199]
Adam Waytz, Joy Heafner, and Nicholas Epley. 2014. The mind in the machine: Anthropomorphism increases trust in an autonomous vehicle. Journal of Experimental Social Psychology 52 (2014), 113–117.
[200]
Yueh-Hsuan Weng, Chien-Hsun Chen, and Chuen-Tsai Sun. 2009. Toward the human–robot co-existence society: On safety intelligence for next generation robots. International Journal of Social Robotics 1, 4 (2009), 267–282.
[201]
Anja Wölker and Thomas E Powell. 2021. Algorithms in the newsroom? News readers’ perceived credibility and selection of automated journalism. Journalism 22, 1 (2021), 86–103.
[202]
Shan Xu and Wenbo Li. 2022. A tool or a social being? A dynamic longitudinal investigation of functional use and relational use of AI voice assistants. New Media & Society 0 (2022), 14614448221108112.
[203]
Hyun Yang and S. Shyam Sundar. 2020. Machine heuristic: A concept explication and development of a scale. In The 70th annual conference of the International Communication Association. ICA, Gold Coast, Australia.
[204]
Hee-dong Yang and Youngjin Yoo. 2004. It’s all about attitude: revisiting the technology acceptance model. Decision support systems 38, 1 (2004), 19–31.
[205]
Ruqaiijah Yearby. 2020. Structural racism and health disparities: Reconfiguring the social determinants of health framework to include the root cause. Journal of Law, Medicine & Ethics 48, 3 (2020), 518–526.
[206]
Andrew W Young and A Mike Burton. 2018. Are we face experts? Trends in Cognitive Sciences 22, 2 (2018), 100–110.
[207]
Betty J Young. 2000. Gender differences in student attitudes toward computers. Journal of research on computing in education 33, 2 (2000), 204–216.
[208]
Bo Zhang and S Shyam Sundar. 2019. Proactive vs. reactive personalization: Can customization of privacy enhance user experience? International Journal of Human-Computer Studies 128 (2019), 86–99.
[209]
Tingru Zhang, Weisheng Zeng, Yanxuan Zhang, Da Tao, Guofa Li, and Xingda Qu. 2021. What drives people to use automated vehicles? A meta-analytic review. Accident Analysis & Prevention 159 (2021), 106270.
[210]
H Zheng and J Yang. 2018. Functional Modules Sharing and Blockchain Based Validation in Office Automation Systems. In IOP Conference Series: Materials Science and Engineering, Vol. 466. IOP Publishing, Bristol, UK, 012007.
[211]
Bolei Zhou, Yiyou Sun, David Bau, and Antonio Torralba. 2018. Interpretable basis decomposition for visual explanation. In Proceedings of the European Conference on Computer Vision (ECCV). Springer, Munich, Germany, 119–134.
[212]
James Zou and Londa Schiebinger. 2018. AI can be sexist and racist—it’s time to make it fair. Nature 559 (2018), 324–326.




Published In

CHI '23: Proceedings of the 2023 CHI Conference on Human Factors in Computing Systems
April 2023
14911 pages
ISBN:9781450394215
DOI:10.1145/3544548
Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than the author(s) must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from [email protected].

Publisher

Association for Computing Machinery

New York, NY, United States

Publication History

Published: 19 April 2023


Author Tags

  1. AI Autonomy
  2. Artificial Intelligence (AI)
  3. Attitude
  4. Clustering Analysis
  5. Credibility
  6. Human Involvement
  7. Human-AI Interaction (HAII)
  8. Social Approval

Qualifiers

  • Research-article
  • Research
  • Refereed limited

Funding Sources

  • Brandt Fellowship

Conference

CHI '23

Acceptance Rates

Overall Acceptance Rate 6,199 of 26,314 submissions, 24%



Article Metrics

  • Downloads (Last 12 months)1,389
  • Downloads (Last 6 weeks)156
Reflects downloads up to 16 Nov 2024


Cited By

  • (2024) Facing LLMs: Robot Communication Styles in Mediating Health Information between Parents and Young Adults. Proceedings of the ACM on Human-Computer Interaction 8, CSCW2 (2024), 1–37. https://doi.org/10.1145/3687036
  • (2024) Hiring an AI: Incorporating Personnel Selection Methods in User-Centered Design to Design AI Agents for Safety-Critical Domains. Adjunct Proceedings of the 2024 Nordic Conference on Human-Computer Interaction, 1–9. https://doi.org/10.1145/3677045.3685418
  • (2024) Mediating Culture: Cultivating Socio-cultural Understanding of AI in Children through Participatory Design. Proceedings of the 2024 ACM Designing Interactive Systems Conference, 1805–1822. https://doi.org/10.1145/3643834.3661515
  • (2024) Examining Humanness as a Metaphor to Design Voice User Interfaces. Proceedings of the 6th ACM Conference on Conversational User Interfaces, 1–15. https://doi.org/10.1145/3640794.3665535
  • (2024) Theory of Mind in Human-AI Interaction. Extended Abstracts of the CHI Conference on Human Factors in Computing Systems, 1–6. https://doi.org/10.1145/3613905.3636308
  • (2024) Charting the Future of AI in Project-Based Learning: A Co-Design Exploration with Students. Proceedings of the 2024 CHI Conference on Human Factors in Computing Systems, 1–19. https://doi.org/10.1145/3613904.3642807
  • (2024) The Promise and Peril of ChatGPT in Higher Education: Opportunities, Challenges, and Design Implications. Proceedings of the 2024 CHI Conference on Human Factors in Computing Systems, 1–21. https://doi.org/10.1145/3613904.3642785
  • (2024) Design Principles for Generative AI Applications. Proceedings of the 2024 CHI Conference on Human Factors in Computing Systems, 1–22. https://doi.org/10.1145/3613904.3642466
  • (2024) User Experience Design Professionals’ Perceptions of Generative Artificial Intelligence. Proceedings of the 2024 CHI Conference on Human Factors in Computing Systems, 1–18. https://doi.org/10.1145/3613904.3642114
  • (2024) Trust in AI-assisted Decision Making: Perspectives from Those Behind the System and Those for Whom the Decision is Made. Proceedings of the 2024 CHI Conference on Human Factors in Computing Systems, 1–14. https://doi.org/10.1145/3613904.3642018
