
DOI: 10.1145/3491102.3517527
Research article · Open access

“Look! It’s a Computer Program! It’s an Algorithm! It’s AI!”: Does Terminology Affect Human Perceptions and Evaluations of Algorithmic Decision-Making Systems?

Published: 29 April 2022

Abstract

In the media, in policy-making, and in research articles, algorithmic decision-making (ADM) systems are referred to as algorithms, artificial intelligence, and computer programs, among other terms. We hypothesize that such terminological differences can affect people’s perceptions of the properties of ADM systems, people’s evaluations of these systems in application contexts, and the replicability of research, since findings may hinge on the terms chosen. In two studies (N = 397, N = 622), we show that terminology does indeed affect laypeople’s perceptions of system properties (e.g., perceived complexity) and evaluations of systems (e.g., trust). Our findings highlight the need to be mindful when choosing terms to describe ADM systems, because terminology can have unintended consequences and may impact the robustness and replicability of HCI research. Additionally, our findings indicate that terminology can be used strategically (e.g., in communication about ADM systems) to influence people’s perceptions and evaluations of these systems.
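Both studies are vignette experiments: participants read descriptions of the same ADM system that differ only in the term used (e.g., “algorithm”, “artificial intelligence”, “computer program”) and then rate the system on measures such as trust. As a minimal sketch of how such a between-subjects design can be analyzed (the condition labels, ratings, and the choice of a one-way ANOVA are assumptions of this illustration, not the authors’ materials or analysis code):

```python
# Minimal sketch of analyzing a between-subjects terminology experiment.
# NOTE: condition labels, the 7-point trust ratings, and the choice of a
# one-way ANOVA are illustrative assumptions, not the paper's data or code.
from statistics import mean

from scipy import stats

# Hypothetical trust ratings, one group of participants per term condition;
# every group read the same system description, only the term differed.
trust_ratings = {
    "computer program":        [3, 4, 3, 3, 4, 2, 3, 4],
    "algorithm":               [4, 5, 3, 4, 5, 4, 3, 4],
    "artificial intelligence": [5, 6, 5, 4, 6, 5, 5, 6],
}

# Omnibus test: does mean trust differ depending on the term used?
f_stat, p_value = stats.f_oneway(*trust_ratings.values())
print(f"One-way ANOVA: F = {f_stat:.2f}, p = {p_value:.4f}")

for term, ratings in trust_ratings.items():
    print(f"  {term}: mean trust = {mean(ratings):.2f}")
```

With real data, a significant omnibus test of this kind, followed by pairwise comparisons, is what would support the claim that the term alone shifts evaluations of an otherwise identical system.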


Published In

CHI '22: Proceedings of the 2022 CHI Conference on Human Factors in Computing Systems
April 2022
10,459 pages
ISBN: 9781450391573
DOI: 10.1145/3491102
This work is licensed under a Creative Commons Attribution 4.0 International License.

Publisher

Association for Computing Machinery

New York, NY, United States

Author Tags

  1. human-centered AI
  2. replicability
  3. research methodology
  4. terminology

Qualifiers

  • Research-article
  • Research
  • Refereed limited

Conference

CHI '22: CHI Conference on Human Factors in Computing Systems
April 29 - May 5, 2022
New Orleans, LA, USA

Acceptance Rates

Overall acceptance rate: 6,199 of 26,314 submissions (24%)

Bibliometrics & Citations

Article Metrics

  • Downloads (Last 12 months): 893
  • Downloads (Last 6 weeks): 108

Reflects downloads up to 05 Mar 2025

Cited By
  • (2025) Preventing algorithm aversion: People are willing to use algorithms with a learning label. Journal of Business Research 187, 115032. https://doi.org/10.1016/j.jbusres.2024.115032. Online publication date: Jan-2025.
  • (2025) You, Me, and the AI: The Role of Third-Party Human Teammates for Trust Formation Toward AI Teammates. Journal of Organizational Behavior. https://doi.org/10.1002/job.2857. Online publication date: Jan-2025.
  • (2024) Embracing LLM feedback: The role of feedback providers and provider information for feedback effectiveness. Frontiers in Education 9. https://doi.org/10.3389/feduc.2024.1461362. Online publication date: 16-Oct-2024.
  • (2024) Exploring the Effects of User Input and Decision Criteria Control on Trust in a Decision Support Tool for Spare Parts Inventory Management. Proceedings of the International Conference on Mobile and Ubiquitous Multimedia, 313–323. https://doi.org/10.1145/3701571.3701585. Online publication date: 1-Dec-2024.
  • (2024) Mindful Explanations: Prevalence and Impact of Mind Attribution in XAI Research. Proceedings of the ACM on Human-Computer Interaction 8, CSCW1, 1–43. https://doi.org/10.1145/3641009. Online publication date: 26-Apr-2024.
  • (2024) Powered by AI. Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies 7, 4, 1–24. https://doi.org/10.1145/3631414. Online publication date: 12-Jan-2024.
  • (2024) From "AI" to Probabilistic Automation: How Does Anthropomorphization of Technical Systems Descriptions Influence Trust? Proceedings of the 2024 ACM Conference on Fairness, Accountability, and Transparency, 2322–2347. https://doi.org/10.1145/3630106.3659040. Online publication date: 3-Jun-2024.
  • (2024) Transparent AI Disclosure Obligations: Who, What, When, Where, Why, How. Extended Abstracts of the CHI Conference on Human Factors in Computing Systems, 1–11. https://doi.org/10.1145/3613905.3650750. Online publication date: 11-May-2024.
  • (2024) Twenty-four years of empirical research on trust in AI: A bibliometric review of trends, overlooked issues, and future directions. AI & SOCIETY. https://doi.org/10.1007/s00146-024-02059-y. Online publication date: 2-Oct-2024.
  • (2023) Servant by default? How humans perceive their relationship with conversational AI. Cyberpsychology: Journal of Psychosocial Research on Cyberspace 17, 3. https://doi.org/10.5817/CP2023-3-9. Online publication date: 30-Jun-2023.