
DOI: 10.1145/3411764.3445415
ACM CHI Conference Proceedings · Research Article

How Should AI Systems Talk to Users when Collecting their Personal Information? Effects of Role Framing and Self-Referencing on Human-AI Interaction

Published: 07 May 2021

Abstract

AI systems collect our personal information in order to provide personalized services, raising privacy concerns and making users leery. As a result, systems have begun emphasizing overt over covert collection of information by directly asking users. This poses an important question for ethical interaction design, which is dedicated to improving user experience while promoting informed decision-making: Should the interface tout the benefits of information disclosure and frame itself as a help-provider? Or, should it appear as a help-seeker? We decided to find out by creating a mockup of a news recommendation system called Mindz and conducting an online user study (N=293) with the following four variations: AI system as help seeker vs. help provider vs. both vs. neither. Data showed that even though all participants received the same recommendations, power users tended to trust a help-seeking Mindz more whereas non-power users favored one that is both help-seeker and help-provider.





Published In

CHI '21: Proceedings of the 2021 CHI Conference on Human Factors in Computing Systems
May 2021, 10862 pages
ISBN: 9781450380966
DOI: 10.1145/3411764

Publisher

Association for Computing Machinery, New York, NY, United States



    Author Tags

    1. Human-AI Interaction
    2. power usage
    3. social cues
    4. social presence of AI

    Qualifiers

    • Research-article
    • Research
    • Refereed limited

    Conference

CHI '21

    Acceptance Rates

    Overall Acceptance Rate 6,199 of 26,314 submissions, 24%


Article Metrics

• Downloads (last 12 months): 369
• Downloads (last 6 weeks): 32

Reflects downloads up to 21 Nov 2024

Cited By

• (2024) Beyond the Screen. Hyperautomation in Business and Society. DOI: 10.4018/979-8-3693-3354-9.ch014, pp. 266-285. Online publication date: 28-Jun-2024
• (2024) How do AI and human users interact? Positioning of AI and human users in customer service. Text & Talk. DOI: 10.1515/text-2023-0116. Online publication date: 27-Sep-2024
• (2024) Reconfiguring Participatory Design to Resist AI Realism. Proceedings of the Participatory Design Conference 2024: Exploratory Papers and Workshops - Volume 2. DOI: 10.1145/3661455.3669867, pp. 31-36. Online publication date: 11-Aug-2024
• (2024) In Whose Voice? Examining AI Agent Representation of People in Social Interaction through Generative Speech. Proceedings of the 2024 ACM Designing Interactive Systems Conference. DOI: 10.1145/3643834.3661555, pp. 224-245. Online publication date: 1-Jul-2024
• (2024) What influences users to provide explicit feedback? A case of food delivery recommenders. User Modeling and User-Adapted Interaction 34:3. DOI: 10.1007/s11257-023-09385-8, pp. 753-796. Online publication date: 1-Jul-2024
• (2023) Towards Human-Centered Explainable AI: A Survey of User Studies for Model Explanations. IEEE Transactions on Pattern Analysis and Machine Intelligence 46:4. DOI: 10.1109/TPAMI.2023.3331846, pp. 2104-2122. Online publication date: 13-Nov-2023
• (2023) The Purposeful Presentation of AI Teammates: Impacts on Human Acceptance and Perception. International Journal of Human-Computer Interaction 40:20. DOI: 10.1080/10447318.2023.2254984, pp. 6510-6527. Online publication date: 10-Sep-2023
• (2023) The methodology of studying fairness perceptions in Artificial Intelligence. International Journal of Human-Computer Studies 170:C. DOI: 10.1016/j.ijhcs.2022.102954. Online publication date: 1-Feb-2023
• (2023) When the machine learns from users, is it helping or snooping? Computers in Human Behavior 138:C. DOI: 10.1016/j.chb.2022.107427. Online publication date: 1-Jan-2023
• (2023) Beyond data transactions: a framework for meaningfully informed data donation. AI & SOCIETY. DOI: 10.1007/s00146-023-01755-5. Online publication date: 30-Aug-2023
