DOI: 10.1145/3605390.3605397 · CHItaly Conference Proceedings
research-article
Open access

How Do Users Perceive Deepfake Personas? Investigating the Deepfake User Perception and Its Implications for Human-Computer Interaction

Published: 20 September 2023

Abstract

Although deepfakes carry a negative connotation in human-computer interaction (HCI) because of their risks, they also present many opportunities, such as communicating user needs in the form of a “living, talking” deepfake persona. To scope and better understand these opportunities, we present a qualitative analysis of think-aloud transcripts from 46 participants who interacted with both deepfake personas and human personas, a potentially beneficial application of deepfakes for HCI. Our qualitative analysis of the 92 think-aloud records indicates five central themes in users’ perception of deepfakes: (1) Realism, (2) User Needs, (3) Distracting Properties, (4) Added Value, and (5) Rapport. The results point to various challenges in deepfake user perception that technology developers need to address before the potential of deepfake applications can be realized for HCI.


Cited By

  • (2025) SLM-DFS: A systematic literature map of deepfake spread on social media. Alexandria Engineering Journal 111, 446–455. DOI: 10.1016/j.aej.2024.10.076. Online publication date: Jan 2025.
  • (2024) Getting Emotional Enough: Analyzing Emotional Diversity in Deepfake Avatars. Proceedings of the 13th Nordic Conference on Human-Computer Interaction, 1–12. DOI: 10.1145/3679318.3685398. Online publication date: 13 Oct 2024.


Published In

CHItaly '23: Proceedings of the 15th Biannual Conference of the Italian SIGCHI Chapter
September 2023, 416 pages
This work is licensed under a Creative Commons Attribution 4.0 International License.

Publisher

Association for Computing Machinery, New York, NY, United States


    Author Tags

    1. Deepfakes
    2. HCI applications
    3. user experience
    4. user perceptions

    Qualifiers

    • Research-article
    • Research
    • Refereed limited

    Conference

    CHItaly 2023

    Acceptance Rates

    Overall Acceptance Rate 109 of 242 submissions, 45%

