DOI: 10.1145/3059454.3059475

The Willful Marionette: Exploring Responses to Embodied Interaction

Published: 22 June 2017

Abstract

This paper explores how participants constructed and re-constructed their relationship with an interactive art installation. The piece, the willful marionette, was developed in collaboration between artists and researchers. It explores the dynamics of non-verbal communication by building (from a scanned image of a human figure) a stringed marionette that responds to human movement. The intent was to challenge participants' expectations about communication, intelligence, emotion and the social role of the body. We show that participants' descriptions of their interactions vary along two axes: whether they or the marionette was perceived as leading the interaction, and whether they constructed a social or a technical mindset. We explore these differences using semi-structured interviews and a mix of qualitative and quantitative methods. We then present some implications for the design of both goal-directed and expressive embodied intelligent systems. the willful marionette has since been acquired by the Smithsonian American Art Museum as an example of cutting-edge art that deals with contemporary issues of machine intelligence and the social role of our bodies.

Supplementary Material

suppl.mov (ccfp191.m4v)
Supplemental video


Cited By

  • (2022) Iterative Design of Gestures During Elicitation: Understanding the Role of Increased Production. Proceedings of the 2022 CHI Conference on Human Factors in Computing Systems, 1-14. DOI: 10.1145/3491102.3501962. Online publication date: 29-Apr-2022.
  • (2022) Evaluation in scenarios of ubiquity of technology: a systematic literature review on interactive installations. Personal and Ubiquitous Computing 27, 2, 343-361. DOI: 10.1007/s00779-022-01696-8. Online publication date: 11-Nov-2022.


    Published In

    C&C '17: Proceedings of the 2017 ACM SIGCHI Conference on Creativity and Cognition
    June 2017, 584 pages
    ISBN: 9781450344036
    DOI: 10.1145/3059454
    General Chairs: David A. Shamma, Jude Yew; Program Chair: Brian Bailey
    Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than the author(s) must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from permissions@acm.org.


    Publisher

    Association for Computing Machinery

    New York, NY, United States

    Publication History

    Published: 22 June 2017


    Author Tags

    1. embodied cognition
    2. interactive art

    Qualifiers

    • Research-article

    Conference

    C&C '17: Creativity and Cognition
    June 27 - 30, 2017
    Singapore, Singapore

    Acceptance Rates

    C&C '17 Paper Acceptance Rate: 27 of 94 submissions, 29%
    Overall Acceptance Rate: 108 of 371 submissions, 29%


    Article Metrics

    • Downloads (Last 12 months): 11
    • Downloads (Last 6 weeks): 0
    Reflects downloads up to 30 Nov 2024

