Nothing Special   »   [go: up one dir, main page]

skip to main content
article

A Review of Eye Gaze in Virtual Agents, Social Robotics and HCI: Behaviour Generation, User Interaction and Perception

Published: 01 September 2015 Publication History

Abstract

A person's emotions and state of mind are apparent in their face and eyes. As a Latin proverb states: 'The face is the portrait of the mind; the eyes, its informers'. This presents a significant challenge for Computer Graphics researchers who generate artificial entities that aim to replicate the movement and appearance of the human eye, which is so important in human-human interactions. This review article provides an overview of the efforts made on tackling this demanding task. As with many topics in computer graphics, a cross-disciplinary approach is required to fully understand the workings of the eye in the transmission of information to the user. We begin with a discussion of the movement of the eyeballs, eyelids and the head from a physiological perspective and how these movements can be modelled, rendered and animated in computer graphics applications. Furthermore, we present recent research from psychology and sociology that seeks to understand higher level behaviours, such as attention and eye gaze, during the expression of emotion or during conversation. We discuss how these findings are synthesized in computer graphics and can be utilized in the domains of Human-Robot Interaction and Human-Computer Interaction for allowing humans to interact with virtual agents and other artificial entities. We conclude with a summary of guidelines for animating the eye and head from the perspective of a character animator.

References

[1]
{Abe86}¿ Abele A.: Functions of gaze in social interaction: Communication and monitoring. Journal of Nonverbal Behavior Volume 10, Issue 2 1986, pp.83-101.
[2]
{ABT*11}¿ Admoni H., Bank C., Tan J., Toneva M., Scassellati B.: Robot gaze does not reflexively cue human attention. In Proceedings of the 33rd Annual Conference of the Cognitive Science Society CogSci 2011 Boston, MA, USA, 2011, Citeseer, pp. pp.1983-1988.
[3]
{AC76}¿ Argyle M., Cook M.: Gaze and Mutual Gaze. Cambridge University Press, Cambridge, UK, 1976.
[4]
{AHFS*13}¿ Admoni H., Hayes B., Feil-Seifer D., Ullman D., Scassellati B.: Are you looking at me?: Perception of robot attention is mediated by gaze type and group size. In HRI '13: Proceedings of the 8th ACM/IEEE International Conference on Human-Robot Interaction Tokyo, Japan, 2013, IEEE, pp. pp.389-396.
[5]
{AHO90}¿ Astington J., Harris P., Olson D.: Developing Theories of Mind. Cambridge University Press, Cambridge, UK, 1990.
[6]
{AHS02}¿ Albrecht I., Haber J., Seidel H.-p.: Automatic generation of non-verbal facial expressions from speech. In Advances in Modelling, Animation and Rendering. Springer London 2002, pp. pp.283-293.
[7]
{Ama14}¿ <sc>Amazon.com</sc>: Amazon Mechanical Turk, 2014. "http://www.mturk.com/mturk/". Accessed 15 January 2015.
[8]
{AMEB12}¿ Al Moubayed S., Edlund J., Beskow J.: Taming Mona Lisa: Communicating gaze faithfully in 2D and 3D facial projections. ACM Transactions on Interactive Intelligent Systems Volume 1, Issue 2 2012, pp.11:1-11:25.
[9]
{AMG13}¿ Andrist S., Mutlu B., Gleicher M.: Conversational gaze aversion for virtual agents. In Proceedings of the 13th International Conference on Intelligent Virtual Agents Edinburgh, UK, 2013, vol. 8108 of Lecture Notes in Computer Science, Springer Berlin Heidelberg, pp. pp.249-262.
[10]
{And81}¿ Anderson J. H.: Ocular torsion in the cat after lesions of the interstitial nucleus of Cajal. Annals of the New York Academy of Sciences Volume 374, Issue 1 1981, pp.865-871.
[11]
{APMG12a}¿ Andrist S., Pejsa T., Mutlu B., Gleicher M.: A head-eye coordination model for animating gaze shifts of virtual characters. In Gaze-In '12: Proceedings of the 4th Workshop on Eye Gaze in Intelligent Human Machine Interaction Santa Monica, CA, USA, 2012, ACM, pp. pp.4:1-4:6.
[12]
{APMG12b}¿ Andrist S., Pejsa T., Mutlu B., Gleicher M.: Designing effective gaze mechanisms for virtual agents. In CHI '12: Proceedings of the SIGCHI Conference on Human Factors in Computing Systems Austin, TX, USA, 2012, ACM, pp. pp.705-714.
[13]
{ATGM14}¿ Andrist S., Tan X. Z., Gleicher M., Mutlu B.: Conversational gaze aversion for humanlike robots. In HRI '14: Proceedings of the 2014 ACM/IEEE International Conference on Human-Robot Interaction Bielefeld, Germany, 2014, ACM, pp. pp.25-32.
[14]
{AWH10}¿ Anderson C., Wales A., Horne J.: PVT lapses differ according to eyes open, closed, or looking away. Sleep 2010, pp.3:197-3:204.
[15]
{BA08}¿ Bee N., André E.: Writing with your eye: A dwell time free writing system adapted to the nature of human eye gaze. In Perception in Multimodal Dialogue Systems Kloster Irsee, Germany, 2008, vol. 5078 of Lecture Notes in Computer Science, Springer Berlin Heidelberg, pp. pp.111-122.
[16]
{BAP*96}¿ Bentin S., Allison T., Puce A., Perez E., McCarthy G.: Electrophysiological studies of face perception in humans. Journal of Cognitive Neuroscience Volume 8, Issue 6 1996, pp.551-565.
[17]
{BAS75}¿ Bahill A. T., Adler D., Stark L.: Most naturally occurring human saccades have magnitudes of 15 degrees or less. Investigative Ophthalmology & Visual Science Volume 14, Issue 6 1975, pp.468-469.
[18]
{BAT09}¿ Bee N., André E., Tober S.: Breaking the ice in human-agent communication: Eye-gaze based initiation of contact with an embodied conversational agent. In Proceedings of the 9th International Conference on Intelligent Virtual Agents Amsterdam, the Netherlands, 2009, vol. 5773 of Lecture Notes in Computer Science, Springer Berlin Heidelberg, pp. pp.229-242.
[19]
{BB09}¿ Banf M., Blanz V.: Example-based rendering of eye movements. Computer Graphics Forum Volume 28, Issue 2 2009, pp.659-666.
[20]
{BBBL01}¿ Bailenson J. N., Blascovich J., Beall A. C., Loomis J. M.: Equilibrium theory revisited: Mutual gaze and personal space in virtual environments. Presence: Teleoperators & Virtual Environments Volume 10, Issue 6 2001, pp.583-598.
[21]
{BBLB02}¿ Blascovich J., Beall A. C., Loomis J. M., Bailenson J. N.: Interpersonal distance in immersive virtual environments. Personality and Social Psychology Bulletin Volume 29 2002, pp.1-15.
[22]
{BBN*14}¿ Bérard P., Bradley D., Nitti M., Beeler T., Gross M.: High-quality capture of eyes. ACM Transactions on Graphics Volume 33, Issue 6 2014, pp.223:1-223:12.
[23]
{BBW96}¿ Burgoon J., Buller D., Woodall W.: Nonverbal Communication: The Unspoken Dialogue. McGraw-Hill Higher Education, New York, USA, 1996.
[24]
{BCCDR11}¿ Bonino D., Castellina E., Corno F., &lt;familyNamePrefix&gt;De&lt;/familyNamePrefix&gt;Russis L.: DOGeye: Controlling your home with eye interaction. Interacting with Computers Volume 23, Issue 5 2011, pp.484-498.
[25]
{BCS75}¿ Bahill A., Clark M. R., Stark L.: The main sequence, a tool for studying human eye movements. Mathematical Biosciences Volume 24, Issue 3-4 1975, pp.191-204.
[26]
{BCWJ97}¿ Baron-Cohen S., Wheelwright S., Jolliffe T.: Is there a 'language of the eyes'? Evidence from normal adults, and adults with autism or Asperger syndrome. Visual Cognition Volume 4, Issue 3 1997, pp.311-331.
[27]
{BDG*07}¿ Busso C., Deng Z., Grimm M., Neumann U., Narayanan S.: Rigid head motion in expressive speech animation: Analysis and synthesis. IEEE Transactions on Audio, Speech, and Language Processing Volume 15, Issue 3 2007, pp.1075-1086.
[28]
{BDNN07}¿ Busso C., Deng Z., Neumann U., Narayanan S.: Learning expressive human-like head motion sequences from speech. In Data-Driven 3D Facial Animation. Springer-Verlag London Limited 2007, pp. pp.113-131.
[29]
{Bec89}¿ Becker W.: The neurobiology of saccadic eye movements. Metrics. Reviews of Oculomotor Research 3 1989, pp.13-67.
[30]
{Bev13}¿ Bevacqua E.: A survey of listener behavior and listener models for embodied conversational agents. In Coverbal Synchrony in Human-Machine Interaction. M.Rojc and N.Campbell Eds. Boca Raton, FL, USA, 2013, CRC Press, pp. pp.243-268.
[31]
{BF88}¿ Becker W., Fuchs A. F.: Lid-eye coordination during vertical gaze changes in man and monkey. Journal of Neurophysiology Volume 60, Issue 4 1988, pp.1227-52.
[32]
{BFA09}¿ Bee N., Franke S., André E.: Relations between facial display, eye gaze and head tilt: Dominance perception variations of virtual agents. In Proceedings of 3rd International Conference on Affective Computing and Intelligent Interaction and Workshops, ACII 2009 Amsterdam, the Netherlands, 2009, IEEE, pp. pp.1-7.
[33]
{BFJ*05}¿ Bennewitz M., Faber F., Joho D., Schreiber M., Behnke S.: Integrating vision and speech for conversations with multiple persons. In Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems IROS Edmonton, Canada, 2005, IEEE, pp. pp.2523-2528.
[34]
{BG79}¿ Bandler R., Grinder J.: Frogs into Princes: Neuro Linguistic Programming. Real People Press, Boulder, CO, USA, 1979.
[35]
{BH10}¿ Bohus D., Horvitz E.: Facilitating multiparty dialog with gaze, gesture, and speech. In ICMI-MLMI '10: Proceedings of International Conference on Multimodal Interfaces and the Workshop on Machine Learning for Multimodal Interaction Beijing, China, 2010, ACM, pp. pp.5:1-5:8.
[36]
{BHAF09}¿ Bard E., Hill R., Arai M., Foster M. E.: Referring and gaze alignment: Accessibility is alive and well in situated dialogue. In Proceedings of CogSci 2009 Amsterdam, the Netherlands, 2009, Cognitive Science Society, pp. pp.1246-1251.
[37]
{BHKS11}¿ Bainbridge W., Hart J., Kim E., Scassellati B.: The benefits of interactions with physically present robots over video-displayed agents. International Journal of Social Robotics Volume 3, Issue 1 2011, pp.41-52.
[38]
{BHZS12}¿ Brennan S. E., Hanna J. E., Zelinsky G. J., Savietta K. J.: Eye gaze cues for coordination in collaborative tasks. In Proceedings of 2012 ACM Conference on Computer Supported Cooperative Work DUET 2012 Workshop: Dual Eye Tracking in CSCE Seattle, WA, 2012, ACM.
[39]
{BKG11}¿ Buhrmester M., Kwang T., Gosling S. D.: Amazon's mechanical turk: A new source of inexpensive, yet high-quality, data ? Perspectives on Psychological Science Volume 6, Issue 1 2011, pp.3-5.
[40]
{BMEL08}¿ Bradley M. M., Miccoli L., Escrig M. A., Lang P. J.: The pupil as a measure of emotional arousal and autonomic activation. Psychophysiology Volume 45, Issue 4 2008, pp.602-607.
[41]
{BPAW10}¿ Bee N., Pollock C., André E., Walker M.: Bossy or wimpy: Expressing social dominance by combining gaze and linguistic behaviors. In Proceedings of the 10th International Conference on Intelligent Virtual Agents Philadelphia, PA, USA, 2010, vol. 6356 of Lecture Notes in Computer Science, Springer Berlin Heidelberg, pp. pp.265-271.
[42]
{BPG*06}¿ Breton G., Pelé D., Garcia C., Panaget F., Bretier P.: Modeling gaze behavior for a 3D ECA in a dialogue situation. In Gesture in Human-Computer Interaction and Simulation Berder Island, France, 2006, vol. 3881 of Lecture Notes in Computer Science, Springer Berlin Heidelberg, pp. pp.252-255.
[43]
{BWA*10}¿ Bee N., Wagner J., André E., Vogt T., Charles F., Pizzi D., Cavazza M.: Gaze behavior during interaction with a virtual character in interactive storytelling. In Proceedings of International Workshop on Interacting with ECAs as Virtual Characters AAMAS 2010 Toronto, Canada, 2010, AAMAS.
[44]
{Car78}¿ Cary M. S.: The role of gaze in the initiation of conversation. Social Psychology Volume 41, Issue 3 1978, pp.269-271.
[45]
{CBCA07}¿ Chin C. A., Barreto A., Cremades G., Adjouadi M.: Performance analysis of an integrated eye gaze tracking/Electromyogram cursor control system. In Assets '07: Proceedings of the 9th International ACM SIGACCESS Conference on Computers and Accessibility Tempe, AZ, USA, 2007, ACM, pp. pp.233-234.
[46]
{CBK*14}¿ Corrigan L. J., Basedow C., Küster D., Kappas A., Peters C., Castellano G.: Mixing implicit and explicit probes: Finding a ground truth for engagement in social human-robot interactions. In HRI '14: Proceedings of the 2014 ACM/IEEE International Conference on Human-Robot Interaction Bielefeld, Germany, 2014, ACM, pp. pp.140-141.
[47]
{CCD00}¿ Cohen M. F., Colburn R. A., Drucker S. M.: The role of eye gaze in avatar mediated conversational interfaces. Sketches and Applications, SIGGRAPH '00 2000.
[48]
{CGV09}¿ Cafaro A., Gaito R., Vilhjálmsson H. H.: Animating idle gaze in public places. In Proceedings of the 9th International Conference on Intelligent Virtual Agents Amsterdam, the Netherlands, 2009, vol. 5773 of Lecture Notes in Computer Science, Springer Berlin Heidelberg, pp. pp.250-256.
[49]
{Cho92}¿ Chovil N.: Discourse-oriented facial displays in conversation. Research on Language & Social Interaction Volume 25 1991/1992, pp.163-194.
[50]
{CK03}¿ Clark H. H., Krych M. A.: Speaking while monitoring addressees for understanding. Journal of Memory and Language Volume 50, Issue 1 2003, pp.62-81.
[51]
{CKB99}¿ Chopra-Khullar S., Badler N. I.: Where to look? Automating attending behaviors of virtual human characters. In AGENTS '99: Proceedings of the Third Annual Conference on Autonomous Agents Seattle, WA, USA, 1999, ACM, pp. pp.16-23.
[52]
{CKEMT10}¿ Cig C., Kasap Z., Egges A., Magnenat-Thalmann N.: Realistic emotional gaze and head behavior generation based on arousal and dominance factors. In Motion in Games Utrecht, the Netherlands, 2010, vol. 6459 of Lecture Notes in Computer Science, Springer Berlin Heidelberg, pp. pp.278-289.
[53]
{CPB*94}¿ Cassell J., Pelachaud C., Badler N., Steedman M., Achorn B., Becket T., Douville B., Prevost S., Stone M.: Animated conversation: Rule-based generation of facial expression, gesture & spoken intonation for multiple conversational agents. In SIGGRAPH '94: Proceedings of the 21st Annual Conference on Computer Graphics and Interactive Techniques Orlando, FL, USA, 1994, ACM, pp. pp.413-420.
[54]
{CPC13}¿ Corrigan L. J., Peters C., Castellano G.: Identifying task engagement: Towards personalised interactions with educational robots. In Proceedings of 2013 Humaine Association Conference on Affective Computing and Intelligent Interaction ACII Geneva, Switzerland, 2013, IEEE, pp. pp.655-658.
[55]
{CT99}¿ Cassell J., Thórisson K. R.: The power of a nod and a glance: Envelope vs. emotional feedback in animated conversational agents. Applied Artificial Intelligence Volume 13, Issue 4-5 1999, pp.519-538.
[56]
{CT07}¿ Cassell J., Tartaro A.: Intersubjectivity in human-agent interaction. Interaction Studies Volume 8, Issue 3 2007, pp.391-410.
[57]
{CTP98}¿ Cassell J., Torres O. E., Prevost S.: Turn taking vs. discourse structure: How best to model multimodal conversation. In Machine Conversations 1998, Kluwer, The Hague, pp. pp.143-154.
[58]
{CV99}¿ Cassell J., Vilhjálmsson H. H.: Fully embodied conversational avatars: making communicative behaviors autonomous. Autonomous Agents and Multi-Agent Systems Volume 2, Issue 1 1999, pp.45-64.
[59]
{CVB01}¿ Cassell J., Vilhjálmsson H. H., Bickmore T.: BEAT: The behavior expression animation toolkit. In SIGGRAPH '01: Proceedings of the 28th Annual Conference on Computer Graphics and Interactive Techniques Los Angeles, CA, USA, 2001, ACM, pp. pp.477-486.
[60]
{CVB*12}¿ Cafaro A., Vilhjálmsson H. H., Bickmore T., Heylen D., Jóhannsdóttir K. R., Valgarsson G. S.: First impressions: Users' judgments of virtual agents' personality and interpersonal attitude in first encounters. In Proceedings of the 12th International Conference on Intelligent Virtual Agents Santa Cruz, CA, USA, 2012, vol. Volume 7502 of Lecture Notes in Computer Science, Springer Berlin Heidelberg, pp. pp.67-80.
[61]
{DAH12}¿ Dybdal M. L., Agustin J. S., Hansen J. P.: Gaze input for mobile devices by dwell and gestures. In ETRA '12: Proceedings of the Symposium on Eye Tracking Research and Applications Santa Barbara, CA, USA, 2012, ACM, pp. pp.225-228.
[62]
{DCPPS04}¿ &lt;familyNamePrefix&gt;De&lt;/familyNamePrefix&gt;Carolis B., Pelachaud C., Poggi I., Steedman M.: APML, a markup language for believable behavior generation. In Life-Like Characters. Springer Berlin Heidelberg, 2004, pp. pp.65-85.
[63]
{DdGB10}¿ Delaunay F., &lt;familyNamePrefix&gt;de&lt;/familyNamePrefix&gt;Greeff J., Belpaeme T.: A study of a retro-projected robotic face and its effectiveness for gaze reading by humans. In HRI '10: Proceedings of the 5th ACM/IEEE International Conference on Human-Robot Interaction Osaka, Japan, 2010, IEEE, pp. pp.39-44.
[64]
{DLN05}¿ Deng Z., Lewis J. P., Neumann U.: Automated eye motion using texture synthesis. IEEE Computer Graphics and Applications Volume 25, Issue 2 2005, pp.24-30.
[65]
{DLN07}¿ Deng Z., Lewis J., Neumann U.: Realistic eye motion synthesis by texture synthesis. In Data-Driven 3D Facial Animation. Z.Deng and U.Neumann U Eds. Springer-Verlag London Limited 2007, pp. pp.98-112.
[66]
{Dou01}¿ Doughty M. J.: Consideration of three types of spontaneous eyeblink activity in normal humans: during reading and video display terminal use, in primary gaze, and while in conversation. Optometry and Vision Science Volume 78, Issue 10 2001, pp.712-725.
[67]
{DRKK14}¿ Das D., Rashed M. G., Kobayashi Y., Kuno Y.: Recognizing gaze pattern for human robot interaction. In HRI '14: Proceedings of the 2014 ACM/IEEE International Conference on Human-Robot Interaction Bielefeld, Germany, 2014, ACM, pp. pp.142-143.
[68]
{DS07}¿ Drewes H., Schmidt A.: Interacting with the computer using gaze gestures. In Human-Computer Interaction - INTERACT 2007 Rio de Janeiro, Brazil, 2007, vol. 4663 of Lecture Notes in Computer Science, Springer Berlin Heidelberg, pp. pp.475-488.
[69]
{Dun74}¿ Duncan Starkey J.: On the structure of speaker-auditor interaction during speaking turns. Language in Society Volume 3, Issue 2 1974, pp.161-180.
[70]
{EBCR*07}¿ Elisei F., Bailly G., Casari A., Raidt S.: Towards eyegaze-aware analysis and synthesis of audiovisual speech. In ISCA: Proceedings of International Conference on Auditory-Visual Speech Processing, AVSP 2007 Kasteel Groenendaal, Hilvarenbeek, the Netherlands, 2007, pp. pp.50-56.
[71]
{EF03}¿ Ekman P., Friesen W.: Unmasking the Face: A Guide to Recognizing Emotions from Facial Clues. Malor Books, Oxford, UK, 2003.
[72]
{EMP*94}¿ Evinger C., Manning K. A., Pellegrini J. J., Basso M. A., Powers A. S., Sibony P. A.: Not looking while leaping: The linkage of blinking and saccadic gazeshifts. Experimental Brain Research Volume 100, Issue 2 1994, pp.337-344.
[73]
{EMS91}¿ Evinger C., Manning K. A., Sibony P. A.: Eyelid movements. Mechanisms and normal data. Investigative Ophthalmology & Visual Science Volume 32, Issue 2 1991, pp.387-400.
[74]
{EPAI07}¿ Eichner T., Prendinger H., André E., Ishizuka M.: Attentive presentation agents. In Proceedings of the 7th International Conference on Intelligent Virtual Agents Paris, France, 2007, vol. 4722 of Lecture Notes in Computer Science, Springer Berlin Heidelberg, pp. pp.283-295.
[75]
{EPM08}¿ Ekman I. M., Poikola A. W., Mäkäräinen M. K.: Invisible Eni: Using gaze and pupil size to control a game. In CHI EA '08: Extended Abstracts on Human Factors in Computing Systems Florence, Italy, 2008, ACM, pp. pp.3135-3140.
[76]
{FBT07}¿ Frischen A., Bayliss A. P., Tipper S. P.: Gaze cueing of attention: Visual attention, social cognition, and individual differences. Psychological Bulletin Volume 133, Issue 4 2007, pp.694-724.
[77]
{FGBB09}¿ François G., Gautron P., Breton G., Bouatouch K.: Image-based modeling of the human eye. IEEE Transactions on Visualization and Computer Graphics Volume 15, Issue 5 2009, pp.815-827.
[78]
{FN11}¿ Fukuhara Y., Nakano Y.: Gaze and conversation dominance in multiparty interaction. In Proceedings of 2nd Workshop on Eye Gaze in Intelligent Human Machine Interaction Palo Alto, CA, USA, 2011.
[79]
{FOM*02}¿ Fukayama A., Ohno T., Mukawa N., Sawaki M., Hagita N.: Messages embedded in gaze of interface agents-Impression management with agent's gaze. In CHI '02: Proceedings of the SIGCHI Conference on Human Factors in Computing Systems Minneapolis, MN, USA, 2002, ACM, pp. pp.41-48.
[80]
{FR84}¿ Fischer B., Ramsperger E.: Human express saccades: Extremely short reaction times of goal directed eye movements. Experimental Brain Research Volume 57, Issue 1 1984, pp.191-195.
[81]
{FTT99}¿ Funge J., Tu X., Terzopoulos D.: Cognitive modeling: Knowledge, reasoning and planning for intelligent characters. In SIGGRAPH '99: Proceedings of the 26th Annual Conference on Computer Graphics and Interactive Techniques Los Angeles, CA, USA, 1999, ACM Press/Addison-Wesley Publishing Co, pp. pp.29-38.
[82]
{Ful92}¿ Fuller J.: Head movement propensity. Experimental Brain Research Volume 92, Issue 1 1992, pp.152-164.
[83]
{GB06}¿ Gu E., Badler N. I.: Visual attention and eye gaze during multiparty conversations with distractions. In Proceedings of the 6th International Conference on Intelligent Virtual Agents Marina Del Rey, CA, 2006, vol. 4133 of Lecture Notes in Computer Science, Springer Berlin Heidelberg, pp. pp.193-204.
[84]
{GC11}¿ Gergle D., Clark A. T.: See what I'm saying? Using dyadic mobile eye tracking to study collaborative reference. In CSCW '11: Proceedings of the ACM 2011 Conference on Computer Supported Cooperative Work Hangzhou, China, 2011, ACM Press, p. pp.435.
[85]
{GD02}¿ Gillies M. F. P., Dodgson N. A.: Eye movements and attention for behavioural animation. Journal of Visualization and Computer Animation Volume 13, Issue 5 2002, pp.287-300.
[86]
{GK05}¿ Grimm M., Kroschel K.: Evaluation of natural emotions using self assessment manikins. In Proceedings of 2005 IEEE Workshop on Automatic Speech Recognition and Understanding Cancún, Mexico, 2005, IEEE, pp. pp.381-385.
[87]
{GLBB07}¿ Gu E., Lee S., Badler J. B., Badler N. I.: Eye movements, saccades, and multiparty conversations. In Data-Driven 3D Facial Animation. Springer-Verlag London Limited 2007, pp. pp.79-97.
[88]
{Gle13}¿ Glebas F.: The Animator's Eye: Composition and Design for Better Animation. Taylor & Francis, Abingdon, OX, UK, 2013.
[89]
{Gof63}¿ Goffman E.: Behaviour in Public Places: Notes on the Social Order of Gatherings. The Free Press, New York, 1963.
[90]
{GP09}¿ Gatica-Perez D.: Modeling interest in face-to-face conversations from multimodal nonverbal behaviour. In Multi-Modal Signal Processing: Methods and Techniques to Build Multimodal Interactive Systems, J.Thiran, H.Bourlard and F.Marqués Eds. EURASIP and Academic Press Series in Signal and Image Processing. Amsterdam, Netherlands, 2009, Elsevier B.V., pp. pp.309-326.
[91]
{Gre86}¿ Greene P. R.: Gaussian and Poisson blink statistics: A preliminary study. IEEE Transactions on Biomedical Engineering Volume 33, Issue 3 1986, pp.359-361.
[92]
{GRS03}¿ Gosling S. D., Rentfrow P. J., Swann W. B.: A very brief measure of the Big-Five personality domains. Journal of Research in Personality Volume 37, Issue 6 2003, pp.504-528.
[93]
{GSC91}¿ Guitton D., Simard R., Codère F.: Upper eyelid movements measured with a search coil during blinks and vertical saccades. Investigative Ophthalmology & Visual Science Volume 32, Issue 13 1991, pp.3298-3305.
[94]
{GSV*03}¿ Garau M., Slater M., Vinayagamoorthy V., Brogni A., Steed A., Sasse M. A.: The impact of avatar realism and eye gaze control on perceived quality of communication in a shared immersive virtual environment. In CHI '03: Proceedings of the SIGCHI Conference on Human Factors in Computing Systems Ft. Lauderdale, FL, USA, 2003, ACM, pp. pp.529-536.
[95]
{GT09}¿ Grillon H., Thalmann D.: Simulating gaze attention behaviors for crowds. Computer Animation and Virtual Worlds Volume 20, Issue 2 2009, pp.111-119.
[96]
{GV87}¿ Guitton D., Volle M.: Gaze control in humans: Eye-head coordination during orienting movements to targets within and beyond the oculomotor range. Journal of Neurophysiology Volume 58, Issue 3 1987, pp.427-459.
[97]
{GWG*07}¿ Gratch J., Wang N., Gerten J., Fast E., Duffy R.: Creating rapport with virtual agents. In Proceedings of the 7th International Conference on Intelligent Virtual Agents Paris, France, 2007, vol. 4722 of Lecture Notes in Computer Science, Springer Berlin Heidelberg, pp. pp.125-138.
[98]
{HAMM14}¿ Hansen J. P., Alapetite A., MacKenzie I. S., Møllenbach E.: The use of gaze to control drones. In ETRA '14: Proceedings of the Symposium on Eye Tracking Research and Applications Safety Harbor, FL, USA, 2014, ACM, pp. pp.27-34.
[99]
{HB07}¿ Hanna J. E., Brennan S. E.: Speakers' eye gaze disambiguates referring expressions early during face-to-face conversation. Journal of Memory and Language Volume 57, Issue 4 2007, pp.596-615.
[100]
{Hei13a}¿ Heikkilä H.: EyeSketch: A drawing application for gaze control. In ETSA '13: Proceedings of the 2013 Conference on Eye Tracking South Africa Cape Town, South Africa, 2013, ACM, pp. pp.71-74.
[101]
{Hei13b}¿ Heikkilä H.: Tools for a gaze-controlled drawing application-Comparing gaze gestures against dwell buttons. In Proceedings of Human-Computer Interaction - INTERACT 2013 Cape Town, South Africa, 2013, vol. 8118 of Lecture Notes in Computer Science, Springer Berlin Heidelberg, pp. pp.187-201.
[102]
{Hey06}¿ Heylen D.: Head gestures, gaze and the principles of conversational structure. International journal of Humanoid Robotics Volume 3, Issue 3 2006, pp.241-267.
[103]
{HHA*88}¿ Harris C. M., Hainline L., Abramov I., Lemerise E., Camenzuli C.: The distribution of fixation durations in infants and naive adults. Vision Research Volume 28, Issue 3 1988, pp.419-432.
[104]
{HJ96}¿ Hendriks-Jansen H.: Catching Ourselves in the Act: Situated Activity, Interactive Emergence, Evolution, and Human Thought. A Bradford book. MIT Press, Cambridge, MA, USA, 1996.
[105]
{HJO*10}¿ Hodgins J., Jörg S., O'Sullivan C., Park S. I., Mahler M.: The saliency of anomalies in animated human characters. ACM Transactions on Applied Perception Volume 7, Issue 4 2010, pp.22:1-22:14.
[106]
{HM06}¿ Hanes D. A., McCollum G.: Variables contributing to the coordination of rapid eye/head gaze shifts. Biological Cybernetics Volume 94, Issue 4 2006, pp.300-324.
[107]
{HM13}¿ Huang C.-M., Mutlu B.: Modeling and evaluating narrative gestures for humanlike robots. In Proceedings of Robotics: Science and Systems Berlin, Germany, 2013, pp. pp.26-32.
[108]
{HNP07}¿ Heylen D., Nijholt A., Poel M.: Generating nonverbal signals for a sensitive artificial listener. In Verbal and Nonverbal Communication Behaviours Vietri sul Mare, Italy, 2007, vol. 4775 of Lecture Notes in Computer Science, Springer Berlin Heidelberg, pp. pp.264-274.
[109]
{HO12}¿ Hjalmarsson A., Oertel C.: Gaze direction as a Back-Channel inviting cue in dialogue. In Proceedings of IVA 2012 Workshop on Realtime Conversational Virtual Agents Santa Cruz, CA, USA, 2012.
[110]
{Hoo13}¿ Hooks E.: Acting for Animators. Taylor & Francis, Abingdon, OX, UK, 2013.
[111]
{HPB*07}¿ Hoekstra A., Prendinger H., Bee N., Heylen D., Ishizuka M.: Highly realistic 3D presentation agents with visual attention capability. In Smart Graphics Kyoto, Japan, 2007, vol. 4569 of Lecture Notes in Computer Science, Springer Berlin Heidelberg, pp. pp.73-84.
[112]
{HR12}¿ Heikkilä H., Räihä K.-J.: Simple gaze gestures and the closure of the eyes as an interaction technique. In ETRA '12: Proceedings of the Symposium on Eye Tracking Research and Applications Santa Barbara, CA, USA, 2012, ACM, pp. pp.147-154.
[113]
{ID03}¿ Itti L., Dhavale N.: Realistic avatar eye and head animation using a neurobiological model of visual attention. In Proceedings of SPIE's 48th Annual Meeting Optical Science and Technology, San Diego, CA, USA, 2003, International Society for Optics and Photonics, SPIE Press, pp. pp.64-78.
[114]
{IDP06}¿ Itti L., Dhavale N., Pighin F. H.: Photorealistic attention-based gaze animation. In Proceedings of IEEE International Conference on Multimedia and Expo Toronto, Ontario, Canada, 2006, IEEE, pp. pp.521-524.
[115]
{IHI*10}¿ Istance H., Hyrskykari A., Immonen L., Mansikkamaa S., Vickers S.: Designing gaze gestures for gaming: An investigation of performance. In ETRA '10: Proceedings of the 2010 Symposium on Eye-Tracking Research & Applications Austin, TX, USA, 2010, ACM, pp. pp.323-330.
[116]
{IMFN06}¿ Ishii R., Miyajima T., Fujita K., Nakano Y.: Avatar's gaze control to facilitate conversational turn-taking in virtual-space multi-user voice chat system. In Proceedings of the 6th International Conference on Intelligent Virtual Agents Marina Del Rey, CA, USA, 2006, vol. 4133 of Lecture Notes in Computer Science, Springer Berlin Heidelberg, pp. pp.458-458.
[117]
{IONN13}¿ Ishii R., Ooko R., Nakano Y., Nishida T.: Effectiveness of gaze-based engagement estimation in conversational agents. In Eye Gaze in Intelligent User Interfaces. Y. I.Nakano, C.Conati and T.Bader Eds. Springer-Verlag London 2013, pp. pp.85-110.
[118]
{Itt00}¿ Itti L.: Models of Bottom-Up and Top-Down Visual Attention. PhD thesis, California Institute of Technology, 2000.
[119]
{Iza91}¿ Izard C.: The Psychology of Emotions. Emotions, Personality, and Psychotherapy. Springer US, Berlin, Heidelberg, Germany, 1991.
[120]
{JHM*07}¿ Jan D., Herrera D., Martinovski B., Novick D. G., Traum D. R.: A computational model of culture-specific conversational behavior. In Proceedings of the 7th International Conference on Intelligent Virtual Agents Paris, France, 2007, vol. 4722 of Lecture Notes in Computer Science, Springer Berlin Heidelberg, pp. pp.45-56.
[121]
{JPMG13}¿ Jowers I., Prats M., McKay A., Garner S.: Evaluating an eye tracking interface for a two-dimensional sketch editor. Computer-Aided Design Volume 45, Issue 5 2013, pp.923-936.
[122]
{JTC*07}¿ Johns M., Tucker A., Chapman R., Crowley K., Michael N.: Monitoring eye and eyelid movements by infrared reflectance oculography to measure drowsiness in drivers. Somnologie - Schlafforschung und Schlafmedizin Volume 11, Issue 4 2007, pp.234-242.
[123]
{KAD11}¿ Kirchner N., Alempijevic A., Dissanayake G.: Nonverbal robot-group interaction using an imitated gaze cue. In HRI '11: Proceedings of the 6th International Conference on Human-Robot Interaction Lausanne, Switzerland, 2011, ACM, pp. pp.497-504.
[124]
{KB04}¿ Kidd C., Breazeal C.: Effect of a robot on user perceptions. In Proceedings of IROS 2004 Sendai, Japan, 2004, vol. 4, IEEE, pp. pp.3559-3564.
[125]
{Ken67}¿ Kendon A.: Some functions of gaze-direction in social interaction. Acta Psychologica Volume 26, Issue 1 1967, pp.22-63.
[126]
{Ken90}¿ Kendon A.: Conducting Interaction: Patterns of Behavior in Focused Encounters. Studies in Interactional Sociolinguistics. Cambridge University Press, Cambridge, UK, 1990.
[127]
{KG08}¿ Kipp M., Gebhard P.: IGaze: Studying reactive gaze behavior in semi-immersive human-avatar interactions. In Proceedings of the 8th International Conference on Intelligent Virtual Agents Tokyo, Japan, 2008, vol. 5208 of Lecture Notes in Computer Science, Springer Berlin Heidelberg, pp. pp.191-199.
[128]
{KHJK13}¿ Komogortsev O., Holland C., Jayarathna S., Karpov A.: 2D linear oculomotor plant mathematical model: Verification and biometric applications. ACM Transactions on Applied Perception Volume 10, Issue 4 2013, pp.27:1-27:18.
[129]
{KK13}¿ Kulms P., Kopp S.: Using virtual agents to guide attention in multi-task scenarios. In Proceedings of the 13th International Conference on Intelligent Virtual Agents Edinburgh, UK, 2013, vol. 8108 of Lecture Notes in Computer Science, Springer Berlin Heidelberg, pp. pp.295-302.
[130]
{KKGK11}¿ Kulms P., Krämer N. C., Gratch J., Kang S.-H.: It's in their eyes: A study on female and male virtual humans' gaze. In Proceedings of the 10th International Conference on Intelligent Virtual Agents Reykjavik, Iceland, 2011, vol. 6895 of Lecture Notes in Computer Science, Springer Berlin Heidelberg, pp. pp.80-92.
[131]
{KKM*06}¿ Kopp S., Krenn B., Marsella S., Marshall A. N., Pelachaud C., Pirker H., Thórisson K. R., Vilhjálmsson H. H.: Towards a common framework for multimodal generation: The behavior markup language. In Proceedings of the 6th International Conference on Intelligent Virtual Agents Marina Del Rey, CA, USA, 2006, vol. 4133 of Lecture Notes in Computer Science, Springer Berlin Heidelberg, pp. pp.205-217.
[132]
{Kle86}¿ Kleinke C. L.: Gaze and eye contact: A research review. Psychological Bulletin Volume 100, Issue 1 1986, pp.78-100.
[133]
{KOS11}¿ Kokkinara E., Oyekoya O., Steed A.: Modelling selective visual attention for autonomous virtual characters. Computer Animation and Virtual Worlds Volume 22, Issue 4 2011, pp.361-369.
[134]
{KPPP11}¿ Krenn B., Pelachaud C., Pirker H., Peters C.: Embodied conversational characters: Representation formats for multimodal communicative behaviours. In Emotion-Oriented Systems. R.Cowie, C.Pelachaud and P.Petta Eds. Cognitive Technologies. Springer Berlin Heidelberg, 2011, pp. pp.389-415.
[135]
{KPW07}¿ Kumar M., Paepcke A., Winograd T.: EyePoint: Practical pointing and selection using gaze and keyboard. In CHI '07: Proceedings of the SIGCHI Conference on Human Factors in Computing Systems San Jose, CA, USA, 2007, ACM, pp. pp.421-430.
[136]
{KV12}¿ Kristensson P. O., Vertanen K.: The potential of dwell-free eye-typing for fast assistive gaze communication. In ETRA '12: Proceedings of the Symposium on Eye Tracking Research and Applications Santa Barbara, CA, USA, 2012, ACM, pp. pp.241-244.
[137]
{KYH*98}¿ Kikuchi H., Yokoyama M., Hoashi K., Hidaki Y., Kobayashi T., Shirai K.: Controlling gaze of humanoid in communication with human. In Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems Victoria, B.C., Canada, 1998, vol. 1, IEEE, pp. pp.255-260.
[138]
{Las01}¿ Lasseter J.: Tricks to animating characters with a computer. SIGGRAPH Computer Graphics Volume 35, Issue 2 2001, pp.45-47.
[139]
{LB06}¿ Lam M. W., Baranoski G. V.: A predictive light transport model for the human iris. Computer Graphics Forum Volume 25, Issue 3 2006, pp.359-368.
[140]
{LBB02}¿ Lee S. P., Badler J. B., Badler N. I.: Eyes alive. In SIGGRAPH '02: Proceedings of the 29th Annual Conference on Computer Graphics and Interactive Techniques San Antonio, TX, USA, 2002, ACM, pp. pp.637-644.
[141]
{LBS*03}¿ Lefohn A., Budge B., Shirley P., Caruso R., Reinhard E.: An ocularist's approach to human iris synthesis. IEEE Computer Graphics and Applications Volume 23, Issue 6 2003, pp.70-75.
[142]
{Lik32}¿ Likert R.: A technique for the measurement of attitudes. Archives of Psychology Volume 22, Issue 140 1932, pp.1-55.
[143]
{LM07}¿ Lance B., Marsella S.: Emotionally expressive head and body movement during gaze shifts. In Proceedings of the 7th International Conference on Intelligent Virtual Agents Paris, France, 2007, vol. 4722 of Lecture Notes in Computer Science, Springer Berlin Heidelberg, pp. pp.72-85.
[144]
{LM10a}¿ Lance B., Marsella S.: Glances, glares, and glowering: How should a virtual human express emotion through gaze ? Autonomous Agents and Multi-Agent Systems Volume 20, Issue 1 2010, pp.50-69.
[145]
{LM10b}¿ Lance B., Marsella S.: The expressive gaze model: Using gaze to express emotion. IEEE Computer Graphics and Applications Volume 30, Issue 4 2010, pp.62-73.
[146]
{LM12}¿ Li Z., Mao X.: Emotional eye movement generation based on Geneva emotion wheel for virtual agents. Journal of Visual Languages & Computing Volume 23, Issue 5 2012, pp.299-310.
[147]
{LMD12}¿ Le B., Ma X., Deng Z.: Live speech driven head-and-eye motion generators. IEEE Transactions on Visualization and Computer Graphics Volume 18, Issue 11 2012, pp.1902-1914.
[148]
{LMK04}¿ Lance B., Marsella S., Koizumi D.: Towards expressive gaze manner in embodied virtual agents. In Proceedings of Autonomous Agents and Multi-Agent Systems Workshop on Empathic Agents Budapest, Hungary, 2004, AAMAS.
[149]
{LMT*07}¿ Lee J., Marsella S., Traum D., Gratch J., Lance B.: The Rickel Gaze model: A window on the mind of a virtual human. In Proceedings of the 7th International Conference on Intelligent Virtual Agents Paris, France, 2007, vol. 4722 of Lecture Notes in Computer Science, Springer Berlin Heidelberg, pp. pp.296-303.
[150]
{LR86}¿ Laurutis V. P., Robinson D. A.: The vestibulo-ocular reflex during human saccadic eye movements. The Journal of Physiology Volume 373, Issue 1 1986, pp.209-233.
[151]
{LvW12}¿ Lohse M., &lt;familyNamePrefix&gt;van&lt;/familyNamePrefix&gt;Welbergen H.: Designing appropriate feedback for virtual agents and robots. In Position paper at RO-MAN 2012 Workshop 'Robot Feedback in Human-Robot Interaction: How to Make a Robot "Readable" for a Human Interaction Partner' Paris, France, 2012, IEEE.
[152]
{LZ99}¿ Leigh R. J., Zee D. S.: The Neurology of Eye Movements 3rd edition. No. 55 in Contemporary Neurology Series. Oxford University Press, Oxford, England, UK, 1999.
[153]
{Mae06}¿ Maestri G.: Digital Character Animation 3. Pearson Education, New York, NY, USA, 2006.
[154]
{MB10}¿ McDonnell R., Breidt M.: Face reality: Investigating the uncanny valley for virtual faces. In SA '10: ACM SIGGRAPH ASIA 2010 Sketches Seoul, Republic of Korea, 2010, ACM, pp. pp.41:1-41:2.
[155]
{MB12}¿ Mariooryad S., Busso C.: Generating human-like behaviors using joint, speech-driven models for conversational agents. IEEE Transactions on Audio, Speech, and Language Processing Volume 20, Issue 8 2012, pp.2329-2340.
[156]
{MBB12}¿ McDonnell R., Breidt M., Bülthoff H. H.: Render me real? Investigating the effect of render style on the perception of animated virtual humans. ACM Transactions on Graphics Volume 31, Issue 4 2012, pp.91:1-91:11.
[157]
{MD08}¿ Morency L.-P., Darrell T.: Conditional sequence model for context-based recognition of gaze aversion. In Machine Learning for Multimodal Interaction Brno, Czech Republic, 2008, vol. 4892 of Lecture Notes in Computer Science, Springer Berlin Heidelberg, pp. pp.11-23.
[158]
{MDR*11}¿ Martin J.-C., Devillers L., Raouzaiou A., Caridakis G., Ruttkay Z., Pelachaud C., Mancini M., Niewiadomski R., Pirker H., Krenn B., Poggi I., Caldognetto E. M., Cavicchio F., Merola G., Rojas A. G., Vexo F., Thalmann D., Egges A., Magnenat-Thalmann N.: Coordinating the generation of signs in multiple modalities in an affective agent. In Emotion-Oriented Systems. R.Cowie, C.Pelachaud and P.Petta Eds. Cognitive Technologies. Springer Berlin Heidelberg, 2011, pp. pp.349-367.
[159]
{Meh80}¿ Mehrabian A.: Basic Dimensions for a General Psychological Theory. OGH Publishers, Cambridge, MA, USA, 1980.
[160]
{MGR04}¿ Marsella S., Gratch J., Rickel J.: Expressive behaviors for virtual worlds. In Life-Like Characters. Cognitive Technologies. Springer Berlin Heidelberg, 2004, pp. pp.317-360.
[161]
{MH07}¿ Masuko S., Hoshino J.: Head-eye animation corresponding to a conversation for CG characters. Computer Graphics Forum Volume 26, Issue 3 2007, pp.303-312.
[162]
{MHKS07}¿ Mitake H., Hasegawa S., Koike Y., Sato M.: Reactive virtual human with bottom-up and top-down visual attention for gaze generation in realtime interactions. In VR '07: Proceedings of IEEE Virtual Reality Conference Charlotte, NC, USA, 2007, IEEE, pp. pp.211-214.
[163]
{MHP12}¿ Mardanbegi D., Hansen D. W., Pederson T.: Eye-based head gestures. In ETRA '12: Proceedings of the Symposium on Eye Tracking Research and Applications Santa Barbara, CA, USA, 2012, ACM, pp. pp.139-146.
[164]
{MLD*08}¿ McDonnell R., Larkin M., Dobbyn S., Collins S., O'Sullivan C.: Clone attack! Perception of crowd variety. ACM Transactions on Graphics Volume 27, Issue 3 2008, pp.26:1-26:8.
[165]
{MLGH10}¿ Møllenbach E., Lillholm M., Gail A., Hansen J. P.: Single gaze gestures. In ETRA '10: Proceedings of the 2010 Symposium on Eye-Tracking Research & Applications Austin, Texas, 2010, ACM, pp. pp.177-180.
[166]
{MLH*09}¿ McDonnell R., Larkin M., Hernández B., Rudomin I., O'Sullivan C.: Eye-catching crowds: Saliency based selective variation. ACM Transactions on Graphics Volume 28, Issue 3 2009, pp.55:1-55:10.
[167]
{MM11}¿ Mumm J., Mutlu B.: Human-robot proxemics: Physical and psychological distancing in human-robot interaction. In HRI '11: Proceedings of the 6th International Conference on Human-Robot Interaction Lausanne, Switzerland, 2011, ACM, pp. pp.331-338.
[168]
{Mor70}¿ Mori M.: The uncanny valley. Energy Volume 7 1970, pp.33-35.
[169]
{MRS*07}¿ Murray N., Roberts D., Steed A., Sharkey P., Dickerson P., Rae J.: An assessment of eye-gaze potential within immersive virtual environments. ACM Transactions on Multimedia Computing, Communications, and Applications Volume 3, Issue 4 2007, pp.8:1-8:17.
[170]
{MSH*06}¿ Mojzisch A., Schilbach L., Helmert J. R., Pannasch S., Velichkovsky B. M., Vogeley K.: The effects of self-involvement on attention, arousal, and facial expression during social interaction with virtual others: A psychophysiological study. Social Neuroscience Volume 1, Issue 3-4 2006, pp.184-195.
[171]
{MSK*09}¿ Mutlu B., Shiwa T., Kanda T., Ishiguro H., Hagita N.: Footing in human-robot conversations: How robots might shape participant roles using gaze cues. In HRI '09: Proceedings of the 4th ACM/IEEE International Conference on Human-Robot Interaction La Jolla, CA, USA, 2009, ACM, pp. pp.61-68.
[172]
{MSSSB10}¿ Martinez S., Sloan R. J. S., Szymkowiak A., Scott-Brown K. C.: Using virtual agents to cue observer attention. In Proceedings of CONTENT 2010: The Second International Conference on Creative Content Technologies2010, pp.7-12.
[173]
{MTG*14}¿ Moon A., Troniak D. M., Gleeson B., Pan M. K., Zheng M., Blumer B. A., MacLean K., Croft E. A.: Meet me where I'm gazing: How shared attention gaze affects human-robot handover timing. In HRI '14: Proceedings of the 2014 ACM/IEEE International Conference on Human-Robot Interaction Bielefeld, Germany, 2014, ACM, pp. pp.334-341.
[174]
{Mur01}¿ Murch W.: In the Blink of an Eye: A Perspective on Film Editing. Silman-James Press, Los Angeles, CA, USA, 2001.
[175]
{MVC*12}¿ McKeown G., Valstar M., Cowie R., Pantic M., Schroder M.: The SEMAINE database: Annotated multimodal records of emotionally colored conversations between a person and a limited agent. IEEE Transactions on Affective Computing Volume 3, Issue 1 2012, pp.5-17.
[176]
{MXL*13}¿ Marsella S., Xu Y., Lhommet M., Feng A., Scherer S., Shapiro A.: Virtual character performance from speech. In SCA '13: Proceedings of the 12th ACM SIGGRAPH/Eurographics Symposium on Computer Animation Anaheim, CA, USA&lt;/publisherName&gt;, 2013, &lt;publisherName&gt;ACM, pp. pp.25-35.
[177]
{NBF*13}¿ Normoyle A., Badler J. B., Fan T., Badler N. I., Cassol V. J., Musse S. R.: Evaluating perceived trust from procedurally animated gaze. In MIG '13: Proceedings of Motion on Games Dublin, Ireland, 2013, ACM, pp. pp.119:141-119:148.
[178]
{NBHH13}¿ Nakano Y. I., Baba N., Huang H.-H., Hayashi Y.: Implementation and evaluation of a multimodal addressee identification mechanism for multiparty conversation systems. In ICMI '13: Proceedings of the 15th ACM on International Conference on Multimodal Interaction Sydney, Australia, 2013, ACM, pp. pp.35-42.
[179]
{NHP13}¿ Niewiadomski R., Hyniewska S., Pelachaud C.: Computational Models of Expressive Behaviors for a Virtual Agent. Oxford Series on Cognitive Models and Architecture. OUP USA, 2013.
[180]
{NI10}¿ Nakano Y. I., Ishii R.: Estimating user's engagement from eye-gaze behaviors in human-agent conversations. In IUI '10: Proceedings of the 15th International Conference on Intelligent User Interfaces Hong Kong, China, 2010, ACM, pp. pp.139-148.
[181]
{NK10}¿ Nakano T., Kitazawa S.: Eyeblink entrainment at breakpoints of speech. Experimental Brain Research Volume 205, Issue 4 2010, pp.577-581.
[182]
{NKM*13}¿ Nakano T., Kato M., Morito Y., Itoi S., Kitazawa S.: Blink-related momentary activation of the default mode network while viewing videos. Proceedings of the National Academy of Sciences Volume 110, Issue 2 2013, pp.702-706.
[183]
{Nor63}¿ Norman W. T.: Toward an adequate taxonomy of personality attributes: Replicated factors structure in peer nomination personality ratings. Journal of Abnormal and Social Psychology Volume 66 1963, pp.574-583.
[184]
{ODK*12}¿ Obaid M., Damian I., Kistler F., Endrass B., Wagner J., André E.: Cultural Behaviors of Virtual Agents in an Augmented Reality Environment. In Proceedings of the 12th International Conference on Intelligent Virtual Agents Santa Cruz, CA, USA, 2012, vol. 7502 of Lecture Notes in Computer Science, Springer Berlin Heidelberg, pp. pp.412-418.
[185]
{OO80}¿ Otteson J., Otteson C.: Effect of teacher's gaze on children's story recall. Perceptual and Motor Skills Volume 50, Issue 1 1980, pp.35-42.
[186]
{Opp86}¿ Oppenheimer P. E.: Real time design and animation of fractal plants and trees. ACM SIGGRAPH Computer Graphics Volume 20, Issue 4 1986, pp.55-64.
[187]
{Osi10}¿ Osipa J.: Stop Staring: Facial Modeling and Animation Done Right. IT Pro. Wiley, Alameda, CA, USA, 2010.
[188]
{OSP11}¿ Oyekoya O., Steed A., Pan X.: Exploring the object relevance of a gaze animation model. In EGVE - JVRC'11: Proceedings of the 17th Eurographics Conference on Virtual Environments & Third Joint Virtual Reality Aire-la-Ville, Switzerland, 2011, Eurographics Association, pp. pp.111-114.
[189]
{OSS09}¿ Oyekoya O., Steptoe W., Steed A.: A saliency-based method of simulating visual attention in virtual scenes. In VRST '09: Proceedings of the 16th ACM Symposium on Virtual Reality Software and Technology Kyoto, Japan, 2009, ACM, pp. pp.199-206.
[190]
{OST57}¿ Osgood C. E., Suci G. J., Tannenbaum P. H.: The Measurement of Meaning. University of Illinois Press, Champaign, IL, USA, 1957.
[191]
{PAK10}¿ Peters C., Asteriadis S., Karpouzis K.: Investigating shared attention with a virtual agent using a gaze-based interface. Journal on Multimodal User Interfaces Volume 3, Issue 1-2 2010, pp.119-130.
[192]
{PB03}¿ Pelachaud C., Bilvi M.: Modelling gaze behavior for conversational agents. In Proceedings of the 4th International Conference on Intelligent Virtual Agents Kloster Irsee, Germany, 2003, vol. 2792 of Lecture Notes in Computer Science, Springer Berlin Heidelberg, pp. pp.93-100.
[193]
{PBER07}¿ Picot A., Bailly G., Elisei F., Raidt S.: Scrutinizing natural scenes: Controlling the gaze of an embodied conversational agent. In Proceedings of the 7th International Conference on Intelligent Virtual Agents Paris, France, 2007, vol. 4722 of Lecture Notes in Computer Science, Springer Berlin Heidelberg, pp. pp.272-282.
[194]
{PCdF09}¿ Peters C., Castellano G., &lt;familyNamePrefix&gt;de&lt;/familyNamePrefix&gt;Freitas S.: An exploration of user engagement in HCI. In AFFINE '09: Proceedings of the International Workshop on Affective-Aware Virtual Agents and Social Robots Boston, MA, USA, 2009, ACM, pp. pp.9:1-9:3.
[195]
{PCI10}¿ Paolacci G., Chandler J., Ipeirotis P. G.: Running experiments on Amazon Mechanical Turk. Judgment and Decision Making Volume 5, Issue 5 2010, pp.411-419.
[196]
{PCR*11}¿ Peters C., Castellano G., Rehm M., André E., Raouzaiou A., Rapantzikos K., Karpouzis K., Volpe G., Camurri A., Vasalou A.: Fundamentals of agent perception and attention modelling. In Emotion-Oriented Systems, Cognitive Technologies. Springer Berlin Heidelberg, 2011, pp. pp.293-319.
[197]
{Pet05}¿ Peters C.: Direction of attention perception for conversation initiation in virtual environments. In Proceedings of the 5th International Conference on Intelligent Virtual Agents Kos, Greece, 2005, vol. 3661 of Lecture Notes in Computer Science, Springer Berlin Heidelberg, pp. pp.215-228.
[198]
{Pet06}¿ Peters C.: Evaluating perception of interaction initiation in virtual environments using humanoid agents. In Proceedings of the 2006 Conference on ECAI 2006: 17th European Conference on Artificial Intelligence Riva del Garda, Italy, 2006, IOS Press, pp. pp.46-50.
[199]
{Pet10}¿ Peters C.: Animating gaze shifts for virtual characters based on head movement propensity. In Proceedings of 2010 Second International Conference on Games and Virtual Worlds for Serious Applications VS-GAMES Braga, Portugal, 2010, IEEE, pp. pp.11-18.
[200]
{PHN*09}¿ Poel M., Heylen D., Nijholt A., Meulemans M., &lt;familyNamePrefix&gt;van&lt;/familyNamePrefix&gt;Breemen A.: Gaze behaviour, believability, likability and the iCat. AI & Society Volume 24, Issue 1 2009, pp.61-73.
[201]
{PKFT07}¿ Powers A., Kiesler S., Fussell S., Torrey C.: Comparing a computer agent with a humanoid robot. In Proceedings of 2nd ACM/IEEE International Conference on Human-Robot Interaction HRI Arlington, VA, USA, 2007, IEEE, pp. pp.145-152.
[202]
{PLPW12}¿ Pfeiffer-Lessmann N., Pfeiffer T., Wachsmuth I.: An operational model of joint attention-Timing of the initiate-act in interactions with a virtual human, In Proceedings of KogWis, D. Dörner, R. Goebel, M. Oaksford, M. Pauen and E. Stern Eds. Bamberg, Germany, 2012, University of Bamberg Press, pp. 96-97.
[203]
{PMG13}¿ Pejsa T., Mutlu B., Gleicher M.: Stylized and performative gaze for character animation. Computer Graphics Forum Volume 32, Issue 2 2013, pp.143-152.
[204]
{PO03}¿ Peters C., O'Sullivan C.: Attention-driven eye gaze and blinking for virtual humans. In SIGGRAPH '03: ACM SIGGRAPH 2003 Sketches & Applications San Diego, CA, USA, 2003, ACM, pp. pp.1-1.
[205]
{POB09}¿ Pamplona V. F., Oliveira M. M., Baranoski G. V. G.: Photorealistic models for pupil light reflex and iridal pattern deformation. ACM Transactions on Graphics Volume 28, Issue 4 2009, pp.106:1-106:12.
[206]
{POS03}¿ Peters C., O' Sullivan C.: Bottom-up visual attention for virtual human animation. In CASA '03: Proceedings of the 16th International Conference on Computer Animation and Social Agents CASA 2003 New Brunswick, NJ, USA, 2003, IEEE, pp. pp.111-117.
[207]
{PPB*05}¿ Peters C., Pelachaud C., Bevacqua E., Mancini M., Poggi I.: A model of attention and interest using gaze behavior. In Proceedings of the 5th International Conference on Intelligent Virtual Agents Kos, Greece, 2005, vol. 3661 of Lecture Notes in Computer Science, Springer Berlin Heidelberg, pp. pp.229-240.
[208]
{PPDR00}¿ Poggi I., Pelachaud C., &lt;familyNamePrefix&gt;De&lt;/familyNamePrefix&gt;Rosis F.: Eye communication in a conversational 3D synthetic agent. AI Commununications Volume 13, Issue 3 2000, pp.169-181.
[209]
{PQ10}¿ Peters C., Qureshi A.: Graphics for serious games: A head movement propensity model for animating gaze shifts and blinks of virtual characters. Computers and Graphics Volume 34, Issue 6 2010, pp.677-687.
[210]
{PSA*04}¿ Pourtois G., Sander D., Andres M., Grandjean D., Reveret L., Olivier E., Vuilleumier P.: Dissociable roles of the human somatosensory and superior temporal cortices for processing social face signals. European Journal of Neuroscience Volume 20, Issue 12 2004, pp.3507-3515.
[211]
{QBM07}¿ Queiroz R., Barros L., Musse S.: Automatic generation of expressive gaze in virtual animated characters: From artists craft to a behavioral animation model. In Proceedings of the 7th International Conference on Intelligent Virtual Agents Paris, France, 2007, vol. 4722 of Lecture Notes in Computer Science, Springer Berlin Heidelberg, pp. pp.401-402.
[212]
{QBM08}¿ Queiroz R. B., Barros L. M., Musse S. R.: Providing expressive gaze to virtual animated characters in interactive applications. Computers in Entertainment Volume 6, Issue 3 2008, pp.41:1-41:23.
[213]
{QPA14}¿ Qureshi A., Peters C., Apperly I.: How does varying gaze direction affect interaction between a virtual agent and participant in an on-line communication scenario ? In Virtual, Augmented and Mixed Reality. Designing and Developing Virtual and Augmented Environments Heraklion, Crete, Greece, 2014, vol. 8525 of Lecture Notes in Computer Science, Springer International Publishing Switzerland, pp. pp.305-316.
[214]
{Rem11}¿ Remington L.: Clinical Anatomy and Physiology of the Visual System. Elsevier/Butterworth-Heinemann, Philadelphia, PA, USA, 2011.
[215]
{RMH13}¿ Ruijten P. A. M., Midden C. J. H., Ham J.: I didn't know that virtual agent was angry at me: Investigating effects of gaze direction on emotion recognition and evaluation. In Persuasive Technology Sydney, NSW, Australia, 2013, vol. 7822 of Lecture Notes in Computer Science, Springer Berlin Heidelberg, pp. pp.192-197.
[216]
{Rob07}¿ Roberts S.: Character Animation: 2D Skills for Better 3D. Focal Press Visual Effects and Animation Series. Focal Press, Waltham, MA, USA, 2007.
[217]
{SB12}¿ Schulman D., Bickmore T.: Changes in verbal and nonverbal conversational behavior in long-term interaction. In ICMI '12: Proceedings of the 14th ACM International Conference on Multimodal Interaction Santa Monica, CA, USA, 2012, ACM, pp. pp.11-18.
[218]
{SBMH94}¿ Sagar M. A., Bullivant D., Mallinson G. D., Hunter P. J.: A virtual environment and model of the eye for surgical simulation. In SIGGRAPH '94: Proceedings of the 21st Annual Conference on Computer Graphics and Interactive Techniques Orlando, FL, USA, 1994, ACM, pp. pp.205-212.
[219]
{SC09}¿ Staudte M., Crocker M.: The effect of robot gaze on processing robot utterances. In Proceedings of the 31th Annual Conference of the Cognitive Science Society Amsterdam, the Netherlands, 2009, Society, Inc.
[220]
{SD12}¿ Stellmach S., Dachselt R.: Look & touch: Gaze-supported target acquisition. In CHI '12: Proceedings of the SIGCHI Conference on Human Factors in Computing Systems Austin, TX, USA, 2012, ACM, pp. pp.2981-2990.
[221]
{SD14}¿ Świrski L., Dodgson N.: Rendering synthetic ground truth images for eye tracker evaluation. In ETRA '14: Proceedings of the Symposium on Eye Tracking Research and Applications Safety Harbor, FL, USA, 2014, ACM, pp. pp.219-222.
[222]
{SG06}¿ Smith J. D., Graham T. C. N.: Use of eye movements for video game control. In ACE '06: Proceedings of the 2006 ACM SIGCHI International Conference on Advances in Computer Entertainment Technology Hollywood, CA, USA, 2006, ACM.
[223]
{Sha11}¿ Shapiro A.: Building a character animation system. In Motion in Games Edinburgh, UK, 2011, vol. 7060 of Lecture Notes in Computer Science, Springer Berlin Heidelberg, pp. pp.98-109.
[224]
{SHO13}¿ Skantze G., Hjalmarsson A., Oertel C.: Exploring the effects of gaze and pauses in situated human-robot interaction. In Proceedings of the SIGDIAL 2013 Conference Metz, France, 2013, Association for Computational Linguistics, pp. pp.163-172.
[225]
{SII04}¿ Surakka V., Illi M., Isokoski P.: Gazing and frowning as a new human-computer interaction technique. ACM Transactions on Applied Perception Volume 1, Issue 1 2004, pp.40-56.
[226]
{Sim94}¿ Simon B.-C.: How to build a baby that can read minds: Cognitive mechanisms in mindreading. Cahiers de Psychologie Cognitive/Current Psychology of Cognition Volume 13, Issue 5 1994, pp.513-552.
[227]
{SM11}¿ Srinivasan V., Murphy R.: A survey of social gaze. In Proceedings of 6th ACM/IEEE International Conference on Human-Robot Interaction HRI, Lausanne, Switzerland, 2011, IEEE, pp. pp.253-254.
[228]
{SNJ*07}¿ Skotte J., Nøjgaard J., Jørgensen L., Christensen K., Sjøgaard G.: Eye blink frequency during different computer tasks quantified by electrooculography. European Journal of Applied Physiology Volume 99, Issue 2 2007, pp.113-119.
[229]
{SOS10}¿ Steptoe W., Oyekoya O., Steed A.: Eyelid kinematics for virtual characters. Computer Animation and Virtual Worlds Volume 21, Issue 3-4 2010, pp.161-171.
[230]
{SS08}¿ Steptoe W., Steed A.: High-fidelity avatar eye-representation. In VR '08: Proceedings of IEEE Virtual Reality Conference Reno, NV, USA, 2008, IEEE, pp. pp.111-114.
[231]
{SSND11}¿ Stellmach S., Stober S., Nürnberger A., Dachselt R.: Designing gaze-supported multimodal interactions for the exploration of large image collections. In NGCA '11: Proceedings of the 1st Conference on Novel Gaze-Controlled Applications Karlskrona, Sweden, 2011, ACM, pp. pp.1:1-1:8.
[232]
{Sta99}¿ Stahl J. S.: Amplitude of human head movements associated with horizontal saccades. Experimental Brain Research Volume 126, Issue 1 1999, pp.41-54.
[233]
{SWG84}¿ Stern J. A., Walrath L. C., Goldstein R.: The endogenous eyeblink. Psychophysiology Volume 21, Issue 1 1984, pp.22-33.
[234]
{TAB*13}¿ Turner J., Alexander J., Bulling A., Schmidt D., Gellersen H.: Eye pull, eye push: Moving objects between large screens and personal devices with gaze and touch. In Proceedings of Human-Computer Interaction - INTERACT 2013 Cape Town, South Africa, 2013, vol. 8118 of Lecture Notes in Computer Science, Springer Berlin Heidelberg, pp. pp.170-186.
[235]
{TCMH11}¿ Trutoiu L. C., Carter E. J., Matthews I., Hodgins J. K.: Modeling and animating eye blinks. ACM Transactions on Applied Perception Volume 8 2011, pp.17:1-17:17.
[236]
{TCW*95}¿ Tsotsos J. K., Culhane S. M., Wai W. Y. K., Lai Y., Davis N., Nuflo F.: Modeling visual attention via selective tuning. Artificial Intelligence Volume 78, Issue 1-2 1995, pp.507-545.
[237]
{TJ95}¿ Thomas F., Johnston O.: The Illusion of Life: Disney Animation. Hyperion Press, New York, NY, USA, 1995.
[238]
{TL86}¿ Tychsen L., Lisberger S. G.: Visual motion processing for the initiation of smooth-pursuit eye movements in humans. Journal of Neurophysiology Volume 56, Issue 4 1986, pp.953-968.
[239]
{TLM09}¿ Thiebaux M., Lance B., Marsella S.: Real-time expressive gaze animation for virtual humans. In AAMAS '09: Proceedings of the 8th International Conference on Autonomous Agents and Multiagent Systems Budapest, Hungary, 2009, International Foundation for Autonomous Agents and Multiagent Systems, pp. pp.321-328.
[240]
{TS04}¿ Tombs S., Silverman I.: Pupillometry: A sexual selection approach. Evolution and Human Behavior Volume 25, Issue 4 2004, pp.221-228.
[241]
{TT94}¿ Tu X., Terzopoulos D.: Artificial fishes: Physics, locomotion, perception, behavior. In SIGGRAPH '94: Proceedings of the 21st Annual Conference on Computer Graphics and Interactive Techniques Orlando, FL, USA, 1994, ACM, pp. pp.43-50.
[242]
{VBP11}¿ Vala M., Blanco G., Paiva A.: Providing gender to embodied conversational agents. In Proceedings of the 10th International Conference on Intelligent Virtual Agents Reykjavik, Iceland, 2011, vol. 6895 of Lecture Notes in Computer Science, Springer Berlin Heidelberg, pp. pp.148-154.
[243]
{VBR*03}¿ VanderWerf F., Brassinga P., Reits D., Aramideh M., Ongerboer de Visser B.: Eyelid movements: Behavioral studies of blinking in humans under different stimulus conditions. Journal of Neurophysiology Volume 89, Issue 5 2003, pp.2784-2796.
[244]
{VC98}¿ Vilhjálmsson H. H., Cassell J.: BodyChat: Autonomous communicative behaviors in avatars. In AGENTS '98: Proceedings of the Second International Conference on Autonomous Agents Minneapolis, MN, USA, 1998, ACM, pp. pp.269-276.
[245]
{VCC*07}¿ Vilhjálmsson H. H., Cantelmo N., Cassell J. E., Chafai N., Kipp M., Kopp S., Mancini M., Marsella S., Marshall A. N., Pelachaud C., Ruttkay Z., Thórisson K. R., Welbergen H., Werf R. J.: The behavior markup language: Recent developments and challenges. In Proceedings of the 7th International Conference on Intelligent Virtual Agents Paris, France, 2007, vol. 4722 of Lecture Notes in Computer Science, Springer Berlin Heidelberg, pp. pp.99-111.
[246]
{vdKS11}¿ &lt;familyNamePrefix&gt;van der&lt;/familyNamePrefix&gt;Kamp J., Sundstedt V.: Gaze and voice controlled drawing. In NGCA '11: Proceedings of the 1st Conference on Novel Gaze-Controlled Applications Karlskrona, Sweden, 2011, ACM, pp. pp.9:1-9:8.
[247]
{VGSS04}¿ Vinayagamoorthy V., Garau M., Steed A., Slater M.: An eye gaze model for dyadic interaction in an immersive virtual environment: Practice and experience. Computer Graphics Forum Volume 23, Issue 1 2004, pp.1-12.
[248]
{Vil04}¿ Vilhjálmsson H. H.: Animating conversation in online games. In Entertainment Computing - ICEC 2004 Eindhoven, the Netherlands, 2004, Rauterberg M., Ed., vol. 3166 of Lecture Notes in Computer Science, Springer Berlin Heidelberg, pp. pp.139-150.
[249]
{vM12}¿ Špakov O., Majaranta P.: Enhanced gaze interaction using simple head gestures. In UbiComp '12: Proceedings of the 2012 ACM Conference on Ubiquitous Computing Pittsburgh, PA, USA, 2012, ACM, pp. pp.705-710.
[250]
{WEP*08}¿ Wilcox T., Evans M., Pearce C., Pollard N., Sundstedt V.: Gaze and voice based game interaction: The revenge of the killer penguins. In SIGGRAPH '08: ACM SIGGRAPH 2008 Posters Los Angeles, CA, USA, 2008, ACM, pp. pp.81:1-81:1.
[251]
{WG10}¿ Wang N., Gratch J.: Don't just stare at me ! In CHI '10: Proceedings of the 28th International Conference on Human Factors in Computing Systems Atlanta, GA, USA, 2010, ACM, pp. pp.1241-1250.
[252]
{Wil09}¿ Williams R.: The Animator's Survival Kit: A Manual of Methods, Principles and Formulas for Classical, Computer, Games, Stop Motion and Internet Animators. Faber & Faber, London, UK, 2009.
[253]
{WLO10}¿ Weissenfeld A., Liu K., Ostermann J.: Video-realistic image-based eye animation via statistically driven state machines. The Visual Computer: International Journal of Computer Graphics Volume 26, Issue 9 2010, pp.1201-1216.
[254]
{WRS*07}¿ Wobbrock J. O., Rubinstein J., Sawyer M., Duchowski A. T., Uk L.: Gaze-based creativity not typing but writing: Eye-based text entry using letter-like gestures. In Proceedings of the 3rd Conference on Communication by Gaze Interaction COGAIN 2007 Leicester, UK, 2007.
[255]
{WSG05}¿ Wecker L., Samavati F., Gavrilova M.: Iris synthesis: A reverse subdivision application. In GRAPHITE '05: Proceedings of the 3rd International Conference on Computer Graphics and Interactive Techniques in Australasia and South East Asia Dunedin, New Zealand, 2005, ACM, pp. pp.121-125.
[256]
{XLW13}¿ Xu Q., Li L., Wang G.: Designing engagement-aware agents for multiparty conversations. In CHI '13: Proceedings of the SIGCHI Conference on Human Factors in Computing Systems Paris, France, 2013, ACM, pp. pp.2233-2242.
[257]
{YC06}¿ Yeo A. W., Chiu P.-C.: Gaze estimation model for eye drawing. In CHI '06 Extended Abstracts on Human Factors in Computing Systems Montreal, Québec, Canada, 2006, CHI EA '06, ACM, pp. pp.1559-1564.
[258]
{YHC*10}¿ Yoo B., Han J.-J., Choi C., Yi K., Suh S., Park D., Kim C.: 3D user interface combining gaze and hand gestures for large-scale display. In CHI '10 Extended Abstracts on Human Factors in Computing Systems Atlanta, GA, USA, 2010, CHI EA '10, ACM, pp. pp.3709-3714.
[259]
{YLNP12}¿ Yeo S. H., Lesmana M., Neog D. R., Pai D. K.: Eyecatch: simulating visuomotor coordination for object interception. ACM Transactions on Graphics Volume 31, Issue 4 2012, pp.42:1-42:10.
[260]
{YNG70}¿ YNGVE V. H.: On getting a word in edgewise. In Proceedings of Chicago Linguistics Society, 6th Meeting1970, Chicago, IL, USA, pp. pp.567-578.
[261]
{YSI*06}¿ Yoshikawa Y., Shinozawa K., Ishiguro H., Hagita N., Miyamoto T.: Responsive robot gaze to interaction partner. In Proceedings of Robotics: Science and Systems Philadelphia, PA, USA, 2006, IEEE.
[262]
{YSS12}¿ Yu C., Schermerhorn P., Scheutz M.: Adaptive eye gaze patterns in interactions with human and artificial agents. ACM Transactions on Interactive Intelligent Systems Volume 1, Issue 2 2012, pp.13:1-13:25.
[263]
{ZFP11}¿ Zoric G., Forchheimer R., Pandzic I.: On creating multimodal virtual humans-Real time speech driven facial gesturing. Multimedia Tools and Applications Volume 54, Issue 1 2011, pp.165-179.
[264]
{ZHRM13}¿ Zibrek K., Hoyet L., Ruhland K., McDonnell R.: Evaluating the effect of emotion on gender recognition in virtual humans. In SAP '13: Proceedings of the ACM Symposium on Applied Perception Dublin, Ireland, 2013, ACM, pp. pp.45-49.
[265]
{ZS06}¿ Zuo J., Schmid N. A.: A model based, anatomy based method for synthesizing iris images. In ICB '06: Proceedings of the 2006 International Conference on Advances in Biometrics. Springer Berlin Heidelberg, 2006, pp. pp.428-435.

Cited By

View all
  • (2024)Navigating Communication Patterns and Personalities in User Preference During Human-Agent InteractionProceedings of the 12th International Conference on Human-Agent Interaction10.1145/3687272.3690913(447-449)Online publication date: 24-Nov-2024
  • (2024)S3: Speech, Script and Scene driven Head and Eye AnimationACM Transactions on Graphics10.1145/365817243:4(1-12)Online publication date: 19-Jul-2024
  • (2024)Reactive Gaze during Locomotion in Natural EnvironmentsProceedings of the ACM SIGGRAPH/Eurographics Symposium on Computer Animation10.1111/cgf.15168(1-12)Online publication date: 21-Aug-2024
  • Show More Cited By
  1. A Review of Eye Gaze in Virtual Agents, Social Robotics and HCI: Behaviour Generation, User Interaction and Perception

    Recommendations

    Comments

    Please enable JavaScript to view thecomments powered by Disqus.

    Information & Contributors

    Information

    Published In

    cover image Computer Graphics Forum
    Computer Graphics Forum  Volume 34, Issue 6
    September 2015
    314 pages
    ISSN:0167-7055
    EISSN:1467-8659
    Issue’s Table of Contents

    Publisher

    The Eurographs Association & John Wiley & Sons, Ltd.

    Chichester, United Kingdom

    Publication History

    Published: 01 September 2015

    Author Tags

    1. I.3.7 [Computer Graphics]: Three-Dimensional Graphics and Realism-Animation
    2. facial animation

    Qualifiers

    • Article

    Contributors

    Other Metrics

    Bibliometrics & Citations

    Bibliometrics

    Article Metrics

    • Downloads (Last 12 months)0
    • Downloads (Last 6 weeks)0
    Reflects downloads up to 24 Nov 2024

    Other Metrics

    Citations

    Cited By

    View all
    • (2024)Navigating Communication Patterns and Personalities in User Preference During Human-Agent InteractionProceedings of the 12th International Conference on Human-Agent Interaction10.1145/3687272.3690913(447-449)Online publication date: 24-Nov-2024
    • (2024)S3: Speech, Script and Scene driven Head and Eye AnimationACM Transactions on Graphics10.1145/365817243:4(1-12)Online publication date: 19-Jul-2024
    • (2024)Reactive Gaze during Locomotion in Natural EnvironmentsProceedings of the ACM SIGGRAPH/Eurographics Symposium on Computer Animation10.1111/cgf.15168(1-12)Online publication date: 21-Aug-2024
    • (2024)Automatic Gaze Analysis: A Survey of Deep Learning Based ApproachesIEEE Transactions on Pattern Analysis and Machine Intelligence10.1109/TPAMI.2023.332133746:1(61-84)Online publication date: 1-Jan-2024
    • (2024)University Students’ Opinions on Using Intelligent Agents to Cope with Stress and Anxiety in Social SituationsComputers in Human Behavior10.1016/j.chb.2023.108072153:COnline publication date: 12-Apr-2024
    • (2023)Emotional Speech-Driven Animation with Content-Emotion DisentanglementSIGGRAPH Asia 2023 Conference Papers10.1145/3610548.3618183(1-13)Online publication date: 10-Dec-2023
    • (2023)Guiding Visual Attention on 2D Screens: Effects of Gaze Cues from Avatars and HumansProceedings of the 2023 ACM Symposium on Spatial User Interaction10.1145/3607822.3614529(1-9)Online publication date: 13-Oct-2023
    • (2023)The Stare-in-the-Crowd Effect When Navigating a Crowd in Virtual RealityACM Symposium on Applied Perception 202310.1145/3605495.3605796(1-10)Online publication date: 5-Aug-2023
    • (2023)Who's next?Proceedings of the 23rd ACM International Conference on Intelligent Virtual Agents10.1145/3570945.3607312(1-8)Online publication date: 19-Sep-2023
    • (2022)Multiparty Interaction Between Humans and Socially Interactive AgentsThe Handbook on Socially Interactive Agents10.1145/3563659.3563665(113-154)Online publication date: 27-Oct-2022
    • Show More Cited By

    View Options

    View options

    Login options

    Media

    Figures

    Other

    Tables

    Share

    Share

    Share this Publication link

    Share on social media