HAI 2016: Singapore
- Wei-Yun Yau, Takashi Omori, Giorgio Metta, Hirotaka Osawa, Shengdong Zhao:
Proceedings of the Fourth International Conference on Human Agent Interaction, HAI 2016, Biopolis, Singapore, October 4-7, 2016. ACM 2016, ISBN 978-1-4503-4508-8
Keynote Lecture I
- David Hsu:
Robots in Harmony with Humans. 1
Main Track Session I: Designing Interactions
- Momoka Nakayama, Shunji Yamanaka:
Perception of Animacy by the Linear Motion of a Group of Robots. 3-9
- Siti Aisyah binti Anas, Shi Qiu, Matthias Rauterberg, Jun Hu:
Exploring Social Interaction with Everyday Object based on Perceptual Crossing. 11-18
- Takamasa Iio, Yuichiro Yoshikawa, Hiroshi Ishiguro:
Pre-scheduled Turn-Taking between Robots to Make Conversation Coherent. 19-25
Main Track Session II: Power of Groups
- Masahiro Shiomi, Norihiro Hagita:
Do Synchronized Multiple Robots Exert Peer Pressure? 27-33
- Nihan Karatas, Soshi Yoshikawa, Michio Okada:
NAMIDA: Sociable Driving Agents with Multiparty Conversation. 35-42
- Viktor Richter, Birte Carlmeyer, Florian Lier, Sebastian Meyer zu Borgsen, David Schlangen, Franz Kummert, Sven Wachsmuth, Britta Wrede:
Are you talking to me?: Improving the Robustness of Dialogue Systems in a Multi Party HRI Scenario by Incorporating Gaze Direction and Lip Movement of Attendees. 43-50
Poster Session I
- Keng Peng Tee, Yuanwei Chua, Zhiyong Huang:
Tracking Human Gestures under Field-of-View Constraints. 51-54
- Shi Qiu, Siti Aisyah Anas, Jun Hu:
Whispering Bubbles: Exploring Anthropomorphism through Shape-Changing Interfaces. 55-58
- Shi Qiu, Siti Aisyah Anas, Hirotaka Osawa, Matthias Rauterberg, Jun Hu:
Model-Driven Gaze Simulation for the Blind Person in Face-to-Face Communication. 59-62
- Zhuoyu Shen, Yan Wu:
Investigation of Practical Use of Humanoid Robots in Elderly Care Centres. 63-66
- Yusuke Kudo, Wataru Kayano, Takuya Sato, Hirotaka Osawa:
User Generated Agent: Designable Book Recommendation Robot Programmed by Children. 67-70
- Takuto Ishioh, Tomoko Koda:
Cross-cultural Study of Perception and Acceptance of Japanese Self-adaptors. 71-74
- Singo Sawa, Hiroaki Kawashima, Kei Shimonishi, Takashi Matsuyama:
Modulating Dynamic Models for Lip Motion Generation. 75-78
- Nur Ellyza Abd Rahman, Azhri Azhar, Kasun Karunanayaka, Adrian David Cheok, Mohammad Abdullah Mohamad Johar, Jade Gross, Andoni Luis Aduriz:
Magnetic Dining Table Interface and Magnetic Foods for new Human Food Interactions. 79-81
- Jia Qi Lim, Nicole Sze Ting Lim, Maya Zheng, Swee Lan See:
A Study on Trust in Pharmacists for Better HAI Design. 83-84
- Masato Fukuda, Hung-Hsuan Huang, Tetsuya Kanno, Naoki Ohta, Kazuhiro Kuwabara:
Development of a Simulated Environment for Recruitment Examination and Training of High School Teachers. 85-88
- Ai Kashii, Kazunori Takashio, Hideyuki Tokuda:
Ex-Amp Robot: Physical Avatar for Enhancing Human to Human Communication. 89-92
- Hidehito Honda, Ryosuke Hisamatsu, Yoshimasa Ohmoto, Kazuhiro Ueda:
Interaction in a Natural Environment: Estimation of Customer's Preference Based on Nonverbal Behaviors. 93-96
- Taisuke Murakami:
Ear Ball for Empathy: To Realize the Sensory Experience of People with Autism Spectrum Disorder. 97-98
- Takafumi Sakamoto, Yugo Takeuchi:
Process of Agency Identification Based on the Desire to Communicate in Embodied Interaction. 99-102
- Junya Nakanishi, Hidenobu Sumioka, Hiroshi Ishiguro:
Can Children Anthropomorphize Human-shaped Communication Media?: A Pilot Study on Co-sleeping with a Huggable Communication Medium. 103-106
- Marco Antonio Gutierrez, Luis Fernando D'Haro, Rafael E. Banchs:
A Multimodal Control Architecture for Autonomous Unmanned Aerial Vehicles. 107-110
Main Track Session III: Modelling Interactions
- Tetsuya Matsui, Seiji Yamada:
Building Trust in PRVAs by User Inner State Transition through Agent State Transition. 111-114
- Mamoru Yamanouchi, Taichi Sono, Michita Imai:
The Use of The BDI Model As Design Principle for A Migratable Agent. 115-122
- Adam S. Miner, Amanda Chow, Sarah Adler, Ilia Zaitsev, Paul Tero, Alison Darcy, Andreas Paepcke:
Conversational Agents and Mental Health: Theory-Informed Assessment of Language and Affect. 123-130
Main Track Session IV: Emotions and Inner States
- Sooyeon Jeong, Cynthia Lynn Breazeal:
Improving Smartphone Users' Affect and Wellbeing with Personalized Positive Psychology Interventions. 131-137
- Naoto Yoshida, Tomoko Yonezawa:
Investigating Breathing Expression of a Stuffed-Toy Robot Based on Body-Emotion Model. 139-144
- Sho Sakurai, Yuki Ban, Toki Katsumura, Takuji Narumi, Tomohiro Tanikawa, Michitaka Hirose:
Sharing Emotion Described as Text on the Internet by Changing Self-physiological Perception. 145-153
Main Track Session V: Extending Body Image
- Masashi Nishiyama, Tsubasa Miyauchi, Hiroki Yoshimura, Yoshio Iwai:
Synthesizing Realistic Image-based Avatars by Body Sway Analysis. 155-162
- Kana Misawa, Jun Rekimoto:
Who am I Touching?: User Study of Remote Handshaking with a Telepresence Face. 163-170
- Yuya Onishi, Kazuaki Tanaka, Hideyuki Nakanishi:
Embodiment of Video-mediated Communication Enhances Social Telepresence. 171-178
Main Track Session VI: Human Characteristics
- Mohammad Obaid, Maha Salem, Micheline Ziadee, Halim Boukaram, Elena Moltchanova, Majd F. Sakr:
Investigating Effects of Professional Status and Ethnicity in Human-Agent Interaction. 179-186
- Takahisa Uchida, Takashi Minato, Hiroshi Ishiguro:
Does a Conversational Robot Need to Have its own Values?: A Study of Dialogue Strategy to Enhance People's Motivation to Use Autonomous Conversational Robots. 187-192
Main Track Session VII: Communication Cues
- Mitsuhiko Kimoto, Takamasa Iio, Masahiro Shiomi, Ivan Tanev, Katsunori Shimohara, Norihiro Hagita:
Alignment Approach Comparison between Implicit and Explicit Suggestions in Object Reference Conversations. 193-200
- Takahiro Hirano, Masahiro Shiomi, Takamasa Iio, Mitsuhiko Kimoto, Takuya Nagashio, Ivan Tanev, Katsunori Shimohara, Norihiro Hagita:
Communication Cues in a Human-Robot Touch Interaction. 201-206
- Simon Schulz, Florian Lier, Andreas Kipp, Sven Wachsmuth:
Humotion: A Human Inspired Gaze Control Framework for Anthropomorphic Robot Heads. 207-214
Keynote Lecture II
- Leila Takayama:
Perceptions of Agency in Human-robot Interactions. 215
Poster Session II
- Tetsushi Oka, Sho Uchino:
Human-Robot Cooperative Conveyance Using Speech and Head Gaze. 217-220
- Birte Carlmeyer, David Schlangen, Britta Wrede:
"Look at Me!": Self-Interruptions as Attention Booster? 221-224
- Sichao Song, Seiji Yamada:
Investigation on Effects of Color, Sound, and Vibration on Human's Emotional Perception. 225-227
- Tomoki Nishide, Kei Shimonishi, Hiroaki Kawashima, Takashi Matsuyama:
Voting-Based Backchannel Timing Prediction Using Audio-Visual Information. 229-232
- Emma Yann Zhang, Adrian David Cheok:
Forming Intimate Human-Robot Relationships Through A Kissing Machine. 233-234
- Ayano Kitamura, Yugo Hayashi:
Effects of Deformed Embodied Agent during Collaborative Interaction Tasks: Investigation on Subjective Feelings and Emotion. 235-237
- Manoj Ramanathan, Wei-Yun Yau, Eam Khwang Teoh:
Human Posture Detection using H-ELM Body Part and Whole Person Detectors for Human-Robot Interaction. 239-242
- Yoshihisa Ishihara, Kazuki Kobayashi, Seiji Yamada:
Behavioral Expression Design onto Manufactured Figures. 243-244
- Masahiro Shiomi, Kasumi Abe, Yachao Pei, Narumitsu Ikeda, Takayuki Nagai:
"I'm Scared": Little Children Reject Robots. 245-247
- Vanessa Lim, Hui Shan Ang, Estelle Lee, Boon Pang Lim:
Towards an Interactive Voice Agent for Singapore Hokkien. 249-252
- Ikkaku Kawaguchi, Yuki Kodama, Hideaki Kuzuoka, Mai Otsuki, Yusuke Suzuki:
Effect of Embodiment Presentation by Humanoid Robot on Social Telepresence. 253-256
- Kenta Yamada, Jun Miura:
Ambiguity-driven Interaction in Robot-to-Human Teaching. 257-260
- Kaito Tsukada, Mihoko Niitsuma:
Impression on Human-Robot Communication Affected by Inconsistency in Expected Robot Perception. 261-262
- Longjiang Zhou, Albertus Hendrawan Adiwahono, Yuanwei Chua, Wei Liang Chan:
Haptic Workspace Control of the Humanoid Robot Arms. 263-266
- Muhammad Attamimi, Masahiro Miyata, Tetsuji Yamada, Takashi Omori, Ryoma Hida:
Attention Estimation for Child-Robot Interaction. 267-271
- Andreea I. Niculescu, Kheng Hui Yeo, Rafael Enrique Banchs:
Designing MUSE: A Multimodal User Experience for a Shopping Mall Kiosk. 273-275
Main Track Session VIII: Interaction Tactics
- Kazunori Terada, Seiji Yamada, Kazuyuki Takahashi:
A Leader-Follower Relation between a Human and an Agent. 277-280
- Andreas Kipp, Franz Kummert:
"I know how you performed!": Fostering Engagement in a Gaming Situation Using Memory of Past Interaction. 281-288
- Mariana Serras Pereira, Jolanda de Lange, Suleman Shahid, Marc Swerts:
Children's Facial Expressions in Truthful and Deceptive Interactions with a Virtual Agent. 289-296
Main Track Session IX: Supporting Work
- Muneeb Imtiaz Ahmad, Omar Mubin, Joanne Orlando:
Understanding Behaviours and Roles for Social and Adaptive Robots In Education: Teacher's Perspective. 297-304
- Tomoko Yonezawa, Kunihiko Fujiwara, Naoto Yoshida:
Evaluation of Schedule Managing Agent among Multiple Members with Representation of Background Negotiations. 305-312
- Wilson Kien Ho Ko, Yan Wu, Keng Peng Tee:
LAP: A Human-in-the-loop Adaptation Approach for Industrial Robots. 313-319
Poster Session III
- Mako Okanda, Yue Zhou, Takayuki Kanda, Hiroshi Ishiguro, Shoji Itakura:
Response Tendencies of Four-Year-Old Children to Communicative and Non-Communicative Robots. 321-324
- Nur Amira Samshir, Nurafiqah Johari, Kasun Karunanayaka, Adrian David Cheok:
Thermal Sweet Taste Machine for Multisensory Internet. 325-328
- Eunice Njeri Mwangi, Emilia I. Barakova, Ruixin Zhang, Marta Díaz, Andreu Català, Matthias Rauterberg:
See Where I am Looking at: Perceiving Gaze Cues With a NAO Robot. 329-332
- Yuya Nakanishi, Yasuhiko Kitamura:
Promoting Physical Activities by Massive Competition in Virtual Marathon. 333-336
- Yutaka Ishii, Tomio Watanabe, Yoshihiro Sejima:
Development of an Embodied Avatar System using Avatar-Shadow's Color Expressions with an Interaction-activated Communication Model. 337-340
- Junya Morita, Takatsugu Hirayama, Kenji Mase, Kazunori Yamada:
Model-based Reminiscence: Guiding Mental Time Travel by Cognitive Modeling. 341-344
- Siti Aisyah binti Anas, Shi Qiu, Matthias Rauterberg, Jun Hu:
Exploring Gaze in Interacting with Everyday Objects with an Interactive Cup. 345-348
- Masahiro Shiomi, Kasumi Abe, Yachao Pei, Tingyi Zhang, Narumitsu Ikeda, Takayuki Nagai:
ChiCaRo: Tele-presence Robot for Interacting with Babies and Toddlers. 349-351
- Masahiro Kitagawa, Benjamin Luke Evans, Nagisa Munekata, Tetsuo Ono:
Mutual Adaptation between a Human and a Robot Based on Timing Control of "Sleep-time". 353-354
- Longjiang Zhou, Keng Peng Tee, Zhiyong Huang:
Simulation of a Tele-operated Task under Human-Robot Shared Control. 355-358
- Takeomi Goto, Hirotaka Osawa:
Evaluation of a Substitution Device for Emotional Labor by using Task-Processing Time and Cognitive Load. 359-362
- Lue Lin, Luis Fernando D'Haro, Rafael E. Banchs:
A Web-based Platform for Collection of Human-Chatbot Interactions. 363-366
- Yumiko Shinohara, Katsuhiro Kubo, Momoyo Nozawa, Misa Yoshizaki, Tomomi Takahashi, Hirofumi Hayakawa, Atsushi Hirota, Yukiko Nishizaki, Natsuki Oka:
The Optimum Rate of Mimicry in Human-Agent Interaction. 367-370
- Sin-Hwa Kang, Andrew W. Feng, Mike Seymour, Ari Shapiro:
Smart Mobile Virtual Characters: Video Characters vs. Animated Characters. 371-374
Main Track Session X: Agents for Real-world
- Yoshimasa Ohmoto, Takashi Suyama, Toyoaki Nishida:
A Method to Alternate the Estimation of Global Purposes and Local Objectives to Induce and Maintain the Intentional Stance. 379-385
- Evelyn Florentine, Mark Adam Ang, Scott Drew Pendleton, Hans Andersen, Marcelo H. Ang Jr.:
Pedestrian Notification Methods in Autonomous Vehicles for Multi-Class Mobility-on-Demand Service. 387-392
- Kenji Koide, Jun Miura:
Estimating Person's Awareness of an Obstacle using HCRF for an Attendant Robot. 393-397