ICMI 2019: Suzhou, China
- Wen Gao, Helen Mei-Ling Meng, Matthew A. Turk, Susan R. Fussell, Björn W. Schuller, Yale Song, Kai Yu:
International Conference on Multimodal Interaction, ICMI 2019, Suzhou, China, October 14-18, 2019. ACM 2019, ISBN 978-1-4503-6860-5
Keynote & Invited Talks
- Hsiao-Wuen Hon:
A Brief History of Intelligence. 1
- Zhengyou Zhang:
Challenges of Multimodal Interaction in the Era of Human-Robot Coexistence. 2
- Alexander Waibel:
Connecting Humans with Humans: Multimodal, Multilingual, Multiparty Mediation. 3-4
- Elisabeth André:
Socially-Aware User Interfaces: Can Genuine Sensitivity Be Learnt at all? 5
Session 1: Human Behavior
- Ognjen Rudovic, Meiru Zhang, Björn W. Schuller, Rosalind W. Picard:
Multi-modal Active Learning From Human Data: A Deep Reinforcement Learning Approach. 6-15
- Gian-Luca Savino, Niklas Emanuel, Steven Kowalzik, Felix Kroll, Marvin C. Lange, Matthis Laudan, Rieke Leder, Zhanhua Liang, Dayana Markhabayeva, Martin Schmeißer, Nicolai Schütz, Carolin Stellmacher, Zihe Xu, Kerstin Bub, Thorsten Kluss, Jaime Leonardo Maldonado Cañón, Ernst Kruijff, Johannes Schöning:
Comparing Pedestrian Navigation Methods in Virtual Reality and Real Life. 16-25
- Metehan Doyran, Batikan Türkmen, Eda Aydin Oktay, Sibel Halfon, Albert Ali Salah:
Video and Text-Based Affect Analysis of Children in Play Therapy. 26-34
- Byung Cheol Song, Min Kyu Lee, Dong-Yoon Choi:
Facial Expression Recognition via Relation-based Conditional Generative Adversarial Network. 35-39
- Suowei Wu, Zhengyin Du, Weixin Li, Di Huang, Yunhong Wang:
Continuous Emotion Recognition in Videos by Fusing Facial Expression, Head Pose and Eye Gaze. 40-48
- Md Abdullah Al Fahim, Mohammad Maifi Hasan Khan, Theodore Jensen, Yusuf Albayram, Emil Coman, Ross Buck:
Effect of Feedback on Users' Immediate Emotions: Analysis of Facial Expressions during a Simulated Target Detection Task. 49-58
Session 2: Artificial Agents
- Mohammad Soleymani, Kalin Stefanov, Sin-Hwa Kang, Jan Ondras, Jonathan Gratch:
Multimodal Analysis and Estimation of Intimate Self-Disclosure. 59-68
- Deepali Aneja, Daniel McDuff, Shital Shah:
A High-Fidelity Open Embodied Avatar with Lip Syncing and Expression Capabilities. 69-73
- Chaitanya Ahuja, Shugao Ma, Louis-Philippe Morency, Yaser Sheikh:
To React or not to React: End-to-End Visual Pose Forecasting for Personalized Avatar during Dyadic Conversations. 74-84
- Yuki Hirano, Shogo Okada, Haruto Nishimoto, Kazunori Komatani:
Multitask Prediction of Exchange-level Annotations for Multimodal Dialogue Systems. 85-94
- Leili Tavabi, Kalin Stefanov, Setareh Nasihati Gilani, David R. Traum, Mohammad Soleymani:
Multimodal Learning for Identifying Opportunities for Empathetic Responses. 95-104
Session 3: Touch and Gesture
- Abishek Sriramulu, Jionghao Lin, Sharon L. Oviatt:
Dynamic Adaptive Gesturing Predicts Domain Expertise in Mathematics. 105-113
- Zhuoming Zhang, Robin Héron, Eric Lecolinet, Françoise Détienne, Stéphane Safin:
VisualTouch: Enhancing Affective Touch Communication with Multi-modality Stimulation. 114-123
- Jongho Lim, Yongjae Yoo, Hanseul Cho, Seungmoon Choi:
TouchPhoto: Enabling Independent Picture Taking and Understanding for Visually-Impaired Users. 124-134
- Ilhan Aslan, Katharina Weitz, Ruben Schlagowski, Simon Flutura, Susana Garcia Valesco, Marius Pfeil, Elisabeth André:
Creativity Support and Multimodal Pen-based Interaction. 135-144
- Hao Jiang:
Motion Eavesdropper: Smartwatch-based Handwriting Recognition Using Deep Learning. 145-153
Session 4: Physiological Modeling
- Tobias Appel, Natalia Sevcenko, Franz Wortha, Katerina Tsarava, Korbinian Moeller, Manuel Ninaus, Enkelejda Kasneci, Peter Gerjets:
Predicting Cognitive Load in an Emergency Simulation Based on Behavioral and Physiological Measures. 154-163
- Yuning Qiu, Teruhisa Misu, Carlos Busso:
Driving Anomaly Detection with Conditional Generative Adversarial Network using Physiological and CAN-Bus Data. 164-173
- Mimansa Jaiswal, Zakaria Aldeneh, Emily Mower Provost:
Controlling for Confounders in Multimodal Emotion Classification via Adversarial Learning. 174-184
- Yi Ding, Brandon Huynh, Aiwen Xu, Tom Bullock, Hubert Cecotti, Matthew A. Turk, Barry Giesbrecht, Tobias Höllerer:
Multimodal Classification of EEG During Physical Activity. 185-194
Session 5: Sound and Interaction
- Erik Wolf, Sara Klüber, Chris Zimmerer, Jean-Luc Lugrin, Marc Erich Latoschik:
"Paint that object yellow": Multimodal Interaction to Enhance Creativity During Design Tasks in VR. 195-204 - Najla Al Futaisi, Zixing Zhang, Alejandrina Cristià, Anne S. Warlaumont, Björn W. Schuller:
VCMNet: Weakly Supervised Learning for Automatic Infant Vocalisation Maturity Analysis. 205-209 - Nicole Andelic, Aidan Feeney, Gary McKeown:
Evidence for Communicative Compensation in Debt Advice with Reduced Multimodality. 210-219 - Ahmed Hussen Abdelaziz, Barry-John Theobald, Justin Binder, Gabriele Fanelli, Paul Dixon, Nicholas Apostoloff, Thibaut Weise, Sachin Kajareker:
Speaker-Independent Speech-Driven Visual Speech Synthesis using Domain-Adapted Acoustic Models. 220-225 - Divesh Lala, Koji Inoue, Tatsuya Kawahara:
Smooth Turn-taking by a Robot Using an Online Continuous Model to Generate Turn-taking Cues. 226-234 - Rui Hou, Verónica Pérez-Rosas, Stacy L. Loeb, Rada Mihalcea:
Towards Automatic Detection of Misinformation in Online Medical Videos. 235-243
Session 6: Multiparty Interaction
- Lucca Eloy, Angela E. B. Stewart, Mary Jean Amon, Caroline Reinhardt, Amanda Michaels, Chen Sun, Valerie Shute, Nicholas D. Duran, Sidney D'Mello:
Modeling Team-level Multimodal Dynamics during Multiparty Collaboration. 244-258
- Kevin El Haddad, Sandeep Nallan Chakravarthula, James Kennedy:
Smile and Laugh Dynamics in Naturalistic Dyadic Interactions: Intensity Levels, Sequences and Roles. 259-263
- Go Miura, Shogo Okada:
Task-independent Multimodal Prediction of Group Performance Based on Product Dimensions. 264-273
- Philipp Matthias Müller, Andreas Bulling:
Emergent Leadership Detection Across Datasets. 274-278
- Ameneh Shamekhi, Timothy W. Bickmore:
A Multimodal Robot-Driven Meeting Facilitation System for Group Decision-Making Sessions. 279-290
Poster Session
- Stephanie Arevalo, Stanislaw Miller, Martha Janka, Jens Gerken:
What's behind a choice? Understanding Modality Choices under Changing Environmental Conditions. 291-301
- Yulan Chen, Jia Jia, Zhiyong Wu:
Modeling Emotion Influence Using Attention-based Graph Convolutional Recurrent Network. 302-309
- Maite Frutos Pascual, Jake Michael Harrison, Chris Creed, Ian Williams:
Evaluation of Ultrasound Haptics as a Supplementary Feedback Cue for Grasping in Virtual Environments. 310-318
- Yosra Rekik, Walid Merrad, Christophe Kolski:
Understanding the Attention Demand of Touch and Tangible Interaction on a Composite Task. 319-328
- Chandan Kumar, Daniyal Akbari, Raphael Menges, I. Scott MacKenzie, Steffen Staab:
TouchGazePath: Multimodal Interaction with Touch and Gaze Path for Secure Yet Efficient PIN Entry. 329-338
- Mira Sarkis, Céline Coutrix, Laurence Nigay, Andrzej Duda:
WiBend: Wi-Fi for Sensing Passive Deformable Surfaces. 339-348
- Kaixin Ma, Xinyu Wang, Xinru Yang, Mingtong Zhang, Jeffrey M. Girard, Louis-Philippe Morency:
ElderReact: A Multimodal Dataset for Recognizing Emotional Response in Aging Adults. 349-357
- Jiaming Huang, Chen Min, Liping Jing:
Unsupervised Deep Fusion Cross-modal Hashing. 358-366
- Vineet Mehta, Sai Srinadhu Katta, Devendra Pratap Yadav, Abhinav Dhall:
DIF : Dataset of Perceived Intoxicated Faces for Drunk Person Identification. 367-374
- Soumia Dermouche, Catherine Pelachaud:
Generative Model of Agent's Behaviors in Human-Agent Interaction. 375-384
- Lingyu Zhang, Mallory Morgan, Indrani Bhattacharya, Michael Foley, Jonas Braasch, Christoph Riedl, Brooke Foucault Welles, Richard J. Radke:
Improved Visual Focus of Attention Estimation and Prosodic Features for Analyzing Group Interactions. 385-394
- Youfang Leng, Li Yu, Jie Xiong:
DeepReviewer: Collaborative Grammar and Innovation Neural Network for Automatic Paper Review. 395-403
- Tianyi Zhang, Abdallah El Ali, Chen Wang, Xintong Zhu, Pablo César:
CorrFeat: Correlation-based Feature Extraction Algorithm using Skin Conductance and Pupil Diameter for Emotion Recognition. 404-408
- Ankit Parag Shah, Vasu Sharma, Vaibhav Vaibhav, Mahmoud Alismail, Louis-Philippe Morency:
Multimodal Behavioral Markers Exploring Suicidal Intent in Social Media Videos. 409-413
- Dimosthenis Kontogiorgos, André Pereira, Joakim Gustafson:
Estimating Uncertainty in Task-Oriented Dialogue. 414-418
- Fumio Nihei, Yukiko I. Nakano, Ryuichiro Higashinaka, Ryo Ishii:
Determining Iconic Gesture Forms based on Entity Image Representation. 419-425
- Sixia Li, Shogo Okada, Jianwu Dang:
Interaction Process Label Recognition in Group Discussion. 426-434
- Qingqing Li, Theodora Chaspari:
Exploring Transfer Learning between Scripted and Spontaneous Speech for Emotion Recognition. 435-439
- Soumia Dermouche, Catherine Pelachaud:
Engagement Modeling in Dyadic Interaction. 440-445
Doctoral Consortium
- Hashini Senaratne:
Detecting Temporal Phases of Anxiety in The Wild: Toward Continuously Adaptive Self-Regulation Technologies. 446-452
- Leili Tavabi:
Multimodal Machine Learning for Interactive Mental Health Therapy. 453-456
- Aishat Aloba:
Tailoring Motion Recognition Systems to Children's Motions. 457-462
- Tianyi Zhang:
Multi-modal Fusion Methods for Robust Emotion Recognition using Body-worn Physiological Sensors in Mobile Environments. 463-467
- Michel-Pierre Jansen:
Communicative Signals and Social Contextual Factors in Multimodal Affect Recognition. 468-472
- Sambit Praharaj:
Co-located Collaboration Analytics. 473-476
- Chaitanya Ahuja:
Coalescing Narrative and Dialogue for Grounded Pose Forecasting. 477-481
- Lisa-Marie Vortmann:
Attention-driven Interaction Systems for Augmented Reality. 482-486
- Abdul Rafey Aftab:
Multimodal Driver Interaction with Gesture, Gaze and Speech. 487-492
Demo and Exhibit Session
- Zi Fong Yong, Ai Ling Ng, Yuta Nakayama:
The Dyslexperience: Use of Projection Mapping to Simulate Dyslexia. 493-495
- Yuyun Hua, Sixian Zhang, Xinhang Song, Jia'ning Li, Shuqiang Jiang:
A Real-Time Scene Recognition System Based on RGB-D Video Streams. 496-498
- Jin-hwan Oh, Sudhakar Sah, Jihoon Kim, Yoori Kim, Jeonghwa Lee, Wooseung Lee, Myeongsoo Shin, Jaeyon Hwang, Seongwon Kim:
Hang Out with the Language Assistant. 499-500
- Fahim A. Salim, Fasih Haider, Sena Busra Yengec Tasdemir, Vahid Naghashi, Izem Tengiz, Kubra Cengiz, Dees B. W. Postma, Robby van Delden, Dennis Reidsma, Saturnino Luz, Bert-Jan van Beijnum:
A Searching and Automatic Video Tagging Tool for Events of Interest during Volleyball Training Sessions. 501-503
- Abdenaceur Abdouni, Rory Clark, Orestis Georgiou:
Seeing Is Believing but Feeling Is the Truth: Visualising Mid-Air Haptics in Oil Baths and Lightboxes. 504-505
- Khalil J. Anderson, Theodore Dubiel, Kenji Tanaka, Marcelo Worsley, Cody Poultney, Steve Brenneman:
Chemistry Pods: A Multimodal Real Time and Retrospective Tool for the Classroom. 506-507
- Aaron E. Rodriguez, Adriana I. Camacho, Laura J. Hinojos, Mahdokht Afravi, David G. Novick:
A Proxemics Measurement Tool Integrated into VAIF and Unity. 508-509
Challenge 1: The 1st Chinese Audio-Textual Spoken Language Understanding Challenge
- Xu Wang, Chengda Tang, Xiaotian Zhao, Xuancai Li, Zhuolin Jin, Dequan Zheng, Tiejun Zhao:
Transfer Learning Methods for Spoken Language Understanding. 510-515
- Heyan Huang, Xianling Mao, Puhai Yang:
Streamlined Decoder for Chinese Spoken Language Understanding. 516-520
- Su Zhu, Zijian Zhao, Tiejun Zhao, Chengqing Zong, Kai Yu:
CATSLU: The 1st Chinese Audio-Textual Spoken Language Understanding Challenge. 521-525
- Chaohong Tan, Zhenhua Ling:
Multi-Classification Model for Spoken Language Understanding. 526-530
- Hao Li, Chen Liu, Su Zhu, Kai Yu:
Robust Spoken Language Understanding with Acoustic and Domain Knowledge. 531-535
Challenge 2: The 1st Mandarin Audio-Visual Speech Recognition Challenge (MAVSR)
- Yue Yao, Tianyu Wang, Heming Du, Liang Zheng, Tom Gedeon:
Spotting Visual Keywords from Temporal Sliding Windows. 536-539
- Yougen Yuan, Wei Tang, Minhao Fan, Yue Cao, Peng Zhang, Lei Xie:
Deep Audio-visual System for Closed-set Word-level Speech Recognition. 540-545
Challenge 3: Seventh Emotion Recognition in the Wild Challenge (EmotiW)
- Abhinav Dhall:
EmotiW 2019: Automatic Emotion, Engagement and Cohesion Prediction Tasks. 546-550
- Kai Wang, Jianfei Yang, Da Guo, Kaipeng Zhang, Xiaojiang Peng, Yu Qiao:
Bootstrap Model Ensemble and Rank Loss for Engagement Intensity Regression. 551-556
- Da Guo, Kai Wang, Jianfei Yang, Kaipeng Zhang, Xiaojiang Peng, Yu Qiao:
Exploring Regularizations with Face, Body and Image Cues for Group Cohesion Prediction. 557-561
- Hengshun Zhou, Debin Meng, Yuanyuan Zhang, Xiaojiang Peng, Jun Du, Kai Wang, Yu Qiao:
Exploring Emotion Features and Fusion Strategies for Audio-Video Emotion Recognition. 562-566
- Van Thong Huynh, Soo-Hyung Kim, Gueesang Lee, Hyung-Jeong Yang:
Engagement Intensity Prediction with Facial Behavior Features. 567-571
- Dang Xuan Tien, Soo-Hyung Kim, Hyung-Jeong Yang, Gueesang Lee, Thanh-Hung Vo:
Group-level Cohesion Prediction using Deep Learning Models with A Multi-stream Hybrid Network. 572-576
- Bin Zhu, Xin Guo, Kenneth E. Barner, Charles Boncelet:
Automatic Group Cohesiveness Detection With Multi-modal Features. 577-581
- Jianming Wu, Zhiguang Zhou, Yanan Wang, Yi Li, Xin Xu, Yusuke Uchida:
Multi-feature and Multi-instance Learning with Anti-overfitting Strategy for Engagement Intensity Prediction. 582-588
- Sunan Li, Wenming Zheng, Yuan Zong, Cheng Lu, Chuangao Tang, Xingxun Jiang, Jiateng Liu, Wanchuang Xia:
Bi-modality Fusion for Emotion Recognition in the Wild. 589-594
- Yanan Wang, Jianming Wu, Keiichiro Hoashi:
Multi-Attention Fusion Network for Video-based Emotion Recognition. 595-601