17th ICMI 2015: Seattle, WA, USA
- Zhengyou Zhang, Phil Cohen, Dan Bohus, Radu Horaud, Helen Meng:
Proceedings of the 2015 ACM on International Conference on Multimodal Interaction, Seattle, WA, USA, November 09 - 13, 2015. ACM 2015, ISBN 978-1-4503-3912-4
Keynote Address 1
- Samy Bengio:
Sharing Representations for Long Tail Computer Vision Problems. 1
Keynote Address 2
- Kerstin Dautenhahn:
Interaction Studies with Social Robots. 3
Keynote Address 3 (Sustained Accomplishment Award Talk)
- Eric Horvitz:
Connections: 2015 ICMI Sustained Accomplishment Award Lecture. 5
Oral Session 1: Machine Learning in Multimodal Systems
- Moitreya Chatterjee, Sunghyun Park, Louis-Philippe Morency, Stefan Scherer:
Combining Two Perspectives on Classifying Multimodal Data for Recognizing Speaker Traits. 7-14
- Shogo Okada, Oya Aran, Daniel Gatica-Perez:
Personality Trait Classification via Co-Occurrent Multiparty Multimodal Event Discovery. 15-22
- Vikram Ramanarayanan, Chee Wee Leong, Lei Chen, Gary Feng, David Suendermann-Oeft:
Evaluating Speech, Face, Emotion and Body Movement Time-series Features for Automated Multimodal Presentation Scoring. 23-30
- Tanaya Guha, Che-Wei Huang, Naveen Kumar, Yan Zhu, Shrikanth S. Narayanan:
Gender Representation in Cinematic Content: A Multimodal Approach. 31-34
Oral Session 2: Audio-Visual, Multimodal Inference
- Keith Curtis, Gareth J. F. Jones, Nick Campbell:
Effects of Good Speaking Techniques on Audience Engagement. 35-42
- Torsten Wörtwein, Mathieu Chollet, Boris Schauerte, Louis-Philippe Morency, Rainer Stiefelhagen, Stefan Scherer:
Multimodal Public Speaking Performance Assessment. 43-50
- Laurent Son Nguyen, Daniel Gatica-Perez:
I Would Hire You in a Minute: Thin Slices of Nonverbal Behavior in Job Interviews. 51-58
- Verónica Pérez-Rosas, Mohamed Abouelenien, Rada Mihalcea, Mihai Burzo:
Deception Detection using Real-life Trial Data. 59-66
Oral Session 3: Language, Speech and Dialog
- Gabriel Skantze, Martin Johansson, Jonas Beskow:
Exploring Turn-taking Cues in Multi-party Human-robot Discussions about Objects. 67-74
- Teruhisa Misu:
Visual Saliency and Crowdsourcing-based Priors for an In-car Situated Dialog System. 75-82
- Yun-Nung Chen, Ming Sun, Alexander I. Rudnicky, Anatole Gershman:
Leveraging Behavioral Patterns of Mobile Applications for Personalized Spoken Language Understanding. 83-86
- Punarjay Chakravarty, Sayeh Mirzaei, Tinne Tuytelaars, Hugo Van hamme:
Who's Speaking?: Audio-Supervised Classification of Active Speakers in Video. 87-90
Oral Session 4: Communication Dynamics
- Yukiko I. Nakano, Sakiko Nihonyanagi, Yutaka Takase, Yuki Hayashi, Shogo Okada:
Predicting Participation Styles using Co-occurrence Patterns of Nonverbal Behaviors in Collaborative Learning. 91-98
- Ryo Ishii, Shiro Kumano, Kazuhiro Otsuka:
Multimodal Fusion using Respiration and Gaze for Predicting Next Speaker in Multi-Party Meetings. 99-106
- Catharine Oertel, Kenneth Alberto Funes Mora, Joakim Gustafson, Jean-Marc Odobez:
Deciphering the Silent Participant: On the Use of Audio-Visual Cues for the Classification of Listener Categories in Group Discussions. 107-114
- Najmeh Sadoughi, Carlos Busso:
Retrieving Target Gestures Toward Speech Driven Animation with Meaningful Behaviors. 115-122
Oral Session 5: Interaction Techniques
- Konstantin Klamka, Andreas Siegel, Stefan Vogt, Fabian Göbel, Sophie Stellmach, Raimund Dachselt:
Look & Pedal: Hands-free Navigation in Zoomable Information Spaces through Gaze-supported Foot Input. 123-130 - Ishan Chatterjee, Robert Xiao, Chris Harrison:
Gaze+Gesture: Expressive, Precise and Targeted Free-Space Interactions. 131-138 - Nimesha Ranasinghe, Gajan Suthokumar, Kuan-Yi Lee, Ellen Yi-Luen Do:
Digital Flavor: Towards Digitally Simulating Virtual Flavors. 139-146 - Xi Laura Cang, Paul Bucci, Andrew Strang, Jeff Allen, Karon E. MacLean, H. Y. Sean Liu:
Different Strokes and Different Folks: Economical Dynamic Surface Sensing and Affect-Related Touch Recognition. 147-154
Oral Session 6: Mobile and Wearable
- Yu-Hao Wu, Jia Jia, Wai-Kim Leung, Yejun Liu, Lianhong Cai:
MPHA: A Personal Hearing Doctor Based on Mobile Devices. 155-162
- Xiang Xiao, Jingtao Wang:
Towards Attentive, Bi-directional MOOC Learning on Mobile Devices. 163-170
- Yeseul Park, Kyle Koh, Heonjin Park, Jinwook Seo:
An Experiment on the Feasibility of Spatial Acquisition using a Moving Auditory Cue for Pedestrian Navigation. 171-174
- Antti Jylhä, Yi-Ta Hsieh, Valeria Orso, Salvatore Andolina, Luciano Gamberini, Giulio Jacucci:
A Wearable Multimodal Interface for Exploring Urban Points of Interest. 175-182
Poster Session
- Fred Charles, Florian Pecune, Gabor Aranyi, Catherine Pelachaud, Marc Cavazza:
ECA Control using a Single Affective User Dimension. 183-190
- Sébastien Pelurson, Laurence Nigay:
Multimodal Interaction with a Bifocal View on Mobile Devices. 191-198
- Kim Hartmann, Julia Krüger, Jörg Frommer, Andreas Wendemuth:
NaLMC: A Database on Non-acted and Acted Emotional Sequences in HCI. 199-202
- Behjat Siddiquie, Dave Chisholm, Ajay Divakaran:
Exploiting Multimodal Affect and Semantics to Identify Politically Persuasive Web Videos. 203-210
- Samer Al Moubayed, Jill Lehman:
Toward Better Understanding of Engagement in Multiparty Spoken Interaction with Children. 211-218
- Yina Ye, Petteri Nurmi:
Gestimator: Shape and Stroke Similarity Based Gesture Recognition. 219-226
- Sarah Strohkorb, Iolanda Leite, Natalie Warren, Brian Scassellati:
Classification of Children's Social Dominance in Group Interactions with Robots. 227-234
- Michal Muszynski, Theodoros Kostoulas, Guillaume Chanel, Patrizia Lombardo, Thierry Pun:
Spectators' Synchronization Detection based on Manifold Representation of Physiological Signals: Application to Movie Highlights Detection. 235-238
- Julia Wache, Ramanathan Subramanian, Mojtaba Khomami Abadi, Radu-Laurentiu Vieriu, Nicu Sebe, Stefan Winkler:
Implicit User-centric Personality Recognition Based on Physiological Responses to Emotional Videos. 239-246
- Abdelkareem Bedri, Apoorva Verlekar, Edison Thomaz, Valerie Avva, Thad Starner:
Detecting Mastication: A Wearable Approach. 247-250
- Marcelo Worsley, Stefan Scherer, Louis-Philippe Morency, Paulo Blikstein:
Exploring Behavior Representation for Learning Analytics. 251-258
- Alina Roitberg, Nikhil Somani, Alexander Clifford Perzylo, Markus Rickert, Alois C. Knoll:
Multimodal Human Activity Recognition for Industrial Manufacturing Processes in Robotic Workcells. 259-266
- Nigel Bosch, Huili Chen, Sidney K. D'Mello, Ryan Shaun Baker, Valerie J. Shute:
Accuracy vs. Availability Heuristic in Multimodal Affect Detection in the Wild. 267-274
- Yue Zhang, Eduardo Coutinho, Zixing Zhang, Caijiao Quan, Björn W. Schuller:
Dynamic Active Learning Based on Agreement and Applied to Emotion Recognition in Spoken Interactions. 275-278
- Ilhan Aslan, Thomas Meneweger, Verena Fuchsberger, Manfred Tscheligi:
Sharing Touch Interfaces: Proximity-Sensitive Touch Targets for Tablet-Mediated Collaboration. 279-286
- Fahim A. Salim, Fasih Haider, Owen Conlan, Saturnino Luz, Nick Campbell:
Analyzing Multimodality of Video for User Engagement Assessment. 287-290
- Asif Iqbal, Carlos Busso, Nicholas R. Gans:
Adjacent Vehicle Collision Warning System using Image Sensor and Inertial Measurement Unit. 291-298
- Robert Bixler, Nathaniel Blanchard, Luke Garrison, Sidney K. D'Mello:
Automatic Detection of Mind Wandering During Reading Using Gaze and Physiology. 299-306
- Hamdi Dibeklioglu, Zakia Hammal, Ying Yang, Jeffrey F. Cohn:
Multimodal Detection of Depression in Clinical Interviews. 307-310
- Sharon L. Oviatt, Kevin Hang, Jianlong Zhou, Fang Chen:
Spoken Interruptions Signal Productive Problem Solving and Domain Expertise in Mathematics. 311-318
- Anton Treskunov, Mike Darnell, Rongrong Wang:
Active Haptic Feedback for Touch Enabled TV Remote. 319-322
- Pierrick Bruneau, Mickaël Stefas, Hervé Bredin, Johann Poignant, Thomas Tamisier, Claude Barras:
A Visual Analytics Approach to Finding Factors Improving Automatic Speaker Identifications. 323-326
- Nina Rosa, Wolfgang Hürst, Wouter Vos, Peter J. Werkhoven:
The Influence of Visual Cues on Passive Tactile Sensations in a Multimodal Immersive Virtual Environment. 327-334
- Sergey Demyanov, James Bailey, Kotagiri Ramamohanarao, Christopher Leckie:
Detection of Deception in the Mafia Party Game. 335-342
- Reina Ueda, Tetsuya Takiguchi, Yasuo Ariki:
Individuality-Preserving Voice Reconstruction for Articulation Disorders Using Text-to-Speech Synthesis. 343-346
- Lucile Bechade, Guillaume Dubuisson Duplessis, Mohamed El Amine Sehili, Laurence Devillers:
Behavioral and Emotional Spoken Cues Related to Mental States in Human-Robot Social Interaction. 347-350
- Sven Bambach, David J. Crandall, Chen Yu:
Viewpoint Integration for Hand-Based Recognition of Social Interactions from a First-Person View. 351-354
- Iwan de Kok, Julian Hough, Felix Hülsmann, Mario Botsch, David Schlangen, Stefan Kopp:
A Multimodal System for Real-Time Action Instruction in Motor Skill Learning. 355-362
Demonstrations
- André D. Milota:
The Application of Word Processor UI paradigms to Audio and Animation Editing. 363-364
- Laura Cang, Paul Bucci, Karon E. MacLean:
CuddleBits: Friendly, Low-cost Furballs that Respond to Touch. 365-366
- Mathieu Chollet, Kalin Stefanov, Helmut Prendinger, Stefan Scherer:
Public Speaking Training with a Multimodal Interactive Virtual Audience Framework. 367-368
- Fiona Dermody, Alistair Sutherland:
A Multimodal System for Public Speaking with Real Time Feedback. 369-370
- Maryam Saberi, Ulysses Bernardet, Steve DiPaola:
Model of Personality-Based, Nonverbal Behavior in Affective Virtual Humanoid Character. 371-372
- Xiang Xiao, Phuong Pham, Jingtao Wang:
AttentiveLearner: Adaptive Mobile MOOC Learning via Implicit Cognitive States Inference. 373-374
- Torsten Wörtwein, Boris Schauerte, Karin E. Müller, Rainer Stiefelhagen:
Interactive Web-based Image Sonification for the Blind. 375-376
- Christian J. A. M. Willemse, Gerald M. Munters, Jan B. F. van Erp, Dirk Heylen:
Nakama: A Companion for Non-verbal Affective Communication. 377-378
- Sven Schmeier, Aaron Ruß, Norbert Reithinger:
Wir im Kiez: Multimodal App for Mutual Help Among Elderly Neighbours. 379-380
- Ethan Selfridge, Michael Johnston:
Interact: Tightly-coupling Multimodal Dialog with an Interactive Virtual Assistant. 381-382
- David G. Novick, Iván Gris Sepulveda, Diego A. Rivera, Adriana I. Camacho, Alex Rayon, Mario Gutiérrez:
The UTEP AGENT System. 383-384
- Fabien Badeig, Quentin Pelorson, Soraya Arias, Vincent Drouard, Israel D. Gebru, Xiaofei Li, Georgios D. Evangelidis, Radu Horaud:
A Distributed Architecture for Interacting with NAO. 385-386
Grand Challenge 1: Recognition of Social Touch Gestures Challenge 2015
- Merel M. Jung, Xi Laura Cang, Mannes Poel, Karon E. MacLean:
Touch Challenge '15: Recognizing Social Touch Gestures. 387-390
- Viet-Cuong Ta, Wafa Johal, Maxime Portaz, Eric Castelli, Dominique Vaufreydaz:
The Grenoble System for the Social Touch Challenge at ICMI 2015. 391-398
- Yona Falinie A. Gaus, Temitayo A. Olugbade, Asim Jan, Rui Qin, Jingxin Liu, Fan Zhang, Hongying Meng, Nadia Bianchi-Berthouze:
Social Touch Gesture Recognition using Random Forest and Boosting on Distinct Feature Sets. 399-406
- Tugce Balli Altuglu, Kerem Altun:
Recognizing Touch Gestures for Social Human-Robot Interaction. 407-413
- Dana Hughes, Nicholas Farrow, Halley Profita, Nikolaus Correll:
Detecting and Identifying Tactile Gestures using Deep Autoencoders, Geometric Moments and Gesture Level Features. 415-422
Grand Challenge 2: Emotion Recognition in the Wild Challenge 2015
- Abhinav Dhall, O. V. Ramana Murthy, Roland Goecke, Jyoti Joshi, Tom Gedeon:
Video and Image based Emotion Recognition Challenges in the Wild: EmotiW 2015. 423-426
- Bo-Kyeong Kim, Hwaran Lee, Jihyeon Roh, Soo-Young Lee:
Hierarchical Committee of Deep CNNs with Exponentially-Weighted Decision Fusion for Static Facial Expression Recognition. 427-434
- Zhiding Yu, Cha Zhang:
Image based Static Facial Expression Recognition with Multiple Deep Network Learning. 435-442
- Hongwei Ng, Viet Dung Nguyen, Vassilios Vonikakis, Stefan Winkler:
Deep Learning for Emotion Recognition on Small Datasets using Transfer Learning. 443-449
- Anbang Yao, Junchao Shao, Ningning Ma, Yurong Chen:
Capturing AU-Aware Facial Features and Their Latent Relations for Emotion Recognition in the Wild. 451-458
- Heysem Kaya, Furkan Gürpinar, Sadaf Afshar, Albert Ali Salah:
Contrasting and Combining Least Squares Based Learners for Emotion Recognition in the Wild. 459-466
- Samira Ebrahimi Kahou, Vincent Michalski, Kishore Reddy Konda, Roland Memisevic, Christopher Joseph Pal:
Recurrent Neural Networks for Emotion Recognition in Video. 467-474
- Jianlong Wu, Zhouchen Lin, Hongbin Zha:
Multiple Models Fusion for Emotion Recognition in the Wild. 475-481
- Wei Li, Farnaz Abtahi, Zhigang Zhu:
A Deep Feature based Multi-kernel Learning Approach for Video Emotion Recognition. 483-490
- Yuan Zong, Wenming Zheng, Xiaohua Huang, Jingwei Yan, Tong Zhang:
Transductive Transfer LDA with Riesz-based Volume LBP for Emotion Recognition in The Wild. 491-496
- Bo Sun, Liandong Li, Guoyan Zhou, Xuewen Wu, Jun He, Lejun Yu, Dongxue Li, Qinglan Wei:
Combining Multimodal Features within a Fusion Network for Emotion Recognition in the Wild. 497-502
- Gil Levi, Tal Hassner:
Emotion Recognition in the Wild via Convolutional Neural Networks and Mapped Binary Patterns. 503-510
- Albert C. Cruz:
Quantification of Cinematography Semiotics for Video-based Facial Emotion Recognition in the EmotiW 2015 Grand Challenge. 511-518
- Mehmet Kayaoglu, Cigdem Eroglu Erdem:
Affect Recognition using Key Frame Selection based on Minimum Sparse Reconstruction. 519-524
Grand Challenge 3: Multimodal Learning and Analytics Grand Challenge 2015
- Marcelo Worsley, Katherine Chiluiza, Joseph F. Grafsgaard, Xavier Ochoa:
2015 Multimodal Learning and Analytics Grand Challenge. 525-529
- Roghayeh Barmaki, Charles E. Hughes:
Providing Real-time Feedback for Student Teachers in a Virtual Rehearsal Environment. 531-537
- Jan Schneider, Dirk Börner, Peter van Rosmalen, Marcus Specht:
Presentation Trainer, your Public Speaking Multimodal Coach. 539-546
- Chee Wee Leong, Lei Chen, Gary Feng, Chong Min Lee, Matthew Mulholland:
Utilizing Depth Sensors for Analyzing Multimodal Presentations: Hardware, Software and Toolkits. 547-556
- Sidney K. D'Mello, Andrew McGregor Olney, Nathaniel Blanchard, Borhan Samei, Xiaoyi Sun, Brooke Ward, Sean Kelly:
Multimodal Capture of Teacher-Student Interactions for Automated Dialogic Analysis in Live Classrooms. 557-566
- Federico Domínguez, Katherine Chiluiza, Vanessa Echeverría, Xavier Ochoa:
Multimodal Selfies: Designing a Multimodal Recording Device for Students in Traditional Classrooms. 567-574
Doctoral Consortium
- Thomas Janssoone:
Temporal Association Rules for Modelling Multimodal Social Signals. 575-579
- Tariq Iqbal, Laurel D. Riek:
Detecting and Synthesizing Synchronous Joint Action in Human-Robot Teams. 581-585
- Amir Zadeh:
Micro-opinion Sentiment Intensity Analysis and Summarization in Online Videos. 587-591
- Zhou Yu:
Attention and Engagement Aware Multimodal Conversational Systems. 593-597
- Julia Wache:
Implicit Human-computer Interaction: Two Complementary Approaches. 599-603
- Hoe Kin Wong:
Instantaneous and Robust Eye-Activity Based Task Analysis. 605-609
- Sayan Ghosh:
Challenges in Deep Learning for Multimodal Applications. 611-615
- Feng Sun:
Exploring Intent-driven Multimodal Interface for Geographical Information System. 617-621
- Martin Fischbach:
Software Techniques for Multimodal Input Processing in Realtime Interactive Systems. 623-627
- Hafsa Ismail:
Gait and Postural Sway Analysis, A Multi-Modal System. 629-633
- Ganapreeta R. Naidu:
A Computational Model of Culture-Specific Emotion Detection for Artificial Agents in the Learning Domain. 635-639
- Jan Kolkmeier:
Record, Transform & Reproduce Social Encounters in Immersive VR: An Iterative Approach. 641-644
- Nigel Bosch:
Multimodal Affect Detection in the Wild: Accuracy, Availability, and Generalizability. 645-649
- Roghayeh Barmaki:
Multimodal Assessment of Teaching Behavior in Immersive Rehearsal Environment-TeachLivE. 651-655