18th ICMI 2016: Tokyo, Japan
- Yukiko I. Nakano, Elisabeth André, Toyoaki Nishida, Louis-Philippe Morency, Carlos Busso, Catherine Pelachaud:
Proceedings of the 18th ACM International Conference on Multimodal Interaction, ICMI 2016, Tokyo, Japan, November 12-16, 2016. ACM 2016, ISBN 978-1-4503-4556-9
Invited Talks
- James W. Pennebaker:
Understanding people by tracking their word use (keynote). 1
- Richard S. Zemel:
Learning to generate images and their descriptions (keynote). 2
- Susumu Tachi:
Embodied media: expanding human capacity via virtual reality and telexistence (keynote). 3
- Wolfgang Wahlster:
Help me if you can: towards multiadaptive interaction platforms (ICMI awardee talk). 4
Oral Session 1: Multimodal Social Agents
- Gale M. Lucas, Giota Stratou, Shari Lieblich, Jonathan Gratch:
Trust me: multimodal signals of trustworthiness. 5-12
- Iolanda Leite, André Pereira, Allison Funkhouser, Boyang Li, Jill Fain Lehman:
Semi-situated learning of verbal and nonverbal content for repeated human-robot interaction. 13-20
- Catharine Oertel, José Lopes, Yu Yu, Kenneth Alberto Funes Mora, Joakim Gustafson, Alan W. Black, Jean-Marc Odobez:
Towards building an attentive artificial listener: on the perception of attentiveness in audio-visual feedback tokens. 21-28
- Soumia Dermouche, Catherine Pelachaud:
Sequence-based multimodal behavior modeling for social agents. 29-36
Oral Session 2: Physiological and Tactile Modalities
- Phuong Pham, Jingtao Wang:
Adaptive review for mobile MOOC learning via implicit physiological signal sensing. 37-44
- Nina Rosa, Wolfgang Hürst, Peter J. Werkhoven, Remco C. Veltkamp:
Visuotactile integration for depth perception in augmented reality. 45-52
- Kyriaki Kalimeri, Charalampos Saitis:
Exploring multimodal biosignal features for stress detection during indoor mobility. 53-60
- Sebastian Peters, Jan Ole Johanssen, Bernd Bruegge:
An IDE for multimodal controls in smart buildings. 61-65
Poster Session 1
- Rui Hiraoka, Hiroki Tanaka, Sakriani Sakti, Graham Neubig, Satoshi Nakamura:
Personalized unknown word detection in non-native language reading using eye gaze. 66-70
- Daniel McDuff:
Discovering facial expressions for states of amused, persuaded, informed, sentimental and inspired. 71-75
- Rui Chen, Tiantian Xie, Yingtao Xie, Tao Lin, Ningjiu Tang:
Do speech features for detecting cognitive load depend on specific languages? 76-83
- Skanda Muralidhar, Laurent Son Nguyen, Denise Frauendorfer, Jean-Marc Odobez, Marianne Schmid Mast, Daniel Gatica-Perez:
Training on the job: behavioral analysis of job interviews in hospitality. 84-91
- Yelin Kim, Emily Mower Provost:
Emotion spotting: discovering regions of evidence in audio-visual emotion expressions. 92-99
- Nese Alyüz, Eda Okur, Ece Oktay, Utku Genc, Sinem Aslan, Sinem Emine Mete, Bert Arnrich, Asli Arslan Esme:
Semi-supervised model personalization for improved detection of learner's emotional engagement. 100-107
- Nanxiang Li, Teruhisa Misu, Ashish Tawari, Alexandre Miranda Añon, Chihiro Suga, Kikuo Fujimura:
Driving maneuver prediction using car sensor and driver physiological signals. 108-112
- Jonathan Aigrain, Arnaud Dapogny, Kevin Bailly, Séverine Dubuisson, Marcin Detyniecki, Mohamed Chetouani:
On leveraging crowdsourced data for automatic perceived stress detection. 113-120
- Xun Cao, Naomi Yamashita, Toru Ishida:
Investigating the impact of automated transcripts on non-native speakers' listening comprehension. 121-128
- Keith Curtis, Gareth J. F. Jones, Nick Campbell:
Speaker impact on audience comprehension for academic presentations. 129-136
- Behnaz Nojavanasghari, Tadas Baltrusaitis, Charles E. Hughes, Louis-Philippe Morency:
EmoReact: a multimodal approach and dataset for recognizing emotional responses in children. 137-144
- Sébastien Pelurson, Laurence Nigay:
Bimanual input for multiscale navigation with pressure and touch gestures. 145-152
- Felix Putze, Johannes Popp, Jutta Hild, Jürgen Beyerer, Tanja Schultz:
Intervention-free selection using EEG and eye tracking. 153-160
- Lei Chen, Gary Feng, Chee Wee Leong, Blair Lehman, Michelle P. Martin-Raugh, Harrison Kell, Chong Min Lee, Su-Youn Yoon:
Automated scoring of interview videos using Doc2Vec multimodal feature extraction paradigm. 161-168
- Shogo Okada, Yoshihiko Ohtake, Yukiko I. Nakano, Yuki Hayashi, Hung-Hsuan Huang, Yutaka Takase, Katsumi Nitta:
Estimating communication skills using dialogue acts and nonverbal features in multiple discussion datasets. 169-176
- Patrick J. Donnelly, Nathaniel Blanchard, Borhan Samei, Andrew McGregor Olney, Xiaoyi Sun, Brooke Ward, Sean Kelly, Martin Nystrand, Sidney K. D'Mello:
Multi-sensor modeling of teacher instructional segments in live classrooms. 177-184
Oral Session 3: Groups, Teams, and Meetings
- Fumio Nihei, Yukiko I. Nakano, Yutaka Takase:
Meeting extracts for discussion summarization based on multimodal nonverbal information. 185-192
- Catherine Neubauer, Joshua Woolley, Peter Khooshabeh, Stefan Scherer:
Getting to know you: a multimodal investigation of team behavior and resilience to stress. 193-200
- Ionut Damian, Tobias Baur, Elisabeth André:
Measuring the impact of multimodal behavioural feedback loops on social interactions. 201-208
- Ryo Ishii, Shiro Kumano, Kazuhiro Otsuka:
Analyzing mouth-opening transition pattern for predicting next speaker in multi-party meetings. 209-216
Oral Session 4: Personality and Emotion
- Biqiao Zhang, Georg Essl, Emily Mower Provost:
Automatic recognition of self-reported and perceived emotion: does joint modeling help? 217-224
- Sheng Fang, Catherine Achard, Séverine Dubuisson:
Personality classification and behaviour interpretation: an approach based on feature categories. 225-232
- Xinzhou Xu, Jun Deng, Maryna Gavryukova, Zixing Zhang, Li Zhao, Björn W. Schuller:
Multiscale kernel locally penalised discriminant analysis exemplified by emotion recognition in speech. 233-237
- Laura Cabrera Quiros, Ekin Gedik, Hayley Hung:
Estimating self-assessed personality from body movements and proximity in crowded mingling scenarios. 238-242
Poster Session 2
- Yuchi Huang, Hanqing Lu:
Deep learning driven hypergraph representation for image-based emotion recognition. 243-247
- Kevin El Haddad, Hüseyin Çakmak, Emer Gilmartin, Stéphane Dupont, Thierry Dutoit:
Towards a listening agent: a system generating audiovisual laughs and smiles to show interest. 248-255
- Helen F. Hastie, Pasquale Dente, Dennis Küster, Arvid Kappas:
Sound emblems for affective multimodal output of a robotic tutor: a perception study. 256-260
- Hiroki Tanaka, Hiroyoshi Adachi, Norimichi Ukita, Takashi Kudo, Satoshi Nakamura:
Automatic detection of very early stage of dementia through multimodal interaction with computer avatars. 261-265
- Simon Flutura, Johannes Wagner, Florian Lingenfelser, Andreas Seiderer, Elisabeth André:
MobileSSI: asynchronous fusion for social signal interpretation in the wild. 266-273
- Yue Zhang, Felix Weninger, Anton Batliner, Florian Hönig, Björn W. Schuller:
Language proficiency assessment of English L2 speakers based on joint analysis of prosody and native language. 274-278
- Emad Barsoum, Cha Zhang, Cristian Canton-Ferrer, Zhengyou Zhang:
Training deep networks for facial expression recognition with crowd-sourced label distribution. 279-283
- Behnaz Nojavanasghari, Deepak Gopinath, Jayanth Koushik, Tadas Baltrusaitis, Louis-Philippe Morency:
Deep multimodal fusion for persuasiveness prediction. 284-288
- Oleg Spakov, Poika Isokoski, Jari Kangas, Jussi Rantala, Deepak Akkil, Roope Raisamo:
Comparison of three implementations of HeadTurn: a multimodal interaction technique with gaze and head turns. 289-296
- Maike Paetzel, Christopher Peters, Ingela Nyström, Ginevra Castellano:
Effects of multimodal cues on children's perception of uncanniness in a social robot. 297-301
- Wolfgang Hürst, Kevin Vriens:
Multimodal feedback for finger-based interaction in mobile augmented reality. 302-306
- Murtaza Dhuliawala, Juyoung Lee, Junichi Shimizu, Andreas Bulling, Kai Kunze, Thad Starner, Woontack Woo:
Smooth eye movement interaction using EOG glasses. 307-311
- Punarjay Chakravarty, Jeroen Zegers, Tinne Tuytelaars, Hugo Van hamme:
Active speaker detection with audio-visual co-training. 312-316
- Cigdem Beyan, Nicolò Carissimi, Francesca Capozzi, Sebastiano Vascon, Matteo Bustreo, Antonio Pierro, Cristina Becchio, Vittorio Murino:
Detecting emergent leader in a meeting environment using nonverbal visual features only. 317-324
- Ailbhe N. Finnerty, Skanda Muralidhar, Laurent Son Nguyen, Fabio Pianesi, Daniel Gatica-Perez:
Stressful first impressions in job interviews. 325-332
Oral Session 5: Gesture, Touch, and Haptics
- Alex Shaw, Lisa Anthony:
Analyzing the articulation features of children's touchscreen gestures. 333-340
- Imtiaj Ahmed, Ville J. Harjunen, Giulio Jacucci, Eve E. Hoggan, Niklas Ravaja, Michiel M. A. Spapé:
Reach out and touch me: effects of four distinct haptic technologies on affective touch in virtual reality. 341-348
- Philipp Mock, Peter Gerjets, Maike Tibus, Ulrich Trautwein, Korbinian Möller, Wolfgang Rosenstiel:
Using touchscreen interaction data to predict cognitive workload. 349-356
- Adrien Arnaud, Jean-Baptiste Corrégé, Céline Clavel, Michèle Gouiffès, Mehdi Ammi:
Exploration of virtual environments on tablet: comparison between tactile and tangible interaction techniques. 357-361
Oral Session 6: Skill Training and Assessment
- Afra J. Mashhadi, Akhil Mathur, Marc Van den Broeck, Geert Vanderhulst, Fahim Kawsar:
Understanding the impact of personal feedback on face-to-face interactions in the workplace. 362-369
- Sowmya Rasipuram, Pooja Rao S. B., Dinesh Babu Jayagopi:
Asynchronous video interviews vs. face-to-face interviews for communication skill measurement: a systematic study. 370-377
- Xiang Xiao, Jingtao Wang:
Context and cognitive state triggered interventions for mobile MOOC learning. 378-385
- Mathieu Chollet, Helmut Prendinger, Stefan Scherer:
Native vs. non-native language fluency implications on multimodal interaction for interpersonal skills training. 386-393
Demo Session 1
- Ionut Damian, Michael Dietz, Frank Gaibler, Elisabeth André:
Social signal processing for dummies. 394-395
- Michael Cohen, Yousuke Nagayama, Bektur Ryskeldiev:
Metering "black holes": networking stand-alone applications for distributed multimodal synchronization. 396-397
- Euan Freeman, Graham A. Wilson, Stephen A. Brewster:
Towards a multimodal adaptive lighting system for visually impaired children. 398-399
- Graham A. Wilson, Euan Freeman, Stephen A. Brewster:
Multimodal affective feedback: combining thermal, vibrotactile, audio and visual signals. 400-401
- Ron Artstein, David R. Traum, Jill Boberg, Alesia Gainer, Jonathan Gratch, Emmanuel Johnson, Anton Leuski, Mikio Nakano:
Niki and Julie: a robot and virtual human for studying multimodal social interaction. 402-403
- Helen F. Hastie, Xingkun Liu, Pedro Patrón:
A demonstration of multimodal debrief generation for AUVs, post-mission and in-mission. 404-405
- Simon Flutura, Johannes Wagner, Florian Lingenfelser, Andreas Seiderer, Elisabeth André:
Laughter detection in the wild: demonstrating a tool for mobile social signal processing and visualization. 406-407
Demo Session 2
- Fiona Dermody, Alistair Sutherland:
Multimodal system for public speaking with real time feedback: a positive computing perspective. 408-409
- Wataru Hashiguchi, Junya Morita, Takatsugu Hirayama, Kenji Mase, Kazunori Yamada, Mayu Yokoya:
Multimodal biofeedback system integrating low-cost easy sensing devices. 410-411
- Kana Kushida, Hideyuki Nakanishi:
A telepresence system using a flexible textile display. 412-413
- Ryu Yasuhara, Masashi Inoue, Ikuya Suga, Tetsuo Kosaka:
Large-scale multimodal movie dialogue corpus. 414-415
- Wan-Lun Tsai, You-Lun Hsu, Chi-Po Lin, Chen-Yu Zhu, Yu-Cheng Chen, Min-Chun Hu:
Immersive virtual reality with multimodal interaction and streaming technology. 416
- Divesh Lala, Pierrick Milhorat, Koji Inoue, Tianyu Zhao, Tatsuya Kawahara:
Multimodal interaction with the autonomous android ERICA. 417-418
- Michel F. Valstar, Tobias Baur, Angelo Cafaro, Alexandru Ghitulescu, Blaise Potard, Johannes Wagner, Elisabeth André, Laurent Durieu, Matthew P. Aylett, Soumia Dermouche, Catherine Pelachaud, Eduardo Coutinho, Björn W. Schuller, Yue Zhang, Dirk Heylen, Mariët Theune, Jelte van Waterschoot:
Ask Alice: an artificial retrieval of information agent. 419-420
- Anmol Srivastava, Pradeep Yammiyavar:
Design of multimodal instructional tutoring agents using augmented reality and smart learning objects. 421-422
- Phuong Pham, Jingtao Wang:
AttentiveVideo: quantifying emotional responses to mobile video advertisements. 423-424
- Iván Gris, Diego A. Rivera, Alex Rayon, Adriana I. Camacho, David G. Novick:
Young Merlin: an embodied conversational agent in virtual reality. 425-426
EmotiW Challenge
- Abhinav Dhall, Roland Göcke, Jyoti Joshi, Jesse Hoey, Tom Gedeon:
EmotiW 2016: video and group-level emotion recognition challenges. 427-432
- Sarah Adel Bargal, Emad Barsoum, Cristian Canton-Ferrer, Cha Zhang:
Emotion recognition in the wild from videos using images. 433-436
- Aleksandra Cerekovic:
A deep look into group happiness prediction from images. 437-444
- Yin Fan, Xiangju Lu, Dian Li, Yuanliu Liu:
Video-based emotion recognition using CNN-RNN and C3D hybrid networks. 445-450
- Bo Sun, Qinglan Wei, Liandong Li, Qihua Xu, Jun He, Lejun Yu:
LSTM for dynamic emotion and group emotion recognition in the wild. 451-457
- Jingwei Yan, Wenming Zheng, Zhen Cui, Chuangao Tang, Tong Zhang, Yuan Zong, Ning Sun:
Multi-clue fusion for emotion recognition in the wild. 458-463
- Jianlong Wu, Zhouchen Lin, Hongbin Zha:
Multi-view common space learning for emotion recognition in the wild. 464-471
- Anbang Yao, Dongqi Cai, Ping Hu, Shandong Wang, Liang Sha, Yurong Chen:
HoloNet: towards robust emotion recognition in the wild. 472-478
- Vassilios Vonikakis, Yasin Yazici, Viet Dung Nguyen, Stefan Winkler:
Group happiness assessment using geometric features and dataset balancing. 479-486
- Jianshu Li, Sujoy Roy, Jiashi Feng, Terence Sim:
Happiness level prediction with sequential inputs via multiple regressions. 487-493
- Shizhe Chen, Xinrui Li, Qin Jin, Shilei Zhang, Yong Qin:
Video emotion recognition in the wild based on fusion of multimodal features. 494-500
- John Gideon, Biqiao Zhang, Zakaria Aldeneh, Yelin Kim, Soheil Khorram, Duc Le, Emily Mower Provost:
Wild wild emotion: a multimodal ensemble approach. 501-505
- Wan Ding, Mingyu Xu, Dong-Yan Huang, Weisi Lin, Minghui Dong, Xinguo Yu, Haizhou Li:
Audio and face video emotion recognition in the wild using deep neural networks and small datasets. 506-513
- Mostafa Mehdipour-Ghazi, Hazim Kemal Ekenel:
Automatic emotion recognition in the wild using an ensemble of static and dynamic representations. 514-521
Doctoral Consortium
- Maike Paetzel:
The influence of appearance and interaction strategy of a social robot on the feeling of uncanniness in humans. 522-526
- Xueting Wang:
Viewing support system for multi-view videos. 527-531
- Alix Pérusseau-Lambert:
Engaging children with autism in a shape perception task using a haptic force feedback interface. 532-535
- Kei Shimonishi:
Modeling user's decision process through gaze behavior. 536-540
- Fiona Dermody:
Multimodal positive computing system for public speaking with real-time feedback. 541-545
- Sowmya Rasipuram:
Prediction/Assessment of communication skill using multimodal cues in social interactions. 546-549
- Nina Rosa:
Player/Avatar body relations in multimodal augmented reality games. 550-553
- Soumia Dermouche:
Computational model for interpersonal attitude expression. 554-558
- Ploypailin Intapong, Tipporn Laohakangvalvit, Tiranee Achalakul, Michiko Ohkura:
Assessing symptoms of excessive SNS usage based on user behavior and emotion. 559-562
- Tipporn Laohakangvalvit, Tiranee Achalakul, Michiko Ohkura:
Kawaii feeling estimation by product attributes and biological signals. 563-566
- Shalini Bhatia:
Multimodal sensing of affect intensity. 567-571
- Anmol Srivastava:
Enriching student learning experience using augmented reality and smart learning objects. 572-576
- Krystian Radlak, Bogdan Smolka:
Automated recognition of facial expressions authenticity. 577-581
- Biqiao Zhang:
Improving the generalizability of emotion recognition systems: towards emotion recognition in the wild. 582-586
Grand Challenge Summary
- Abhinav Dhall, Roland Goecke, Jyoti Joshi, Tom Gedeon:
Emotion recognition in the wild challenge 2016. 587-588
Workshop Summaries
- Patrick Holthaus, Thomas Hermann, Sebastian Wrede, Sven Wachsmuth, Britta Wrede:
1st international workshop on embodied interaction with smart environments (workshop summary). 589-590
- Khiet P. Truong, Dirk Heylen, Toyoaki Nishida, Mohamed Chetouani:
ASSP4MI2016: 2nd international workshop on advancements in social signal processing for multimodal interaction (workshop summary). 591-592
- Kim Hartmann, Ingo Siegert, Albert Ali Salah, Khiet P. Truong:
ERM4CT 2016: 2nd international workshop on emotion representations and modelling for companion systems (workshop summary). 593-595
- Wolfgang Hürst, Daisuke Iwai, Prabhakaran Balakrishnan:
International workshop on multimodal virtual and augmented reality (workshop summary). 596-597
- Mohamed Chetouani, Salvatore Maria Anzalone, Giovanna Varni, Isabelle Hupont Torres, Ginevra Castellano, Angelica Lim, Gentiane Venture:
International workshop on social learning and multimodal interaction for designing artificial agents (workshop summary). 598-600
- Anton Nijholt, Carlos Velasco, Kasun Karunanayaka, Gijs Huisman:
1st international workshop on multi-sensorial approaches to human-food interaction (workshop summary). 601-603
- Ronald Böck, Francesca Bonin, Nick Campbell, Ronald Poppe:
International workshop on multimodal analyses enabling artificial agents in human-machine interaction (workshop summary). 604-605