7th ACII 2017: San Antonio, TX, USA
- Seventh International Conference on Affective Computing and Intelligent Interaction, ACII 2017, San Antonio, TX, USA, October 23-26, 2017. IEEE Computer Society 2017, ISBN 978-1-5386-0563-9
- Svati Dhamija, Terrance E. Boult: Automated mood-aware engagement prediction. 1-8
- Wasif Khan, Jesse Hoey: How different identities affect cooperation. 9-14
- Torsten Wörtwein, Stefan Scherer: What really matters - An information gain analysis of questions and reactions in automated PTSD screenings. 15-20
- Iulia Lefter, Catholijn M. Jonker, Stephanie Klein Tuente, Wim Veling, Stefan Bogaerts: NAA: A multimodal database of negative affect and aggression. 21-27
- Leimin Tian, Michal Muszynski, Catherine Lai, Johanna D. Moore, Theodoros Kostoulas, Patrizia Lombardo, Thierry Pun, Guillaume Chanel: Recognizing induced emotions of movie audiences: Are induced and perceived emotions the same? 28-35
- Daniel Gábana Arellano, Laurissa N. Tokarchuk, Emily Hannon, Hatice Gunes: Effects of valence and arousal on working memory performance in virtual reality gaming. 36-41
- Shiro Kumano, Ryo Ishii, Kazuhiro Otsuka: Computational model of idiosyncratic perception of others' emotions. 42-49
- Shiro Kumano, Ryo Ishii, Kazuhiro Otsuka: Comparing empathy perceived by interlocutors in multiparty conversation and external observers. 50-57
- Yuyu Xu, Pedro Sequeira, Stacy Marsella: Towards modeling agent negotiators by analyzing human negotiation behavior. 58-64
- Bowen Cheng, Zhangyang Wang, Zhaobin Zhang, Zhu Li, Ding Liu, Jianchao Yang, Shuai Huang, Thomas S. Huang: Robust emotion recognition from low quality and low bit rate video: A deep learning approach. 65-70
- Kara M. Smith, James R. Williamson, Thomas F. Quatieri: Vocal markers of motor, cognitive, and depressive symptoms in Parkinson's disease. 71-78
- Nurul Lubis, Michael Heck, Sakriani Sakti, Koichiro Yoshino, Satoshi Nakamura: Processing negative emotions through social communication: Multimodal database construction and analysis. 79-85
- Hesam Sagha, Jun Deng, Björn W. Schuller: The effect of personality trait, age, and gender on the performance of automatic speech valence recognition. 86-91
- Guillaume Chanel, Sunny Avry, Gaëlle Molinari, Mireille Bétrancourt, Thierry Pun: Multiple users' emotion recognition: Improving performance by joint modeling of affective reactions. 92-97
- Daniel McDuff: Smiling from adolescence to old age: A large observational study. 98-104
- Adam C. Lammert, James R. Williamson, Austin R. Hess, Tejash Patel, Thomas F. Quatieri, HuiJun Liao, Alexander P. Lin, Kristin J. Heaton: Noninvasive estimation of cognitive status in mild traumatic brain injury using speech production and facial expression. 105-110
- Tadas Baltrusaitis, Liandong Li, Louis-Philippe Morency: Local-global ranking for facial expression intensity estimation. 111-118
- Maneesh Bilalpur, Seyed Mostafa Kia, Tat-Seng Chua, Ramanathan Subramanian: Discovering gender differences in facial emotion recognition via implicit behavioral cues. 119-124
- Daniela Girardi, Filippo Lanubile, Nicole Novielli: Emotion detection using noninvasive low cost sensors. 125-130
- Hui Sophie Wang, Stacy Marsella: Assessing personality through objective behavioral sensing. 131-137
- Michael Tsang, Vadim Korolik, Stefan Scherer, Maja J. Mataric: Comparing models for gesture recognition of children's bullying behaviors. 138-145
- Surjya Ghosh, Niloy Ganguly, Bivas Mitra, Pradipta De: Evaluating effectiveness of smartphone typing as an indicator of user emotion. 146-151
- Javier Hernandez, Craig Ferguson, Akane Sano, Weixuan Chen, Weihui Li, Albert S. Yeung, Rosalind W. Picard: Stress measurement from tongue color imaging. 152-157
- Philip L. Lopes, Georgios N. Yannakakis, Antonios Liapis: RankTrace: Relative and unbounded affect annotation. 158-163
- Ilia Shumailov, Hatice Gunes: Computational analysis of valence and arousal in virtual reality gaming using lower arm electromyograms. 164-169
- Taylan K. Sen, Mohammad Rafayet Ali, Mohammed Ehsan Hoque, Ronald M. Epstein, Paul Duberstein: Modeling doctor-patient communication with affective text analysis. 170-177
- Wenbo Liu, Tianyan Zhou, Chenghao Zhang, Xiaobing Zou, Ming Li: Response to name: A dataset and a multimodal machine learning framework towards autism study. 178-183
- Nicholas V. Mudrick, Michelle Taub, Roger Azevedo, Jonathan P. Rowe, James C. Lester: Toward affect-sensitive virtual human tutors: The influence of facial expressions on learning and emotion. 184-189
- Efthymios Tzinis, Alexandros Potamianos: Segment-based speech emotion recognition using recurrent neural networks. 190-195
- Jianyu Fan, Miles Thorogood, Philippe Pasquier: Emo-soundscapes: A dataset for soundscape emotion recognition. 196-201
- Natasha Jaques, Sara Taylor, Akane Sano, Rosalind W. Picard: Multimodal autoencoder: A deep learning approach to filling in missing sensor data and enabling better mood prediction. 202-208
- Behnaz Nojavanasghari, Charles E. Hughes, Tadas Baltrusaitis, Louis-Philippe Morency: Hand2Face: Automatic synthesis and recognition of hand over face occlusions. 209-215
- Zakia Hammal, Wen-Sheng Chu, Jeffrey F. Cohn, Carrie Heike, Matthew L. Speltz: Automatic action unit detection in infants using convolutional neural network. 216-221
- Jun Wang, Michael Xuelin Huang, Grace Ngai, Hong Va Leong: Are you stressed? Your eyes and the mouse can tell. 222-228
- Catherine Neubauer, Sharon Mozgai, Brandon Chuang, Joshua Woolley, Stefan Scherer: Manual and automatic measures confirm - Intranasal oxytocin increases facial expressivity. 229-235
- Zhaojun Yang, Boqing Gong, Shrikanth S. Narayanan: Weighted geodesic flow kernel for interpersonal mutual influence modeling and emotion recognition in dyadic interactions. 236-241
- Yu Ding, Lei Shi, Zhigang Deng: Perceptual enhancement of emotional mocap head motion: An experimental study. 242-247
- Georgios N. Yannakakis, Roddy Cowie, Carlos Busso: The ordinal nature of emotions. 248-255
- Xiao Sun, Man Lv, Changqin Quan, Fuji Ren: Improved facial expression recognition method based on ROI deep convolutional neutral network. 256-261
- Panikos Heracleous, Keiji Yasuda, Fumiaki Sugaya, Akio Yoneyama, Masayuki Hashimoto: Speech emotion recognition in noisy and reverberant environments. 262-266
- Theodora Chaspari, Adela C. Timmons, Brian R. Baucom, Laura Perrone, Katherine J. W. Baucom, Panayiotis G. Georgiou, Gayla Margolin, Shrikanth S. Narayanan: Exploring sparse representation measures of physiological synchrony for romantic couples. 267-272
- Yongjae Yoo, Hojin Lee, Hyejin Choi, Seungmoon Choi: Emotional responses of vibrotactile-thermal stimuli: Effects of constant-temperature thermal stimuli. 273-278
- Shazia Afzal, Bikram Sengupta, Munira Syed, Nitesh V. Chawla, G. Alex Ambrose, Malolan Chetlur: The ABC of MOOCs: Affect and its inter-play with behavior and cognition. 279-284
- Ashwaq Al-Hargan, Neil Cooke, Tareq Binjammaz: Affect recognition in an interactive gaming environment using eye tracking. 285-291
- Huang-Cheng Chou, Wei-Cheng Lin, Lien-Chiang Chang, Chyi-Chang Li, Hsi-Pin Ma, Chi-Chun Lee: NNIME: The NTHU-NTUA Chinese interactive multimodal emotion corpus. 292-298
- Iulia Lefter, Catholijn M. Jonker: Aggression recognition using overlapping speech. 299-304
- Harald Strömfelt, Yue Zhang, Björn W. Schuller: Emotion-augmented machine learning: Overview of an emerging domain. 305-312
- Fu-Sheng Tsai, Yi-Ming Weng, Chip-Jin Ng, Chi-Chun Lee: Embedding stacked bottleneck vocal features in a LSTM architecture for automatic pain level classification during emergency triage. 313-318
- Jinkun Chen, Cong Liu, Ming Li: Automatic emotional spoken language text corpus construction from written dialogs in fictions. 319-324
- Asma Ghandeharioun, Szymon Fedor, Lisa Sangermano, Dawn Ionescu, Jonathan Alpert, Chelsea Dale, David A. Sontag, Rosalind W. Picard: Objective assessment of depressive symptoms with machine learning and wearable sensors data. 325-332
- Elizabeth Camilleri, Georgios N. Yannakakis, Antonios Liapis: Towards general models of player affect. 333-339
- Shahin Amiriparian, Sergey Pugachevskiy, Nicholas Cummins, Simone Hantke, Jouni Pohjalainen, Gil Keren, Björn W. Schuller: CAST a database: Rapid targeted large-scale big data acquisition via small-world modelling of social media platforms. 340-345
- Akane Sano, Paul Johns, Mary Czerwinski: Designing opportune stress intervention delivery timing using multi-modal data. 346-353
- David Antonio Gómez Jáuregui, Carole Castanier, Bingbing Chang, Michael Val, François Cottin, Christine Le Scanff, Jean-Claude Martin: Toward automatic detection of acute stress: Relevant nonverbal behaviors and impact of personality traits. 354-361
- Chaolan Lin, Travis Faas, Erin Brady: Exploring affection-oriented virtual pet game design strategies in VR attachment, motivations and expectations of users of pet games. 362-369
- Yuqian Zhou, Bertram Emil Shi: Photorealistic facial expression synthesis by the conditional difference adversarial autoencoder. 370-376
- Chun-Min Chang, Bo-Hao Su, Shih-Chen Lin, Jeng-Lin Li, Chi-Chun Lee: A bootstrapped multi-view weighted Kernel fusion framework for cross-corpus integration of multimodal emotion recognition. 377-382
- Jaebok Kim, Khiet P. Truong, Gwenn Englebienne, Vanessa Evers: Learning spectro-temporal features with 3D CNNs for speech emotion recognition. 383-388
- Daniel Baumann, Marwa Mahmoud, Peter Robinson, Eduardo Dias, Lee Skrypchuk: Multimodal classification of driver glance. 389-394
- Asem M. Ali, Islam Alkabbany, Amal Farag, Ian Bennett, Aly A. Farag: Facial action units detection under pose variations using deep regions learning. 395-400
- Philipp Werner, Sebastian Handrich, Ayoub Al-Hamadi: Facial action unit intensity estimation and feature relevance visualization with random regression forests. 401-406
- Minha Lee, Jaebok Kim, Khiet P. Truong, Yvonne de Kort, Femke Beute, Wijnand A. IJsselsteijn: Exploring moral conflicts in speech: Multidisciplinary analysis of affect and stress. 407-414
- Weixuan Chen, Ognjen (Oggi) Rudovic, Rosalind W. Picard: GIFGIF+: Collecting emotional animated GIFs with clustered multi-task learning. 410-417
- Reza Lotfian, Carlos Busso: Formulating emotion perception as a probabilistic model with application to categorical emotion classification. 415-420
- Helma Torkamaan, Jürgen Ziegler: A taxonomy of mood research and its applications in computer science. 421-426
- Giota Stratou, Job Van Der Schalk, Rens Hoegen, Jonathan Gratch: Refactoring facial expressions: An automatic analysis of natural occurring facial expressions in iterative social dilemma. 427-433
- Srinivas Parthasarathy, Carlos Busso: Predicting speaker recognition reliability by considering emotional content. 434-439
- Xin Lu, Reginald B. Adams Jr., Jia Li, Michelle G. Newman, James Z. Wang: An investigation into three visual characteristics of complex scenes that evoke human emotion. 440-447
- Codruta Gîrlea, Roxana Girju: Decoding the perception of sincerity in written dialogues. 448-455
- Youngjun Cho, Nadia Bianchi-Berthouze, Simon J. Julier: DeepBreath: Deep learning of breathing patterns for automatic stress recognition using low-cost thermal imaging in unconstrained settings. 456-463
- Gary McKeown, Christine Spencer, Alex Patterson, Thomas Creaney, Damien Dupré: Comparing virtual reality with computer monitors as rating environments for affective dimensions in social interactions. 464-469
- Brandon M. Booth, Asem M. Ali, Shrikanth S. Narayanan, Ian Bennett, Aly A. Farag: Toward active and unobtrusive engagement assessment of distance learners. 470-476
- Vicki Liu, Carmen Banea, Rada Mihalcea: Grounded emotions. 477-483
- Le Yang, Dongmei Jiang, Wenjing Han, Hichem Sahli: DCNN and DNN based multi-modal depression recognition. 484-489
- Alexandria Katarina Vail, Tadas Baltrusaitis, Luciana Pennant, Elizabeth S. Liebson, Justin T. Baker, Louis-Philippe Morency: Visual attention in schizophrenia: Eye contact and gaze aversion during clinical interactions. 490-497
- Aamir Mustafa, Shalini Bhatia, Munawar Hayat, Roland Goecke: Heart rate estimation from facial videos for depression analysis. 498-503
- Lei Chen, Ru Zhao, Chee Wee Leong, Blair Lehman, Gary Feng, Mohammed (Ehsan) Hoque: Automated video interview judgment on a large-sized corpus collected online. 504-509
- Kalani Wataraka Gamage, Vidhyasaharan Sethu, Eliathamby Ambikairajah: Modeling variable length phoneme sequences - A step towards linguistic information for speech emotion recognition in wider world. 518-523
- Wei Huang, R. Benjamin Knapp: An exploratory study of population differences based on massive database of physiological responses to music. 524-530
- Giota Stratou, Rens Hoegen, Gale M. Lucas, Jonathan Gratch: Investigating gender differences in temporal dynamics during an iterated social dilemma: An automatic analysis using networks. 531-536
- Monica Perusquía-Hernández, Masakazu Hirokawa, Kenji Suzuki: Spontaneous and posed smile recognition based on spatial and temporal patterns of facial EMG. 537-541
- Stefan Slater, Jaclyn Ocumpaugh, Ryan S. Baker, Ma. Victoria Almeda, Laura K. Allen, Neil T. Heffernan: Using natural language processing tools to develop complex models of student engagement. 542-547
- Caitlin Sikora, Winslow Burleson: The dance of emotion: Demonstrating ubiquitous understanding of human motion and emotion in support of human computer interaction. 548-555
- Huiyuan Yang, Lijun Yin: CNN based 3D facial expression recognition using masking and landmark features. 556-560
- Soraia M. Alarcão: Reminiscence therapy improvement using emotional information. 561-565
- Alex Hernández-García: Perceived emotion from images through deep neural networks. 566-570
- Jacqueline Deanna Bailey: Avatar and participant gender differences in the perception of uncanniness of virtual humans. 571-575
- Susmitha Vekkot: Building a generalized model for multi-lingual vocal emotion conversion. 576-580
- Svati Dhamija: Learning based visual engagement and self-efficacy. 581-585
- Amanjot Kaur: Automatic personality assessment in the wild. 586-590
- Adam Hair: Wear your heart on your sleeve: Visible psychophysiology for contextualized relaxation. 591-595
- Youngjun Cho: Automated mental stress recognition through mobile thermal imaging. 596-600
- Kevin El Haddad: Nonverbal conversation expressions processing for human-agent interactions. 601-605
- Yu Hao: Dynamic emotion transitions based on emotion hysteresis. 606-610
- Kenneth Chen: Towards more meaningful interactive narrative with intelligent affective characters. 611-615
- Taylan Kartal Sen: Temporal patterns of facial expression in deceptive and honest communication. 616-620