ACII 2015: Xi'an, China
- 2015 International Conference on Affective Computing and Intelligent Interaction, ACII 2015, Xi'an, China, September 21-24, 2015. IEEE Computer Society 2015, ISBN 978-1-4799-9953-8
- Laura Boccanfuso, Elizabeth S. Kim, James C. Snider, Quan Wang, Carla A. Wall, Lauren DiNicola, Gabriella Greco, Frederick Shic, Brian Scassellati, Lilli Flink, Sharlene Lansiquot, Katarzyna Chawarska, Pamela Ventola:
Autonomously detecting interaction with an affective robot to explore connection to developmental ability. 1-7
- Elizabeth S. Kim, Christopher M. Daniell, Corinne Makar, Julia Elia, Brian Scassellati, Frederick Shic:
Potential clinical impact of positive affect in robot interactions for autism intervention. 8-13
- Caroline Langlet, Chloé Clavel:
Adapting sentiment analysis to face-to-face human-agent interactions: From the detection to the evaluation issues. 14-20
- Carla A. Wall, Quan Wang, Mary Weng, Elizabeth S. Kim, Litton Whitaker, Michael Perlmutter, Frederick Shic:
Mapping connections between biological-emotional preferences and affective recognition: An eye-tracking interface for passive assessment of emotional competency. 21-27
- Christian J. A. M. Willemse, Dirk K. J. Heylen, Jan B. F. van Erp:
Warmth in affective mediated interaction: Exploring the effects of physical warmth on interpersonal warmth. 28-34
- Chris G. Christou, Kyriakos Herakleous, Aimilia Tzanavari, Charalambos Poullis:
Psychophysiological responses to virtual crowds: Implications for wearable computing. 35-41
- Anne-Marie Brouwer, Chris Dijksterhuis, Jan B. F. van Erp:
Physiological correlates of mental effort as manipulated through lane width during simulated driving. 42-48
- Deba Pratim Saha, Thomas L. Martin, R. Benjamin Knapp:
Towards incorporating affective feedback into context-aware intelligent environments. 49-55
- Takashi Yamauchi, Kunchen Xiao, Casady Bowman, Abdullah Mueen:
Dynamic time warping: A single dry electrode EEG study in a self-paced learning task. 56-62
- Mo Chen, Junwei Han, Lei Guo, Jiahui Wang, Ioannis Patras:
Identifying valence and arousal levels via connectivity between EEG channels. 63-69
- Yue Zhang, Eduardo Coutinho, Björn W. Schuller, Zixing Zhang, Michael G. Adam:
On rater reliability and agreement based dynamic active learning. 70-76
- Yoann Baveye, Emmanuel Dellandréa, Christel Chamaret, Liming Chen:
Deep learning vs. kernel methods: Performance for emotion prediction in videos. 77-83
- Na Li, Yong Xia, Yuwei Xia:
Semi-supervised emotional classification of color images by learning from cloud. 84-90
- Bo Xiao, Panayiotis G. Georgiou, Brian R. Baucom, Shrikanth S. Narayanan:
Modeling head motion entrainment for prediction of couples' behavioral characteristics. 91-97
- Roddy Cowie:
The enduring basis of emotional episodes: Towards a capacious overview. 98-104
- Jonathan Gratch, Lin Cheng, Stacy Marsella:
The appraisal equivalence hypothesis: Verifying the domain-independence of a computational model of emotion dynamics. 105-111
- Jason R. Wilson, Matthias Scheutz:
A model of empathy to shape trolley problem moral judgements. 112-118
- Yuliya Lutchyn, Paul Johns, Mary Czerwinski, Shamsi T. Iqbal, Gloria Mark, Akane Sano:
Stress is in the eye of the beholder. 119-124
- Silvia Monica Feraru, Dagmar Schuller, Björn W. Schuller:
Cross-language acoustic emotion recognition: An overview and some tendencies. 125-131
- Iulia Lefter, Harold T. Nefs, Catholijn M. Jonker, Léon J. M. Rothkrantz:
Cross-corpus analysis for acoustic recognition of negative interactions. 132-138
- Biqiao Zhang, Georg Essl, Emily Mower Provost:
Recognizing emotion from singing and speaking using shared models. 139-145
- Duc Le, Emily Mower Provost:
Data selection for acoustic emotion recognition: Analyzing and comparing utterance and sub-utterance selection strategies. 146-152
- Alessandro Valitutti, Tony Veale:
Inducing an ironic effect in automated tweets. 153-159
- Jules W. Verdijk, Daan Oldenhof, Daan Krijnen, Joost Broekens:
Growing emotions: Using affect to help children understand a plant's needs. 160-165
- Gary McKeown, William Curran, Johannes Wagner, Florian Lingenfelser, Elisabeth André:
The Belfast storytelling database: A spontaneous social interaction database with laughter focused annotation. 166-172
- Mohammad Rafayet Ali, Dev Crasta, Li Jin, Agustin Baretto, Joshua Pachter, Ronald D. Rogge, Mohammed (Ehsan) Hoque:
LISSA - Live Interactive Social Skill Assistance. 173-179
- Giota Stratou, Rens Hoegen, Gale M. Lucas, Jonathan Gratch:
Emotional signaling in a social dilemma: An automatic analysis. 180-186
- Torsten Wörtwein, Louis-Philippe Morency, Stefan Scherer:
Automatic assessment and analysis of public speaking anxiety: A virtual audience case study. 187-193
- Johnathan Mell, Gale M. Lucas, Jonathan Gratch, Avi Rosenfeld:
Saying YES! The cross-cultural complexities of favors and trust in human-agent negotiation. 194-200
- Bexy Alfonso, David V. Pynadath, Margot Lhommet, Stacy Marsella:
Emotional perception for updating agents' beliefs. 201-207
- Ercheng Pei, Le Yang, Dongmei Jiang, Hichem Sahli:
Multimodal dimensional affect recognition using deep bidirectional long short-term memory recurrent neural networks. 208-214
- Zahra Nazari, Gale M. Lucas, Jonathan Gratch:
Multimodal approach for automatic recognition of Machiavellianism. 215-221
- Natasha Jaques, Sara Taylor, Asaph Azaria, Asma Ghandeharioun, Akane Sano, Rosalind W. Picard:
Predicting students' happiness from physiology, phone, mobility, and behavioral data. 222-228
- Yuan Shangguan, Emily Mower Provost:
EmoShapelets: Capturing local dynamics of audio-visual affective speech. 229-235
- Sharath Chandra Guntuku, Weisi Lin, Michael James Scott, Gheorghita Ghinea:
Modelling the influence of personality and culture on affect and enjoyment in multimedia. 236-242
- Temitayo A. Olugbade, Nadia Bianchi-Berthouze, Nicolai Marquardt, Amanda C. de C. Williams:
Pain level recognition using kinematics and muscle activity for physical rehabilitation in chronic pain. 243-249
- JunKai Chen, Zheru Chi, Hong Fu:
A new approach for pain event detection in video. 250-254
- Abhinav Dhall, Roland Goecke:
A temporally piece-wise Fisher vector approach for depression analysis. 255-259
- Lang He, Dongmei Jiang, Hichem Sahli:
Multimodal depression recognition with dynamic visual and audio cues. 260-266
- Nesrine Fourati, Catherine Pelachaud:
Relevant body cues for the classification of emotional body expression in daily actions. 267-273
- Andra Adams, Marwa Mahmoud, Tadas Baltrusaitis, Peter Robinson:
Decoupling facial expressions and head motions in complex emotions. 274-280
- Zakia Hammal, Jeffrey F. Cohn, Carrie Heike, Matthew L. Speltz:
What can head and facial movements convey about positive and negative affect? 281-287
- Peter Robinson, Tadas Baltrusaitis:
Empirical analysis of continuous affect. 288-294
- Yuliya Lutchyn, Paul Johns, Asta Roseway, Mary Czerwinski:
MoodTracker: Monitoring collective emotions in the workplace. 295-301
- Kemal Taskin, Didem Gökçay:
Investigation of risk taking behavior and outcomes in decision making with modified BART (m-BART). 302-307
- Mohamed Redha Sidoumou, Scott J. Turner, Phil D. Picton, Kamal Bechkoum, Karima Benatchba:
Multitasking in emotion modelling: Attention control. 308-314
- Celso M. de Melo, Jonathan Gratch:
People show envy, not guilt, when making decisions with machines. 315-321
- Chang Long Zhu, Harshit Agrawal, Pattie Maes:
Data-objects: Re-designing everyday objects as tactile affective interfaces. 322-326
- Kazuyuki Matsumoto, Kyosuke Akita, Minoru Yoshida, Kenji Kita, Fuji Ren:
Estimate the intimacy of the characters based on their emotional states for application to non-task dialogue. 327-333
- Mohamed Yacine Tsalamlal, Jean-Claude Martin, Mehdi Ammi, Adriana Tapus, Michel-Ange Amorim:
Affective handshake with a humanoid robot: How do participants perceive and combine its facial and haptic expressions? 334-340
- Hanan Salam, Mohamed Chetouani:
Engagement detection based on multi-party cues for human-robot interaction. 341-347
- Laurence Devillers, Sophie Rosset, Guillaume Dubuisson Duplessis, Mohamed El Amine Sehili, Lucile Bechade, Agnès Delaborde, Clément Gossart, Vincent Letard, Fan Yang, Yücel Yemez, Bekir Berker Turker, T. Metin Sezgin, Kevin El Haddad, Stéphane Dupont, Daniel Luzzati, Yannick Estève, Emer Gilmartin, Nick Campbell:
Multimodal data collection of human-robot humorous interactions in the Joker project. 348-354
- Andra Adams, Peter Robinson:
Automated recognition of complex categorical emotions from facial expressions and head motions. 355-361
- Yongsen Tao, Kunxia Wang, Jing Yang, Ning An, Lian Li:
Harmony search for feature selection in speech emotion recognition. 362-367
- Ya Li, Linlin Chao, Yazhu Liu, Wei Bao, Jianhua Tao:
From simulated speech to natural speech, what are the robust features for emotion recognition? 368-373
- Wentao Xue, Zhengwei Huang, Xin Luo, Qirong Mao:
Learning speech emotion features by joint disentangling-discrimination. 374-379
- Ntombikayise Banda, Andries P. Engelbrecht, Peter Robinson:
Continuous emotion recognition using a particle swarm optimized NARX neural network. 380-386
- Ingo Siegert, Ronald Böck, Andreas Wendemuth, Bogdan Vlasenko:
Exploring dataset similarities using PCA-based feature selection. 387-393
- Lei Chen, Chee Wee Leong, Gary Feng, Chong Min Lee, Swapna Somasundaran:
Utilizing multimodal cues to automatically evaluate public speaking performance. 394-400
- Sonja Gievska, Kiril Koroveshovski, Natasha Tagasovska:
Bimodal feature-based fusion for real-time emotion recognition in a mobile context. 401-407
- Xiao Sun, Fei Gao, Chengcheng Li, Fuji Ren:
Chinese microblog sentiment classification based on convolution neural network with content extension method. 408-414
- Jin Wang, K. Robert Lai, Liang-Chih Yu, Xuejie Zhang:
A locally weighted method to improve linear regression for lexical-based valence-arousal prediction. 415-420
- Agnieszka Landowska:
Web questionnaire as construction method of affect-annotated lexicon - Risks reduction strategy. 421-427
- Hüseyin Çakmak, Kevin El Haddad, Thierry Dutoit:
GMM-based synchronization rules for HMM-based audio-visual laughter synthesis. 428-434
- Pamela Carreno-Medrano, Sylvie Gibet, Pierre-François Marteau:
End-effectors trajectories: An efficient low-dimensional characterization of affective-expressive body motions. 435-441
- Omid Alemi, William Li, Philippe Pasquier:
Affect-expressive movement generation with factored conditional Restricted Boltzmann Machines. 442-448
- Junchao Xu, Joost Broekens, Koen V. Hindriks, Mark A. Neerincx:
Effects of a robotic storyteller's moody gestures on storytelling perception. 449-455
- Marc Schröder, Elisabetta Bevacqua, Roddy Cowie, Florian Eyben, Hatice Gunes, Dirk Heylen, Mark ter Maat, Gary McKeown, Sathish Pammi, Maja Pantic, Catherine Pelachaud, Björn W. Schuller, Etienne de Sevin, Michel F. Valstar, Martin Wöllmer:
Building autonomous sensitive artificial listeners (Extended abstract). 456-462
- Angeliki Metallinou, Athanasios Katsamanis, Martin Wöllmer, Florian Eyben, Björn W. Schuller, Shrikanth S. Narayanan:
Context-sensitive learning for enhanced audiovisual emotion classification (Extended abstract). 463-469
- Björn W. Schuller, Bogdan Vlasenko, Florian Eyben, Martin Wöllmer, André Stuhlsatz, Andreas Wendemuth, Gerhard Rigoll:
Cross-corpus acoustic emotion recognition: Variances and strategies (Extended abstract). 470-476
- Chung-Hsien Wu, Wei-Bin Liang:
Emotion recognition of affective speech based on multiple classifiers using acoustic-prosodic information and semantic labels (Extended abstract). 477-483
- Gelareh Mohammadi, Alessandro Vinciarelli:
Automatic personality perception: Prediction of trait attribution based on prosodic features (Extended abstract). 484-490
- Mohammad Soleymani, Maja Pantic, Thierry Pun:
Multimodal emotion recognition in response to videos (Extended abstract). 491-497
- Donald Glowinski, Marcello Mortillaro, Klaus R. Scherer, Nele Dael, Gualtiero Volpe, Antonio Camurri:
Towards a minimal representation of affective gestures (Extended abstract). 498-504
- Mohammed (Ehsan) Hoque, Daniel McDuff, Rosalind W. Picard:
Exploring temporal patterns in classifying frustrated and delighted smiles (Extended abstract). 505-511
- Daniel McDuff, Rana El Kaliouby, Rosalind W. Picard:
Crowdsourcing facial responses to online videos (Extended abstract). 512-518
- Georgios N. Yannakakis, Julian Togelius:
Experience-driven procedural content generation (Extended abstract). 519-525
- Linlin Chao, Jianhua Tao, Minghao Yang, Ya Li:
Multi-task sequence learning for depression scale prediction from video. 526-531
- Lian Zhang, Joshua W. Wade, Amy Swanson, Amy Weitlauf, Zachary Warren, Nilanjan Sarkar:
Cognitive state measurement from eye gaze analysis in an intelligent virtual reality driving system for autism intervention. 532-538
- Gale M. Lucas, Jonathan Gratch, Stefan Scherer, Jill Boberg, Giota Stratou:
Towards an affective interface for assessment of psychological distress. 539-545
- Akane Sano, Paul Johns, Mary Czerwinski:
HealthAware: An advice system for stress, sleep, diet and exercise. 546-552
- Yelin Kim, Emily Mower Provost:
Leveraging inter-rater agreement for audio-visual emotion recognition. 553-559
- Wei Huang, Brennon Bortz, R. Benjamin Knapp:
Exploring the causal relationships between musical features and physiological indicators of emotion. 560-566
- Dongrui Wu, Vernon J. Lawhern, Brent J. Lance:
Reducing BCI calibration effort in RSVP tasks using online weighted adaptation regularization with source domain selection. 567-573
- Georgios N. Yannakakis, Héctor Pérez Martínez:
Grounding truth via ordinal annotation. 574-580
- Caroline Faur, Philippe Caillou, Jean-Claude Martin, Céline Clavel:
A socio-cognitive approach to personality: Machine-learned game strategies as cues of regulatory focus. 581-587
- Mary Weng, Carla A. Wall, Elizabeth S. Kim, Litton Whitaker, Michael Perlmutter, Quan Wang, Eli R. Lebowitz, Frederick Shic:
Linking volitional preferences for emotional information to social difficulties: A game approach using the Microsoft Kinect. 588-594
- Pablo Paredes, Ryuka Ko, Arezu Aghaseyedjavadi, John Chuang, John F. Canny, Linda Babler:
Synestouch: Haptic + audio affective design for wearable devices. 595-601
- Isabel Gonzalez, Werner Verhelst, Meshia Cédric Oveneke, Hichem Sahli, Dongmei Jiang:
Framework for combination aware AU intensity recognition. 602-608
- Sayan Ghosh, Eugene Laksana, Stefan Scherer, Louis-Philippe Morency:
A multi-label convolutional neural network approach to cross-domain action unit detection. 609-615
- Xing Zhang, Zheng Zhang, Lijun Yin, Daniel Hipp, Peter Gerhardstein:
Perception driven 3D facial expression analysis based on reverse correlation and normal component. 616-622
- Meshia Cédric Oveneke, Isabel Gonzalez, Weiyi Wang, Dongmei Jiang, Hichem Sahli:
Monocular 3D facial information retrieval for automated facial expression analysis. 623-629
- Qiyu Rao, Xing Qu, Qirong Mao, Yongzhao Zhan:
Multi-pose facial expression recognition based on SURF boosting. 630-635
- Jinhui Chen, Tetsuya Takiguchi, Yasuo Ariki:
Facial expression recognition with multithreaded cascade of rotation-invariant HOG. 636-642
- Quan Gan, Chongliang Wu, Shangfei Wang, Qiang Ji:
Posed and spontaneous facial expression differentiation using deep Boltzmann machines. 643-648
- Wenbo Liu, Li Yi, Zhiding Yu, Xiaobing Zou, Bhiksha Raj, Ming Li:
Efficient autism spectrum disorder prediction with eye movement: A machine learning framework. 649-655
- David A. Salter, Amir Tamrakar, Behjat Siddiquie, Mohamed R. Amer, Ajay Divakaran, Brian Lande, Darius Mehri:
The Tower Game Dataset: A multimodal dataset for analyzing social interaction predicates. 656-662
- Philipp Matthias Müller, Sikandar Amin, Prateek Verma, Mykhaylo Andriluka, Andreas Bulling:
Emotion recognition from embedded bodily expressions and speech during dyadic interactions. 663-669
- Casady Bowman, Takashi Yamauchi, Kunchen Xiao:
Emotion, voices and musical instruments: Repeated exposure to angry vocal sounds makes instrumental sounds angrier. 670-676
- Harry J. Griffin, Giovanna Varni, Gualtiero Volpe, Gisela Tomé Lourido, Maurizio Mancini, Nadia Bianchi-Berthouze:
Gesture mimicry in expression of laughter. 677-683
- Radoslaw Niewiadomski, Yu Ding, Maurizio Mancini, Catherine Pelachaud, Gualtiero Volpe, Antonio Camurri:
Perception of intensity incongruence in synthesized multimodal expressions of laughter. 684-690
- Sarah Cosentino, Salvatore Sessa, Weisheng Kong, Di Zhang, Atsuo Takanishi, Nadia Bianchi-Berthouze:
Automatic discrimination of laughter using distributed sEMG. 691-697
- Leimin Tian, Johanna D. Moore, Catherine Lai:
Emotion recognition in spontaneous and acted dialogues. 698-704
- Marcello A. Gómez Maureira, Lisa E. Rombout, Livia Teernstra, Imara C. T. M. Speek, Joost Broekens:
The influence of subliminal visual primes on player affect in a horror computer game. 705-711
- Kostas Karpouzis, Georgios N. Yannakakis, Noor Shaker, Stylianos Asteriadis:
The platformer experience dataset. 712-718
- Christoffer Holmgård, Georgios N. Yannakakis, Héctor Pérez Martínez, Karen-Inge Karstoft:
To rank or to classify? Annotating stress for reliable PTSD profiling. 719-725
- Kathrin Pollmann, Mathias Vukelic, Matthias Peissner:
Towards affect detection during human-technology interaction: An empirical study using a combined EEG and fNIRS approach. 726-732
- Zhaocheng Huang:
An investigation of emotion changes from speech. 733-736
- Leimin Tian, Johanna D. Moore, Catherine Lai:
Recognizing emotions in dialogues with acoustic and lexical features. 737-742
- Zhenyu Liu, Bin Hu, Lihua Yan, Tianyang Wang, Fei Liu, Xiaoyu Li, Huanyu Kang:
Detection of depression in speech. 743-747
- Yelin Kim:
Exploring sources of variation in human behavioral data: Towards automatic audio-visual emotion recognition. 748-753
- Jason R. Wilson:
Towards an affective robot capable of being a long-term companion. 754-759
- Mohammad Rafayet Ali:
Automated conversation skills assistant. 760-765
- Christian J. A. M. Willemse:
A warm touch of affect? 766-771
- Damien Dupré, Anna Tcherkassof, Michel Dubois:
Emotions triggered by innovative products: A multi-componential approach of emotions for user experience tools. 772-777
- Florian Eyben, Bernd Huber, Erik Marchi, Dagmar Schuller, Björn W. Schuller:
Real-time robust recognition of speakers' emotions and characteristics on mobile platforms. 778-780
- Xiujuan Chai, Hanjie Wang, Fang Yin, Xilin Chen:
Communication tool for the hard of hearing: A large vocabulary sign language recognition system. 781-783
- Andra Adams, Peter Robinson:
Expression training for complex emotions using facial expressions and head movements. 784-786
- Giota Stratou, Louis-Philippe Morency, David DeVault, Arno Hartholt, Edward Fast, Margaux Lhommet, Gale M. Lucas, Fabrizio Morbini, Kallirroi Georgila, Stefan Scherer, Jonathan Gratch, Stacy Marsella, David R. Traum, Albert A. Rizzo:
A demonstration of the perception system in SimSensei, a virtual human application for healthcare interviews. 787-789
- Joost Broekens:
Emotion engines for games in practice: Two case studies using Gamygdala. 790-791
- Ran Zhang, Xiaoyan Lou, Qinghua Wu:
Duration refinement for hybrid speech synthesis system using random forest. 792-796
- Yong Zhao, Dongmei Jiang, Hichem Sahli:
3D emotional facial animation synthesis with factored conditional Restricted Boltzmann Machines. 797-803
- Huaiping Ming, Dong-Yan Huang, Minghui Dong, Haizhou Li, Lei Xie, Shaofei Zhang:
Fundamental frequency modeling using wavelets for emotional voice conversion. 804-809
- Chung-Hsien Wu, Wei-Bin Liang, Kuan-Chun Cheng, Jen-Chun Lin:
Hierarchical modeling of temporal course in emotional expression for speech emotion recognition. 810-814
- Xixin Wu, Zhiyong Wu, Yishuang Ning, Jia Jia, Lianhong Cai, Helen M. Meng:
Understanding speaking styles of internet speech data with LSTM and low-resource training. 815-820
- Huimin Wu, Qin Jin:
Improving emotion classification on Chinese microblog texts with auxiliary cross-domain data. 821-826
- Weiqiao Zheng, J. S. Yu, Y. X. Zou:
An experimental study of speech emotion recognition based on deep convolutional neural networks. 827-831
- Yun Zhang, Wei Xin, Danmin Miao:
Personality test based on eye tracking techniques. 832-837
- Yongqiang Li, Yongping Zhao, Hongxun Yao, Qiang Ji:
Learning a discriminative dictionary for facial expression recognition. 838-844
- Wei Huang, Shuru Zeng, Guang Chen:
Region-based image retrieval based on medical media data using ranking and multi-view learning. 845-850
- Samer Schaat, Stefan Wilker, Aleksandar Miladinovic, Stephan Dickert, Erdem Geveze, Verena Gruber:
Modelling emotion and social norms for consumer simulations exemplified in social media. 851-856
- Tao Zhuo, Peng Zhang, Kangli Chen, Yanning Zhang:
Superframe segmentation based on content-motion correspondence for social video summarization. 857-862
- Yu-Hao Chin, Po-Chuan Lin, Tzu-Chiang Tai, Jia-Ching Wang:
Genre based emotion annotation for music in noisy environment. 863-866
- Jie Shen, Ognjen Rudovic, Shiyang Cheng, Maja Pantic:
Sentiment apprehension in human-robot interaction with NAO. 867-872
- Guihua Wen, Huihui Li, Danyang Li:
An ensemble convolutional echo state networks for facial expression recognition. 873-878
- Florian B. Pokorny, Franz Graf, Franz Pernkopf, Björn W. Schuller:
Detection of negative emotions in speech signals using bags-of-audio-words. 879-884
- Nemanja Rakicevic, Ognjen Rudovic, Stavros Petridis, Maja Pantic:
Neural conditional ordinal random fields for agreement level estimation. 885-890
- Simone Hantke, Florian Eyben, Tobias Appel, Björn W. Schuller:
iHEARu-PLAY: Introducing a game for crowdsourced data collection for affective computing. 891-897
- Jonathan Gratch, Gale M. Lucas, Nikolaos Malandrakis, Evan Szablowski, Eli Fessler, Jeffrey Nichols:
GOAALLL!: Using sentiment in the World Cup to explore theories of emotion. 898-903
- Dongrui Wu, Chun-Hsiang Chuang, Chin-Teng Lin:
Online driver's drowsiness estimation using domain adaptation with model fusion. 904-910
- He Xu, Eleni Kroupi, Touradj Ebrahimi:
Functional connectivity from EEG signals during perceiving pleasant and unpleasant odors. 911-916
- Wei-Long Zheng, Yong-Qi Zhang, Jia-Yi Zhu, Bao-Liang Lu:
Transfer components between subjects for EEG-based emotion recognition. 917-922
- Ian Daly, Asad Malik, James Weaver, Faustina Hwang, Slawomir J. Nasuto, Duncan Williams, Alexis Kirke, Eduardo Reck Miranda:
Identifying music-induced emotions from EEG for use in brain-computer music interfacing. 923-929
- Doron Friedman, Shai Shapira, Liron Jacobson, Michal Gruberger:
A data-driven validation of frontal EEG asymmetry using a consumer device. 930-937
- Yu Hao, Donghai Wang, James G. Budd:
Design of intelligent emotion feedback to assist users regulate emotions: Framework and principles. 938-943
- Nadine Glas, Catherine Pelachaud:
Definitions of engagement in human-agent interaction. 944-949
- Gary McKeown:
Turing's menagerie: Talking lions, virtual bats, electric sheep and analogical peacocks: Common ground and common interest are necessary components of engagement. 950-955
- Karl Drejing, Serge Thill, Paul Hemeren:
Engagement: A traceable motivational concept in human-robot interaction. 956-961
- Sabrina Campano, Caroline Langlet, Nadine Glas, Chloé Clavel, Catherine Pelachaud:
An ECA expressing appreciations. 962-967
- Stefan Rank, Cathy Lu:
PhysSigTK: Enabling engagement experiments with physiological signals for game design. 968-969
- Samit Bhattacharya:
A linear regression model to detect user emotion for touch input interactive systems. 970-975
- Isabel Pfab, Christian J. A. M. Willemse:
Design of a wearable research tool for warm mediated social touches. 976-981
- Bruna Petreca, Sharon Baurley, Nadia Bianchi-Berthouze:
How do designers feel textiles? 982-987
- Yoren Gaffary, David Antonio Gómez Jáuregui, Jean-Claude Martin, Mehdi Ammi:
Gestural and postural reactions to stressful event: Design of a haptic stressful stimulus. 988-992
- Yoren Gaffary, Jean-Claude Martin, Mehdi Ammi:
Perception of congruent facial and kinesthetic expressions of emotions. 993-998