- research-article, October 2024
Emotion Recognition in HMDs: A Multi-task Approach Using Physiological Signals and Occluded Faces
- Yunqiang Pei,
- Jialei Tang,
- Qihang Tang,
- Mingfeng Zha,
- Dongyu Xie,
- Guoqing Wang,
- Zhitao Liu,
- Ning Xie,
- Peng Wang,
- Yang Yang,
- Hengtao Shen
MM '24: Proceedings of the 32nd ACM International Conference on Multimedia, Pages 5977–5986. https://doi.org/10.1145/3664647.3681365
  Prior research on emotion recognition in extended reality (XR) has faced challenges due to the occlusion of facial expressions by Head-Mounted Displays (HMDs). This limitation hinders accurate Facial Expression Recognition (FER), which is crucial for ...
- research-article, October 2024 (Just Accepted)
Motives and risks of self-disclosure to robots versus humans
The extent to which people self-disclose depends on the valence and intimacy of that information. We developed a 16-item measure that features both dimensions to assess participants’ inclination to self-disclose to humans and robots across three studies, ...
- research-article, October 2023
Efficient Labelling of Affective Video Datasets via Few-Shot & Multi-Task Contrastive Learning
- Ravikiran Parameshwara,
- Ibrahim Radwan,
- Akshay Asthana,
- Iman Abbasnejad,
- Ramanathan Subramanian,
- Roland Goecke
MM '23: Proceedings of the 31st ACM International Conference on Multimedia, Pages 6161–6170. https://doi.org/10.1145/3581783.3613784
  Whilst deep learning techniques have achieved excellent emotion prediction, they still require large amounts of labelled training data, which are (a) onerous and tedious to compile, and (b) prone to errors and biases. We propose Multi-Task Contrastive ...
- short-paper, May 2024
ScripTONES: Sentiment-Conditioned Music Generation for Movie Scripts
AIMLSystems '23: Proceedings of the Third International Conference on AI-ML Systems, Article No.: 35, Pages 1–6. https://doi.org/10.1145/3639856.3639891
  Film scores are considered an essential part of the film cinematic experience, but the process of film score generation is often expensive and infeasible for small-scale creators. Automating the process of film score composition would provide useful ...
- research-article, January 2023
Generalization and Personalization of Mobile Sensing-Based Mood Inference Models: An Analysis of College Students in Eight Countries
- Lakmal Meegahapola,
- William Droz,
- Peter Kun,
- Amalia de Götzen,
- Chaitanya Nutakki,
- Shyam Diwakar,
- Salvador Ruiz Correa,
- Donglei Song,
- Hao Xu,
- Miriam Bidoglia,
- George Gaskell,
- Altangerel Chagnaa,
- Amarsanaa Ganbold,
- Tsolmon Zundui,
- Carlo Caprini,
- Daniele Miorandi,
- Alethia Hume,
- Jose Luis Zarza,
- Luca Cernuzzi,
- Ivano Bison,
- Marcelo Rodas Britez,
- Matteo Busso,
- Ronald Chenu-Abente,
- Can Günel,
- Fausto Giunchiglia,
- Laura Schelenz,
- Daniel Gatica-Perez
Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies (IMWUT), Volume 6, Issue 4, Article No.: 176, Pages 1–32. https://doi.org/10.1145/3569483
  Mood inference with mobile sensing data has been studied in ubicomp literature over the last decade. This inference enables context-aware and personalized user experiences in general mobile apps and valuable feedback and interventions in mobile health ...
- research-article, August 2022
Federated learning to understand human emotions via smart clothing: research proposal
MMSys '22: Proceedings of the 13th ACM Multimedia Systems Conference, Pages 408–411. https://doi.org/10.1145/3524273.3533936
  This paper contains the research proposal of Mary Pidgeon that was presented at the MMSys 2022 doctoral symposium. Emotion recognition from physiological signals has seen huge growth in recent decades. Wearables such as smart watches now have sensors ...
- Article, April 2023
The Emotion Code in Sensory Modalities: An Investigation of the Relationship Between Sensorimotor Dimensions and Emotional Valence-Arousal
Human sensations and emotions are our primary embodied feelings in experiencing the outside world. The two systems are closely intertwined and jointly contribute to cognitive processes such as language use. However, how the two systems interact as ...
- short-paper, January 2021
It’s LeVAsa not LevioSA! Latent Encodings for Valence-Arousal Structure Alignment
CODS-COMAD '21: Proceedings of the 3rd ACM India Joint International Conference on Data Science & Management of Data (8th ACM IKDD CODS & 26th COMAD), Pages 238–242. https://doi.org/10.1145/3430984.3431037
  In recent years, great strides have been made in the field of affective computing. Several models have been developed to represent and quantify emotions. Two popular ones include (i) categorical models which represent emotions as discrete labels, and (...
- short-paper, December 2020
It's Not What They Play, It's What You Hear: Understanding Perceived vs. Induced Emotions in Hindustani Classical Music
ICMI '20 Companion: Companion Publication of the 2020 International Conference on Multimodal Interaction, Pages 42–46. https://doi.org/10.1145/3395035.3425246
  Music is an efficient medium to elicit and convey emotions. The comparison between perceived and induced emotions from western music has been widely studied. However, this relationship has not been studied from the perspective of Hindustani classical ...
- research-article, October 2020
Predicting Video Affect via Induced Affection in the Wild
ICMI '20: Proceedings of the 2020 International Conference on Multimodal Interaction, Pages 442–451. https://doi.org/10.1145/3382507.3418838
  Curating large and high-quality datasets for studying affect is a costly and time-consuming process, especially when the labels are continuous. In this paper, we examine the potential to use unlabeled public reactions in the form of textual comments to ...
- research-article, October 2020
StarGAN-EgVA: Emotion Guided Continuous Affect Synthesis
HuMA '20: Proceedings of the 1st International Workshop on Human-centric Multimedia Analysis, Pages 53–61. https://doi.org/10.1145/3422852.3423483
  Recent advancement of Generative Adversarial Network (GAN) based architectures has achieved impressive performance on static facial expression synthesis. Continuous affect synthesis, which has applications in generating videos and movies, is ...
- research-article, October 2019
A Multimodal Framework for State of Mind Assessment with Sentiment Pre-classification
AVEC '19: Proceedings of the 9th International on Audio/Visual Emotion Challenge and Workshop, Pages 13–18. https://doi.org/10.1145/3347320.3357689
  In this paper, we aim at the AVEC2019 State of Mind Sub-Challenge (SoMS), and propose a multimodal state of mind assessment framework, for valence and arousal, respectively. For valence, sentiment analysis is firstly performed on the English text ...
- research-article, October 2019
Evidence for Communicative Compensation in Debt Advice with Reduced Multimodality
ICMI '19: 2019 International Conference on Multimodal Interaction, Pages 210–219. https://doi.org/10.1145/3340555.3353757
  Research has found that professional advice with empathy displays and signs of listening leads to more successful outcomes. These skills are typically displayed through visual nonverbal signals, whereas reduced multimodal contexts have to use other ...
- research-article, September 2019
Audio Emotion Recognition using Machine Learning to support Sound Design
AM '19: Proceedings of the 14th International Audio Mostly Conference: A Journey in Sound, Pages 116–123. https://doi.org/10.1145/3356590.3356609
  In recent years, the field of Music Emotion Recognition has become established. Less attention has been directed towards the counterpart domain of Audio Emotion Recognition, which focuses upon detection of emotional stimuli resulting from non-musical ...
- research-article, April 2019
MarValous: machine learning based detection of emotions in the valence-arousal space in software engineering text
SAC '19: Proceedings of the 34th ACM/SIGAPP Symposium on Applied Computing, Pages 1786–1793. https://doi.org/10.1145/3297280.3297455
  Emotion analysis in text has drawn recent interest in the software engineering (SE) community. Existing domain-independent techniques for automated emotion/sentiment analysis perform poorly when operated on SE text. Thus, a few SE domain-specific tools ...
- research-article, October 2018
Looking Beyond a Clever Narrative: Visual Context and Attention are Primary Drivers of Affect in Video Advertisements
ICMI '18: Proceedings of the 20th ACM International Conference on Multimodal Interaction, Pages 210–219. https://doi.org/10.1145/3242969.3242988
  Emotion evoked by an advertisement plays a key role in influencing brand recall and eventual consumer choices. Automatic ad affect recognition has several useful applications. However, the use of content-based feature representations does not give ...
- poster, September 2018
High-Level Analysis of Audio Features for Identifying Emotional Valence in Human Singing
AM '18: Proceedings of the Audio Mostly 2018 on Sound in Immersion and Emotion, Article No.: 37, Pages 1–4. https://doi.org/10.1145/3243274.3243313
  Emotional analysis continues to be a topic that receives much attention in the audio and music community. The potential to link together human affective state and the emotional content or intention of musical audio has a variety of application areas in ...
- extended-abstract, June 2018
Time to Compile
MOCO '18: Proceedings of the 5th International Conference on Movement and Computing, Article No.: 53, Pages 1–4. https://doi.org/10.1145/3212721.3212888
  "Time to Compile" is the result of an extended in-house residency of an artist in a robotics lab. The piece explores the temporal and spatial dislocations enabled by digital technology and the internet and plays with human responses to articulated ...
- research-article, April 2018
DEVA: sensing emotions in the valence arousal space in software engineering text
SAC '18: Proceedings of the 33rd Annual ACM Symposium on Applied Computing, Pages 1536–1543. https://doi.org/10.1145/3167132.3167296
  Existing tools for automated sentiment analysis in software engineering text suffer from either or both of two limitations. First, they are developed for a non-technical domain and perform poorly when operated on software engineering text. Second, those ...
- research-article, January 2018
Emotional Influences on Cryptographic Key Generation Systems using EEG signals
Procedia Computer Science (PROCS), Volume 126, Issue C, Pages 703–712. https://doi.org/10.1016/j.procs.2018.08.004
  This paper presents research conducted to verify the influence of emotion on an electroencephalogram (EEG)-based cryptographic key generation system. Emotion, such as negative and positive feelings, is present in EEG signals, and hence it may ...