research-article
NEMO: A Database for Emotion Analysis Using Functional Near-Infrared Spectroscopy

We present a dataset for the analysis of human affective states using functional near-infrared spectroscopy (fNIRS). Data were recorded from thirty-one participants who engaged in two tasks. In the emotional perception task the participants passively ...

research-article
Chronic Stress Recognition Based on Time-Slot Analysis of Ambulatory Electrocardiogram and Tri-Axial Acceleration

Stress, especially chronic stress, is a major risk factor for many physical and mental health problems. This work acquired 702 days of full-day ambulatory electrocardiogram (ECG) and tri-axial acceleration (T-ACC) data from 104 healthy college students and ...

research-article
Style-exprGAN: Diverse Smile Style Image Generation Via Attention-Guided Adversarial Networks

This article proposes a data-driven approach for generating personalized smile style images for neutral expressions, which aims to produce diverse smile styles while preserving individual features. Unlike other generator models that require expensive ...

research-article
EmoStim: A Database of Emotional Film Clips With Discrete and Componential Assessment

Emotion elicitation using emotional film clips is one of the most common and ecologically valid methods in Affective Computing. However, selecting and validating appropriate materials that evoke a range of emotions is challenging. Here, we present EmoStim:...

research-article
Annotate Smarter, not Harder: Using Active Learning to Reduce Emotional Annotation Effort

The success of supervised models for emotion recognition on images heavily depends on the availability of properly annotated images. Although millions of images are presently available, only a few are annotated with reliable emotional information. Current ...

research-article
Enhancing EEG-Based Decision-Making Performance Prediction by Maximizing Mutual Information Between Emotion and Decision-Relevant Features

Emotions are important factors in decision-making. With the advent of brain-computer interface (BCI) techniques, researchers developed a strong interest in predicting decisions based on emotions, which is a challenging task. To predict decision-making ...

research-article
Dual Learning for Conversational Emotion Recognition and Emotional Response Generation

Emotion recognition in conversation (ERC) and emotional response generation (ERG) are two important NLP tasks. ERC aims to detect the utterance-level emotion from a dialogue, while ERG focuses on expressing a desired emotion. Essentially, ERC is a ...

research-article
Open Access
Empathy by Design: The Influence of Trembling AI Voices on Prosocial Behavior

Recent advances in artificial speech synthesis and machine learning equip AI-powered conversational agents, from voice assistants to social robots, with the ability to mimic human emotional expression during their interactions with users. One unexplored ...

research-article
Image-to-Text Conversion and Aspect-Oriented Filtration for Multimodal Aspect-Based Sentiment Analysis

Multimodal aspect-based sentiment analysis (MABSA) aims to determine the sentiment polarity of each aspect mentioned in the text based on multimodal content. Various approaches have been proposed to model multimodal sentiment features for each aspect via ...

research-article
Improving Multi-Label Facial Expression Recognition With Consistent and Distinct Attentions

Facial expression recognition (FER) attracts much attention in computer vision. Previous works mostly study the single-label FER problem. The more complex multi-label facial expression recognition task is underexplored. Multi-label FER is more challenging ...

research-article
Emotion Dictionary Learning With Modality Attentions for Mixed Emotion Exploration

Most existing multi-modal emotion recognition studies are targeted at a classification task that aims to assign a specific emotion category to a combination of several heterogeneous input data, including multimedia signals and physiological signals. A ...

research-article
Movement Representation Learning for Pain Level Classification

Self-supervised learning has shown value for uncovering informative movement features for human activity recognition. However, there has been minimal exploration of this approach for affect recognition where availability of large labelled datasets is ...

research-article
Emotion Recognition From Few-Channel EEG Signals by Integrating Deep Feature Aggregation and Transfer Learning

Electroencephalogram (EEG) signals have been widely studied in human emotion recognition. The majority of existing EEG emotion recognition algorithms utilize dozens or hundreds of electrodes covering the whole scalp region (denoted as full-channel EEG ...

research-article
Olfactory-Enhanced VR: What's the Difference in Brain Activation Compared to Traditional VR for Emotion Induction?

Olfactory-enhanced virtual reality (OVR) creates a complex and rich emotional experience, thus promoting a new generation of human-computer interaction experiences in real-world scenarios. However, with the rise of virtual reality (VR) as a mood induction ...

research-article
Geometric Graph Representation With Learnable Graph Structure and Adaptive AU Constraint for Micro-Expression Recognition

Micro-expression recognition (MER) holds significance in uncovering hidden emotions. Most works take image sequences as input and cannot effectively explore ME information because subtle ME-related motions are easily submerged in unrelated information. ...

research-article
Dynamic Confidence-Aware Multi-Modal Emotion Recognition

Multi-modal emotion recognition has attracted increasing attention in human-computer interaction, as it extracts complementary information from physiological and behavioral features. Compared to single modal approaches, multi-modal fusion methods are more ...

research-article
DFME: A New Benchmark for Dynamic Facial Micro-Expression Recognition

One of the most important subconscious reactions, micro-expression (ME), is a spontaneous, subtle, and transient facial expression that reveals human beings’ genuine emotion. Therefore, automatically recognizing ME (MER) is becoming increasingly ...

research-article
Open Access
A Classification Framework for Depressive Episode Using R-R Intervals From Smartwatch

A depressive episode is a key symptom cluster of mood disorders. Early intervention can prevent it from happening or reduce its impact, and close monitoring can greatly improve medical management. However, most current monitoring methods are ex post facto, ...

research-article
Continuously Controllable Facial Expression Editing in Talking Face Videos

Recently, audio-driven talking face video generation has attracted considerable attention. However, very little research addresses the issue of emotional editing of these talking face videos with continuously controllable expressions, which is a strong demand ...

research-article
Research on the Association Mechanism and Evaluation Model Between fNIRS Data and Aesthetic Quality in Product Aesthetic Quality Evaluation

Aesthetic quality evaluation has been an important research question in the field of user experience in product design. However, the feasibility and accuracy of using fNIRS data for product aesthetic quality evaluation are unknown. In this article, we ...

research-article
Open Access
Interaction Between Dynamic Affection and Arithmetic Cognitive Ability: A Practical Investigation With EEG Measurement

Emotions play an essential role in affecting the performance of cognitive abilities in continuous cognitive tasks. Most previous studies share a common issue in that the evoked emotions are simply presumed to be real emotions, without taking into account ...

research-article
MASANet: Multi-Aspect Semantic Auxiliary Network for Visual Sentiment Analysis

Recently, multi-modal affective computing has demonstrated that introducing multi-modal information can enhance performance. However, multi-modal research faces significant challenges due to its high requirements regarding data acquisition, modal ...

research-article
Gusa: Graph-Based Unsupervised Subdomain Adaptation for Cross-Subject EEG Emotion Recognition

EEG emotion recognition has been hampered by the clear individual differences in the electroencephalogram (EEG). Domain adaptation is an effective way to deal with this issue because it aligns the distribution of data across subjects. However, the ...

research-article
Open Access
Anthropomorphism and Affective Perception: Dimensions, Measurements, and Interdependencies in Aerial Robotics

Assigning lifelike qualities to robotic agents (Anthropomorphism) is associated with complex affective interpretations of their behavior. These anthropomorphized perceptions are traditionally elicited through robots’ designs. Yet, aerial robots (or ...

research-article
Open Access
How Virtual Reality Therapy Affects Refugees From Ukraine - Acute Stress Reduction Pilot Study

This article extends and builds upon our previous research concerning Virtual Reality (VR) with bilateral stimulation as an automated stress-reduction therapy tool. The study coincided with Russia's invasion of Ukraine, thus the software was ...

research-article
Open Access
Show me How You Use Your Mouse and I Tell You How You Feel? Sensing Affect With the Computer Mouse

Computer mouse tracking is a simple and cost-efficient way to gather continuous behavioral data. As theory suggests a relationship between affect and sensorimotor processes, the computer mouse might be usable for affect sensing. However, the processes ...

research-article
CFDA-CSF: A Multi-Modal Domain Adaptation Method for Cross-Subject Emotion Recognition

Multi-modal classifiers for emotion recognition have become prominent, as the emotional states of subjects can be more comprehensively inferred from Electroencephalogram (EEG) signals and eye movements. However, existing classifiers experience a decrease ...

research-article
Exploring Retrospective Annotation in Long-Videos for Emotion Recognition

Emotion recognition systems are typically trained to classify a given psychophysiological state into emotion categories. Current platforms for emotion ground-truth collection show limitations for real-world scenarios of long-duration content (e.g. ...

research-article
Modeling the Interplay Between Cohesion Dimensions: A Challenge for Group Affective Emergent States

Emergent states are temporal group phenomena that arise from collective affective, behavioral, and cognitive processes shared among the group's members during their interactions. Cohesion is one such state, mainly conceptualized by scholars as ...

research-article
Learning With Rater-Expanded Label Space to Improve Speech Emotion Recognition

Automatic sensing of emotional information in speech is important for numerous everyday applications. Conventional Speech Emotion Recognition (SER) models rely on averaging or consensus of human annotations for training, but emotions and raters' ...
