Search Results (1,051)

Search Parameters:
Keywords = facial expressions

20 pages, 4882 KiB  
Article
Empowering Recovery: The T-Rehab System’s Semi-Immersive Approach to Emotional and Physical Well-Being in Tele-Rehabilitation
by Hayette Hadjar, Binh Vu and Matthias Hemmje
Electronics 2025, 14(5), 852; https://doi.org/10.3390/electronics14050852 - 21 Feb 2025
Abstract
The T-Rehab System delivers a semi-immersive tele-rehabilitation experience by integrating Affective Computing (AC) through facial expression analysis and contactless heartbeat monitoring. T-Rehab closely monitors patients’ mental health as they engage in a personalized, semi-immersive Virtual Reality (VR) game on a desktop PC, using a webcam with MediaPipe to track their hand movements for interactive exercises, allowing the system to tailor treatment content for increased engagement and comfort. T-Rehab’s evaluation comprises two assessments: system performance and cognitive walkthroughs. The first evaluation focuses on system performance, assessing the tested game, middleware, and facial emotion monitoring to ensure hardware compatibility and effective support for AC, gaming, and tele-rehabilitation. The second evaluation uses cognitive walkthroughs to examine usability, identifying potential issues in emotion detection and tele-rehabilitation. Together, these evaluations provide insights into T-Rehab’s functionality, usability, and impact in supporting both physical rehabilitation and emotional well-being. The thorough integration of technology inside T-Rehab ensures a holistic approach to tele-rehabilitation, allowing patients to participate comfortably and efficiently from anywhere. This technique not only improves physical therapy outcomes but also promotes mental resilience, marking an important step forward in tele-rehabilitation practices.
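As context for the hand-tracking component described above, the following minimal sketch shows how webcam hand landmarks can be read with the public MediaPipe Hands API in Python; the camera index, landmark choice, and console output are illustrative assumptions, not the T-Rehab implementation.

```python
# Minimal sketch: webcam hand-landmark tracking with MediaPipe (assumed setup,
# not the T-Rehab code). Requires: pip install mediapipe opencv-python
import cv2
import mediapipe as mp

mp_hands = mp.solutions.hands

cap = cv2.VideoCapture(0)  # default webcam (assumption)
with mp_hands.Hands(max_num_hands=1, min_detection_confidence=0.5) as hands:
    while cap.isOpened():
        ok, frame = cap.read()
        if not ok:
            break
        # MediaPipe expects RGB input; OpenCV delivers BGR frames.
        results = hands.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
        if results.multi_hand_landmarks:
            # 21 normalized landmarks per detected hand; e.g. the index
            # fingertip (landmark 8) could drive an interactive exercise.
            tip = results.multi_hand_landmarks[0].landmark[8]
            print(f"index fingertip: x={tip.x:.2f}, y={tip.y:.2f}")
        cv2.imshow("hand tracking", frame)
        if cv2.waitKey(1) & 0xFF == 27:  # press Esc to quit
            break
cap.release()
cv2.destroyAllWindows()
```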
Figures

Figure 1: MediaPipe hand landmarks [18].
Figure 2: T-Rehab Intervention—UML use case diagram for applied gaming with emotion recognition.
Figure 3: T-Rehab architecture model.
Figure 4: Implementation architecture of T-Rehab.
Figure 5: Overview of the T-Rehab system for hyper-limb remote rehabilitation.
Figure 6: Data fusion of facial expression analysis and contactless heart rate monitoring.
Figure 7: Fusion of facial expressions and heart rate monitoring to detect stress and pain during rehab.
Figure 8: Performance measurement of T-Rehab prototype middleware using curl.
Figure 9: Comparison of positive and negative emotion detection accuracy across age groups.
Figure 10: Confidence score analysis: (a) confidence scores for correct predictions; (b) confidence scores for incorrect predictions.
Figure 11: Inference time analysis: model performance per image.
Figure 12: Real-time feedback display of emotional and physiological states using AI avatar in T-Rehab game.
Figure 13: Hand-tracking interaction in ColorMatch Rehab.
13 pages, 35894 KiB  
Article
An Artificial Intelligence Approach to the Craniofacial Recapitulation of Crisponi/Cold-Induced Sweating Syndrome 1 (CISS1/CISS) from Newborns to Adolescent Patients
by Giulia Pascolini, Dario Didona and Luigi Tarani
Diagnostics 2025, 15(5), 521; https://doi.org/10.3390/diagnostics15050521 - 21 Feb 2025
Abstract
Background/Objectives: Crisponi/cold-induced sweating syndrome 1 (CISS1/CISS, MIM#272430) is a genetic disorder due to biallelic variants in CRFL1 (MIM*604237). The related phenotype is mainly characterized by abnormal thermoregulation and sweating, facial muscle contractions in response to tactile and crying-inducing stimuli at an early age, skeletal anomalies (camptodactyly of the hands, scoliosis), and craniofacial dysmorphisms, comprising full cheeks, micrognathia, high and narrow palate, low-set ears, and a depressed nasal bridge. The condition is associated with high lethality during the neonatal period and can benefit from timely symptomatic therapy. Methods: We collected frontal images of all patients with CISS1/CISS published to date, which were analyzed with Face2Gene (F2G), a machine-learning technology for the facial diagnosis of syndromic phenotypes. In total, 75 portraits were subdivided into three cohorts, based on age (Cohort 1 and 2) and the presence of the typical facial trismus (Cohort 3). These portraits were uploaded to F2G to test their suitability for facial analysis and to verify the capacity of the AI tool to correctly recognize the syndrome based on the facial features only. The photos which passed this phase (62 images) were fed to three different AI algorithms—DeepGestalt, Facial D-Score, and GestaltMatcher. Results: The DeepGestalt algorithm results, including the correct diagnosis using a frontal portrait, suggested a similar facial phenotype in the first two cohorts. Cohort 3 seemed to be highly differentiable. The results were expressed in terms of the area under the curve (AUC) of the receiver operating characteristic (ROC) curve and p Value. The Facial D-Score values indicated the presence of a consistent degree of dysmorphic signs in the three cohorts, which was also confirmed by the GestaltMatcher algorithm. Interestingly, the latter allowed us to identify overlapping genetic disorders. Conclusions: This is the first AI-powered image analysis in defining the craniofacial contour of CISS1/CISS and in determining the feasibility of training the tool used in its clinical recognition. The obtained results showed that the use of F2G can reveal valid support in the diagnostic process of CISS1/CISS, especially in more severe phenotypes, manifesting with facial contractions and potentially lethal consequences. Full article
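The binary cohort comparisons above are summarized with the AUC of a ROC curve and a p-value. The sketch below shows one generic way to compute that kind of readout from per-image scores with scikit-learn and a label-permutation test; the scores and labels are hypothetical stand-ins, not Face2Gene outputs.

```python
# Generic AUC + permutation p-value for a binary cohort comparison.
# The score/label arrays are hypothetical stand-ins for per-image outputs.
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
labels = np.array([0] * 20 + [1] * 22)          # cohort membership
scores = rng.normal(loc=labels * 0.8, scale=1)  # per-image similarity scores

auc_obs = roc_auc_score(labels, scores)

# Permutation test: shuffle cohort labels to build a null AUC distribution.
n_perm = 5000
null = np.array([roc_auc_score(rng.permutation(labels), scores)
                 for _ in range(n_perm)])
p_value = (np.sum(null >= auc_obs) + 1) / (n_perm + 1)

print(f"AUC = {auc_obs:.3f}, permutation p = {p_value:.4f}")
```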
Figures

Figure 1: Artistic reproduction of the typical facial muscle paroxysmal contraction in response to handling or crying. The portrait was drawn by the illustrator Susanna Brusa.
Figure 2: DeepGestalt analysis of the studied cohorts and composite photos. Composite photos of the three groups of patients generated by F2G. These were obtained using F2G RESEARCH, after the upload of a frontal image into the CLINIC section, which automatically identified the suitable portraits for the experiment.
Figure 3: Multiclass comparison and confusion matrix. In the confusion matrix, the true positive (TP) values are highlighted diagonally, while errors (false positives and false negatives) are reported in the other cells. TP values of Cohorts 2 and 3 are significantly higher than the random chance for comparison and Cohort 2 is lower, indicating no recognition by the algorithm.
Figure 4: (A–C) Binary comparison experiments of the three cohorts. The results of the binary comparisons between Cohorts 1 and 2 (A), 1 and 3 (B), and 2 and 3 (C), including the AUC and p-value, are shown.
Figure 5: (A–C) Binary comparison experiments of the three cohorts between themselves. A greater separation of the two curves is observable in the comparison of Cohort 3 with the other two (A). Conversely, the comparisons of Cohorts 1 and 2 with the others are characterized by overlapping curves (B,C).
Figure 6: DeepGestalt performance on CISS1/CISS facial recognition. By uploading one frontal image into F2G CLINIC, the capacity of the platform to recognize the correct diagnosis was tested for the three cohorts. The results are displayed as a failure, as CISS1/CISS recognition within the highest 10 and 30 ranking syndromes, and as the first probable diagnosis.
Figure 7: GestaltMatcher experiment results and Pairwise Comparison Matrix (PCM). Our cohorts' images were compared with FDNA's GestaltMatcher gallery (4300 images), outputting similarity ranks. Furthermore, a clustering method was applied to the matrix (represented by a dendrogram), clustering similar ranks together. The number in each of the matrix cells represents the similarity rank achieved. Dark green values (low rank) indicate higher similarity in facial phenotypic features within the test cohort.
Figure 8: GestaltMatcher analysis and t-SNE visualization. The three most similar syndromes (NS in blue, CDLS in orange, and CISS in green) to our three cohorts (Cohort 1 in red, Cohort 2 in violet, and Cohort 3 in brown) when compared to the GestaltMatcher database are shown.
73 pages, 4804 KiB  
Systematic Review
From Neural Networks to Emotional Networks: A Systematic Review of EEG-Based Emotion Recognition in Cognitive Neuroscience and Real-World Applications
by Evgenia Gkintoni, Anthimos Aroutzidis, Hera Antonopoulou and Constantinos Halkiopoulos
Brain Sci. 2025, 15(3), 220; https://doi.org/10.3390/brainsci15030220 - 20 Feb 2025
Abstract
Background/Objectives: This systematic review presents how neural and emotional networks are integrated into EEG-based emotion recognition, bridging the gap between cognitive neuroscience and practical applications. Methods: Following PRISMA, 64 studies were reviewed that outlined the latest feature extraction and classification developments using deep learning models such as CNNs and RNNs. Results: Indeed, the findings showed that the multimodal approaches were practical, especially the combinations involving EEG with physiological signals, thus improving the accuracy of classification, even surpassing 90% in some studies. Key signal processing techniques used during this process include spectral features, connectivity analysis, and frontal asymmetry detection, which helped enhance the performance of recognition. Despite these advances, challenges remain more significant in real-time EEG processing, where a trade-off between accuracy and computational efficiency limits practical implementation. High computational cost is prohibitive to the use of deep learning models in real-world applications, therefore indicating a need for the development and application of optimization techniques. Aside from this, the significant obstacles are inconsistency in labeling emotions, variation in experimental protocols, and the use of non-standardized datasets regarding the generalizability of EEG-based emotion recognition systems. Discussion: These challenges include developing adaptive, real-time processing algorithms, integrating EEG with other inputs like facial expressions and physiological sensors, and a need for standardized protocols for emotion elicitation and classification. Further, related ethical issues with respect to privacy, data security, and machine learning model biases need to be much more proclaimed to responsibly apply research on emotions to areas such as healthcare, human–computer interaction, and marketing. Conclusions: This review provides critical insight into and suggestions for further development in the field of EEG-based emotion recognition toward more robust, scalable, and ethical applications by consolidating current methodologies and identifying their key limitations. Full article
(This article belongs to the Section Computational Neuroscience and Neuroinformatics)
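Two of the signal-processing steps the review highlights, band-power (spectral) features and frontal alpha asymmetry, can be illustrated with a short SciPy sketch; the sampling rate, channel names, and synthetic signals below are assumptions for demonstration only.

```python
# Sketch of two features discussed in the review: alpha band power and
# frontal alpha asymmetry (F4 minus F3). Synthetic signals stand in for EEG.
import numpy as np
from scipy.signal import welch

fs = 256                      # sampling rate in Hz (assumption)
t = np.arange(0, 10, 1 / fs)  # 10 s epoch
rng = np.random.default_rng(1)
eeg = {                       # two frontal channels of synthetic "EEG"
    "F3": np.sin(2 * np.pi * 10 * t) + 0.5 * rng.standard_normal(t.size),
    "F4": 1.5 * np.sin(2 * np.pi * 10 * t) + 0.5 * rng.standard_normal(t.size),
}

def band_power(signal, fs, lo=8.0, hi=13.0):
    """Integrate the Welch PSD over a frequency band (alpha by default)."""
    freqs, psd = welch(signal, fs=fs, nperseg=fs * 2)
    mask = (freqs >= lo) & (freqs <= hi)
    return np.trapz(psd[mask], freqs[mask])

alpha_f3 = band_power(eeg["F3"], fs)
alpha_f4 = band_power(eeg["F4"], fs)
# Frontal alpha asymmetry: log(right) - log(left), a common affect marker.
asymmetry = np.log(alpha_f4) - np.log(alpha_f3)
print(f"alpha F3={alpha_f3:.3f}, F4={alpha_f4:.3f}, asymmetry={asymmetry:.3f}")
```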
Figures

Figure 1: Flowchart of PRISMA methodology.
Figure 2: Risk of bias assessment across five domains.
Figure 3: Comparison of EEG techniques based on research insights.
Figure 4: EEG electrode placement (10–20 system).
Figure 5: Radar chart of EEG emotion recognition techniques and performance.
Figure 6: Heatmap of EEG technologies: ecological validity and additional metrics.
Figure 7: Strengths, limitations, and multimodal integration in EEG-based emotion recognition.
Figure 8: EEG variations across different emotional intelligence levels.
Figure 9: Flowchart of EEG-based emotional regulation interventions.
Figure 10: EEG-based emotional regulation interventions.
Figure 11: Comparative EEG heatmap: healthy vs. clinical populations.
Figure 12: Comparative EEG flowchart: healthy vs. clinical populations.
Figure 13: Circular graph showing relationships between neuroimaging techniques, machine learning methods, applications, and ethical challenges.
Figure 14: Neuroimaging modalities across application domains.
Figure 15: Comparative radar chart illustrating the strengths and limitations of EEG for emotion recognition.
32 pages, 4102 KiB  
Article
A Multimodal Pain Sentiment Analysis System Using Ensembled Deep Learning Approaches for IoT-Enabled Healthcare Framework
by Anay Ghosh, Saiyed Umer, Bibhas Chandra Dhara and G. G. Md. Nawaz Ali
Sensors 2025, 25(4), 1223; https://doi.org/10.3390/s25041223 - 17 Feb 2025
Abstract
This study introduces a multimodal sentiment analysis system to assess and recognize human pain sentiments within an Internet of Things (IoT)-enabled healthcare framework. This system integrates facial expressions and speech-audio recordings to evaluate human pain intensity levels. This integration aims to enhance the recognition system’s performance and enable a more accurate assessment of pain intensity. Such a multimodal approach supports improved decision making in real-time patient care, addressing limitations inherent in unimodal systems for measuring pain sentiment. So, the primary contribution of this work lies in developing a multimodal pain sentiment analysis system that integrates the outcomes of image-based and audio-based pain sentiment analysis models. The system implementation contains five key phases. The first phase focuses on detecting the facial region from a video sequence, a crucial step for extracting facial patterns indicative of pain. In the second phase, the system extracts discriminant and divergent features from the facial region using deep learning techniques, utilizing some convolutional neural network (CNN) architectures, which are further refined through transfer learning and fine-tuning of parameters, alongside fusion techniques aimed at optimizing the model’s performance. The third phase performs the speech-audio recording preprocessing; the extraction of significant features is then performed through conventional methods followed by using the deep learning model to generate divergent features to recognize audio-based pain sentiments in the fourth phase. The final phase combines the outcomes from both image-based and audio-based pain sentiment analysis systems, improving the overall performance of the multimodal system. This fusion enables the system to accurately predict pain levels, including ‘high pain’, ‘mild pain’, and ‘no pain’. The performance of the proposed system is tested with the three image-based databases such as a 2D Face Set Database with Pain Expression, the UNBC-McMaster database (based on shoulder pain), and the BioVid database (based on heat pain), along with the VIVAE database for the audio-based dataset. Extensive experiments were performed using these datasets. Finally, the proposed system achieved accuracies of 76.23%, 84.27%, and 38.04% for two, three, and five pain classes, respectively, on the 2D Face Set Database with Pain Expression, UNBC, and BioVid datasets. The VIVAE audio-based system recorded a peak performance of 97.56% and 98.32% accuracy for varying training–testing protocols. These performances were compared with some state-of-the-art methods that show the superiority of the proposed system. By combining the outputs of both deep learning frameworks on image and audio datasets, the proposed multimodal pain sentiment analysis system achieves accuracies of 99.31% for the two-class, 99.54% for the three-class, and 87.41% for the five-class pain problems. Full article
(This article belongs to the Section Physical Sensors)
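The final phase described above fuses the image-based and audio-based classifier outcomes at decision level. The sketch below shows one simple weighted-average fusion of per-class probabilities; the class set matches the abstract, but the probabilities and weights are illustrative, not the paper's fusion rule.

```python
# Decision-level fusion of two pain classifiers (image- and audio-based).
# Probabilities and weights are illustrative; the paper's rule may differ.
import numpy as np

CLASSES = ["no pain", "mild pain", "high pain"]

def fuse(p_image, p_audio, w_image=0.6, w_audio=0.4):
    """Weighted average of per-class probabilities, renormalized."""
    fused = w_image * np.asarray(p_image) + w_audio * np.asarray(p_audio)
    return fused / fused.sum()

p_img = [0.15, 0.60, 0.25]    # hypothetical image-model output
p_aud = [0.05, 0.30, 0.65]    # hypothetical audio-model output
fused = fuse(p_img, p_aud)
print(dict(zip(CLASSES, fused.round(3))), "->", CLASSES[int(np.argmax(fused))])
```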
Figures

Figure 1: Pictorial representation of the proposed multimodal pain sentiment analysis system (PSAS) for smart healthcare framework.
Figure 2: Detecting facial regions in input images for the image-based PSAS.
Figure 3: Demonstration of the CNN_A architecture for image-based PSAS.
Figure 4: Illustration of the CNN_B architecture.
Figure 5: Executed CNN_1 framework.
Figure 6: Examples of some image samples from the UNBC-McMaster database [60].
Figure 7: Examples of some image samples from the 2DFPE database [61].
Figure 8: Samples of some image specimens from the BioVid Heat Pain Database [62].
Figure 9: Demonstration of utilization of Scheme_1 experiments, exploring the effect of batch size vs. epochs on the proposed system's performance.
Figure 10: Demonstration of Scheme_1 experiments performing multi-resolution image analysis on the performance of the proposed system.
Figure 11: Demonstration of some image samples of the AffectNet dataset [64] with ethnic diversity and variations in age among the subjects to validate the robustness of the proposed methodology.
Figure 12: The performance outcome of the proposed pain SAS using audio features with (a) 50–50% training–testing and (b) 75–25% training–testing sets.
Figure 13: Performance of the proposed pain sentiment analysis system using the performance reported in Table 11 and Figure 12.
Figure 14: Performance of the proposed multimodal pain SAS (MSAS_1) using the 2-class 2DFPE and VIVAE databases.
Figure 15: Performance of the proposed multimodal pain SAS (MSAS_2) using the 3-class UNBC-McMaster and VIVAE databases.
Figure 16: Performance of the proposed multimodal pain SAS (MSAS_3) using the 4-class BioVid and VIVAE databases.
24 pages, 2289 KiB  
Article
A Non-Invasive Approach for Facial Action Unit Extraction and Its Application in Pain Detection
by Mondher Bouazizi, Kevin Feghoul, Shengze Wang, Yue Yin and Tomoaki Ohtsuki
Bioengineering 2025, 12(2), 195; https://doi.org/10.3390/bioengineering12020195 - 17 Feb 2025
Abstract
A significant challenge that hinders advancements in medical research is the sensitive and confidential nature of patient data in available datasets. In particular, sharing patients’ facial images poses considerable privacy risks, especially with the rise of generative artificial intelligence (AI), which could misuse such data if accessed by unauthorized parties. However, facial expressions are a valuable source of information for doctors and researchers, which creates a need for methods to derive them without compromising patient privacy or safety by exposing identifiable facial images. To address this, we present a quick, computationally efficient method for detecting action units (AUs) and their intensities—key indicators of health and emotion—using only 3D facial landmarks. Our proposed framework extracts 3D face landmarks from video recordings and employs a lightweight neural network (NN) to identify AUs and estimate AU intensities based on these landmarks. Our proposed method reaches a 79.25% F1-score in AU detection for the main AUs, and 0.66 in AU intensity estimation Root Mean Square Error (RMSE). This performance shows that it is possible for researchers to share 3D landmarks, which are far less intrusive, instead of facial images while maintaining high accuracy in AU detection. Moreover, to showcase the usefulness of our AU detection model, using the detected AUs and estimated intensities, we trained state-of-the-art Deep Learning (DL) models to detect pain. Our method reaches 91.16% accuracy in pain detection, which is not far behind the 93.14% accuracy obtained when employing a convolutional neural network (CNN) with residual blocks trained on actual images and the 92.11% accuracy obtained when employing all the ground-truth AUs. Full article
(This article belongs to the Section Biosignal Processing)
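The core idea above is a lightweight neural network that maps 3D facial landmarks to multi-label AU outputs. The PyTorch sketch below shows one plausible form of such a model; the landmark count, hidden size, and number of AUs are assumptions rather than the paper's exact architecture.

```python
# Plausible lightweight AU detector over flattened 3D landmarks (PyTorch).
# 478 landmarks x 3 coordinates and 12 AUs are assumptions for illustration.
import torch
import torch.nn as nn

N_LANDMARKS, N_AUS = 478, 12

class AUDetector(nn.Module):
    def __init__(self, n_landmarks=N_LANDMARKS, hidden=256, n_aus=N_AUS):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_landmarks * 3, hidden),
            nn.ReLU(),
            nn.Linear(hidden, n_aus),   # one logit per action unit
        )

    def forward(self, landmarks):       # landmarks: (batch, n_landmarks, 3)
        return self.net(landmarks.flatten(start_dim=1))

model = AUDetector()
criterion = nn.BCEWithLogitsLoss()      # multi-label objective
x = torch.randn(8, N_LANDMARKS, 3)      # dummy batch of landmark sets
y = torch.randint(0, 2, (8, N_AUS)).float()
loss = criterion(model(x), y)
loss.backward()
print(f"dummy training loss: {loss.item():.3f}")
```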
Figures

Figure 1: An example of the human face mesh superimposed on the human face itself. Areas around the eyes, the nose, and the mouth have higher landmark density than the remaining parts of the face.
Figure 2: A flowchart of the proposed framework. The framework is composed of three main components: an anonymizer, an AU detector, and a pain detector.
Figure 3: A block diagram of the proposed framework: upon generating the 3D face landmarks, a 2-layer FCNN with multiple outputs is used to detect the AUs. The sequence of detected AUs is then processed through a Transformer encoder to identify the class (pain).
Figure 4: The structure of the Transformer encoder used in our work.
Figure 5: Example of consecutive frames from the dataset (a few seconds apart) along with their detected face landmarks.
Figure 6: Distribution in percent of the different AUs in our dataset.
Figure 7: Precision, recall, and F1-scores of the detection of the secondary AUs.
Figure 8: Distribution of the intensity level for each action unit in our dataset.
23 pages, 2838 KiB  
Article
Investigating Eye Movements to Examine Attachment-Related Differences in Facial Emotion Perception and Face Memory
by Karolin Török-Suri, Kornél Németh, Máté Baradits and Gábor Csukly
J. Imaging 2025, 11(2), 60; https://doi.org/10.3390/jimaging11020060 - 16 Feb 2025
Abstract
Individual differences in attachment orientations may influence how we process emotionally significant stimuli. As one of the most important sources of emotional information are facial expressions, we examined whether there is an association between adult attachment styles (i.e., scores on the ECR questionnaire, which measures the avoidance and anxiety dimensions of attachment), facial emotion perception and face memory in a neurotypical sample. Trait and state anxiety were also measured as covariates. Eye-tracking was used during the emotion decision task (happy vs. sad faces) and the subsequent facial recognition task; the length of fixations to different face regions was measured as the dependent variable. Linear mixed models suggested that differences during emotion perception may result from longer fixations in individuals with insecure (anxious or avoidant) attachment orientations. This effect was also influenced by individual state and trait anxiety measures. Eye movements during the recognition memory task, however, were not related to either of the attachment dimensions; only trait anxiety had a significant effect on the length of fixations in this condition. The results of our research may contribute to a more accurate understanding of facial emotion perception in the light of attachment styles, and their interaction with anxiety characteristics. Full article
(This article belongs to the Special Issue Human Attention and Visual Cognition (2nd Edition))
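The fixation-length analysis above relies on linear mixed models. The statsmodels sketch below shows a generic model of that kind, with fixation duration predicted by AoI, stimulus type, and an attachment score, and a random intercept per participant; the column names and simulated data are assumptions, not the authors' specification.

```python
# Generic linear mixed model: fixation length predicted by AoI, stimulus type,
# and an attachment score, with a random intercept per participant.
# Column names and the simulated data frame are assumptions for illustration.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(2)
n = 400
df = pd.DataFrame({
    "participant": rng.integers(0, 40, n).astype(str),
    "aoi": rng.choice(["left_eye", "right_eye", "nose", "mouth"], n),
    "stimulus": rng.choice(["happy", "sad"], n),
    "ecr_avoidance": rng.normal(50, 15, n),
})
df["fixation_ms"] = 250 + 0.8 * df["ecr_avoidance"] + rng.normal(0, 40, n)

model = smf.mixedlm("fixation_ms ~ aoi * stimulus + ecr_avoidance",
                    data=df, groups=df["participant"])
result = model.fit()
print(result.summary())
```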
Figures

Figure 1: The original (left) vs. the edited (right) happy face (example; RaFD identity no. 31). Figure 1 has been adapted from the Radboud Faces Database [62]. These images are available on request from the authors' website (https://rafd.socsci.ru.nl/); no copyright permission is needed.
Figure 2: The seven levels of emotional expressions used in the emotion decision task. Outer parts of the facial stimuli (hair, ears, neck) were covered with a grey mask. (Example: RaFD identity no. 8.) Figure 2 has been adapted from the Radboud Faces Database [62]. These images are available on request from the authors' website (https://rafd.socsci.ru.nl/); no copyright permission is needed.
Figure 3: AoIs based on the Voronoi method [63]: left eye, right eye, nose, mouth. The two eye regions are named according to the "director's view" (i.e., anatomically reversed). The position of the fixation cross is marked in red. Figure 3 has been adapted from the Radboud Faces Database [62]. These images are available on request from the authors' website (https://rafd.socsci.ru.nl/); no copyright permission is needed.
Figure 4: Interaction of stimulus type and AoI, in the context of percentile scores on the ECR avoidance subscale. The trendline highlights the positive association between fixation duration and scores on the avoidance dimension of the ECR. Significant differences between estimated fixation lengths to different AoIs are marked with an asterisk (p < 0.05). Error bars represent standard errors (SEs).
Figure 5: Interaction of stimulus type and AoI, in the context of percentile scores on the ECR anxiety subscale. The trendline highlights the positive association between fixation duration and scores on the anxiety dimension of the ECR. Significant differences between estimated fixation lengths to different AoIs are marked with an asterisk (p < 0.05). Error bars represent standard errors (SEs).
Figure 6: Interaction of stimulus type and AoI, in the context of percentile scores on the STAI-T subscale. The trendline highlights the negative association between fixation duration and scores on the trait version of the STAI. Significant differences between estimated fixation lengths to different AoIs are marked with an asterisk (p < 0.05). Error bars represent standard errors (SEs).
Figure 7: Interaction of stimulus type and AoI, in the context of percentile scores on the STAI-S subscale. The trendline highlights the negative association between fixation duration and scores on the state version of the STAI. Significant differences between estimated fixation lengths to different AoIs are marked with an asterisk (p < 0.05). Error bars represent standard errors (SEs).
18 pages, 6370 KiB  
Review
Anatomy-Based Filler Injection: Treatment Techniques for Supraorbital Hollowness and Charming Roll
by Gi-Woong Hong, Wonseok Choi, Jovian Wan, Song Eun Yoon, Carlos Bautzer, Lucas Basmage, Patricia Leite and Kyu-Ho Yi
Life 2025, 15(2), 304; https://doi.org/10.3390/life15020304 - 15 Feb 2025
Abstract
Supraorbital hollowness and pretarsal fullness, commonly known as the sunken eyelid and charming roll, respectively, are significant anatomical features that impact the aesthetic appearance of the periorbital region. Supraorbital hollowness is characterized by a recessed appearance of the upper eyelid, often attributed to genetic factors, aging, or surgical alterations, such as excessive fat removal during blepharoplasty. This condition is particularly prevalent among East Asians due to anatomical differences, such as weaker levator muscles and unique fat distribution patterns. Pretarsal fullness, also known as aegyo-sal, enhances the youthful and expressive appearance of the lower eyelid, forming a roll above the lash line that is considered aesthetically desirable in East Asian culture. Anatomical-based filler injection techniques are critical for correcting these features, involving precise placement within the correct tissue planes to avoid complications and achieve natural-looking results. This approach not only improves the aesthetic appeal of the eyelid but also enhances the overall facial harmony, emphasizing the importance of tailored procedures based on individual anatomy and cultural preferences. Full article
Figures

Figure 1: Before (A) and after (B) treatment of supraorbital hollowness.
Figure 2: Anatomical layers of the supraorbital region.
Figure 3: Vascular structures of the orbital region.
Figure 4: Injection entry point and technique for the cannula. Injection entry point: vertical line drawn above or outside the lateral canthus, around the lower margin of the superior orbital rim. Focus on the medial and middle parts of the periorbital rim, under the brow, to avoid the supraorbital and supratrochlear main arteries. Above the supratarsal lid crease and below the orbicularis retaining ligament. Injection technique: patient in vertical sitting position with voluntarily opened eyes. Retrograde linear tiny injection technique with very slow release.
Figure 5: Anatomy of the preseptal space.
Figure 6: Injection planes: supraperiosteal and submuscular injections around the orbital rim over the orbital septum to fill the hollowness. Subdermal injection of very soft HA filler to smooth the surface and remove unnecessary multiple eyelid lines.
Figure 7: Ideal position and shape of the eyebrow.
Figure 8: Ratio difference between the size of the eye and eyebrow.
Figure 9: Common classification of eyebrow shapes around the world.
Figure 10: Retro-orbicularis oculi fat (ROOF) in the eyebrow region.
Figure 11: Injection plane for the cannula. Submuscular injection into the ROOF (retro-orbicularis oculi fat) for eyebrow augmentation. Subdermal injection of very soft filler to even out the surface and remove unnecessary multiple eyelid lines.
Figure 12: Structure of the lower eyelid roll muscle.
Figure 13: Injection techniques for the cannula or needle. Linear threading, retrograde tiny injection, very slow release, serial puncture, and tenting technique.
Figure 14: Injection planes: deep subdermal or supramuscular injections. Subdermal injection to smooth the surface, close to the eyelash.
Figure 15: Anatomy of the superior and inferior palpebral arteries.
15 pages, 4374 KiB  
Article
An Artificial Intelligence Model for Sensing Affective Valence and Arousal from Facial Images
by Hiroki Nomiya, Koh Shimokawa, Shushi Namba, Masaki Osumi and Wataru Sato
Sensors 2025, 25(4), 1188; https://doi.org/10.3390/s25041188 - 15 Feb 2025
Abstract
Artificial intelligence (AI) models can sense subjective affective states from facial images. Although recent psychological studies have indicated that dimensional affective states of valence and arousal are systematically associated with facial expressions, no AI models have been developed to estimate these affective states from facial images based on empirical data. We developed a recurrent neural network-based AI model to estimate subjective valence and arousal states from facial images. We trained our model using a database containing participant valence/arousal states and facial images. Leave-one-out cross-validation supported the validity of the model for predicting subjective valence and arousal states. We further validated the effectiveness of the model by analyzing a dataset containing participant valence/arousal ratings and facial videos. The model predicted second-by-second valence and arousal states, with prediction performance comparable to that of FaceReader, a commercial AI model that estimates dimensional affective states based on a different approach. We constructed a graphical user interface to show real-time affective valence and arousal states by analyzing facial video data. Our model is the first distributable AI model for sensing affective valence and arousal from facial images/videos to be developed based on an empirical database; we anticipate that it will have many practical uses, such as in mental health monitoring and marketing research. Full article
(This article belongs to the Special Issue Emotion Recognition Based on Sensors (3rd Edition))
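The model described above is a recurrent (GRU-based) network that maps facial information over time to valence and arousal. The PyTorch sketch below shows a minimal version of that idea; the feature dimensionality, sequence length, and pooling choice are assumptions and may differ from the published model.

```python
# Minimal GRU regressor from per-frame facial features to valence/arousal.
# Input size, hidden size, and sequence length are illustrative assumptions.
import torch
import torch.nn as nn

class ValenceArousalGRU(nn.Module):
    def __init__(self, n_features=136, hidden=64):
        super().__init__()
        self.gru = nn.GRU(n_features, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 2)   # outputs: [valence, arousal]

    def forward(self, x):                  # x: (batch, time, n_features)
        out, _ = self.gru(x)
        return self.head(out[:, -1, :])    # prediction at the last time step

model = ValenceArousalGRU()
frames = torch.randn(4, 30, 136)           # 4 clips, 30 frames, 136 features
pred = model(frames)
print(pred.shape)                          # torch.Size([4, 2])
```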
Figures

Figure 1: Structure of a gated recurrent unit (GRU)-based estimation model.
Figure 2: Videos of a participant in the RIKEN facial expression database.
Figure 3: Mean (standard error) Pearson's correlation coefficients between the actual and estimated ratings. *** p < 0.001.
Figure 4: Pearson's correlation coefficients between the actual and estimated ratings of valence in the model development.
Figure 5: Pearson's correlation coefficients between the actual and estimated ratings of arousal in the model development.
Figure 6: Results of drop-column importance for the training data.
Figure 7: Mean (standard error) Pearson's correlation coefficients between actual and estimated valence and arousal ratings estimated by our model and FaceReader 9. *** p < 0.001.
Figure 8: Pearson's correlation coefficients between the actual and estimated ratings of valence in the experiment.
Figure 9: Pearson's correlation coefficients between the actual and estimated ratings of arousal in the experiment.
Figure 10: Results of drop-column importance for the test data.
Figure 11: The GUI-based valence/arousal estimation system. Top left: input video; top right: current valence and arousal intensity values, represented as a point in two-dimensional space; bottom: graph showing changes in intensity values. The person shown is one of the authors, who agreed to show his face.
Figure 12: Summary of the estimation results. Top: temporal changes in valence and arousal intensity throughout the entire video. Bottom: distribution of intensity values. The two-dimensional space representing valence and arousal is divided into 5 × 5 regions; in each region, the frequency of the intensity value is represented by color intensity.
28 pages, 9455 KiB  
Article
Advancing Emotionally Aware Child–Robot Interaction with Biophysical Data and Insight-Driven Affective Computing
by Diego Resende Faria, Amie Louise Godkin and Pedro Paulo da Silva Ayrosa
Sensors 2025, 25(4), 1161; https://doi.org/10.3390/s25041161 - 14 Feb 2025
Abstract
This paper investigates the integration of affective computing techniques using biophysical data to advance emotionally aware machines and enhance child–robot interaction (CRI). By leveraging interdisciplinary insights from neuroscience, psychology, and artificial intelligence, the study focuses on creating adaptive, emotion-aware systems capable of dynamically recognizing and responding to human emotional states. Through a real-world CRI pilot study involving the NAO robot, this research demonstrates how facial expression analysis and speech emotion recognition can be employed to detect and address negative emotions in real time, fostering positive emotional engagement. The emotion recognition system combines handcrafted and deep learning features for facial expressions, achieving an 85% classification accuracy during real-time CRI, while speech emotions are analyzed using acoustic features processed through machine learning models with an 83% accuracy rate. Offline evaluation of the combined emotion dataset using a Dynamic Bayesian Mixture Model (DBMM) achieved a 92% accuracy for facial expressions, and the multilingual speech dataset yielded 98% accuracy for speech emotions using the DBMM ensemble. Observations from psychological and technological aspects, coupled with statistical analysis, reveal the robot’s ability to transition negative emotions into neutral or positive states in most cases, contributing to emotional regulation in children. This work underscores the potential of emotion-aware robots to support therapeutic and educational interventions, particularly for pediatric populations, while setting a foundation for developing personalized and empathetic human–machine interactions. These findings demonstrate the transformative role of affective computing in bridging the gap between technological functionality and emotional intelligence across diverse domains. Full article
(This article belongs to the Special Issue Multisensory AI for Human-Robot Interaction)
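The offline evaluation above uses a Dynamic Bayesian Mixture Model (DBMM) ensemble that fuses the posteriors of several base classifiers. The sketch below implements a simplified weighted-posterior fusion with a Bayesian-style temporal update; the weights, emotion set, and example posteriors are illustrative and not the authors' exact formulation.

```python
# Simplified DBMM-style fusion: weight each base classifier's posterior and
# carry the fused belief over time as a prior. Numbers are illustrative only.
import numpy as np

EMOTIONS = ["happy", "neutral", "sad", "afraid"]

def fuse_step(prior, posteriors, weights):
    """One time step: mix base-classifier posteriors, multiply by the prior."""
    mixed = sum(w * np.asarray(p) for w, p in zip(weights, posteriors))
    belief = prior * mixed
    return belief / belief.sum()

belief = np.full(len(EMOTIONS), 1 / len(EMOTIONS))   # uniform initial prior
weights = [0.5, 0.3, 0.2]                            # per-classifier confidence

# Two time steps of hypothetical posteriors from three base classifiers.
stream = [
    [[0.1, 0.6, 0.2, 0.1], [0.2, 0.5, 0.2, 0.1], [0.1, 0.7, 0.1, 0.1]],
    [[0.6, 0.2, 0.1, 0.1], [0.5, 0.3, 0.1, 0.1], [0.7, 0.2, 0.05, 0.05]],
]
for posteriors in stream:
    belief = fuse_step(belief, posteriors, weights)
    print(EMOTIONS[int(np.argmax(belief))], belief.round(3))
```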
Figures

Figure 1: Overview of a child–robot interaction session with specific frames from each stage defined.
Figure 2: Examples of facial landmark detection and expression recognition performed on images captured by the NAO robot's camera. The images are intentionally filtered to blur the children's faces, ensuring their privacy.
Figure 3: Facial expression distribution per stage. This figure illustrates the distribution of emotions, including happy, neutral, afraid, sad, and surprised, observed across the five stages of the child–robot interaction.
Figure 4: Speech emotion distribution per stage. Our approach could detect only neutral and positive emotions. Other emotions like fear, sadness, and anger were not detected.
Figure 5: Text sentiment distribution per stage. Based on the sentiment analysis of text converted from speech, this figure shows the proportions of positive, neutral, and negative sentiments across the stages.
Figure 6: Summary of positive, neutral, and negative emotional states per stage. This figure aggregates the results from facial expressions, speech emotions, and text sentiment, summarizing the emotional states into positive (happy and surprised), neutral, and negative (sad, afraid, angry, and disgusted).
Figure 7: Emotional response distribution (children vs. mothers).
Figure 8: Child emotional trends during interaction.
Figure 9: Parent–child emotional concordance.
Figure 10: Sample frames showcasing interactions between boys and girls with the NAO robot. The examples highlight their proxemics relative to the robot and instances where mothers participated during specific parts of the session. The images are captured from both the environment camera and the robot's onboard camera.
21 pages, 1123 KiB  
Article
Cognitive Mechanisms Underlying the Influence of Facial Information Processing on Estimation Performance
by Xinqi Huang, Xiaofan Zhou, Mingyi Xu, Zhihao Liu, Yilin Ma, Chuanlin Zhu and Dongquan Kou
Behav. Sci. 2025, 15(2), 212; https://doi.org/10.3390/bs15020212 - 14 Feb 2025
Abstract
This study aimed to investigate the roles of facial information processing and math anxiety in estimation performance. Across three experiments, participants completed a two-digit multiplication estimation task under the conditions of emotion judgment (Experiment 1), identity judgment (Experiment 2), and combined emotion and identity judgment (Experiment 3). In the estimation task, participants used either the down-up or up-down problem to select approximate answers. In Experiment 1, we found that negative emotions impair estimation performance, while positive and consistent emotions have a facilitating effect on estimation efficiency. In Experiment 2, we found that emotion and identity consistency interact with each other, and negative emotions actually promote estimation efficiency when identity is consistent. In Experiment 3, we found that emotion, identity consistency, and emotional consistency have complex interactions on estimation performance. Moreover, in most face-processing conditions, participants’ estimation performance is not affected by their level of math anxiety. However, in a small number of cases, mean proportions under happy and fearful conditions are negatively correlated with math anxiety. Full article
(This article belongs to the Section Cognition)
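The estimation task above asks participants to approximate two-digit products using a down-up or up-down rounding strategy. The small sketch below shows one common reading of those strategies (round one operand down and the other up, or vice versa); this is an illustrative interpretation, not the authors' stimulus code.

```python
# Down-up vs. up-down rounding for a two-digit multiplication estimate.
# This is one plausible reading of the task, not the authors' stimulus code.
def round_down(n):
    return (n // 10) * 10

def round_up(n):
    return -((-n // 10) * 10)   # ceiling to the next multiple of ten

def down_up(a, b):
    return round_down(a) * round_up(b)

def up_down(a, b):
    return round_up(a) * round_down(b)

a, b = 43, 67
print(f"exact: {a * b}")            # 2881
print(f"down-up: {down_up(a, b)}")  # 40 * 70 = 2800
print(f"up-down: {up_down(a, b)}")  # 50 * 60 = 3000
```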
Figures

Figure 1: Trial structure in Experiment 1. The facial expression images used in the actual experiment were sourced from the NimStim Facial Expression Set, but due to copyright restrictions, the images in the figure are provided by the authors.
Figure 2: Trial structure in Experiment 2. The facial expression images used in the actual experiment were sourced from the NimStim Facial Expression Set, but due to copyright restrictions, the images in the figure are provided by the authors.
Figure 3: Trial structure in Experiment 3. The facial expression images used in the actual experiment were sourced from the NimStim Facial Expression Set, but due to copyright restrictions, the images in the figure are provided by the authors.
35 pages, 2088 KiB  
Article
The Influence of Face Masks on Micro-Expression Recognition
by Yunqiu Zhang and Chuanlin Zhu
Behav. Sci. 2025, 15(2), 200; https://doi.org/10.3390/bs15020200 - 13 Feb 2025
Abstract
This study aimed to explore the influence of various mask attributes on the recognition of micro-expressions (happy, neutral, and fear) and facial favorability under different background emotional conditions (happy, neutral, and fear). The participants were asked to complete an ME (micro-expression) recognition task, and the corresponding accuracy (ACC), reaction time (RT), and facial favorability were analyzed. Results: (1) Background emotions significantly impacted the RT and ACC in micro-expression recognition, with fear backgrounds hindering performance. (2) Mask wearing, particularly opaque ones, prolonged the RT but had little effect on the ACC. Transparent masks and non-patterned masks increased facial favorability. (3) There was a significant interaction between background emotions and mask attributes; negative backgrounds amplified the negative effects of masks on recognition speed and favorability, while positive backgrounds mitigated these effects. This study provides insights into how masks influence micro-expression recognition, crucial for future research in this area. Full article
Figures

Figure 1: Flowchart of a single trial in Experiment 1 ((a) illustrates the micro-expression recognition process without wearing a mask, while (b) shows the micro-expression recognition process with a mask on).
Figure 2: Flowchart of a single trial in Experiment 2 ((a) illustrates the micro-expression recognition process without wearing a mask, while (b) shows the micro-expression recognition process with a transparent mask on).
Figure 3: Flowchart of a single trial in Experiment 3 ((a) illustrates the micro-expression recognition process without wearing a mask, while (b) shows the micro-expression recognition process with a patterned mask on). The asterisk in the lower right corner of the mask surface indicates that this is a mask with a pattern.
20 pages, 1504 KiB  
Article
Unveiling the Truth in Pain: Neural and Behavioral Distinctions Between Genuine and Deceptive Pain
by Vanessa Zanelli, Fausta Lui, Claudia Casadio, Francesco Ricci, Omar Carpentiero, Daniela Ballotta, Marianna Ambrosecchia, Martina Ardizzi, Vittorio Gallese, Carlo Adolfo Porro and Francesca Benuzzi
Brain Sci. 2025, 15(2), 185; https://doi.org/10.3390/brainsci15020185 - 12 Feb 2025
Abstract
Background/Objectives: Fake pain expressions are more intense, prolonged, and include non-pain-related actions compared to genuine ones. Despite these differences, individuals struggle to detect deception in direct tasks (i.e., when asked to detect liars). Regarding neural correlates, while pain observation has been extensively studied, little is known about the neural distinctions between processing genuine, fake, and suppressed pain facial expressions. This study seeks to address this gap using authentic pain stimuli and an implicit emotional processing task. Methods: Twenty-four healthy women underwent an fMRI study, during which they were instructed to complete an implicit gender discrimination task. Stimuli were video clips showing genuine, fake, suppressed pain, and neutral facial expressions. After the scanning session, participants reviewed the stimuli and rated them indirectly according to the intensity of the facial expression (IE) and the intensity of the pain (IP). Results: Mean scores of IE and IP were significantly different for each category. A greater BOLD response for the observation of genuine pain compared to fake pain was observed in the pregenual anterior cingulate cortex (pACC). A parametric analysis showed a correlation between brain activity in the mid-cingulate cortex (aMCC) and the IP ratings. Conclusions: Higher IP ratings for genuine pain expressions and higher IE ratings for fake ones suggest that participants were indirectly able to recognize authenticity in facial expressions. At the neural level, pACC and aMCC appear to be involved in unveiling the genuine vs. fake pain and in coding the intensity of the perceived pain, respectively. Full article
(This article belongs to the Section Sensory and Motor Neuroscience)
Figures

Figure 1: Experimental design. Each trial (14 s) was composed of a brief warning signal (WS) of 0.5 s, video clip presentation (2.5 s), and a continuous black screen (11 s) until the next trial.
Figure 2: Main effect of category on IE ratings. All comparisons were statistically significant (p < 0.001). Error bars depict standard deviation (SD).
Figure 3: Main effect of category on IP ratings. All comparisons were statistically significant (p < 0.001), apart from the fake vs. suppressed comparison (p < 0.05, which does not survive Bonferroni correction). Error bars depict standard deviation (SD).
Figure 4: Regions of increased signal for the contrast GP vs. FP (x = 12). L = left; cluster-size threshold k > 46 voxels.
Figure 5: Regions whose activity is related to IP ratings (x = −3). L = left; cluster-size threshold k > 9.
12 pages, 1714 KiB  
Brief Report
Beauty Is Not Always a Perk: The Role of Attractiveness and Social Interest in Trust Decisions
by Junchen Shang and Yizhuo Zhang
Behav. Sci. 2025, 15(2), 175; https://doi.org/10.3390/bs15020175 - 7 Feb 2025
Abstract
This study examined the impact of males’ facial and vocal attractiveness, as well as social interest in females’ decision-making in a trust game. The results showed that trustees with attractive faces or expressing positive social interest were more likely to receive initial investments. Trustees with attractive voices also received more initial investments than unattractive ones in most conditions, except when they had attractive faces and positive interest. Moreover, participants reinvest in trustees with attractive faces or voices, even if they withheld repayment. However, trustees with positive interest would receive more reinvestment only when they reciprocated. In addition, trusters expressing positive social interest were expected to invest and earn repayment at higher rates. Nonetheless, trusters with attractive faces (or voices) were only expected to invest at higher rates when they had attractive voices (or faces) and negative interest. These findings suggest that beauty premium is modulated by participants’ roles, such that the effect of beauty would be stronger when participants encounter trustees rather than trusters. Positive social interest is a perk in most conditions, except when trustees withheld repayment. Full article
Show Figures
Figure 1. Schematic representation of the trust game.
Figure 2. (A) Mean initial investment rates as a function of facial attractiveness, vocal attractiveness, and social interest in TG1. (B) Mean expected investment rates as a function of facial attractiveness, vocal attractiveness, and social interest in TG2. Error bars represent standard errors. * p < 0.05, ** p < 0.01, and *** p < 0.001.
21 pages, 2814 KiB  
Article
Three-Dimensional Geometric Morphometric Characterization of Facial Sexual Dimorphism in Juveniles
by Riccardo Solazzo, Annalisa Cappella, Daniele Gibelli, Claudia Dolci, Gianluca Tartaglia and Chiarella Sforza
Diagnostics 2025, 15(3), 395; https://doi.org/10.3390/diagnostics15030395 - 6 Feb 2025
Abstract
Background: The characterization of facial sexual dimorphic patterns in healthy populations provides valuable normative data for tailoring functionally effective surgical treatments, predicting their aesthetic outcomes, and identifying dysmorphic facial traits related to hormonal disorders and genetic syndromes. Although facial sexual differences in juveniles of different ages have already been investigated, few studies have approached this topic with three-dimensional (3D) geometric morphometric (GMM) analysis, whose interpretation may add important clinical insight to the current understanding. This study aims to investigate the location and extent of facial sexual variation in juveniles through a spatially dense GMM analysis. Methods: We investigated 3D stereophotogrammetric facial scans of 304 healthy Italians aged 3 to 18 years (149 males, 155 females), categorized into four age groups: early childhood (3–6 years), late childhood (7–12 years), puberty (13–15 years), and adolescence (16–18 years). Geometric morphometric analyses of facial shape (allometry, generalized Procrustes analysis, Principal Component Analysis, Procrustes distance, and Partial Least Squares regression) were conducted to detail sexually dimorphic traits in each age group. Results: The findings confirmed that males have larger faces than females of the same age and that significant differences in facial shape between the two sexes exist in all age groups. Juveniles start to express sexual dimorphism from 3 years of age, although biological sex becomes a predictor of facial soft-tissue morphology from the 7th year of life, with males displaying more protrusive medial facial features and females showing more outwardly placed cheeks and eyes. Conclusions: We provide a detailed characterization of facial change trajectories in the two sexes across four age classes, and these data can be valuable for several clinical disciplines dealing with the craniofacial region. Our results may serve as comparative data for the early diagnosis of craniofacial abnormalities and alterations, as a reference in the planning of personalized surgical and orthodontic treatments and the evaluation of their outcomes, as well as in forensic applications such as predicting the faces of missing juveniles. Full article
Show Figures
Figure 1. (a) Template and (b) target mesh with the annotated landmarks represented by the orange and black dots, respectively (tr: trichion; n: nasion; prn: pronasale; sn: subnasale; sl: sublabiale; gn: gnathion; ft: frontotemporale; zy: zygion; t: tragion; go: gonion); (c) rough alignment based on landmarks and rigid registration to approach the two meshes and to match the translation, rotation, and scaling of the template with those of the target; (d) non-rigid registration, where the template is modified to represent the target; (e) final representation of the target after fine alignment of the template.
Figure 2. Position of the digitized anatomical landmarks (tr: trichion; n: nasion; prn: pronasale; sn: subnasale; sl: sublabiale; gn: gnathion; ft: frontotemporale; zy: zygion; t: tragion; go: gonion).
Figure 3. Plots of the first and second principal components with a 95% confidence interval.
Figure 4. Effects of sex, size, and their interaction on facial shape. Sex evaluates the female-to-male transition, while size runs from narrower to larger faces (centroid size).
17 pages, 1537 KiB  
Review
Advanced Surgical Approaches for the Rejuvenation of the Submental and Cervicofacial Regions: A Literature Review for a Personalized Approach
by Anastasiya S. Borisenko, Valentin I. Sharobaro, Nigora S. Burkhonova, Alexey E. Avdeev and Yousif M. Ahmed Alsheikh
Cosmetics 2025, 12(1), 26; https://doi.org/10.3390/cosmetics12010026 - 5 Feb 2025
Abstract
Interest in surgical techniques for enhancing the submental and cervicofacial regions has grown markedly in recent years, and informed patients are actively seeking sophisticated plastic surgery techniques to achieve comprehensive rejuvenation in these specific areas. Common complaints expressed by these patients include sagging of the jawline, the emergence of deep perioral wrinkles, and the formation of “marionette lines” within the lower third of the face. Furthermore, age-related signs, including neck laxity, submental adipose accumulation, “witch’s chin” deformity, and weakened platysma musculature, are common within this anatomical region. This literature review aims to summarize recent technical improvements, the historical evolution, indications, postoperative care, and challenges of facial rejuvenation of the lower third of the face and neck. The application of minimally invasive procedures as part of a comprehensive approach to the aging face is also discussed. For this article, an extensive search of the available literature was conducted using leading databases, including PubMed and MEDLINE, with the keywords “neck lift”, “platysmaplasty”, “facial rejuvenation”, “medial platysmaplasty”, “lateral platysmaplasty”, “neck rejuvenation”, and “cervicofacial rejuvenation”. Full article
Show Figures
Figure 1. Submental incision for medial platysmaplasty. (A) Inferior view; note the relationship with the hyoid bone. (B) Dissection area available through the described incision [1].
Figure 2. Ptosis of the submandibular gland: (A) general view of the location of the submandibular glands with respect to the angles of the lower jaw; (B) level of resection of the glands; (C) removed fragments of the glands [1].
Figure 3. Correct location for the submental incision. Placing the submental incision 1.5 cm posterior to the arrow (showing the incision location of the submental crease) prevents any accentuation of a “double chin” or “witch’s chin” and allows for easier dissection and suturing in the anterior neck [55].
Figure 4. Cervicomental structures affecting the contours of the neck [1].