Deep Adaptation of Adult-Child Facial Expressions by Fusing Landmark Features

Published: 20 July 2023

Abstract

Imaging of facial affect may be used to measure psychophysiological attributes of children into adulthood for applications in education, healthcare, and entertainment, among others. Deep convolutional neural networks show promising results in classifying facial expressions of adults. However, classifier models trained with adult benchmark data are unsuitable for learning child expressions due to discrepancies in psychophysical development. Similarly, models trained with child data perform poorly in adult expression classification. We propose domain adaptation to concurrently align distributions of adult and child expressions in a shared latent space for robust classification of either domain. Furthermore, age variations in facial images are studied in age-invariant face recognition yet remain unleveraged in adult-child expression classification. We take inspiration from multiple fields and propose deep adaptive FACial Expressions fusing BEtaMix SElected Landmark Features (FACE-BE-SELF) for adult-child expression classification. For the first time in the literature, a mixture of Beta distributions is used to decompose and select facial features based on their correlations with expression, domain, and identity factors. We evaluate FACE-BE-SELF using 5-fold cross-validation on two pairs of adult-child data sets. The proposed FACE-BE-SELF approach outperforms transfer learning and other baseline domain adaptation methods in aligning latent representations of adult and child expressions.
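As a rough illustration of the BetaMix selection idea in the abstract, the sketch below fits a two-component Beta mixture (EM with method-of-moments updates) to the absolute correlations between individual landmark features and expression labels, then keeps the features claimed by the high-correlation component. This is a minimal sketch under stated assumptions, not the authors' FACE-BE-SELF implementation: the function names (fit_beta_mixture, select_features), the single expression factor, and the 0.5 posterior threshold are illustrative choices, whereas the paper decomposes features with respect to expression, domain, and identity factors jointly.

import numpy as np
from scipy.stats import beta as beta_dist


def _moment_match(x, w):
    # Weighted method-of-moments estimate of Beta(a, b) parameters.
    w = w / w.sum()
    m = np.sum(w * x)
    v = np.sum(w * (x - m) ** 2)
    v = min(v, m * (1 - m) * 0.999)               # keep the estimate feasible
    common = m * (1 - m) / max(v, 1e-8) - 1
    return max(m * common, 1e-3), max((1 - m) * common, 1e-3)


def fit_beta_mixture(x, n_iter=200, tol=1e-6):
    # EM for a two-component Beta mixture on values in (0, 1).
    x = np.clip(x, 1e-4, 1 - 1e-4)
    params = [(1.0, 3.0), (3.0, 1.0)]             # one low- and one high-mean component
    pis = np.array([0.5, 0.5])
    prev_ll = -np.inf
    for _ in range(n_iter):
        # E-step: responsibility of each component for each correlation value.
        dens = np.stack([pis[k] * beta_dist.pdf(x, *params[k]) for k in range(2)])
        dens = np.maximum(dens, 1e-300)
        resp = dens / dens.sum(axis=0)
        # M-step: update mixing weights and Beta parameters.
        pis = resp.mean(axis=1)
        params = [_moment_match(x, resp[k]) for k in range(2)]
        ll = np.log(dens.sum(axis=0)).sum()
        if abs(ll - prev_ll) < tol:
            break
        prev_ll = ll
    return pis, params, resp


def select_features(features, labels, threshold=0.5):
    # Keep features whose |correlation| with the labels is assigned to the
    # high-correlation mixture component with posterior above the threshold.
    corr = np.abs([np.corrcoef(features[:, j], labels)[0, 1]
                   for j in range(features.shape[1])])
    _, params, resp = fit_beta_mixture(corr)
    means = [a / (a + b) for a, b in params]      # Beta mean is a / (a + b)
    high = int(np.argmax(means))
    return np.where(resp[high] > threshold)[0]


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 20))                # stand-in for landmark features
    y = (X[:, :3].sum(axis=1) > 0).astype(float)  # only the first 3 features matter
    print(select_features(X, y))                  # should mostly recover indices 0-2

Extending the sketch toward the setting described in the abstract would amount to repeating the mixture fit for each factor (expression, domain, identity) and combining the resulting component assignments when selecting features.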



Published In

IEEE Transactions on Affective Computing, Volume 15, Issue 3
July-Sept. 2024
1087 pages

Publisher

IEEE Computer Society Press

Washington, DC, United States

Publication History

Published: 20 July 2023

Qualifiers

  • Research-article
