Abstract
In recent years, anxiety and depression have placed a significant burden on society, yet the supply of psychological services remains inadequate and costly. Advances in multimedia computing and large language model technologies offer hope for alleviating this shortage of psychological resources. In this demo paper, we propose a multimodal emotional interaction large language model (MEILLM) and develop EmoAda, a multimodal emotion interaction and psychological adaptation system that provides users with cost-effective psychological support. EmoAda combines multimodal emotional perception, personalized emotional support dialogue, and multimodal emotional interaction, helping users relieve psychological stress and improve psychological adaptation.
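The abstract names three capabilities: multimodal emotional perception, emotion-conditioned support dialogue, and multimodal interaction. A minimal sketch of how such a pipeline could be wired together is shown below; all names, the averaging fusion, and the prompt format are illustrative assumptions, not the authors' MEILLM implementation.

```python
# Hypothetical sketch of a multimodal emotion-support pipeline.
# Fusion logic, labels, and prompt wording are assumptions for
# illustration only, not EmoAda's actual design.
from dataclasses import dataclass


@dataclass
class EmotionEstimate:
    label: str         # e.g. "distressed" or "neutral"
    confidence: float  # fused score in [0, 1]


def perceive(face_score: float, voice_score: float, text_score: float) -> EmotionEstimate:
    """Fuse per-modality distress scores (each in [0, 1]) by simple averaging."""
    fused = (face_score + voice_score + text_score) / 3
    label = "distressed" if fused > 0.5 else "neutral"
    return EmotionEstimate(label, fused)


def build_prompt(user_text: str, est: EmotionEstimate) -> str:
    """Condition a support-dialogue LLM on the perceived emotional state."""
    return (
        f"The user appears {est.label} (confidence {est.confidence:.2f}). "
        f"Respond supportively to: {user_text}"
    )


if __name__ == "__main__":
    est = perceive(face_score=0.8, voice_score=0.7, text_score=0.6)
    print(build_prompt("I can't sleep before exams.", est))
```

In a real system the three scalar scores would come from dedicated perception models (facial expression, speech prosody, and text sentiment), and the fusion step would be learned rather than a fixed average.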
Acknowledgments
This work was supported by the National Key R&D Program of China (2022YFC3803202), the Major Project of Anhui Province under Grant 202203a05020011, and the General Program of the National Natural Science Foundation of China (62376084).
Copyright information
© 2024 The Author(s), under exclusive license to Springer Nature Switzerland AG
Cite this paper
Dong, T., Liu, F., Wang, X., Jiang, Y., Zhang, X., Sun, X. (2024). EmoAda: A Multimodal Emotion Interaction and Psychological Adaptation System. In: Rudinac, S., et al. MultiMedia Modeling. MMM 2024. Lecture Notes in Computer Science, vol 14557. Springer, Cham. https://doi.org/10.1007/978-3-031-53302-0_25
Print ISBN: 978-3-031-53301-3
Online ISBN: 978-3-031-53302-0