A Hybrid Data Fusion Architecture for BINDI: A Wearable Solution to Combat Gender-Based Violence

  • Conference paper
  • First Online:
Multimedia Communications, Services and Security (MCSS 2020)

Abstract

Most affective computing research currently focuses on modifying and adapting machine behavior based on the human emotional state. However, affective-state inference can also be extended to society-oriented applications such as the detection of gender-based violence, a real global emergency: according to World Health Organization (WHO) statistics, one in three women worldwide experiences gender-based violence, often from an intimate partner. Motivated by this, the authors developed BINDI, a wearable solution that automatically detects such situations using affective computing together with short-term physiological and physical observations. BINDI represents a step toward an autonomous, embedded, non-intrusive, wearable system that detects these situations and connects the victim with a trusted circle. In this work, to improve BINDI's detection capability, a novel hybrid data fusion architecture is proposed, intended to improve on the decision-level fusion architecture already implemented. Further details of the uni-modal systems, and of the different approaches to be explored in the future, are given.
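The contrast the abstract draws between decision-level fusion and a hybrid architecture can be illustrated with a minimal sketch. Everything below is hypothetical: the function names, the linear feature-level stage, and the 0.5/0.5 weights are illustrative assumptions, not BINDI's actual implementation.

```python
# Illustrative sketch only: BINDI's real fusion architecture is not public
# in this abstract; weights and the feature-level scorer are assumptions.

def decision_level_fusion(p_physio, p_speech, w_physio=0.5, w_speech=0.5):
    """Late (decision-level) fusion: each uni-modal system emits its own
    probability of a dangerous situation; the fused score is a weighted sum."""
    return w_physio * p_physio + w_speech * p_speech

def hybrid_fusion(physio_features, speech_features, p_physio, p_speech):
    """Hybrid fusion: add a feature-level stage over the concatenated
    uni-modal features, then blend it with the decision-level score."""
    # Feature-level stage: a stand-in score over the concatenated features
    # (a real system would use a trained joint classifier here).
    joint = physio_features + speech_features
    feature_score = min(1.0, sum(joint) / len(joint))
    late_score = decision_level_fusion(p_physio, p_speech)
    # Blend both stages; the equal weighting is purely illustrative.
    return 0.5 * feature_score + 0.5 * late_score
```

For example, `hybrid_fusion([0.8, 0.9], [0.6, 0.7], p_physio=0.85, p_speech=0.65)` combines normalized physiological and speech features with the two uni-modal decisions into a single alarm score in [0, 1].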

E. Rituerto-González and J. A. Miranda—These authors contributed equally to this work.



Acknowledgements

This work has been partially supported by the Department of Research and Innovation of Madrid Regional Authority, in the EMPATIA-CM research project (reference Y2018/TCS-5046). The authors thank the rest of the members of the UC3M4Safety for their contribution and support of the present work. We thank NVIDIA Corporation for the donation of the TITAN Xp used for this research. The authors also thank the Department of Education and Research of the Community of Madrid and the European Social Fund for the conceded Pre-doctoral Research Staff grant for Research Activities, in the CAM Youth Employment Programme (reference PEJD-2019-PRE/TIC-16295).

Author information


Correspondence to Jose Angel Miranda.

Copyright information

© 2020 Springer Nature Switzerland AG

About this paper

Cite this paper

Rituerto-González, E., Miranda, J.A., Canabal, M.F., Lanza-Gutiérrez, J.M., Peláez-Moreno, C., López-Ongil, C. (2020). A Hybrid Data Fusion Architecture for BINDI: A Wearable Solution to Combat Gender-Based Violence. In: Dziech, A., Mees, W., Czyżewski, A. (eds) Multimedia Communications, Services and Security. MCSS 2020. Communications in Computer and Information Science, vol 1284. Springer, Cham. https://doi.org/10.1007/978-3-030-59000-0_17

  • DOI: https://doi.org/10.1007/978-3-030-59000-0_17

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-030-58999-8

  • Online ISBN: 978-3-030-59000-0

