Abstract
Recognition of human falls is currently one of the most important research problems in artificial intelligence and computer vision. Given the exponential increase in the use of cameras, vision-based approaches have become common in fall detection and classification systems. At the same time, deep learning algorithms have transformed the way vision-based problems are addressed; in particular, the Convolutional Neural Network (CNN) offers reliable and robust solutions to detection and classification problems. Focusing exclusively on a vision-based approach, this work uses images from a new public multimodal fall detection data set (the UP-Fall Detection dataset) published by our research team. In this chapter we present a fall detection system that uses a 2D CNN to analyze information from multiple cameras. The method processes images in fixed-width time windows, extracting features with an optical flow method that captures the relative motion between two consecutive frames. We evaluated this approach on the UP-Fall Detection dataset. The results show that the proposed multi-camera vision-based approach detects human falls with 95.64% accuracy using a simple CNN architecture, performing competitively with other state-of-the-art methods.
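As a rough illustration of the pipeline described above, the following Python sketch computes a dense optical flow image between two consecutive grayscale frames and defines a small 2D CNN classifier in Keras. It is only a minimal sketch under stated assumptions: OpenCV's Farneback algorithm stands in for the optical flow estimator, and the input resolution, layer sizes, and function names are illustrative rather than the implementation reported in the chapter.

import numpy as np
import cv2
from tensorflow.keras import layers, models

def optical_flow_image(prev_gray, curr_gray):
    # Dense optical flow between two consecutive grayscale frames.
    # Farneback is a readily available stand-in for the relative-motion
    # estimation step; it is not necessarily the method used in the chapter.
    flow = cv2.calcOpticalFlowFarneback(prev_gray, curr_gray, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    # Keep only the motion magnitude, normalized to [0, 1].
    mag, _ = cv2.cartToPolar(flow[..., 0], flow[..., 1])
    return cv2.normalize(mag, None, 0, 1, cv2.NORM_MINMAX)

def build_cnn(input_shape=(64, 64, 1), n_classes=2):
    # Simple 2D CNN for fall / no-fall classification.
    # Layer sizes and input shape are illustrative only.
    model = models.Sequential([
        layers.Conv2D(16, 3, activation='relu', input_shape=input_shape),
        layers.MaxPooling2D(),
        layers.Conv2D(32, 3, activation='relu'),
        layers.MaxPooling2D(),
        layers.Flatten(),
        layers.Dense(64, activation='relu'),
        layers.Dense(n_classes, activation='softmax'),
    ])
    model.compile(optimizer='adam',
                  loss='sparse_categorical_crossentropy',
                  metrics=['accuracy'])
    return model

In practice, the optical flow images computed over all consecutive frame pairs within a fixed time window, and from each camera view, would typically be stacked or concatenated before being fed to the network.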
Conflict of interest
The authors declare that there are no conflicts of interest regarding the publication of this article.
Funding
The authors declare that this work was performed as part of their employment at Universidad Panamericana (Mexico).