Human Activity Recognition in the Presence of Occlusion
Figure 1. The 25 skeletal joints extracted by the Microsoft Kinect v2. The joints and edges corresponding to the 5 main body parts, namely head-torso, right arm, left arm, right leg, and left leg, are colored blue, red, green, orange, and purple, respectively.
Figure 2. A sequence of an actor performing the activity *wearing a hat*. Extracted 3D human skeleton joints obtained with the Kinect SDK have been overlaid. Frames are taken from the NTU dataset [9] and have been trimmed for illustration purposes.
Figure 3. A sequence of an actor performing the activity *salute*. Extracted 3D human skeleton joints obtained with the Kinect SDK have been overlaid. Frames are taken from the PKU-MMD dataset [8] and have been trimmed for illustration purposes.
Figure 4. Pseudo-coloured images for the PKU-MMD [8] and NTU [9] datasets. LA, RA, LL, and RL denote occlusion of the left arm, right arm, left leg, and right leg, respectively.
Figure 5. Example skeleton sequences of the activity *salute* from the PKU-MMD dataset [8]. First row: skeletons include all 25 joints; second row: joints corresponding to the left arm have been discarded.
Figure 6. Example skeleton sequences of the activity *wearing a hat* from the NTU dataset [9]. First row: skeletons include all 25 joints; second row: joints corresponding to the left arm have been discarded.
Figure 7. The Convolutional Neural Network used in this work.
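The occlusion scheme and the pseudo-coloured image representation illustrated in the figures above can be sketched in a few lines. This is a minimal, hypothetical sketch: the body-part joint indices and the min-max normalisation are illustrative assumptions, not the paper's exact preprocessing.

```python
import numpy as np

# Hypothetical grouping of the 25 Kinect v2 joints into body parts; the
# indices are illustrative, not the official SDK numbering.
BODY_PARTS = {
    "left_arm":  [4, 5, 6, 7, 21, 22],
    "right_arm": [8, 9, 10, 11, 23, 24],
    "left_leg":  [12, 13, 14, 15],
    "right_leg": [16, 17, 18, 19],
}

def occlude(sequence, part):
    """Simulate structured occlusion by discarding the joints of one body part.
    sequence: (T, 25, 3) array of 3D joint coordinates over T frames."""
    keep = [j for j in range(sequence.shape[1]) if j not in BODY_PARTS[part]]
    return sequence[:, keep, :]

def to_pseudo_image(sequence):
    """Map a skeleton sequence to a pseudo-coloured image: the x, y, z
    coordinates become the R, G, B channels, rows are joints, columns frames.
    Each channel is min-max normalised to [0, 255] (an assumed convention)."""
    img = np.transpose(sequence, (1, 0, 2)).astype(np.float64)  # (joints, T, 3)
    for c in range(3):
        ch = img[:, :, c]
        rng = ch.max() - ch.min()
        img[:, :, c] = 0.0 if rng == 0 else (ch - ch.min()) / rng * 255.0
    return img.astype(np.uint8)
```

Discarding a part before rendering yields the second-row images of Figures 5 and 6; rendering the full 25-joint sequence yields the first row.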
Abstract
1. Introduction
2. Related Works
3. Proposed Methodology
3.1. Visual Data
3.2. Image Representation of Skeletal Data
3.3. Occlusion of Skeletal Data
3.4. Classification
4. Experiments and Results
4.1. Datasets
- a. PKU-MMD [8]: an action recognition dataset that contains 1076 long video sequences spanning 51 action categories, performed by 66 subjects and captured from three camera views, for a total of approximately 20,000 action instances. The dataset was recorded with the Kinect v2 sensor, using three cameras with different viewpoints. The 51 action categories are divided into two parts: 41 daily actions (e.g., brushing hair, brushing teeth, eating a meal) and 10 interaction actions (e.g., handshaking, hugging another person, slapping another person).
- b. NTU-RGB+D [9]: a large-scale RGB+D action recognition dataset containing approximately 57,000 video samples from 60 action classes, performed by 40 distinct subjects. The dataset contains 40 daily actions (e.g., dropping an object, standing up, playing with a phone), 11 mutual actions (e.g., punching, pushing, hugging), and 9 health-related actions (e.g., sneezing, nausea, neck pain). The same camera setup as in PKU-MMD was used for data collection.
4.2. Experimental Setup and Implementation Details
4.3. Evaluation Protocol and Results
- a. Training and test sets consisting of full body parts only;
- b. Training set consisting of full body parts only, test set consisting of occluded body parts only;
- c. Training set consisting of full and occluded body parts, test set consisting of occluded body parts.
- a. Per-camera-position (single-view) experiments, where training and test sets derive from the same camera (viewpoint), e.g., both from the middle camera;
- b. Cross-view experiments, where training and test sets derive from different cameras (viewpoints), e.g., training from the middle camera and testing from the left camera;
- c. Cross-subject experiments, where subjects are split into training and testing groups, i.e., each subject appears in exactly one of these groups.
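The three protocols above can be sketched as a single split routine. This is an illustrative sketch: the per-sample metadata fields (`subject`, `view`) and the 80/20 single-view ratio are assumptions, not details taken from the paper.

```python
def split(samples, mode, train_views=("M",), test_views=("L",), train_subjects=()):
    """samples: list of dicts with 'subject' and 'view' keys (assumed metadata).
    mode: 'SV' (single view), 'CV' (cross view), or 'CS' (cross subject)."""
    if mode == "SV":   # training and testing from the same camera
        pool = [s for s in samples if s["view"] in train_views]
        cut = int(0.8 * len(pool))  # illustrative 80/20 split
        return pool[:cut], pool[cut:]
    if mode == "CV":   # disjoint camera viewpoints
        return ([s for s in samples if s["view"] in train_views],
                [s for s in samples if s["view"] in test_views])
    if mode == "CS":   # each subject appears in exactly one group
        return ([s for s in samples if s["subject"] in train_subjects],
                [s for s in samples if s["subject"] not in train_subjects])
    raise ValueError(f"unknown mode: {mode}")
```

The cross-view and cross-subject branches guarantee disjointness by construction: no viewpoint or subject can fall into both groups.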
- a. From the PKU-MMD dataset we selected 11 actions related to activities of daily living: eating a meal/snack, falling, handshaking, hugging another person, making a phone call/answering the phone, playing with phone/tablet, reading, sitting down, standing up, typing on a keyboard, and wearing a jacket;
- b. From the NTU-RGB+D dataset we selected 12 medical conditions: sneezing/coughing, staggering, falling down, headache, chest pain, back pain, neck pain, nausea/vomiting, fanning self, yawning, stretching oneself, and blowing the nose.
- In almost all experiments, both the F score and the accuracy of the proposed approach improve compared to the “simple” case of occlusion;
- In general, performance is also comparable to the baseline case;
- The only exceptions to the above are the following cross-view experiments:
  - Using only the 11 activities of daily living, specifically when the left and middle cameras were used for training and the right one for testing;
  - Using the full PKU-MMD dataset, specifically when the left camera was used for training and the right one for testing;
- As expected, and due to the activities involved, no drop in performance is generally observed when the legs are occluded.
- In almost all experiments, both metrics indicate improved performance compared to the “simple” case of occlusion;
- In general, performance is also comparable to the baseline case;
- The only exceptions to the above are the following cases:
  - Using only the 12 medical conditions, specifically when the middle camera was used for training and the right one for testing, or vice versa;
  - Using the full NTU-RGB+D dataset, specifically in the single-view case;
  - A few combinations of training and testing cameras in the cross-view case;
- Additionally, and as expected, no drop in performance is generally observed when the legs are occluded.
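The F score and accuracy referred to above can be sketched as follows. This assumes the F score is a macro-averaged F1 over classes; the exact averaging used in the paper is not restated in this section.

```python
def accuracy(y_true, y_pred):
    """Fraction of samples whose predicted label matches the true label."""
    return sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)

def macro_f1(y_true, y_pred):
    """Per-class F1 averaged over classes (assumed macro averaging)."""
    classes = sorted(set(y_true) | set(y_pred))
    scores = []
    for c in classes:
        tp = sum(t == c and p == c for t, p in zip(y_true, y_pred))
        fp = sum(t != c and p == c for t, p in zip(y_true, y_pred))
        fn = sum(t == c and p != c for t, p in zip(y_true, y_pred))
        scores.append(0.0 if tp == 0 else 2 * tp / (2 * tp + fp + fn))
    return sum(scores) / len(scores)
```

Macro averaging weights every class equally, which is why the F score and accuracy columns in the tables can diverge when some classes are recognised much worse than others.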
5. Conclusions and Future Work
Author Contributions
Funding
Data Availability Statement
Acknowledgments
Conflicts of Interest
Abbreviations
ADL | Activity of Daily Living |
CCTV | Closed Circuit Television |
CNN | Convolutional Neural Network |
CS | Cross-Subject |
CV | Cross-View |
GPU | Graphics Processing Unit |
HAR | Human Activity Recognition |
LA | Left Arm |
LL | Left Leg |
LSTM | Long Short-Term Memory |
RA | Right Arm |
RL | Right Leg |
RFID | Radio Frequency Identification |
SV | Single-View |
References
- Vrigkas, M.; Nikou, C.; Kakadiaris, I.A. A review of human activity recognition methods. Front. Robot. AI 2015, 2, 28. [Google Scholar] [CrossRef]
- Wang, P.; Li, W.; Ogunbona, P.; Wan, J.; Escalera, S. RGB-D-based human motion recognition with deep learning: A survey. Comput. Vis. Image Underst. 2018, 171, 118–139. [Google Scholar] [CrossRef]
- Pareek, P.; Thakkar, A. A survey on video-based human action recognition: Recent updates, datasets, challenges, and applications. Artif. Intell. Rev. 2021, 54, 2259–2322. [Google Scholar] [CrossRef]
- Debes, C.; Merentitis, A.; Sukhanov, S.; Niessen, M.; Frangiadakis, N.; Bauer, A. Monitoring activities of daily living in smart homes: Understanding human behavior. IEEE Signal Process. Mag. 2016, 33, 81–94. [Google Scholar] [CrossRef]
- Keogh, A.; Dorn, J.F.; Walsh, L.; Calvo, F.; Caulfield, B. Comparing the usability and acceptability of wearable sensors among older irish adults in a real-world context: Observational study. JMIR mHealth uHealth 2020, 8, e15704. [Google Scholar] [CrossRef] [PubMed]
- Majumder, S.; Mondal, T.; Deen, M.J. Wearable sensors for remote health monitoring. Sensors 2017, 17, 130. [Google Scholar] [CrossRef] [PubMed]
- Papadakis, A.; Mathe, E.; Spyrou, E.; Mylonas, P. A geometric approach for cross-view human action recognition using deep learning. In Proceedings of the 11th International Symposium on Image and Signal Processing and Analysis (ISPA), Dubrovnik, Croatia, 23–25 September 2019; IEEE: Piscataway, NJ, USA, 2019. [Google Scholar]
- Liu, C.; Hu, Y.; Li, Y.; Song, S.; Liu, J. PKU-MMD: A large scale benchmark for continuous multi-modal human action understanding. arXiv 2017, arXiv:1703.07475. [Google Scholar]
- Shahroudy, A.; Liu, J.; Ng, T.T.; Wang, G. NTU RGB+D: A large scale dataset for 3D human activity analysis. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, NV, USA, 27–30 June 2016; pp. 1010–1019. [Google Scholar]
- Gu, R.; Wang, G.; Hwang, J.N. Exploring severe occlusion: Multi-person 3d pose estimation with gated convolution. In Proceedings of the 2020 25th International Conference on Pattern Recognition (ICPR), Milan, Italy, 10–15 January 2021; IEEE: Piscataway, NJ, USA, 2021; pp. 8243–8250. [Google Scholar]
- Giannakos, I.; Mathe, E.; Spyrou, E.; Mylonas, P. A study on the Effect of Occlusion in Human Activity Recognition. In Proceedings of the 14th PErvasive Technologies Related to Assistive Environments Conference, Corfu, Greece, 29 June–2 July 2021; pp. 473–482. [Google Scholar]
- Lawton, M.P.; Brody, E.M. Assessment of older people: Self-maintaining and instrumental activities of daily living. Gerontologist 1969, 9, 179–186. [Google Scholar] [CrossRef] [PubMed]
- Du, Y.; Fu, Y.; Wang, L. Skeleton based action recognition with convolutional neural network. In Proceedings of the 2015 3rd IAPR Asian Conference on Pattern Recognition (ACPR), Kuala Lumpur, Malaysia, 3–6 November 2015; IEEE: Piscataway, NJ, USA, 2015; pp. 579–583. [Google Scholar]
- Hou, Y.; Li, Z.; Wang, P.; Li, W. Skeleton optical spectra-based action recognition using convolutional neural networks. IEEE Trans. Circuits Syst. Video Technol. 2016, 28, 807–811. [Google Scholar] [CrossRef]
- Ke, Q.; An, S.; Bennamoun, M.; Sohel, F.; Boussaid, F. Skeletonnet: Mining deep part features for 3-d action recognition. IEEE Signal Process. Lett. 2017, 24, 731–735. [Google Scholar] [CrossRef]
- Li, C.; Hou, Y.; Wang, P.; Li, W. Joint distance maps based action recognition with convolutional neural networks. IEEE Signal Process. Lett. 2017, 24, 624–628. [Google Scholar] [CrossRef]
- Liu, M.; Liu, H.; Chen, C. Enhanced skeleton visualization for view invariant human action recognition. Pattern Recognit. 2017, 68, 346–362. [Google Scholar] [CrossRef]
- Wang, P.; Li, W.; Li, C.; Hou, Y. Action recognition based on joint trajectory maps with convolutional neural networks. Knowl.-Based Syst. 2018, 158, 43–53. [Google Scholar] [CrossRef]
- Iosifidis, A.; Tefas, A.; Pitas, I. Multi-view human action recognition under occlusion based on fuzzy distances and neural networks. In Proceedings of the 20th European Signal Processing Conference (EUSIPCO), Bucharest, Romania, 27–31 August 2012; IEEE: Piscataway, NJ, USA, 2012; pp. 1129–1133. [Google Scholar]
- Papadakis, A.; Mathe, E.; Vernikos, I.; Maniatis, A.; Spyrou, E.; Mylonas, P. Recognizing human actions using 3d skeletal information and CNNs. In Engineering Applications of Neural Networks, Proceedings of the 20th International Conference, EANN 2019, Xersonisos, Crete, Greece, 24–26 May 2019; Springer: Cham, Switzerland, 2019. [Google Scholar]
- Angelini, F.; Fu, Z.; Long, Y.; Shao, L.; Naqvi, S.M. 2D pose-based real-time human action recognition with occlusion-handling. IEEE Trans. Multimed. 2019, 22, 1433–1446. [Google Scholar] [CrossRef]
- Cao, Z.; Simon, T.; Wei, S.E.; Sheikh, Y. Realtime multi-person 2d pose estimation using part affinity fields. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Honolulu, HI, USA, 21–26 July 2017; pp. 7291–7299. [Google Scholar]
- Gkalelis, N.; Kim, H.; Hilton, A.; Nikolaidis, N.; Pitas, I. The i3dpost multi-view and 3d human action/interaction database. In Proceedings of the 2009 Conference for Visual Media Production, London, UK, 12–13 November 2009; IEEE: Piscataway, NJ, USA, 2009; pp. 159–168. [Google Scholar]
- Holte, M.B.; Tran, C.; Trivedi, M.M.; Moeslund, T.B. Human pose estimation and activity recognition from multi-view videos: Comparative explorations of recent developments. IEEE J. Sel. Top. Signal Process. 2012, 6, 538–552. [Google Scholar] [CrossRef]
- Schuldt, C.; Laptev, I.; Caputo, B. Recognizing human actions: A local SVM approach. In Proceedings of the 17th International Conference on Pattern Recognition, ICPR 2004, Cambridge, UK, 26 August 2004; IEEE: Piscataway, NJ, USA, 2004; Volume 3, pp. 32–36. [Google Scholar]
- Iosifidis, A.; Marami, E.; Tefas, A.; Pitas, I.; Lyroudia, K. The MOBISERV-AIIA eating and drinking multi-view database for vision-based assisted living. J. Inf. Hiding Multimed. Signal Process. 2015, 6, 254–273. [Google Scholar]
- Li, Z.; Li, D. Action recognition of construction workers under occlusion. J. Build. Eng. 2022, 45, 103352. [Google Scholar] [CrossRef]
- Li, W.; Zhang, Z.; Liu, Z. Action recognition based on a bag of 3d points. In Proceedings of the 2010 IEEE Computer Society Conference on Computer Vision and Pattern Recognition-Workshops, San Francisco, CA, USA, 13–18 June 2010; IEEE: Piscataway, NJ, USA, 2010; pp. 9–14. [Google Scholar]
- Ionescu, C.; Papava, D.; Olaru, V.; Sminchisescu, C. Human3.6M: Large scale datasets and predictive methods for 3D human sensing in natural environments. IEEE Trans. Pattern Anal. Mach. Intell. 2013, 36, 1325–1339. [Google Scholar] [CrossRef] [PubMed]
- Yang, D.; Wang, Y.; Dantcheva, A.; Garattoni, L.; Francesca, G.; Brémond, F. Self-Supervised Video Pose Representation Learning for Occlusion-Robust Action Recognition. In Proceedings of the 2021 16th IEEE International Conference on Automatic Face and Gesture Recognition (FG 2021), Jodhpur, India, 15–18 December 2021; IEEE: Piscataway, NJ, USA, 2021; pp. 1–5. [Google Scholar]
- Das, S.; Dai, R.; Koperski, M.; Minciullo, L.; Garattoni, L.; Bremond, F.; Francesca, G. Toyota smarthome: Real-world activities of daily living. In Proceedings of the IEEE/CVF International Conference on Computer Vision, Seoul, Republic of Korea, 27 October–2 November 2019; pp. 833–842. [Google Scholar]
- Wang, J.; Nie, X.; Xia, Y.; Wu, Y.; Zhu, S.C. Cross-view action modeling, learning and recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Columbus, OH, USA, 23–28 June 2014; pp. 2649–2656. [Google Scholar]
- Zhang, W.; Zhu, M.; Derpanis, K.G. From actemes to action: A strongly-supervised representation for detailed action understanding. In Proceedings of the IEEE International Conference on Computer Vision, Sydney, Australia, 1–8 December 2013; pp. 2248–2255. [Google Scholar]
- Vernikos, I.; Mathe, E.; Papadakis, A.; Spyrou, E.; Mylonas, P. An image representation of skeletal data for action recognition using convolutional neural networks. In Proceedings of the 12th ACM International Conference on PErvasive Technologies Related to Assistive Environments, Rhodes, Greece, 5–7 June 2019; pp. 325–326. [Google Scholar]
- Liu, T.; Sun, J.J.; Zhao, L.; Zhao, J.; Yuan, L.; Wang, Y.; Chen, L.C.; Schroff, F.; Adam, H. View-invariant, occlusion-robust probabilistic embedding for human pose. Int. J. Comput. Vision 2022, 130, 111–135. [Google Scholar] [CrossRef]
- Koutrintzes, D.; Spyrou, E.; Mathe, E.; Mylonas, P. A Multimodal Fusion Approach for Human Activity Recognition. Int. J. Neural Syst. 2022, 33, 2350002. [Google Scholar] [CrossRef] [PubMed]
- Chollet, F. Keras. Available online: https://github.com/fchollet/keras (accessed on 20 March 2023).
- Abadi, M.; Barham, P.; Chen, J.; Chen, Z.; Davis, A.; Dean, J.; Devin, M.; Ghemawat, S.; Irving, G.; Isard, M.; et al. TensorFlow: A System for Large-Scale Machine Learning. In Proceedings of the 12th USENIX Symposium on Operating Systems Design and Implementation (OSDI 16), Savannah, GA, USA, 2–4 November 2016; pp. 265–283. [Google Scholar]
Setup | Train | Test | Metric | B | LA | LA + LL | LL | LA + RA | LL + RL | RA | RA + RL | RL | ||||||||
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
S | A | S | A | S | A | S | A | S | A | S | A | S | A | S | A | |||||
CS | LRM | LRM | F | 0.68 | 0.56 | 0.71 | 0.53 | 0.71 | 0.62 | 0.74 | 0.18 | 0.59 | 0.47 | 0.69 | 0.53 | 0.70 | 0.49 | 0.69 | 0.66 | 0.73 |
Acc. | 0.70 | 0.59 | 0.74 | 0.55 | 0.75 | 0.64 | 0.77 | 0.28 | 0.64 | 0.52 | 0.72 | 0.55 | 0.73 | 0.50 | 0.71 | 0.67 | 0.75 | |||
SV | L | L | F | 0.68 | 0.51 | 0.76 | 0.49 | 0.77 | 0.70 | 0.77 | 0.18 | 0.68 | 0.58 | 0.76 | 0.40 | 0.70 | 0.37 | 0.72 | 0.66 | 0.75 |
Acc. | 0.69 | 0.54 | 0.76 | 0.54 | 0.77 | 0.71 | 0.78 | 0.29 | 0.69 | 0.60 | 0.76 | 0.42 | 0.70 | 0.40 | 0.71 | 0.66 | 0.76 | |||
M | M | F | 0.77 | 0.66 | 0.81 | 0.62 | 0.79 | 0.74 | 0.81 | 0.27 | 0.72 | 0.63 | 0.76 | 0.63 | 0.80 | 0.57 | 0.75 | 0.74 | 0.79 | |
Acc. | 0.78 | 0.67 | 0.81 | 0.64 | 0.79 | 0.75 | 0.81 | 0.37 | 0.72 | 0.66 | 0.76 | 0.65 | 0.79 | 0.59 | 0.75 | 0.74 | 0.79 | |||
R | R | F | 0.75 | 0.54 | 0.75 | 0.49 | 0.71 | 0.71 | 0.71 | 0.16 | 0.63 | 0.64 | 0.70 | 0.59 | 0.71 | 0.59 | 0.73 | 0.72 | 0.73 | |
Acc. | 0.76 | 0.57 | 0.76 | 0.53 | 0.72 | 0.72 | 0.72 | 0.29 | 0.66 | 0.65 | 0.71 | 0.62 | 0.72 | 0.60 | 0.74 | 0.72 | 0.74 | |||
CV | L | M | F | 0.71 | 0.59 | 0.78 | 0.58 | 0.77 | 0.69 | 0.79 | 0.22 | 0.70 | 0.60 | 0.76 | 0.61 | 0.75 | 0.60 | 0.74 | 0.70 | 0.79 |
Acc. | 0.72 | 0.62 | 0.78 | 0.60 | 0.78 | 0.70 | 0.79 | 0.32 | 0.70 | 0.63 | 0.77 | 0.64 | 0.75 | 0.62 | 0.74 | 0.71 | 0.79 | | |
L | R | F | 0.70 | 0.50 | 0.64 | 0.46 | 0.63 | 0.66 | 0.67 | 0.15 | 0.65 | 0.54 | 0.67 | 0.50 | 0.68 | 0.47 | 0.69 | 0.64 | 0.69 | |
Acc. | 0.70 | 0.52 | 0.65 | 0.48 | 0.64 | 0.66 | 0.68 | 0.24 | 0.66 | 0.58 | 0.67 | 0.53 | 0.69 | 0.51 | 0.69 | 0.65 | 0.70 | |||
M | L | F | 0.71 | 0.58 | 0.77 | 0.56 | 0.77 | 0.68 | 0.76 | 0.19 | 0.67 | 0.61 | 0.71 | 0.48 | 0.71 | 0.44 | 0.69 | 0.70 | 0.74 | |
Acc. | 0.72 | 0.62 | 0.78 | 0.60 | 0.77 | 0.69 | 0.76 | 0.32 | 0.69 | 0.63 | 0.72 | 0.50 | 0.71 | 0.47 | 0.68 | 0.70 | 0.74 | |||
M | R | F | 0.55 | 0.35 | 0.66 | 0.34 | 0.63 | 0.52 | 0.69 | 0.15 | 0.51 | 0.40 | 0.64 | 0.42 | 0.70 | 0.38 | 0.70 | 0.50 | 0.70 | |
Acc. | 0.56 | 0.41 | 0.67 | 0.39 | 0.64 | 0.53 | 0.70 | 0.26 | 0.56 | 0.43 | 0.66 | 0.44 | 0.71 | 0.40 | 0.72 | 0.51 | 0.71 | |||
R | L | F | 0.60 | 0.53 | 0.62 | 0.48 | 0.60 | 0.55 | 0.59 | 0.18 | 0.48 | 0.44 | 0.56 | 0.49 | 0.57 | 0.48 | 0.55 | 0.58 | 0.57 | |
Acc. | 0.59 | 0.54 | 0.64 | 0.49 | 0.62 | 0.54 | 0.59 | 0.29 | 0.53 | 0.47 | 0.56 | 0.50 | 0.57 | 0.49 | 0.55 | 0.58 | 0.57 | |||
R | M | F | 0.73 | 0.60 | 0.70 | 0.52 | 0.69 | 0.65 | 0.74 | 0.25 | 0.61 | 0.57 | 0.72 | 0.60 | 0.71 | 0.57 | 0.70 | 0.72 | 0.74 | |
Acc. | 0.73 | 0.61 | 0.71 | 0.54 | 0.70 | 0.66 | 0.74 | 0.34 | 0.63 | 0.60 | 0.72 | 0.60 | 0.72 | 0.57 | 0.71 | 0.72 | 0.74 | |||
LR | M | F | 0.87 | 0.70 | 0.88 | 0.67 | 0.87 | 0.84 | 0.90 | 0.23 | 0.80 | 0.68 | 0.88 | 0.60 | 0.88 | 0.56 | 0.88 | 0.83 | 0.89 | |
Acc. | 0.87 | 0.71 | 0.88 | 0.68 | 0.86 | 0.85 | 0.90 | 0.36 | 0.79 | 0.71 | 0.88 | 0.64 | 0.88 | 0.59 | 0.88 | 0.83 | 0.89 | |||
LM | R | F | 0.79 | 0.67 | 0.65 | 0.65 | 0.62 | 0.76 | 0.68 | 0.21 | 0.52 | 0.64 | 0.64 | 0.56 | 0.66 | 0.53 | 0.67 | 0.77 | 0.70 | |
Acc. | 0.79 | 0.68 | 0.66 | 0.66 | 0.63 | 0.76 | 0.69 | 0.34 | 0.54 | 0.66 | 0.65 | 0.59 | 0.66 | 0.55 | 0.67 | 0.77 | 0.70 | |||
RM | L | F | 0.74 | 0.68 | 0.79 | 0.69 | 0.79 | 0.74 | 0.78 | 0.22 | 0.68 | 0.62 | 0.71 | 0.55 | 0.73 | 0.54 | 0.71 | 0.72 | 0.76 | |
Acc. | 0.75 | 0.69 | 0.80 | 0.70 | 0.80 | 0.74 | 0.77 | 0.34 | 0.69 | 0.63 | 0.71 | 0.57 | 0.73 | 0.56 | 0.70 | 0.72 | 0.76 |
Setup | Train | Test | Metric | B | LA | LA + LL | LL | LA + RA | LL + RL | RA | RA + RL | RL | ||||||||
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
S | A | S | A | S | A | S | A | S | A | S | A | S | A | S | A | |||||
CS | LRM | LRM | F | 0.61 | 0.30 | 0.55 | 0.32 | 0.54 | 0.59 | 0.64 | 0.18 | 0.44 | 0.51 | 0.64 | 0.43 | 0.61 | 0.43 | 0.62 | 0.58 | 0.66 |
Acc. | 0.61 | 0.32 | 0.54 | 0.33 | 0.54 | 0.62 | 0.64 | 0.22 | 0.42 | 0.58 | 0.65 | 0.44 | 0.62 | 0.44 | 0.62 | 0.58 | 0.66 | |||
SV | L | L | F | 0.60 | 0.35 | 0.54 | 0.37 | 0.54 | 0.58 | 0.64 | 0.17 | 0.41 | 0.46 | 0.63 | 0.45 | 0.61 | 0.43 | 0.62 | 0.57 | 0.66 |
Acc. | 0.61 | 0.38 | 0.55 | 0.39 | 0.54 | 0.59 | 0.64 | 0.24 | 0.44 | 0.48 | 0.64 | 0.47 | 0.62 | 0.45 | 0.62 | 0.57 | 0.66 | |||
M | M | F | 0.49 | 0.28 | 0.50 | 0.30 | 0.49 | 0.49 | 0.59 | 0.12 | 0.42 | 0.36 | 0.61 | 0.38 | 0.59 | 0.37 | 0.60 | 0.46 | 0.61 | |
Acc. | 0.50 | 0.30 | 0.50 | 0.32 | 0.49 | 0.49 | 0.60 | 0.19 | 0.43 | 0.38 | 0.61 | 0.41 | 0.59 | 0.38 | 0.60 | 0.46 | 0.62 | |||
R | R | F | 0.53 | 0.29 | 0.48 | 0.30 | 0.48 | 0.51 | 0.59 | 0.11 | 0.37 | 0.44 | 0.62 | 0.39 | 0.54 | 0.37 | 0.55 | 0.52 | 0.61 | |
Acc. | 0.54 | 0.31 | 0.48 | 0.32 | 0.48 | 0.52 | 0.59 | 0.19 | 0.37 | 0.46 | 0.62 | 0.42 | 0.54 | 0.40 | 0.55 | 0.52 | 0.61 | |||
CV | L | M | F | 0.50 | 0.25 | 0.50 | 0.26 | 0.51 | 0.43 | 0.63 | 0.14 | 0.38 | 0.36 | 0.63 | 0.39 | 0.62 | 0.36 | 0.64 | 0.47 | 0.64 |
Acc. | 0.50 | 0.28 | 0.50 | 0.27 | 0.51 | 0.43 | 0.63 | 0.19 | 0.40 | 0.39 | 0.63 | 0.39 | 0.62 | 0.37 | 0.64 | 0.46 | 0.64 | |||
L | R | F | 0.50 | 0.33 | 0.50 | 0.34 | 0.51 | 0.49 | 0.58 | 0.18 | 0.37 | 0.42 | 0.58 | 0.38 | 0.53 | 0.37 | 0.54 | 0.48 | 0.58 | |
Acc. | 0.50 | 0.35 | 0.50 | 0.35 | 0.51 | 0.49 | 0.58 | 0.22 | 0.37 | 0.43 | 0.58 | 0.39 | 0.53 | 0.38 | 0.55 | 0.47 | 0.59 | |||
M | L | F | 0.59 | 0.31 | 0.51 | 0.33 | 0.51 | 0.55 | 0.64 | 0.13 | 0.36 | 0.47 | 0.63 | 0.45 | 0.58 | 0.44 | 0.58 | 0.55 | 0.64 | |
Acc. | 0.59 | 0.33 | 0.51 | 0.34 | 0.51 | 0.55 | 0.64 | 0.19 | 0.39 | 0.49 | 0.63 | 0.47 | 0.59 | 0.44 | 0.58 | 0.55 | 0.65 | |||
M | R | F | 0.61 | 0.37 | 0.42 | 0.36 | 0.45 | 0.57 | 0.53 | 0.12 | 0.30 | 0.49 | 0.54 | 0.42 | 0.49 | 0.40 | 0.50 | 0.56 | 0.53 | |
Acc. | 0.61 | 0.38 | 0.43 | 0.38 | 0.45 | 0.57 | 0.54 | 0.18 | 0.31 | 0.51 | 0.54 | 0.44 | 0.49 | 0.41 | 0.50 | 0.56 | 0.54 | |||
R | L | F | 0.59 | 0.28 | 0.51 | 0.31 | 0.51 | 0.57 | 0.61 | 0.11 | 0.39 | 0.49 | 0.61 | 0.43 | 0.56 | 0.42 | 0.58 | 0.55 | 0.62 | |
Acc. | 0.59 | 0.31 | 0.52 | 0.32 | 0.52 | 0.57 | 0.62 | 0.18 | 0.41 | 0.50 | 0.61 | 0.44 | 0.56 | 0.43 | 0.58 | 0.55 | 0.63 | |||
R | M | F | 0.55 | 0.34 | 0.39 | 0.34 | 0.39 | 0.52 | 0.47 | 0.17 | 0.29 | 0.46 | 0.47 | 0.43 | 0.46 | 0.41 | 0.49 | 0.49 | 0.49 | |
Acc. | 0.55 | 0.35 | 0.40 | 0.34 | 0.39 | 0.52 | 0.47 | 0.21 | 0.31 | 0.49 | 0.47 | 0.42 | 0.46 | 0.42 | 0.48 | 0.51 | 0.49 | |||
LR | M | F | 0.57 | 0.32 | 0.51 | 0.31 | 0.52 | 0.53 | 0.61 | 0.17 | 0.44 | 0.44 | 0.64 | 0.44 | 0.62 | 0.44 | 0.65 | 0.55 | 0.64 | |
Acc. | 0.58 | 0.33 | 0.52 | 0.31 | 0.52 | 0.53 | 0.62 | 0.22 | 0.45 | 0.46 | 0.64 | 0.46 | 0.63 | 0.46 | 0.65 | 0.56 | 0.65 | |||
LM | R | F | 0.60 | 0.28 | 0.51 | 0.29 | 0.52 | 0.54 | 0.61 | 0.14 | 0.37 | 0.47 | 0.61 | 0.44 | 0.55 | 0.43 | 0.56 | 0.56 | 0.60 | |
Acc. | 0.60 | 0.31 | 0.52 | 0.30 | 0.53 | 0.54 | 0.61 | 0.18 | 0.38 | 0.50 | 0.61 | 0.45 | 0.55 | 0.43 | 0.56 | 0.56 | 0.61 | |||
RM | L | F | 0.65 | 0.38 | 0.58 | 0.39 | 0.59 | 0.62 | 0.72 | 0.14 | 0.44 | 0.52 | 0.72 | 0.48 | 0.68 | 0.46 | 0.69 | 0.62 | 0.72 | |
Acc. | 0.65 | 0.39 | 0.58 | 0.39 | 0.59 | 0.62 | 0.73 | 0.21 | 0.45 | 0.55 | 0.72 | 0.50 | 0.69 | 0.48 | 0.69 | 0.62 | 0.73 |
Setup | Train | Test | Metric | B | LA | LA + LL | LL | LA + RA | LL + RL | RA | RA + RL | RL | ||||||||
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
S | A | S | A | S | A | S | A | S | A | S | A | S | A | S | A | |||||
CS | LRM | LRM | F | 0.69 | 0.29 | 0.51 | 0.25 | 0.48 | 0.65 | 0.64 | 0.10 | 0.34 | 0.60 | 0.62 | 0.39 | 0.59 | 0.36 | 0.58 | 0.65 | 0.65 |
Acc. | 0.70 | 0.33 | 0.53 | 0.29 | 0.51 | 0.66 | 0.66 | 0.17 | 0.37 | 0.62 | 0.65 | 0.45 | 0.61 | 0.42 | 0.60 | 0.66 | 0.67 | |||
SV | L | L | F | 0.64 | 0.29 | 0.52 | 0.27 | 0.49 | 0.58 | 0.65 | 0.09 | 0.37 | 0.50 | 0.63 | 0.30 | 0.60 | 0.28 | 0.60 | 0.61 | 0.65
Acc. | 0.65 | 0.33 | 0.53 | 0.30 | 0.50 | 0.59 | 0.66 | 0.16 | 0.39 | 0.52 | 0.64 | 0.36 | 0.60 | 0.35 | 0.60 | 0.61 | 0.66 | |||
M | M | F | 0.63 | 0.26 | 0.54 | 0.21 | 0.51 | 0.59 | 0.67 | 0.09 | 0.38 | 0.50 | 0.67 | 0.36 | 0.65 | 0.33 | 0.65 | 0.62 | 0.68 |
Acc. | 0.64 | 0.29 | 0.55 | 0.26 | 0.52 | 0.60 | 0.68 | 0.15 | 0.41 | 0.53 | 0.68 | 0.41 | 0.65 | 0.39 | 0.65 | 0.62 | 0.69 | |||
R | R | F | 0.63 | 0.29 | 0.53 | 0.26 | 0.53 | 0.60 | 0.65 | 0.09 | 0.37 | 0.56 | 0.62 | 0.38 | 0.64 | 0.33 | 0.63 | 0.61 | 0.65 | |
Acc. | 0.63 | 0.32 | 0.54 | 0.30 | 0.54 | 0.60 | 0.65 | 0.15 | 0.39 | 0.57 | 0.63 | 0.42 | 0.64 | 0.38 | 0.63 | 0.62 | 0.65 | |||
CV | L | M | F | 0.61 | 0.23 | 0.50 | 0.22 | 0.47 | 0.55 | 0.64 | 0.10 | 0.34 | 0.49 | 0.63 | 0.34 | 0.60 | 0.32 | 0.60 | 0.59 | 0.65 |
Acc. | 0.62 | 0.27 | 0.51 | 0.26 | 0.48 | 0.56 | 0.65 | 0.15 | 0.37 | 0.53 | 0.64 | 0.39 | 0.61 | 0.37 | 0.61 | 0.59 | 0.66 | |||
L | R | F | 0.58 | 0.23 | 0.34 | 0.21 | 0.33 | 0.53 | 0.44 | 0.08 | 0.27 | 0.45 | 0.44 | 0.36 | 0.42 | 0.32 | 0.42 | 0.56 | 0.46 | |
Acc. | 0.59 | 0.28 | 0.37 | 0.25 | 0.35 | 0.54 | 0.46 | 0.14 | 0.30 | 0.49 | 0.46 | 0.40 | 0.44 | 0.36 | 0.45 | 0.57 | 0.48 | |||
M | L | F | 0.61 | 0.28 | 0.53 | 0.26 | 0.52 | 0.54 | 0.63 | 0.08 | 0.34 | 0.49 | 0.62 | 0.34 | 0.58 | 0.33 | 0.58 | 0.58 | 0.64 | |
Acc. | 0.62 | 0.31 | 0.54 | 0.28 | 0.53 | 0.56 | 0.64 | 0.14 | 0.37 | 0.51 | 0.63 | 0.39 | 0.59 | 0.38 | 0.58 | 0.60 | 0.65 | |||
M | R | F | 0.50 | 0.20 | 0.47 | 0.18 | 0.46 | 0.45 | 0.62 | 0.09 | 0.34 | 0.39 | 0.62 | 0.29 | 0.61 | 0.24 | 0.61 | 0.47 | 0.63 | |
Acc. | 0.51 | 0.24 | 0.49 | 0.22 | 0.47 | 0.47 | 0.63 | 0.14 | 0.36 | 0.41 | 0.62 | 0.32 | 0.61 | 0.27 | 0.61 | 0.47 | 0.63 | |||
R | L | F | 0.47 | 0.18 | 0.38 | 0.16 | 0.37 | 0.41 | 0.47 | 0.08 | 0.25 | 0.37 | 0.44 | 0.26 | 0.44 | 0.26 | 0.43 | 0.44 | 0.48 | |
Acc. | 0.48 | 0.22 | 0.40 | 0.19 | 0.39 | 0.43 | 0.49 | 0.16 | 0.28 | 0.40 | 0.47 | 0.32 | 0.46 | 0.33 | 0.45 | 0.46 | 0.50 | |||
R | M | F | 0.60 | 0.25 | 0.48 | 0.22 | 0.47 | 0.54 | 0.62 | 0.08 | 0.33 | 0.46 | 0.61 | 0.38 | 0.58 | 0.33 | 0.57 | 0.57 | 0.64 | |
Acc. | 0.61 | 0.30 | 0.50 | 0.24 | 0.48 | 0.54 | 0.63 | 0.16 | 0.35 | 0.48 | 0.62 | 0.42 | 0.58 | 0.37 | 0.58 | 0.57 | 0.64 | |||
LR | M | F | 0.70 | 0.31 | 0.58 | 0.26 | 0.56 | 0.65 | 0.74 | 0.09 | 0.40 | 0.60 | 0.73 | 0.42 | 0.69 | 0.38 | 0.69 | 0.68 | 0.75 | |
Acc. | 0.70 | 0.34 | 0.58 | 0.30 | 0.56 | 0.67 | 0.74 | 0.15 | 0.41 | 0.63 | 0.73 | 0.46 | 0.69 | 0.43 | 0.68 | 0.68 | 0.75 | |||
LM | R | F | 0.65 | 0.26 | 0.50 | 0.25 | 0.49 | 0.61 | 0.63 | 0.11 | 0.36 | 0.56 | 0.62 | 0.36 | 0.61 | 0.35 | 0.62 | 0.60 | 0.64 | |
Acc. | 0.66 | 0.30 | 0.52 | 0.29 | 0.50 | 0.62 | 0.64 | 0.17 | 0.39 | 0.58 | 0.64 | 0.42 | 0.62 | 0.41 | 0.62 | 0.61 | 0.65 | | |
RM | L | F | 0.65 | 0.27 | 0.50 | 0.26 | 0.49 | 0.60 | 0.62 | 0.09 | 0.34 | 0.55 | 0.60 | 0.36 | 0.56 | 0.35 | 0.55 | 0.61 | 0.62 | |
Acc. | 0.65 | 0.30 | 0.51 | 0.30 | 0.49 | 0.61 | 0.63 | 0.15 | 0.36 | 0.56 | 0.60 | 0.40 | 0.57 | 0.40 | 0.56 | 0.62 | 0.63 |
Setup | Train | Test | Metric | B | LA | LA + LL | LL | LA + RA | LL + RL | RA | RA + RL | RL | ||||||||
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
S | A | S | A | S | A | S | A | S | A | S | A | S | A | S | A | |||||
CS | LRM | LRM | F | 0.49 | 0.25 | 0.37 | 0.22 | 0.36 | 0.46 | 0.45 | 0.09 | 0.25 | 0.39 | 0.45 | 0.31 | 0.44 | 0.29 | 0.44 | 0.47 | 0.47 |
Acc. | 0.49 | 0.26 | 0.38 | 0.23 | 0.37 | 0.47 | 0.47 | 0.12 | 0.27 | 0.41 | 0.47 | 0.32 | 0.45 | 0.31 | 0.45 | 0.47 | 0.48 | |||
SV | L | L | F | 0.60 | 0.25 | 0.39 | 0.20 | 0.38 | 0.50 | 0.49 | 0.08 | 0.28 | 0.37 | 0.50 | 0.34 | 0.46 | 0.29 | 0.48 | 0.53 | 0.51 |
Acc. | 0.60 | 0.26 | 0.40 | 0.21 | 0.39 | 0.51 | 0.50 | 0.11 | 0.29 | 0.41 | 0.50 | 0.36 | 0.46 | 0.32 | 0.48 | 0.55 | 0.51 | |||
M | M | F | 0.57 | 0.27 | 0.36 | 0.24 | 0.35 | 0.50 | 0.46 | 0.09 | 0.24 | 0.42 | 0.46 | 0.36 | 0.44 | 0.31 | 0.43 | 0.51 | 0.48 | |
Acc. | 0.58 | 0.28 | 0.37 | 0.25 | 0.36 | 0.51 | 0.47 | 0.11 | 0.26 | 0.45 | 0.47 | 0.38 | 0.44 | 0.32 | 0.44 | 0.52 | 0.49 | |||
R | R | F | 0.58 | 0.27 | 0.35 | 0.22 | 0.35 | 0.51 | 0.44 | 0.08 | 0.25 | 0.40 | 0.46 | 0.32 | 0.38 | 0.27 | 0.40 | 0.52 | 0.47 | |
Acc. | 0.59 | 0.29 | 0.36 | 0.24 | 0.35 | 0.52 | 0.45 | 0.10 | 0.27 | 0.43 | 0.46 | 0.34 | 0.38 | 0.30 | 0.40 | 0.53 | 0.47 | |||
CV | L | M | F | 0.38 | 0.14 | 0.35 | 0.11 | 0.34 | 0.35 | 0.45 | 0.04 | 0.24 | 0.28 | 0.46 | 0.23 | 0.43 | 0.20 | 0.44 | 0.35 | 0.47 |
Acc. | 0.37 | 0.16 | 0.36 | 0.13 | 0.35 | 0.35 | 0.46 | 0.06 | 0.26 | 0.30 | 0.46 | 0.23 | 0.43 | 0.20 | 0.44 | 0.34 | 0.47 | |||
L | R | F | 0.41 | 0.24 | 0.35 | 0.20 | 0.35 | 0.37 | 0.43 | 0.09 | 0.23 | 0.31 | 0.42 | 0.26 | 0.39 | 0.25 | 0.39 | 0.39 | 0.44 | |
Acc. | 0.41 | 0.24 | 0.35 | 0.20 | 0.35 | 0.37 | 0.44 | 0.12 | 0.23 | 0.33 | 0.43 | 0.29 | 0.39 | 0.28 | 0.39 | 0.40 | 0.44 | |||
M | L | F | 0.46 | 0.23 | 0.39 | 0.20 | 0.38 | 0.44 | 0.49 | 0.08 | 0.25 | 0.35 | 0.49 | 0.31 | 0.45 | 0.29 | 0.45 | 0.45 | 0.50 | |
Acc. | 0.46 | 0.24 | 0.39 | 0.21 | 0.38 | 0.44 | 0.50 | 0.10 | 0.26 | 0.37 | 0.49 | 0.32 | 0.45 | 0.32 | 0.45 | 0.45 | 0.51 | |||
M | R | F | 0.46 | 0.26 | 0.33 | 0.25 | 0.32 | 0.43 | 0.40 | 0.09 | 0.22 | 0.37 | 0.39 | 0.30 | 0.37 | 0.28 | 0.37 | 0.45 | 0.41 | |
Acc. | 0.46 | 0.28 | 0.33 | 0.26 | 0.32 | 0.44 | 0.40 | 0.11 | 0.23 | 0.39 | 0.40 | 0.32 | 0.37 | 0.30 | 0.37 | 0.46 | 0.41 | |||
R | L | F | 0.44 | 0.23 | 0.33 | 0.22 | 0.33 | 0.41 | 0.44 | 0.08 | 0.21 | 0.35 | 0.43 | 0.27 | 0.39 | 0.25 | 0.39 | 0.42 | 0.45 | |
Acc. | 0.43 | 0.24 | 0.34 | 0.23 | 0.33 | 0.41 | 0.44 | 0.10 | 0.22 | 0.37 | 0.44 | 0.28 | 0.38 | 0.27 | 0.39 | 0.42 | 0.46 | |||
R | M | F | 0.50 | 0.23 | 0.31 | 0.20 | 0.30 | 0.47 | 0.38 | 0.09 | 0.23 | 0.42 | 0.39 | 0.32 | 0.38 | 0.28 | 0.39 | 0.47 | 0.40 | |
Acc. | 0.50 | 0.24 | 0.32 | 0.21 | 0.31 | 0.48 | 0.39 | 0.10 | 0.25 | 0.43 | 0.39 | 0.33 | 0.39 | 0.30 | 0.39 | 0.47 | 0.41 | |||
LR | M | F | 0.42 | 0.22 | 0.37 | 0.21 | 0.36 | 0.40 | 0.46 | 0.07 | 0.26 | 0.35 | 0.46 | 0.27 | 0.44 | 0.26 | 0.44 | 0.41 | 0.48 | |
Acc. | 0.43 | 0.23 | 0.38 | 0.22 | 0.37 | 0.42 | 0.47 | 0.10 | 0.28 | 0.37 | 0.47 | 0.30 | 0.44 | 0.28 | 0.45 | 0.43 | 0.49 | |||
LM | R | F | 0.46 | 0.24 | 0.34 | 0.22 | 0.33 | 0.43 | 0.42 | 0.09 | 0.22 | 0.38 | 0.41 | 0.27 | 0.37 | 0.26 | 0.37 | 0.44 | 0.42 | |
Acc. | 0.45 | 0.25 | 0.35 | 0.23 | 0.34 | 0.43 | 0.43 | 0.12 | 0.24 | 0.40 | 0.42 | 0.28 | 0.38 | 0.27 | 0.38 | 0.44 | 0.43 | |||
RM | L | F | 0.52 | 0.29 | 0.37 | 0.24 | 0.36 | 0.48 | 0.48 | 0.07 | 0.25 | 0.40 | 0.48 | 0.32 | 0.45 | 0.28 | 0.46 | 0.48 | 0.51 | |
Acc. | 0.52 | 0.29 | 0.38 | 0.25 | 0.36 | 0.49 | 0.49 | 0.10 | 0.26 | 0.43 | 0.48 | 0.34 | 0.45 | 0.31 | 0.46 | 0.50 | 0.51 |
© 2023 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
Share and Cite
Vernikos, I.; Spyropoulos, T.; Spyrou, E.; Mylonas, P. Human Activity Recognition in the Presence of Occlusion. Sensors 2023, 23, 4899. https://doi.org/10.3390/s23104899