Abstract
Sparse representation is one of the most popular approaches to human activity recognition. It describes a video by a set of independent descriptors, each of which typically captures local information in the video. These descriptors are then mapped to another space using Fisher Vectors, and an SVM is used to classify them. One of the sparse representation methods proposed in the literature uses trajectories as features, and trajectories have been shown to be discriminative in many previous works on human activity recognition. In this paper, we give a more formal definition of trajectories and propose a new, more effective trajectory shape descriptor. We tested the proposed method on our challenging dataset and showed experimentally that the new descriptor outperforms the previously existing main shape descriptor by a good margin; in one case, it achieved a 5.58% improvement over the existing trajectory shape descriptor. Our tests were run on sparse feature sets, yet they reached results comparable to a dense sampling method with fewer computations.
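To make the pipeline described above concrete, the following is a minimal sketch, assuming Python with NumPy, of the baseline trajectory shape descriptor used in the dense-trajectories literature that this paper compares against: the sequence of frame-to-frame displacement vectors, normalised by the total displacement magnitude. The function name, array shapes, and the zero-motion guard are illustrative assumptions, and the improved descriptor proposed in this paper is not reproduced here; in the full pipeline such descriptors would then be encoded with Fisher Vectors and classified with an SVM.

import numpy as np

def trajectory_shape_descriptor(points):
    """Baseline trajectory shape descriptor: the sequence of
    frame-to-frame displacement vectors, normalised by the sum
    of their magnitudes (as in the dense-trajectories work)."""
    points = np.asarray(points, dtype=float)      # (L+1, 2) tracked (x, y) positions
    deltas = np.diff(points, axis=0)              # (L, 2) displacement vectors
    total = np.linalg.norm(deltas, axis=1).sum()  # total path length of the trajectory
    if total < 1e-9:                              # guard against a static trajectory
        return np.zeros(deltas.size)
    return (deltas / total).ravel()               # 2L-dimensional shape descriptor

# Toy usage: a 16-step trajectory (17 tracked points) drifting to the right
# yields a 32-dimensional descriptor.
track = np.column_stack([np.linspace(0, 16, 17), np.zeros(17)])
print(trajectory_shape_descriptor(track).shape)   # (32,)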
Copyright information
© 2017 Springer International Publishing AG
About this paper
Cite this paper
Habashi, P., Boufama, B., Ahmad, I.S. (2017). A Better Trajectory Shape Descriptor for Human Activity Recognition. In: Karray, F., Campilho, A., Cheriet, F. (eds) Image Analysis and Recognition. ICIAR 2017. Lecture Notes in Computer Science, vol. 10317. Springer, Cham. https://doi.org/10.1007/978-3-319-59876-5_37
DOI: https://doi.org/10.1007/978-3-319-59876-5_37
Publisher Name: Springer, Cham
Print ISBN: 978-3-319-59875-8
Online ISBN: 978-3-319-59876-5
eBook Packages: Computer Science, Computer Science (R0)