Abstract
In this paper, we propose a new method to detect 4D spatiotemporal interest points, called 4D-ISIP (4-dimensional implicit surface interest points). We represent the 3D scene implicitly by a 3D volume that stores a truncated signed distance function (TSDF) in every voxel. The TSDF encodes the distance between a spatial point and the object surface, which is a kind of implicit surface representation. The basic idea of 4D-ISIP detection is to detect points whose local neighborhood has significant variation along both the spatial and temporal dimensions. In order to test our 4D-ISIP detection, we built a system to acquire a 3D human motion dataset using only one Kinect. Experimental results show that our method can detect 4D-ISIPs for different human actions.
Keywords
- Non-rigid motion 3D reconstruction and tracking
- Spatiotemporal interest point detection
- Human action recognition
- 3D human motion dataset
- Kinect
- Depth sensor
1 Introduction
Interest point detection has been a hot topic in the computer vision field for a number of years. It is a fundamental research problem that plays a key role in many high-level tasks, such as activity recognition, 3D reconstruction, and image retrieval. In this paper we propose a new method to robustly detect interest points in the 4D spatiotemporal space (x, y, z, t) for human action recognition.
3D spatiotemporal interest points (3D STIPs) have been shown to perform well for activity and event recognition [22, 30]. They are detected from RGB video, and RGB images are sensitive to color and illumination changes, occlusions, and background clutter. With the advent of 3D acquisition equipment, we can easily obtain depth information. Depth data can significantly simplify the tasks of background subtraction and human detection. It works well in low-light conditions and gives a true 3D measurement that is invariant to surface color and texture, while resolving silhouette pose ambiguities.
The depth data provided by the Kinect, however, is noisy, which may have an impact on interest point detection [30]. In order to address this problem, we acquire a low-noise 3D representation of human actions by fusing the depth stream into a global TSDF volume, which supports robust interest point detection. We then introduce a new 4D implicit surface interest point (4D-ISIP) as an extension of the 3D STIP for motion recognition, especially human action recognition.
2 Related Work
An interest point is usually required to be robust under different image transformations and is typically a local extremum of some response function over its domain. There are a number of interest point detectors [8, 14, 19, 20] for static images; they are widely used for image matching, image retrieval, and image classification. For activity recognition, spatiotemporal interest points (STIPs) detected from a sequence of images have been shown to work effectively. The widely used STIP detectors follow three main approaches: (1) Laptev [12] detected spatiotemporal volumes with large variation along the spatial and temporal directions of a video sequence. A spatiotemporal second-moment matrix is used to model the video, and the interest point locations are determined by computing the local maxima of the response function \(H = det(M)-k\cdot trace^{3}(M)\); we give details about this method in Sect. 3.2. (2) Dollár et al. [5] proposed a cuboid detector computing interest points as the local maxima of the response function \(R=(I*g *h_{ev})^{2}+(I*g *h_{od})^{2}\), where g is a 2D Gaussian smoothing kernel and \(h_{ev}\) and \(h_{od}\) are a quadrature pair of 1D Gabor filters. (3) Willems et al. [26] proposed the Hessian detector, which measures the strength of each interest point using the Hessian matrix; its response function is defined as \(S=|det(\varGamma )|\), where \(\varGamma \) is the Hessian matrix.
The above methods detect interest points from RGB images, which, compared to depth data, are sensitive to illumination changes, occlusions, and background clutter. With the development of depth sensors, many STIP detectors have been extended to depth data. Xia and Aggarwal [27] presented a filtering method, called DSTIP, to extract STIPs from depth videos. Zhang and Parker [28] extracted STIPs by calculating a response function from both the depth and RGB channels.
All of the above methods use only a partial view of the human body. Holte et al. [9] used multiple cameras to reconstruct 3D human actions and detected STIPs in every single camera view; they then projected the STIPs into 3D space to obtain 4D (x, y, z, t) spatiotemporal interest points. Cho et al. [3] proposed a volumetric spatial feature representation (VSFR) that measures the density of 3D point clouds for view-invariant human action recognition from depth image sequences. Kim et al. [10] extracted 4D spatiotemporal interest points (4D-STIPs) from a sequence of 3D space volumes reconstructed from multiple views: they first detected interest points with large variations in (x, y, z) space and then checked whether those points also show significant variation along the time axis. Kim et al. [10] used a binary volume to represent the whole 3D human body and to calculate the partial derivatives.
In this paper, we use only one Kinect to obtain low-noise 3D human motion, which is more practical in real applications. We use an implicit surface (a TSDF volume) to represent the whole 3D human body, which provides a way to robustly calculate the partial derivatives in the four directions (x, y, z, t). We directly compute the 4D-ISIPs in (x, y, z, t) space and select the points that simultaneously have large variations along all four directions.
Accurately recovering 3D human actions with a single Kinect has been an active and challenging research topic in recent years. Many methods [1, 2, 25] obtain a 3D human model based on a trained human template, but they cannot reconstruct a human body wearing clothes. Zhang et al. [29] reconstructed a clothed human body; however, all of these methods require the reconstructed person to stay still during acquisition, which is difficult to guarantee in practice. Newcombe et al. [15] proposed a real-time method to reconstruct dynamic scenes without any prior template, but it is not capable of long-term tracking because of the growing warp field and error accumulation. Guo et al. [7] introduced a novel \(L_0\)-based motion regularizer with an iterative optimization solver, which can robustly reconstruct non-rigid geometry and motion from single-view depth input. In this paper, we combine Newcombe et al. [15] and Guo et al. [7] to build a system that constructs a 3D human motion dataset for 4D-ISIP detection.
3 Method
Before introducing 4D-ISIP, we first describe how we represent and acquire the human action dataset. We adopt the volumetric truncated signed distance function (TSDF) [4] to represent the 3D scene, and we combine Newcombe et al. [15] and Guo et al. [7] to build a system that constructs a 3D human motion dataset for 4D-ISIP detection. We then review 3D spatiotemporal interest points, and finally describe how we extend the 3D STIP to the 4D-ISIP.
3.1 Acquisition of 3D Human Motion Dataset
Upon acquisition of each input depth frame, we first estimate the ground plane using RANSAC in order to segment the human body from the ground. This is followed by point neighborhood statistics to filter outlier noise.
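A minimal sketch of this preprocessing step is given below. It assumes the depth frame has already been converted to a point cloud and uses Open3D's RANSAC plane fitting and statistical outlier removal; the function name and threshold values are illustrative, not the exact settings used in our system.

```python
import open3d as o3d

def preprocess_depth_frame(pcd: o3d.geometry.PointCloud) -> o3d.geometry.PointCloud:
    # Fit the dominant plane (the ground) with RANSAC and discard its inliers.
    _, ground_idx = pcd.segment_plane(distance_threshold=0.02,
                                      ransac_n=3,
                                      num_iterations=1000)
    body = pcd.select_by_index(ground_idx, invert=True)

    # Point-neighborhood statistics: drop points whose mean distance to their
    # neighbors deviates strongly from the global average (outlier noise).
    body, _ = body.remove_statistical_outlier(nb_neighbors=30, std_ratio=2.0)
    return body
```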
DynamicFusion [15] can reconstruct non-rigidly deforming scenes in real time by fusing RGB-D scans acquired from commodity sensors without any template, and it generates denoised, detailed, and complete reconstructions. However, it is not capable of long-term tracking because of the growing warp field and error accumulation. The method proposed by Guo et al. [7] provides long-term tracking using a human template. We therefore use DynamicFusion to obtain a complete human body mesh, acquired by having the subject rotate in front of the Kinect, and take this mesh as the template. We then use the tracking method of Guo et al. [7] to track the human motion from the partial-view depth input.
Finally, we generate a mesh with the same topology for every motion frame and convert these meshes into the TSDF representation. In Fig. 1, we illustrate how the TSDF implicitly represents an arbitrary surface as the zero crossing within the volume. The whole scene is represented by a 3D volume with a TSDF value in each voxel, where the TSDF is the truncated signed distance between a spatial point and the object surface.
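A standard form of the truncation, following Curless and Levoy [4] (the clipping convention shown here is one common choice), is

\(\varPsi (\eta ) = \begin{cases} \eta /\tau , & |\eta | \le \tau \\ \mathrm{sgn}(\eta ), & |\eta | > \tau \end{cases}\)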
where \(\tau \) is the truncation threshold, \(\eta \) is the signed distance to the surface, and \(\varPsi (\eta )\) is the truncated signed distance.
3.2 3D Spatiotemporal Interest Points
In order to model a spatiotemporal image sequence, Laptev [12] used a function \(f:\mathbb {R} ^{2} \times \mathbb {R} \mapsto \mathbb {R}\) and constructed its linear scale-space representation \(L:\mathbb {R}^{2} \times \mathbb {R}\times \mathbb {R}_{+}^2\mapsto \mathbb {R}\) by convolving f with an anisotropic Gaussian kernel with independent spatial variance \(\sigma _{s}^{2}\) and temporal variance \(\sigma _{t}^{2}\).
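In the notation of [12], this scale-space representation is the convolution

\(L(\cdot ;\sigma _{s}^{2},\sigma _{t}^{2}) = g(\cdot ;\sigma _{s}^{2},\sigma _{t}^{2}) * f(\cdot ),\)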
where g is the spatiotemporal separable Gaussian kernel.
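Written out as in [12], the kernel is

\(g(x,y,t;\sigma _{s}^{2},\sigma _{t}^{2}) = \frac{1}{\sqrt{(2\pi )^{3}\sigma _{s}^{4}\sigma _{t}^{2}}}\exp \left( -\frac{x^{2}+y^{2}}{2\sigma _{s}^{2}}-\frac{t^{2}}{2\sigma _{t}^{2}}\right) .\)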
They then define a spatiotemporal second-moment matrix M, averaged with a Gaussian weighting function whose integration scales are \(\sigma _{s^{'}}^{2} = l \sigma _{s}^{2}\) and \(\sigma _{t^{'}}^{2} = l\sigma _{t}^{2}\).
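Following [12], M is built from the first-order derivatives of L:

\(M = g(\cdot ;\sigma _{s^{'}}^{2},\sigma _{t^{'}}^{2}) * \begin{pmatrix} L_{x}^{2} & L_{x}L_{y} & L_{x}L_{t} \\ L_{x}L_{y} & L_{y}^{2} & L_{y}L_{t} \\ L_{x}L_{t} & L_{y}L_{t} & L_{t}^{2} \end{pmatrix}\)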
The first-order derivatives of f are given by \(L_{x} = \partial _{x}(g * f)\), \(L_{y} = \partial _{y}(g * f)\), and \(L_{t} = \partial _{t}(g * f)\).
To detect interest points, one searches for regions of f having significant eigenvalues \(\lambda _{1},\lambda _{2},\lambda _{3}\) of M. Laptev [12] computes H by combining the determinant and the trace of M, and selects the points with a large H value as STIPs.
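As already stated in Sect. 2, this response is

\(H = det(M)-k\cdot trace^{3}(M) = \lambda _{1}\lambda _{2}\lambda _{3} - k(\lambda _{1}+\lambda _{2}+\lambda _{3})^{3},\)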
where \(k = 0.04\) is an empirical value.
3.3 4D Implicit Surface Interest Points
We define \(p:\mathbb {R} ^{3} \times \mathbb {R} \mapsto \mathbb {R}\) as the truncated signed distance function, i.e., the truncated shortest distance to the surface, which can be regarded as an implicit surface representation. In this paper, our goal is to find interest points that have significant variation along the (x, y, z, t) directions. First, we apply Gaussian filtering to the complete 3D motion sequence. Considering that the spatial and temporal directions have different noise and scale characteristics, we use \(\bar{\sigma }_{s}^{2}\) for the spatial scale and \(\bar{\sigma }_{t}^{2}\) for the temporal scale.
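By direct analogy with Sect. 3.2, the smoothed volume sequence is the convolution

\(\bar{L}(\cdot ;\bar{\sigma }_{s}^{2},\bar{\sigma }_{t}^{2}) = \bar{g}(\cdot ;\bar{\sigma }_{s}^{2},\bar{\sigma }_{t}^{2}) * p(\cdot ),\)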
where \(\bar{g}\) is the separable 4D Gaussian kernel.
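Assuming an isotropic spatial variance, it takes the form

\(\bar{g}(x,y,z,t;\bar{\sigma }_{s}^{2},\bar{\sigma }_{t}^{2}) = \frac{1}{\sqrt{(2\pi )^{4}\bar{\sigma }_{s}^{6}\bar{\sigma }_{t}^{2}}}\exp \left( -\frac{x^{2}+y^{2}+z^{2}}{2\bar{\sigma }_{s}^{2}}-\frac{t^{2}}{2\bar{\sigma }_{t}^{2}}\right) .\)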
After filtering, we define a spatiotemporal second-moment matrix \(\bar{M}\), a 4-by-4 matrix composed of the first-order spatial and temporal derivatives, averaged with a Gaussian weighting function.
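Writing \(\bar{L}_{x} = \partial _{x}(\bar{g} * p)\) and analogously for the y, z, and t derivatives, this matrix has the same structure as its 3D counterpart:

\(\bar{M} = \bar{g}(\cdot ;\bar{\sigma }_{s^{'}}^{2},\bar{\sigma }_{t^{'}}^{2}) * \begin{pmatrix} \bar{L}_{x}^{2} & \bar{L}_{x}\bar{L}_{y} & \bar{L}_{x}\bar{L}_{z} & \bar{L}_{x}\bar{L}_{t} \\ \bar{L}_{x}\bar{L}_{y} & \bar{L}_{y}^{2} & \bar{L}_{y}\bar{L}_{z} & \bar{L}_{y}\bar{L}_{t} \\ \bar{L}_{x}\bar{L}_{z} & \bar{L}_{y}\bar{L}_{z} & \bar{L}_{z}^{2} & \bar{L}_{z}\bar{L}_{t} \\ \bar{L}_{x}\bar{L}_{t} & \bar{L}_{y}\bar{L}_{t} & \bar{L}_{z}\bar{L}_{t} & \bar{L}_{t}^{2} \end{pmatrix},\)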
where \(\bar{\sigma }_{s^{'}}^{2} = l^{'}\bar{\sigma }_{s}^{2}\) and \(\bar{\sigma }_{t^{'}}^{2} = l^{'}\bar{\sigma }_{t}^{2}\). Here \(l^{'}\) is an empirical value; in our experiments we set \(l^{'}=2\).
In order to extract interest points, we search for regions in p having significant eigenvalues \(\bar{\lambda }_{1}<\bar{\lambda }_{2}<\bar{\lambda }_{3}<\bar{\lambda }_{4}\) of \(\bar{M}\). Similar to the Harris corner function and the STIP response function, we define a response function as follows.
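Extending the 3D response of Sect. 3.2 to four eigenvalues (so that the determinant and the trace term are both of degree four), we use

\(\bar{H} = det(\bar{M})-k\cdot trace^{4}(\bar{M}) = \bar{\lambda }_{1}\bar{\lambda }_{2}\bar{\lambda }_{3}\bar{\lambda }_{4} - k(\bar{\lambda }_{1}+\bar{\lambda }_{2}+\bar{\lambda }_{3}+\bar{\lambda }_{4})^{4}.\)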
Letting the ratios \(\alpha = \bar{\lambda }_{2}/\bar{\lambda }_{1}\), \(\beta = \bar{\lambda }_{3}/\bar{\lambda }_{1}\), and \(\gamma = \bar{\lambda }_{4}/\bar{\lambda }_{1}\), we rewrite \(\bar{H}\) as \(\bar{H} = \bar{\lambda }_{1}^{4}\left( \alpha \beta \gamma - k(1+\alpha +\beta +\gamma )^{4}\right) \).
Since we require \(\bar{H}\ge 0\), we have \(k\le \alpha \beta \gamma /(1+\alpha +\beta +\gamma )^{4}\). Supposing \(\alpha =\beta =\gamma = 23\), we get \(k\le 0.0005\); in our experiments we use \(k = 0.0005\). We select points whose \(\bar{H}\) value is larger than a threshold \(\bar{H}_{t}\) as candidates, and finally keep the points that are local maxima of \(\bar{H}\) as the 4D-ISIPs.
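The sketch below illustrates how the response \(\bar{H}\) can be computed with off-the-shelf tools. It assumes the motion sequence has already been fused into a dense 4D TSDF array of shape (T, Z, Y, X); the function name and parameter defaults are illustrative, not a reference implementation of our pipeline.

```python
import numpy as np
from scipy import ndimage

def isip_response(tsdf, sigma_s=2.0, sigma_t=1.0, l_prime=2.0, k=0.0005):
    """4D-ISIP response for a TSDF sequence of shape (T, Z, Y, X)."""
    # Anisotropic 4D Gaussian smoothing: temporal scale on axis 0, spatial on axes 1-3.
    L = ndimage.gaussian_filter(tsdf, sigma=(sigma_t, sigma_s, sigma_s, sigma_s))

    # First-order derivatives along t, z, y, x.
    grads = np.gradient(L)

    # Second-moment matrix: Gaussian-weighted outer products of the derivatives,
    # with integration scales sigma'^2 = l' * sigma^2.
    w = (np.sqrt(l_prime) * sigma_t,) + (np.sqrt(l_prime) * sigma_s,) * 3
    M = np.empty(tsdf.shape + (4, 4))
    for i in range(4):
        for j in range(i, 4):
            M[..., i, j] = M[..., j, i] = ndimage.gaussian_filter(grads[i] * grads[j], sigma=w)

    # Response H = det(M) - k * trace(M)^4, evaluated per voxel.
    return np.linalg.det(M) - k * np.trace(M, axis1=-2, axis2=-1) ** 4
```

Candidates would then be the voxels where the normalized response exceeds \(\bar{H}_{t}\) and is a local maximum of \(\bar{H}\), e.g. found with a small 4D maximum filter.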
4 Experiments
4.1 3D Human Action Reconstruction
There are a number of datasets for human action recognition. Some of them [11, 16,17,18, 21] are captured by a single RGB camera. Others [6, 24] are captured by multi-view RGB cameras, which can provide 3D human motion; however, the acquired 3D models are not accurate enough. There are also some datasets [13, 23] captured by a single Kinect, but they contain only partial-view data with high noise.
We constructed a 3D human action dataset using a single fixed Kinect. In order to generate a whole 3D body, the subject rotates in front of the Kinect without being required to hold a rigid pose, as shown in Fig. 2(a). We then use this model as a template for tracking. As shown in Fig. 2(c), with just one view as input we can obtain a complete body model. Figure 3 shows sequences from the reconstructed action dataset; we overlay several frames so that the animation is visible. This dataset includes 10 human action classes: waving, walking, bowing, clapping, kicking, looking at a watch, weight-lifting, golf swinging, playing table tennis, and playing badminton.
4.2 4D-ISIP Detection
Given the generated human motion sequences, we can extract 4D-ISIPs. In our experiments we set the volume resolution to \(128 \times 128 \times 128\), \(\bar{\sigma }_{s} = 2\), and \(\bar{\sigma }_{t} = 1\). We normalize the value of \(\bar{H}\) by \(\bar{H} = (\bar{H}-\bar{H}_{min})/(\bar{H}_{max}-\bar{H}_{min})\). Different thresholds obviously yield different results. Figure 4(a, b, c) shows the point clouds of action sequences; the red points are the detected 4D-ISIPs. As the threshold increases, the number of 4D-ISIPs decreases. Figure 4(d) shows the 3D mesh of the kicking sequence. In order to extract sparse 4D-ISIPs, we set \(\bar{H}_t= 0.6\) in the following experiments.
As Fig. 5 shows, we can robustly detect changes in motion direction, which suggests that 4D-ISIPs can represent human motion. They can be used to describe the trajectory of a human action, which in turn can be used for action recognition. The red points in the point clouds denote the detected 4D-ISIPs, and the corresponding mesh models are shown on the left. More 4D-ISIP detection results are shown in Fig. 6.
4.3 Comparison with 3D STIP
Technically, 3D STIP [12] detects interest points in (x, y, t) space, which cannot describe real 3D motion and cannot handle occlusion or illumination changes. For instance, STIP cannot capture the motion of a hand waving back and forth toward the camera, because this kind of motion produces only slight variations in image content, while it produces significant variations in 3D (x, y, z) and 4D (x, y, z, t) space. The proposed 4D-ISIP approach works in this situation, as shown in Fig. 7.
Furthermore, 4D-ISIP is robust to illumination changes, which is important for action recognition, as shown in Fig. 8. The 3D STIP is sensitive to illumination changes because it extracts interest points from RGB images.
5 Conclusions
In this paper, we built a system to acquire 3D human motion using only one Kinect, and we proposed the 4D-ISIP (4D implicit surface interest point), a keypoint of the motion that can be used for motion recognition, especially human action recognition. We use a TSDF volume as an implicit surface representation of the reconstructed 3D body. Fusing the depth stream into a global TSDF volume yields a low-noise 3D representation of human actions, which supports robust interest point detection. Our approach does not use image information; it detects interest points from pure 3D geometric information only, which has several advantages: it is robust to illumination changes, occlusions, and the noise present in RGB or depth data streams. In future work, we expect the proposed 4D-ISIP to be applied to human action recognition, addressing scale- and view-invariance problems.
References
Bogo, F., Black, M.J., Loper, M., Romero, J.: Detailed full-body reconstructions of moving people from monocular RGB-D sequences. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 2300–2308 (2015)
Chen, Y., Liu, Z., Zhang, Z.: Tensor-based human body modeling. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 105–112 (2013)
Cho, S.S., Lee, A.R., Suk, H.I., Park, J.S., Lee, S.W.: Volumetric spatial feature representation for view-invariant human action recognition using a depth camera. Opt. Eng. 54(3), 033102 (2015)
Curless, B., Levoy, M.: A volumetric method for building complex models from range images. In: Proceedings of the 23rd Annual Conference on Computer Graphics and Interactive Techniques, pp. 303–312. ACM (1996)
Dollár, P., Rabaud, V., Cottrell, G., Belongie, S.: Behavior recognition via sparse spatio-temporal features. In: 2nd Joint IEEE International Workshop on Visual Surveillance and Performance Evaluation of Tracking and Surveillance, pp. 65–72. IEEE (2005)
Gkalelis, N., Kim, H., Hilton, A., Nikolaidis, N., Pitas, I.: The i3DPost multi-view and 3D human action/interaction database. In: Conference for Visual Media Production, CVMP 2009, pp. 159–168. IEEE (2009)
Guo, K., Xu, F., Wang, Y., Liu, Y., Dai, Q.: Robust non-rigid motion tracking and surface reconstruction using L0 regularization. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 3083–3091 (2015)
Harris, C., Stephens, M.: A combined corner and edge detector. In: Alvey Vision Conference, vol. 15, p. 50. Citeseer (1988)
Holte, M.B., Chakraborty, B., Gonzalez, J., Moeslund, T.B.: A local 3-D motion descriptor for multi-view human action recognition from 4-D spatio-temporal interest points. IEEE J. Sel. Top. Signal Process. 6(5), 553–565 (2012)
Kim, S.J., Kim, S.W., Sandhan, T., Choi, J.Y.: View invariant action recognition using generalized 4D features. Pattern Recogn. Lett. 49, 40–47 (2014)
Kuehne, H., Jhuang, H., Garrote, E., Poggio, T., Serre, T.: HMDB: a large video database for human motion recognition. In: 2011 IEEE International Conference on Computer Vision (ICCV), pp. 2556–2563. IEEE (2011)
Laptev, I.: On space-time interest points. Int. J. Comput. Vis. 64(2–3), 107–123 (2005)
Li, W., Zhang, Z., Liu, Z.: Action recognition based on a bag of 3D points. In: 2010 IEEE Computer Society Conference on Computer Vision and Pattern Recognition Workshops (CVPRW), pp. 9–14. IEEE (2010)
Lowe, D.G.: Distinctive image features from scale-invariant keypoints. Int. J. Comput. Vis. 60(2), 91–110 (2004)
Newcombe, R.A., Fox, D., Seitz, S.M.: DynamicFusion: reconstruction and tracking of non-rigid scenes in real-time. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 343–352 (2015)
Niebles, J.C., Chen, C.-W., Fei-Fei, L.: Modeling temporal structure of decomposable motion segments for activity classification. In: Daniilidis, K., Maragos, P., Paragios, N. (eds.) ECCV 2010. LNCS, vol. 6312, pp. 392–405. Springer, Heidelberg (2010). https://doi.org/10.1007/978-3-642-15552-9_29
Reddy, K.K., Shah, M.: Recognizing 50 human action categories of web videos. Mach. Vis. Appl. 24(5), 971–981 (2013)
Rodriguez, M.D., Ahmed, J., Shah, M.: Action MACH: a spatio-temporal maximum average correlation height filter for action recognition. In: IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2008, pp. 1–8. IEEE (2008)
Rosten, E., Porter, R., Drummond, T.: Faster and better: a machine learning approach to corner detection. IEEE Trans. Pattern Anal. Mach. Intell. 32(1), 105–119 (2010)
Shi, J., Tomasi, C.: Good features to track. In: 1994 IEEE Computer Society Conference on Computer Vision and Pattern Recognition, Proceedings CVPR 1994, pp. 593–600. IEEE (1994)
Tran, D., Sorokin, A.: Human activity recognition with metric learning. In: Forsyth, D., Torr, P., Zisserman, A. (eds.) ECCV 2008. LNCS, vol. 5302, pp. 548–561. Springer, Heidelberg (2008). https://doi.org/10.1007/978-3-540-88682-2_42
Wang, H., Ullah, M.M., Klaser, A., Laptev, I., Schmid, C.: Evaluation of local spatio-temporal features for action recognition. In: BMVC 2009-British Machine Vision Conference, pp. 124–131. BMVA Press (2009)
Wang, J., Liu, Z., Wu, Y., Yuan, J.: Mining actionlet ensemble for action recognition with depth cameras. In: 2012 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 1290–1297. IEEE (2012)
Weinland, D., Boyer, E., Ronfard, R.: Action recognition from arbitrary views using 3D exemplars. In: IEEE 11th International Conference on Computer Vision, 2007, ICCV 2007, pp. 1–7. IEEE (2007)
Weiss, A., Hirshberg, D., Black, M.J.: Home 3D body scans from noisy image and range data. In: 2011 IEEE International Conference on Computer Vision (ICCV), pp. 1951–1958. IEEE (2011)
Willems, G., Tuytelaars, T., Van Gool, L.: An efficient dense and scale-invariant spatio-temporal interest point detector. In: Forsyth, D., Torr, P., Zisserman, A. (eds.) ECCV 2008. LNCS, vol. 5303, pp. 650–663. Springer, Heidelberg (2008). https://doi.org/10.1007/978-3-540-88688-4_48
Xia, L., Aggarwal, J.: Spatio-temporal depth cuboid similarity feature for activity recognition using depth camera. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 2834–2841 (2013)
Zhang, H., Parker, L.E.: 4-dimensional local spatio-temporal features for human activity recognition. In: 2011 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pp. 2044–2049. IEEE (2011)
Zhang, Q., Fu, B., Ye, M., Yang, R.: Quality dynamic human body modeling using a single low-cost depth camera. In: 2014 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 676–683. IEEE (2014)
Zhu, Y., Chen, W., Guo, G.: Evaluating spatiotemporal interest point features for depth-based action recognition. Image Vis. Comput. 32(8), 453–464 (2014)
Acknowledgements
This work was funded by the Natural Science Foundation of China (61227802, 61379082) and the China Scholarship Council.