
1 Introduction

Interest point detection has been an active topic in computer vision for many years. It is a fundamental research problem that plays a key role in many high-level tasks, such as activity recognition, 3D reconstruction and image retrieval. In this paper we propose a new method to robustly detect interest points in the 4D spatiotemporal space (x, y, z, t) for human action recognition.

3D spatiotemporal interest points (3D STIPs) have been shown to perform well for activity and event recognition [22, 30]. They are detected from RGB video, and RGB images are sensitive to color and illumination changes, occlusions, and background clutter. With the advent of 3D acquisition equipment, we can easily obtain depth information. Depth data can significantly simplify the tasks of background subtraction and human detection. It works well in low-light conditions and gives a true 3D measure that is invariant to surface color and texture, while resolving silhouette pose ambiguities.

The depth data provided by the Kinect, however, is noisy, which may affect interest point detection [30]. To resolve this problem, we acquire a low-noise 3D representation of human actions by fusing the depth data stream into a global TSDF volume, which is useful for detecting robust interest points. We then introduce a new 4D implicit surface interest point (4D-ISIP) as an extension of the 3D STIP for motion recognition, especially human action recognition.

2 Related Work

An interest point is usually required to be robust under different image transformations and is typically a local extremum of some response function. There are a number of interest point detectors [8, 14, 19, 20] for static images, widely used for image matching, image retrieval and image classification. For activity recognition, spatiotemporal interest points (STIPs) detected from a sequence of images have been shown to work effectively. The widely used STIP detectors follow three main approaches. (1) Laptev [12] detected spatiotemporal volumes with large variation along the spatial and temporal directions in a video sequence. A spatiotemporal second-moment matrix is used to model the video, and interest point locations are determined by computing the local maxima of the response function \(H = det(M)-k\cdot trace^{3}(M)\); we give details of this method in Sect. 3.2. (2) Dollár et al. [5] proposed a cuboid detector that computes interest points as the local maxima of the response function \(R=(I*g *h_{ev})^{2}+(I*g *h_{od})^{2}\), where g is a 2D Gaussian smoothing kernel and \(h_{ev}\) and \(h_{od}\) are a quadrature pair of 1D Gabor filters. (3) Willems et al. [26] proposed the Hessian detector, which measures the strength of each interest point using the Hessian matrix; its response function is defined as \(S=|det(\varGamma )|\), where \(\varGamma \) is the Hessian matrix.
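
To make the cuboid response concrete, the following is a minimal sketch that evaluates R on a grayscale video volume. The filter sizes, the scales sigma and tau, and the Gabor frequency omega = 4/tau are illustrative assumptions, not values taken from [5].

```python
# Hedged sketch of the cuboid response R = (I*g*h_ev)^2 + (I*g*h_od)^2.
# `video` has shape (T, H, W); sigma, tau and omega = 4/tau are illustrative choices.
import numpy as np
from scipy.ndimage import gaussian_filter, convolve1d

def cuboid_response(video, sigma=2.0, tau=3.0):
    t = np.arange(-3 * tau, 3 * tau + 1)
    omega = 4.0 / tau
    h_ev = -np.cos(2 * np.pi * t * omega) * np.exp(-t ** 2 / tau ** 2)  # even Gabor filter
    h_od = -np.sin(2 * np.pi * t * omega) * np.exp(-t ** 2 / tau ** 2)  # odd Gabor filter
    smoothed = gaussian_filter(video.astype(np.float64), sigma=(0, sigma, sigma))  # spatial smoothing only
    return (convolve1d(smoothed, h_ev, axis=0) ** 2
            + convolve1d(smoothed, h_od, axis=0) ** 2)
```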

The above methods detect interest points from RGB images, which, compared with depth data, are sensitive to illumination changes, occlusions, and background clutter. With the development of depth sensors, many STIP detectors have been extended to depth data. Xia and Aggarwal [27] presented a filtering method, called DSTIP, to extract STIPs from depth videos. Zhang and Parker [28] extracted STIPs by calculating a response function from both the depth and RGB channels.

All the above methods use only a partial view of the human body. Holte et al. [9] used multiple cameras to construct the 3D human action and detected STIPs in every single camera view; they then projected the STIPs into 3D space to find 4D (x, y, z, t) spatiotemporal interest points. Cho et al. [3] proposed a volumetric spatial feature representation (VSFR) that measures the density of 3D point clouds for view-invariant human action recognition from depth sequences. Kim et al. [10] extracted 4D spatiotemporal interest points (4D-STIPs) in 3D volume sequences reconstructed from multiple views: they first detected interest points with large variations in (x, y, z) space and then checked whether those points also have significant variation along the time axis. Kim et al. [10] used a binary volume to represent the whole 3D human body and to calculate the partial derivatives.

In this paper, we use only one Kinect to obtain low-noise 3D human motion, which is more practical in real applications. We use an implicit surface (a TSDF volume) to represent the whole 3D human body, which provides a way to robustly calculate the partial derivatives along the four directions (x, y, z, t). We directly compute the 4D-ISIP response in (x, y, z, t) space and choose the points that simultaneously have large variations along all four directions.

Accurately recovering 3D human action with a single Kinect has been an active and challenging research topic in recent years. Many methods [1, 2, 25] obtain a 3D human model based on a trained human template, but they cannot reconstruct a human body with clothes. Zhang et al. [29] reconstructed a clothed human body; however, all of these methods require the reconstructed person to stay still during acquisition, which is impractical. Newcombe et al. [15] proposed a real-time method to reconstruct dynamic scenes without any prior template, but it is not capable of long-term tracking because of the growing warp field and error accumulation. Guo et al. [7] introduced a novel \(L_0\)-based motion regularizer with an iterative optimization solver, which can robustly reconstruct non-rigid geometries and motions from single-view depth input. In this paper, we combine Newcombe et al. [15] and Guo et al. [7] to build a system that constructs a 3D human motion dataset for 4D-ISIP detection.

3 Method

Before introducing the 4D-ISIP, we first describe how we represent and acquire the human action dataset. We adopt the volumetric truncated signed distance function (TSDF) [4] to represent the 3D scene, and we combine Newcombe et al. [15] and Guo et al. [7] to build a system that constructs the 3D human motion dataset for 4D-ISIP detection. We then review 3D spatiotemporal interest points, and finally we describe how to extend the 3D STIP to the 4D-ISIP.

3.1 Acquisition of 3D Human Motion Dataset

For every input depth frame, we first estimate the ground plane using RANSAC in order to segment the human body from the ground. This is followed by a point-neighborhood statistics filter to remove outliers.
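
As an illustration of this preprocessing, the sketch below fits the ground plane with RANSAC and removes outliers by neighborhood statistics. It assumes the depth frame has already been back-projected to an N x 3 point cloud `pts`; the thresholds and iteration counts are illustrative, not the values used in our system.

```python
# Minimal preprocessing sketch: RANSAC ground-plane fit + statistical outlier removal.
import numpy as np

def ransac_ground_plane(pts, n_iters=200, dist_thresh=0.02, seed=0):
    rng = np.random.default_rng(seed)
    best_inliers = np.zeros(len(pts), dtype=bool)
    for _ in range(n_iters):
        p0, p1, p2 = pts[rng.choice(len(pts), 3, replace=False)]
        n = np.cross(p1 - p0, p2 - p0)
        if np.linalg.norm(n) < 1e-9:         # degenerate sample, skip
            continue
        n = n / np.linalg.norm(n)
        dist = np.abs((pts - p0) @ n)        # point-to-plane distances
        inliers = dist < dist_thresh
        if inliers.sum() > best_inliers.sum():
            best_inliers = inliers
    return best_inliers                       # mask of ground-plane points to discard

def filter_outliers(pts, k=16, std_ratio=2.0):
    # Point-neighborhood statistics (brute force, for illustration only):
    # drop points whose mean distance to the k nearest neighbors is abnormally large.
    d = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)
    knn_mean = np.sort(d, axis=1)[:, 1:k + 1].mean(axis=1)
    keep = knn_mean < knn_mean.mean() + std_ratio * knn_mean.std()
    return pts[keep]

# Usage: body = filter_outliers(pts[~ransac_ground_plane(pts)])
```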

DynamicFusion [15] can reconstruct non-rigidly deforming scenes in real time by fusing RGB-D scans acquired from commodity sensors without any template, generating denoised, detailed and complete reconstructions. However, it is not capable of long-term tracking because of the growing warp field and error accumulation. The method proposed by Guo et al. [7] provides long-term tracking using a human template. We use DynamicFusion to obtain a complete human body mesh, acquired by rotating the body in front of the Kinect, as the template. We then use the method of Guo et al. [7] to track the human motion from partial input data.

Finally, we generate a mesh with the same topology for every motion frame and transform those meshes into the TSDF representation. Figure 1 illustrates how the TSDF implicitly represents an arbitrary surface as the zero crossings within the volume. The whole scene is represented by a 3D volume with a TSDF value in each voxel, where the TSDF is the truncated signed distance between a spatial point and the object surface:

$$\begin{aligned} \varPsi (\eta ) = \left\{ \begin{array}{ll} min(1,\frac{\eta }{\tau }) &{} \text { if }\eta \ge -\tau \\ -1 &{} \text {otherwise} \end{array}\right. \end{aligned}$$
(1)

where \(\tau \) is the truncation threshold, \(\eta \) is the signed distance to the surface, and \(\varPsi (\eta )\) is the truncated signed distance.
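
For clarity, Eq. (1) can be evaluated per voxel as in the short sketch below, where `eta` is a volume of signed distances to the surface; this only illustrates the truncation, not our full fusion pipeline, and the value of tau in the usage line is an assumption.

```python
# Truncation of Eq. (1): values in [-1, 1], clamped to -1 far behind the surface.
import numpy as np

def psi(eta, tau):
    return np.where(eta >= -tau, np.minimum(1.0, eta / tau), -1.0)

# Example: tsdf = psi(signed_distance_volume, tau=0.03)  # tau in metres, illustrative
```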

Fig. 1. TSDF volume

3.2 3D Spatiotemporal Interest Points

In order to model a spatiotemporal image sequence, Laptev [12] used a function \(f:\mathbb {R} ^{2} \times \mathbb {R} \mapsto \mathbb {R}\) and constructed its linear scale-space representation \(L:\mathbb {R}^{2} \times \mathbb {R}\times \mathbb {R}_{+}^2\mapsto \mathbb {R}\) by convolving f with an anisotropic Gaussian kernel with independent spatial variance \(\sigma _{s}^{2}\) and temporal variance \({\sigma _{t}^{2}}\):

$$\begin{aligned} L(x,y,t; \sigma _{s}^{2},\sigma _{t}^{2}) = g(x,y,t; \sigma _{s}^{2},\sigma _{t}^{2}) *f(x,y,t), \end{aligned}$$
(2)

where the spatiotemporal separable Gaussian kernel is defined as:

$$\begin{aligned} \begin{aligned} g(x,y,t; \sigma _{s}^{2},\sigma _{t}^{2}) = \frac{1}{\sqrt{(2 \pi )^{3}\sigma _{s}^{4} \sigma _{t}^{2}}} \\ \times \,\, exp(\frac{-(x^{2}+y^{2})}{2\sigma _{s}^{2}} - \frac{t^{2}}{2 \sigma _{t}^{2}}) \end{aligned}, \end{aligned}$$
(3)

A spatiotemporal second-moment matrix is then defined using the integration scales \(\sigma _{s^{'}}^{2} = l \sigma _{s}^{2}\) and \(\sigma _{t^{'}}^{2} = l\sigma _{t}^{2}\):

$$\begin{aligned} M = g(\cdot ;\sigma _{s^{'}}^{2},\sigma _{t^{'}}^{2}) *\left( \begin{array}{ccc} L_{x}^{2} &{} L_{x}L_{y} &{} L_{x}L_{t} \\ L_{x}L_{y} &{} L_{y}^{2} &{} L_{y}L_{t} \\ L_{x}L_{t} &{} L_{y}L_{t} &{} L_{t}^{2} \end{array} \right) . \end{aligned}$$
(4)

The first-order derivatives of f are given by:

$$\begin{aligned} \begin{aligned} L_{x}(\cdot ; \sigma _{s}^{2},\sigma _{t}^{2})= \partial _{x}(g *f) \\ L_{y}(\cdot ; \sigma _{s}^{2},\sigma _{t}^{2})= \partial _{y}(g *f) \\ L_{t}(\cdot ; \sigma _{s}^{2},\sigma _{t}^{2})= \partial _{t}(g *f) \end{aligned}. \end{aligned}$$
(5)

To detect interest points, one searches for regions of f having significant eigenvalues \(\lambda _{1},\lambda _{2},\lambda _{3}\) of M. Laptev [12] computes H by combining the determinant and the trace of M, and selects the points with large H values as STIPs:

$$\begin{aligned} H = det(M) - k\cdot trace^{3}(M) =\lambda _{1} \lambda _{2} \lambda _{3} - k(\lambda _{1}+\lambda _{2}+\lambda _{3})^{3}, \end{aligned}$$
(6)

where \(k = 0.04\) is an empirical value.
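
The following sketch summarizes Eqs. (2)-(6) on a grayscale video volume of shape (T, H, W) using scipy; the scale values, the finite-difference approximation of the derivatives, and the integration-scale factor l are illustrative assumptions, not the exact implementation of [12].

```python
# Hedged sketch of the STIP response H = det(M) - k * trace^3(M).
import numpy as np
from scipy.ndimage import gaussian_filter

def stip_response(f, sigma_s=2.0, sigma_t=1.5, l=2.0, k=0.04):
    # Scale-space representation L (Eq. 2); axis order is (t, y, x).
    L = gaussian_filter(f.astype(np.float64), sigma=(sigma_t, sigma_s, sigma_s))
    Lt, Ly, Lx = np.gradient(L)  # first-order derivatives (Eq. 5), finite differences
    # Integration scales sigma'^2 = l * sigma^2, i.e. standard deviation scaled by sqrt(l).
    sig = (np.sqrt(l) * sigma_t, np.sqrt(l) * sigma_s, np.sqrt(l) * sigma_s)
    w = lambda a: gaussian_filter(a, sigma=sig)
    Mxx, Myy, Mtt = w(Lx * Lx), w(Ly * Ly), w(Lt * Lt)
    Mxy, Mxt, Myt = w(Lx * Ly), w(Lx * Lt), w(Ly * Lt)
    det = (Mxx * (Myy * Mtt - Myt ** 2)
           - Mxy * (Mxy * Mtt - Myt * Mxt)
           + Mxt * (Mxy * Myt - Myy * Mxt))
    trace = Mxx + Myy + Mtt
    return det - k * trace ** 3
```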

3.3 4D Implicit Surface Interest Points

We define \(p:\mathbb {R} ^{3} \times \mathbb {R} \mapsto \mathbb {R}\) as a truncated signed distance function giving the truncated shortest signed distance to the surface; this can be regarded as an implicit surface representation. Our goal is to find interest points that have significant variation along the (x, y, z, t) directions. First, we apply Gaussian filtering to the complete 3D motion sequence. Considering that the spatial and temporal directions have different noise and scale characteristics, we use \(\bar{\sigma }_{s}^{2}\) for the spatial scale and \(\bar{\sigma }_{t}^{2}\) for the temporal scale:

$$\begin{aligned} \bar{L}(x,y,z,t;\bar{\sigma }_{s}^{2},\bar{\sigma }_{t}^{2}) = \bar{g}(x,y,z,t;\bar{\sigma }_{s}^{2},\bar{\sigma }_{t}^{2}) *p(x,y,z,t), \end{aligned}$$
(7)

where the 4D Gaussian kernel is given by:

$$\begin{aligned} \begin{aligned} \bar{g}(x,y,z,t;\bar{\sigma }_{s}^{2},\bar{\sigma }_{t}^{2}) = \frac{1}{\sqrt{(2\pi )^{4}\bar{\sigma }_{s}^{6}\bar{\sigma }_{t}^{2}}} \\ \times \,\, exp(-\frac{(x^{2}+y^{2}+z^{2})}{2\bar{\sigma }_{s}^{2}}-\frac{t^{2}}{2\bar{\sigma }_{t}^{2}}) \end{aligned} \end{aligned}$$
(8)

After filtering, we define a spatiotemporal second-moment matrix, a 4-by-4 matrix composed of the first-order spatial and temporal derivatives averaged with a Gaussian weighting function:

$$\begin{aligned} \bar{M} = \bar{g}(\cdot ;\bar{\sigma }_{s^{'}}^{2},\bar{\sigma }_{t^{'}}^{2}) *\left( \begin{array}{cccc} \bar{L}_{x}^{2} &{} \bar{L}_{x}\bar{L}_{y} &{} \bar{L}_{x}\bar{L}_{z} &{} \bar{L}_{x}\bar{L}_{t} \\ \bar{L}_{x}\bar{L}_{y} &{} \bar{L}_{y}^{2} &{} \bar{L}_{y}\bar{L}_{z} &{} \bar{L}_{y}\bar{L}_{t} \\ \bar{L}_{x}\bar{L}_{z} &{} \bar{L}_{y}\bar{L}_{z} &{} \bar{L}_{z}^{2}&{} \bar{L}_{z}\bar{L}_{t} \\ \bar{L}_{x}\bar{L}_{t} &{} \bar{L}_{y}\bar{L}_{t} &{} \bar{L}_{z}\bar{L}_{t}&{} \bar{L}_{t}^{2} \\ \end{array} \right) , \end{aligned}$$
(9)

where \(\bar{\sigma }_{s^{'}}^{2} = l^{'}\bar{\sigma }_{s}^{2}\) and \(\bar{\sigma }_{t^{'}}^{2} = l^{'}\bar{\sigma }_{t}^{2}\). Here \(l^{'}\) is an empirical value; in our experiments we set \(l^{'}=2\). The first-order derivatives of p are given by:

$$\begin{aligned} \begin{aligned} \bar{L}_{x}(\cdot ; \bar{\sigma }_{s}^{2},\bar{\sigma }_{t}^{2})= \partial _{x}(\bar{g} *p), \\ \bar{L}_{y}(\cdot ; \bar{\sigma }_{s}^{2},\bar{\sigma }_{t}^{2})= \partial _{y}(\bar{g} *p), \\ \bar{L}_{z}(\cdot ; \bar{\sigma }_{s}^{2},\bar{\sigma }_{t}^{2})= \partial _{z}(\bar{g} *p),\\ \bar{L}_{t}(\cdot ; \bar{\sigma }_{s}^{2},\bar{\sigma }_{t}^{2})= \partial _{t}(\bar{g} *p). \end{aligned} \end{aligned}$$
(10)

In order to extract interest points, we search for regions of p having significant eigenvalues \(\bar{\lambda }_{1}<\bar{\lambda }_{2}<\bar{\lambda }_{3}<\bar{\lambda }_{4}\) of \(\bar{M}\). Similar to the Harris corner function and the STIP response function, we define the response function as follows:

$$\begin{aligned} \begin{aligned} \bar{H}=det(\bar{M})-k\cdot trace^{4}(\bar{M}) \\ = \bar{\lambda }_{1}\bar{\lambda }_{2}\bar{\lambda }_{3}\bar{\lambda }_{4}-k(\bar{\lambda }_{1}+\bar{\lambda }_{2}+\bar{\lambda }_{3}+\bar{\lambda }_{4})^{4}. \end{aligned} \end{aligned}$$
(11)

Letting the ratios \(\alpha = \bar{\lambda }_{2}/\bar{\lambda }_{1}\), \(\beta = \bar{\lambda }_{3}/\bar{\lambda }_{1}\) and \(\gamma = \bar{\lambda }_{4}/\bar{\lambda }_{1}\), we rewrite \(\bar{H}\) as

$$\begin{aligned} \bar{H} = \bar{\lambda }_{1}^{4}(\alpha \beta \gamma -k(1+\alpha +\beta +\gamma )^{4}), \end{aligned}$$
(12)

For \(\bar{H}\ge 0\), we require \(k\le \alpha \beta \gamma /(1+\alpha +\beta +\gamma )^{4}\). Supposing \(\alpha =\beta =\gamma = 23\), we obtain \(k\le 0.0005\); in our experiments we use \(k = 0.0005\). We select the points whose \(\bar{H}\) value is larger than a threshold \(\bar{H}_{t}\) as candidates, and finally keep the points that are local maxima of \(\bar{H}\) as the 4D-ISIPs.
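
A compact sketch of Eqs. (7)-(12) is given below for a TSDF sequence stored as a 4D array of shape (T, Z, Y, X). It builds the full 4 x 4 second-moment matrix at every sample, which is simple but memory-hungry; a practical implementation would stream over the volume. The finite-difference derivatives and the parameter defaults are illustrative assumptions, not a definitive implementation.

```python
# Hedged sketch of the 4D-ISIP response H_bar = det(M_bar) - k * trace^4(M_bar).
import numpy as np
from scipy.ndimage import gaussian_filter

def isip_response(tsdf_seq, sigma_s=2.0, sigma_t=1.0, l_prime=2.0, k=5e-4):
    # 4D Gaussian smoothing of the TSDF sequence p (Eq. 7); axis order (t, z, y, x).
    L = gaussian_filter(tsdf_seq.astype(np.float64),
                        sigma=(sigma_t, sigma_s, sigma_s, sigma_s))
    grads = np.gradient(L)  # [L_t, L_z, L_y, L_x] by finite differences (Eq. 10)
    sig = (np.sqrt(l_prime) * sigma_t,) + (np.sqrt(l_prime) * sigma_s,) * 3
    # Second-moment matrix M_bar (Eq. 9), one symmetric 4x4 matrix per sample.
    M = np.empty(tsdf_seq.shape + (4, 4))
    for i in range(4):
        for j in range(i, 4):
            M[..., i, j] = M[..., j, i] = gaussian_filter(grads[i] * grads[j], sigma=sig)
    det = np.linalg.det(M)
    trace = np.trace(M, axis1=-2, axis2=-1)
    return det - k * trace ** 4  # Eq. (11)
```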

Fig. 2. Reconstruction setup (a) and reconstruction results (b). The complete 3D body model (d) is driven by the single-view input data (c)

4 Experiments

4.1 3D Human Action Reconstruction

There are a number of datasets for human action recognition. Some of these datasets [11, 16,17,18, 21] were captured with a single RGB camera. Others [6, 24] were captured with multi-view RGB cameras, which can provide 3D human motion; however, the acquired 3D models are not accurate enough. There are also datasets [13, 23] captured with a single Kinect, but they contain only partial-view data with high noise.

We constructed a 3D human action dataset using a single fixed Kinect. In order to generate a whole 3D body, the subject rotates in front of the Kinect, without being required to undergo a rigid body motion, as shown in Fig. 2(a). We then use this model as a template for tracking. As shown in Fig. 2(c) and (d), from just one view of input we can obtain a complete body model. Figure 3 shows sequences from the reconstructed action dataset; we overlay several frames so that the animation can be seen. The dataset includes ten human action classes: waving, walking, bowing, clapping, kicking, looking at a watch, weight-lifting, golf swinging, playing table tennis and playing badminton.

Fig. 3. Action dataset includes ten actions

4.2 4D-ISIP Detection

From the generated human motion sequences we can extract 4D-ISIPs. In our experiments we set the volume resolution to \(128 \times 128 \times 128\), \(\bar{\sigma }_{s} = 2\) and \(\bar{\sigma }_{t} = 1\). We normalize the response by \(\bar{H} = (\bar{H}-\bar{H}_{min})/(\bar{H}_{max}-\bar{H}_{min})\). Different thresholds clearly yield different results. Figure 4(a, b, c) shows the point clouds of action sequences, in which the red points are the 4D-ISIPs: as the threshold increases, the number of 4D-ISIPs decreases. Figure 4(d) is the 3D mesh of the kicking sequence. In order to extract sparse 4D-ISIPs, we set \(\bar{H}_t= 0.6\) in the following experiments.
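
As a small illustration of this selection step, assuming `H` is the response volume from the sketch in Sect. 3.3, the normalization, thresholding and local-maximum test can be written as follows; the neighborhood size used for the local-maximum test is an assumption.

```python
# Normalize the response, threshold at H_t and keep local maxima as 4D-ISIPs.
import numpy as np
from scipy.ndimage import maximum_filter

def select_isips(H, H_t=0.6, nms_size=5):
    Hn = (H - H.min()) / (H.max() - H.min())            # normalize to [0, 1]
    is_local_max = Hn == maximum_filter(Hn, size=nms_size)
    return np.argwhere((Hn > H_t) & is_local_max)       # (t, z, y, x) indices
```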

Fig. 4. Selection of the threshold value \(\bar{H}_t\). (a) (b) (c) are the point clouds of the action sequence, (d) is the 3D mesh of the action sequence. (Color figure online)

Fig. 5. 4D-ISIP detection results on the dataset (Color figure online)

As Fig. 5 shows, we can robustly detect where the motion direction changes, which suggests that 4D-ISIPs can represent human motion. They can be used to describe the trajectory of a human action, which in turn can be used for action recognition. The red points in the point clouds denote the detected 4D-ISIPs; the corresponding mesh models are shown on the left. More 4D-ISIP detection results are shown in Fig. 6.

Fig. 6. More results of 4D-ISIP detection

Fig. 7. 4D-ISIPs can be detected even when there is only a slight change in image content

4.3 Comparison with 3D STIP

Technically, 3D STIP [12] detects interest points in (x, y, t) space, which cannot describe the real 3D motion, and it cannot handle motion occlusion or illumination changes. For instance, STIP cannot handle the motion of waving a hand back and forth, because this kind of motion produces only slight variations in image content. However, the motion has significant variations in 3D (x, y, z) and 4D (x, y, z, t) space, so the proposed 4D-ISIP approach still works in this situation, as shown in Fig. 7.

Furthermore, 4D-ISIP is robust to illumination changes, which is important for action recognition, as shown in Fig. 8. 3D STIP is sensitive to illumination changes because it extracts interest points from RGB images, which are themselves sensitive to illumination changes.

Fig. 8. Illumination change. 3D STIP is sensitive to illumination change: (a) shows the 3D STIPs on the image sequence under illumination change. Meanwhile, 4D-ISIP is robust to illumination change: (b) shows the 4D-ISIPs on the point cloud

5 Conclusions

In this paper, we built a system to acquire 3D human motion using only one Kinect, and we proposed the 4D-ISIP (4D implicit surface interest point), a new keypoint of motion that can be used for motion recognition, especially human action recognition. We use a TSDF volume as an implicit surface representation of the reconstructed 3D body: fusing the depth data stream into a global TSDF volume yields a low-noise representation of 3D human actions, which is useful for robust interest point detection. Our approach does not use image information; it detects interest points from pure 3D geometric information, which brings several advantages: it is robust to illumination changes, occlusions, and the noise of the RGB and depth data streams. In future work, we expect the proposed 4D-ISIP to be applied to human action recognition while addressing scale- and view-invariance problems.