
CN103218824A - Motion key frame extracting method based on distance curve amplitudes - Google Patents


Info

Publication number
CN103218824A
CN103218824A · CN2012105660916A · CN201210566091A
Authority
CN
China
Prior art keywords
motion
distance
frame
key frame
key
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN2012105660916A
Other languages
Chinese (zh)
Inventor
魏小鹏
张强
薛翔
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Dalian University
Original Assignee
Dalian University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Dalian University filed Critical Dalian University
Priority to CN2012105660916A priority Critical patent/CN103218824A/en
Publication of CN103218824A publication Critical patent/CN103218824A/en
Pending legal-status Critical Current


Landscapes

  • Processing Or Creating Images (AREA)

Abstract

This invention provides a motion key-frame extraction method based on the amplitude of a distance curve, extracting key poses of the motion to describe the original motion sequence. First, a new group of joint distances is selected as the distance feature. Second, principal component analysis (PCA) reduces the dimensionality of this feature: the first principal component is extracted and a smoothing filter removes noise, yielding a feature curve that better reflects the essential characteristics of the original motion. Third, initial key frames are obtained by extracting the local extremum points of the feature curve. Fourth, between adjacent initial key frames, a corresponding number of frames is inserted uniformly according to the amplitude difference of the feature curve, and over-dense key frames are then merged to give the final key-frame set. Extensive experimental data show that the invention not only summarizes the motion well visually but also achieves a low compression rate and error ratio, and runs automatically without manual intervention.

Description

Motion Key-Frame Extraction Based on Distance Curve Amplitude

Technical Field

The present invention relates to human motion capture technology and, more specifically, to key-frame extraction from human motion.

Background

In recent decades, human motion capture technology has developed rapidly and its importance has grown accordingly. Motion capture systems are already widely used in film and games. This brings a problem: because the captured data is enormous, motion capture databases are also very large, and how to handle such huge motion capture data has become a hot research topic worldwide.

Key-frame technology is an effective solution: the most important and most critical frames of a motion are selected as key frames to represent the entire motion sequence, while the remaining non-key frames, being less important, can be computed from the key frames by interpolation. Because key-frame extraction plays an irreplaceable role in representing the whole motion sequence, it not only speeds up data processing but also offers clear advantages in the storage, compression, browsing, and reuse of motion capture data.

In other words, "key-frame extraction of human motion" means automatically extracting a certain number of key poses from a motion sequence, so that the sequence is well summarized visually while the original motion can also be reconstructed from the key frames with a low error rate.

Key-frame extraction should satisfy two requirements. On the one hand, the key frames at a given compression rate should effectively summarize the original motion sequence. On the other hand, the key frames should allow the original sequence to be reconstructed as accurately as possible. Besides subjective visual judgment, two criteria evaluate the quality of key-frame extraction: error rate and compression ratio.

Key-frame extraction methods to date include uniform sampling, curve simplification, and clustering-based methods.

The human motion capture data is in BVH format. It consists of frame-by-frame motion data, each frame containing the pose information of the motion, and each pose is determined by all the joint points of the human body. The human skeleton model displayed after importing the BVH data into MATLAB is shown in Figure 2. The model contains 31 joint points (the ones used here are labeled), organized in a tree structure: the root node is the root of the tree-shaped skeleton, from which subtrees extend layer by layer toward each end joint. The root joint is represented by 3 translations and 3 rotations; every non-root joint is represented by 3 rotations, for a total of 96 degrees of freedom. The translation of the root determines the current position of the body and its rotation determines the body's orientation; the rotation of every other joint gives that joint's direction in the local coordinate system of its parent joint, and together they determine the pose.

The motion capture data is a sequence of human poses sampled at discrete time points; each sample is one frame, and the pose of each frame is jointly determined by the 31 joints. At any time i the pose is expressed as F_i = (p_i^(1), r_i^(1), r_i^(2), …, r_i^(31)), where p_i^(1) ∈ R³ and r_i^(1) ∈ R³ are the position and orientation of the root joint (its translation and rotation), and r_i^(j) ∈ R³, j = 2, …, 31, are the orientations of the non-root joints.

Summary of the Invention

The purpose of the present invention is to propose a new distance feature curve that reflects the essential characteristics of the motion and to perform key-frame extraction twice according to the amplitude of this curve, so that, for a given motion sequence, a certain number of key poses can be extracted automatically that summarize the motion well visually while allowing the original sequence to be reconstructed with a low error rate.

Based on the curve-simplification approach, the present invention provides a motion key-frame extraction method based on distance curve amplitude, comprising the following steps:

S1. Select a group of joint distances as the distance feature;

S2. Extract the first principal component with the PCA method and remove noise with a smoothing filter to obtain the feature curve;

S3. Obtain the initial key frames by extracting the local extremum points of the feature curve;

S4. Between adjacent initial key frames, compute the amplitude difference of the feature curve and insert additional key frames uniformly by uniform sampling;

S5. Merge the over-dense additional key frames and initial key frames, leaving the final key-frame set.

In step S1, the joint distances are selected according to the logical semantics of the following table:

d1: degree of bending of the left leg    d4: degree of bending of the left arm    d7: degree of bowing/raising/shaking the head
d2: degree of bending of the right leg   d5: degree of bending of the right arm   d8: degree of swing of the left arm
d3: distance between the two feet        d6: degree of bending at the waist       d9: degree of swing of the right arm

A motion frame is then represented as: θ = (d1, d2, d3, d4, d5, d6, d7, d8, d9), where d1, …, d9 are the joint distances with logical semantics selected in step S1.

The PCA method in step S2 is divided into the following five steps:

S21: Compute the mean of the 9 distance feature samples shown in S1:

θ̄_i = (1/T) Σ_{j=1}^{T} θ_{ji}  (i = 1…9, j = 1…T)

where i = 1…9 indexes the 9 distance feature samples and T is the total number of frames of the motion;

S22: Compute the differences Δθ_i between the original values of the 9 distance features and their means, then build the difference matrix D = [Δθ_1, …, Δθ_9];

S23: Compute the covariance matrix C = D·D^T, where D^T is the transpose of D;

S24: Compute the eigenvalues λ of the covariance matrix and the corresponding eigenvectors L;

S25: Extract the eigenvectors, reconstructing a new set of 9-dimensional principal components whose values are arranged in descending order of contribution rate; then extract the first principal component, i.e., the component with the largest contribution rate.

The motion M can then be expressed by its feature curve M = (m_1, m_2, …, m_{Nframe}), where Nframe is the total number of frames of the motion.

In addition, a Lowess smoothing filter is used in step S2.

In a preferred embodiment, step S4 is implemented as follows:

S41: Compute the amplitude difference of the feature curve between adjacent initial key frames;

S42: Set a threshold; if the amplitude difference is greater than or equal to the threshold, insert one or more additional key frames.

In a preferred embodiment, step S5 is implemented as follows: where the additional key frames and initial key frames are too dense, they are merged by limiting the number of frames between adjacent key frames.

Compared with the prior art, the present invention has the following advantages:

1. A new group of distance features is selected, and the extracted feature curve better reflects the essential characteristics of human motion.

2. Key frames are extracted twice: the first pass extracts boundary key frames at the local extremum points of the feature curve as initial key frames; the second pass automatically inserts frames by uniform sampling according to the amplitude differences of the feature curve and merges over-dense frames. The method is largely automatic, requiring no manually specified number of sampled frames, and adapts to the motion, extracting fewer key frames in gentle motion and more in intense motion. It summarizes the original motion well while keeping both the error rate and the compression rate low.

Brief Description of the Drawings

Figure 1. Flow chart of the present invention.

Figure 2. Human skeleton model displayed in MATLAB.

Figure 3. The nine selected joint distance features, θ = (d1, d2, d3, d4, d5, d6, d7, d8, d9).

Figure 4. Feature curve and key-frame distribution of the "kicking" motion; circles mark the initial key frames and asterisks mark the final key frames.

Figure 5. Comparison of different key-frame extraction methods on the "kicking" motion: (a) the present method; (b) uniform sampling, with ellipses marking oversampling and undersampling; (c) curve simplification, with ellipses marking oversampling and undersampling; (d) the quaternion-distance-only method.

Figure 6. Comparison of key frames extracted from similar motion types (walking as an example): (a) walking with large arm swing; (b) walking with small arm swing; (c) cheerful walking; (d) wild walking.

Figure 7. Compression rates of the present method on six different motion types (kicking, jumping, run-stop, walking, dancing, walk-jump-walk).

Figure 8. Reconstruction errors of four methods (the present method, the quaternion distance method, curve simplification, and uniform sampling) on six sample motions: (a) kicking (33 key frames extracted); (b) jumping (24 key frames); (c) run-stop (11 key frames); (d) walking (16 key frames); (e) dancing (37 key frames); (f) walk-jump-walk (50 key frames).

Detailed Description

The technical solution of the present invention is a motion key-frame extraction method based on distance curve amplitude: select a new group of joint distances as the distance feature; extract the first principal component with the PCA method and remove noise with a smoothing filter to obtain the feature curve; obtain the initial key frames from the local extremum points of the feature curve; between adjacent initial key frames, compute the amplitude difference of the feature curve and insert a corresponding number of frames by uniform sampling; finally, merge over-dense key frames to obtain the final key-frame set. Figure 1 shows the flow chart of the algorithm, which comprises the following steps:

1. Select a new group of joint distances as the distance feature.

The present invention selects nine human joint distances with logical semantics as the distance feature, reflecting the essential characteristics of the motion: θ = (d1, d2, d3, d4, d5, d6, d7, d8, d9); see Figure 3. Their logical semantics are listed in Table 1. This reduces the original 96-dimensional data to 9 dimensions.

Table 1. Logical semantics of the nine extracted distance features

d1: degree of bending of the left leg    d4: degree of bending of the left arm    d7: degree of bowing/raising/shaking the head
d2: degree of bending of the right leg   d5: degree of bending of the right arm   d8: degree of swing of the left arm
d3: distance between the two feet        d6: degree of bending at the waist       d9: degree of swing of the right arm

The joint distances can be chosen according to the needs of the specific motion capture task. The nine features selected here are the optimal choice for this embodiment and for most whole-body motion capture; for local motions, such as arm movements, the joint points of the arm should be selected specifically.
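As a concrete illustration, the nine distances can be computed directly from per-frame 3D joint positions. The sketch below is not the patent's implementation: the joint names and pairings are hypothetical stand-ins chosen to roughly match the logical semantics of Table 1 (e.g. left-leg bend measured as the hip-to-ankle distance, which shrinks as the knee bends).

```python
import numpy as np

def joint_distance(positions, joint_a, joint_b):
    """Euclidean distance between two named joints for every frame.

    positions: dict mapping joint name -> (T, 3) array of 3D coordinates.
    Joint names are illustrative, not the patent's exact markers.
    """
    return np.linalg.norm(positions[joint_a] - positions[joint_b], axis=1)

def distance_features(positions):
    """Stack nine semantic joint distances into a (T, 9) feature array.

    Each pair is a hypothetical proxy for one row of Table 1, e.g.
    d1 = hip-to-ankle distance as a proxy for left-leg bend.
    """
    pairs = [("lhip", "lankle"),       # d1: left-leg bend
             ("rhip", "rankle"),       # d2: right-leg bend
             ("lfoot", "rfoot"),       # d3: distance between the feet
             ("lshoulder", "lwrist"),  # d4: left-arm bend
             ("rshoulder", "rwrist"),  # d5: right-arm bend
             ("head", "root"),         # d6: waist bend (proxy)
             ("head", "chest"),        # d7: head motion (proxy)
             ("lwrist", "root"),       # d8: left-arm swing (proxy)
             ("rwrist", "root")]       # d9: right-arm swing (proxy)
    return np.stack([joint_distance(positions, a, b) for a, b in pairs], axis=1)
```

The resulting (T, 9) array is the θ sequence fed into the PCA step.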

2. Analyze the nine distance features with the PCA method for further dimensionality reduction and extract the first principal component; then apply Lowess smoothing with appropriate parameters to denoise the curve, yielding the distance feature curve.

The PCA analysis can be divided into the following five steps:

Step 1: Compute the mean of the 9 distance feature samples shown in S1:

θ̄_i = (1/T) Σ_{j=1}^{T} θ_{ji}  (i = 1…9, j = 1…T)

where i = 1…9 indexes the 9 distance feature samples and T is the total number of frames of the motion;

Step 2: Compute the differences Δθ_i between the original values of the 9 distance features and their means, then build the difference matrix D = [Δθ_1, …, Δθ_9];

Step 3: Compute the covariance matrix C = D·D^T, where D^T is the transpose of D;

Step 4: Compute the eigenvalues λ of the covariance matrix and the corresponding eigenvectors L;

Step 5: Extract the eigenvectors, reconstructing a new set of 9-dimensional principal components whose values are arranged in descending order of contribution rate; then extract the first principal component, i.e., the component with the largest contribution rate. The motion M can then be expressed by its feature curve M = (m_1, m_2, …, m_{Nframe}), where Nframe is the total number of frames of the motion.

Apply Lowess smoothing with appropriate parameters to denoise the curve; the resulting feature curve better reflects the original motion.
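The patent relies on MATLAB's built-in Lowess option; a minimal self-contained stand-in (locally weighted linear regression with tricube weights, without the robustness iterations of full Lowess) might look as follows. The window fraction `frac` is a free parameter, not a value from the patent.

```python
import numpy as np

def lowess_smooth(y, frac=0.1):
    """Minimal Lowess-style smoother: at each frame, fit a weighted
    linear regression over the k nearest frames using tricube weights.

    A simplified stand-in for MATLAB's 'lowess' smoothing; frac is the
    fraction of frames used in each local window (an assumed parameter).
    """
    y = np.asarray(y, dtype=float)
    T = len(y)
    x = np.arange(T, dtype=float)
    k = max(int(frac * T), 3)                   # frames per local window
    smoothed = np.empty(T)
    for i in range(T):
        dist = np.abs(x - x[i])
        idx = np.argsort(dist)[:k]              # k nearest frames
        d = dist[idx]
        w = (1.0 - (d / d.max()) ** 3) ** 3     # tricube weights
        sw = np.sqrt(w)                         # weighted least squares
        A = np.column_stack([np.ones(k), x[idx]]) * sw[:, None]
        b = y[idx] * sw
        coef, *_ = np.linalg.lstsq(A, b, rcond=None)
        smoothed[i] = coef[0] + coef[1] * x[i]
    return smoothed
```

Smoothing removes the small-scale noise that would otherwise create spurious local extrema in the next step.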

3. For a motion clip, key stores the initial key frames. Extract the local extremum points loc of the feature curve, which are taken as the boundary poses of the motion, to obtain the initial key frames key. The specific steps are as follows:

Step 1: Use the "findpeaks" function in MATLAB to obtain the local maxima;

Step 2: Negate the data and apply "findpeaks" again to obtain the local minima, then negate the data back for later use;

Step 3: Merge the local maxima and local minima to obtain the local extremum points loc, ordered by frame number from small to large. The resulting extremum points are taken as the initial key frames key.
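The three steps above (findpeaks on the curve, findpeaks on its negation, merge by frame number) can be mirrored outside MATLAB by direct neighbor comparison; a sketch:

```python
import numpy as np

def local_extrema(curve):
    """Frame indices of local maxima and minima of the feature curve,
    merged in ascending frame order -- mirroring the patent's use of
    findpeaks on the curve and on its negation.
    """
    c = np.asarray(curve, dtype=float)
    interior = np.arange(1, len(c) - 1)
    maxima = interior[(c[interior] > c[interior - 1]) & (c[interior] > c[interior + 1])]
    minima = interior[(c[interior] < c[interior - 1]) & (c[interior] < c[interior + 1])]
    return np.sort(np.concatenate([maxima, minima]))
```

The returned indices serve as the initial key frames key for the split-and-merge pass.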

These initial key frames are only boundary poses; the intensity of the motion has not yet been considered. To achieve a lower error and better summarize the motion, one or more frames need to be inserted between adjacent initial key frames. If the feature-curve amplitude difference between adjacent initial key frames is large, the motion between them is intense and the pose changes greatly; if the difference is small, the corresponding motion is gentle and the pose changes little. The following split-and-merge algorithm on the distance-feature-curve amplitude extracts the further key frames.

4. For a motion clip, key stores the initial key frames and Lkey stores the final key frames. Between the initial key frames, key frames are split or merged based on the amplitude of the distance feature curve to obtain the final key frames Lkey. The procedure is as follows:

Step 1: From the feature curve, compute the amplitude difference vary between adjacent initial key frames key1 and key2;

Step 2: Set a threshold. If vary < threshold, the poses of the adjacent initial key frames do not change drastically and no frames need to be inserted; otherwise, the change between them is drastic and one or more frames must be inserted. For example, if vary = 0.4, no frames are inserted; if vary = 10, ten frames are inserted between key1 and key2 by uniform sampling and added to the initial key frames key. This yields a key-frame set;

Step 3: Adjacent key frames in the resulting set may be too dense, so over-dense key frames must be merged to obtain the final key frames. Merging is done by limiting the number of frames between adjacent key frames: a key frame is saved into the final set Lkey only if the number of frames separating it from its neighbor exceeds a threshold.
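A compact sketch of the split-and-merge pass follows. The rule for how many frames to insert (here round(vary), matching the patent's vary = 10 → 10 frames example) and the two threshold values are assumptions, since the patent leaves them open:

```python
import numpy as np

def split_and_merge(curve, init_keys, split_thresh=1.0, min_gap=5):
    """Second-pass key-frame selection on the feature curve.

    Between adjacent initial key frames, if the amplitude difference
    vary exceeds split_thresh, insert round(vary) uniformly sampled
    frames; then merge over-dense frames by keeping a frame only if it
    is at least min_gap frames from the last kept key frame.
    Both thresholds are assumed values, not from the patent.
    """
    curve = np.asarray(curve, dtype=float)
    keys = sorted(init_keys)
    candidates = []
    for k1, k2 in zip(keys[:-1], keys[1:]):
        candidates.append(k1)
        vary = abs(curve[k2] - curve[k1])      # feature-curve amplitude difference
        if vary >= split_thresh:
            n = int(round(vary))               # e.g. vary = 10 -> insert 10 frames
            extra = np.linspace(k1, k2, n + 2)[1:-1]
            candidates.extend(int(round(e)) for e in extra)
    candidates.append(keys[-1])
    seq = sorted(set(candidates))
    final = [seq[0]]                           # merge over-dense key frames
    for k in seq[1:]:
        if k - final[-1] >= min_gap:
            final.append(k)
    return final
```

More frames are thus inserted where the curve changes steeply (intense motion) and none where it is flat (gentle motion).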

In implementing the present invention, the values of specific parameters are determined as needed; that is, value ranges, thresholds, and the like are not specifically limited and may be chosen according to the situation.

A specific embodiment of the present invention is described below. The embodiment is implemented on the premise of the technical solution of the present invention, with a detailed implementation and a concrete operating procedure, but the protection scope of the present invention is not limited to the following embodiment. The sample motions are selected from the CMU database (the motion capture database created by the Graphics Lab of Carnegie Mellon University).

The specific implementation steps are:

Step 1: Select a representative motion from the CMU database, a "kicking" motion (802 frames in total). Extract the nine joint distances as the distance feature θ = (d1, d2, d3, d4, d5, d6, d7, d8, d9).

Step 2: Analyze the nine-dimensional distance feature with the PCA method for further dimensionality reduction and extract its first principal component; apply Lowess smoothing with appropriate parameters to denoise the curve, obtaining the distance feature curve needed for key-frame extraction.

Step 3: Use the "findpeaks" function in MATLAB to find the local extremum points loc of the feature curve, taken as the boundary poses of the motion, and obtain the initial key frames key. See Figure 4.

Step 4: Split and merge key frames based on the amplitude of the distance feature curve; Lkey stores the final key frames. See Figure 4, in which the local extremum points extracted as the initial key frames key are marked by circles and the final key frames Lkey by asterisks. Studying the "kicking" motion shows that it is irregular: the first half is relatively slow with small pose changes, while the middle and later parts are intense with large pose changes. When the motion is slow, the corresponding feature-curve amplitude is small and no extra key frames need to be inserted; when the motion is intense, new key frames must be split out between adjacent initial key frames according to the amplitude difference, because the initial key frames alone do not capture the change well, and over-dense key frames are then merged. In this way the final key-frame set is obtained effectively.

Step 5: Comparison of key-frame extraction algorithms. Four methods were applied: the present method, uniform sampling, curve simplification, and the quaternion-distance-only method, each extracting the same number of key frames (the same compression ratio) from the "kicking" motion. The comparison is shown in Figure 5. The 33 key frames extracted by the present method summarize the motion well and avoid both oversampling and undersampling.

Step 6: Comparison of key frames for similar motions using the present method. Four types of walking motion within a fixed frame range (130 frames) were chosen; see Figure 6. As Figures 6(a)-(d) show, the key frames extracted by the present invention clearly distinguish similar motions.

Step 7: Six different types of motion sequence were tested with the present method: kicking, jumping, run-stop, walking, dancing, and walk-jump-walk. See Figure 7. Table 2 shows that the compression rates obtained by the present method are within 10%.

Table 2. Comparison of compression ratios for the six motion types

Type             | Kicking | Jumping | Run-stop | Walking | Dancing | Walk-jump-walk
Total frames     | 802     | 439     | 239      | 343     | 1033    | 1200
Key frames       | 33      | 24      | 11       | 16      | 37      | 50
Compression (%)  | 4.1     | 5.5     | 4.6      | 4.6     | 3.5     | 4.3

The absolute mean error is then computed with the following formula:

E = [∑(F(n) - F'(n))²] / N

Here, F(n) is the original motion data, F'(n) is the corresponding reconstructed motion data, and N is the total number of frames of the motion multiplied by 96 degrees of freedom.
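The error formula is straightforward to compute; the sketch below assumes the motion data is arranged as a (frames × 96) array of degree-of-freedom values:

```python
import numpy as np

def reconstruction_error(original, reconstructed):
    """Absolute mean error E = sum((F - F')^2) / N, with N equal to the
    total number of frames times 96 degrees of freedom, as defined in
    the description.

    original, reconstructed: (T, 96) arrays of per-frame DOF values.
    """
    assert original.shape == reconstructed.shape
    N = original.size                        # T frames x 96 DOF
    return np.sum((original - reconstructed) ** 2) / N
```

Feeding in the original sequence and the sequence rebuilt from the key frames by linear interpolation gives the error values reported in Table 3.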

As Table 3 shows, the absolute mean error after reconstruction was computed for the six sample motions; the numbers in brackets are the numbers of key frames extracted. Figure 8 shows the error comparison for the six sample motions.

Table 3. Comparison of the error rates of the four methods


The four methods were used to extract the same number of key frames; the motion sequence was then reconstructed by linear interpolation to obtain the reconstruction error. The present method achieves a lower error rate than the other methods at the same compression ratio, because it extracts the important key frames. The table shows that the present method has a clear advantage on both irregular and regular motions.

The above is only a preferred embodiment of the present invention, but the scope of protection of the present invention is not limited thereto. Any equivalent replacement or modification made by a person skilled in the art, within the technical scope disclosed by the present invention and according to the technical solution and inventive concept of the present invention, shall fall within the protection scope of the present invention.

Claims (5)

1. A motion key frame extraction method based on distance curve amplitude, characterized by comprising the steps of:
S1. selecting a group of joint distances as distance features;
S2. applying the PCA method to extract the first principal component, and applying smoothing filtering to remove noise, obtaining a characteristic curve;
S3. obtaining initial key frames by extracting the local extremum points on the characteristic curve;
S4. between adjacent said initial key frames, calculating the amplitude difference of the characteristic curve, and then inserting additional key frames by uniform sampling accordingly;
S5. merging overly dense said additional key frames and said initial key frames, keeping the final key frame set.
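As an illustration only (not part of the claims), step S3, extracting the local extremum points of the characteristic curve, can be sketched as follows; always keeping the first and last frames is an added assumption, not stated in the claim:

```python
import numpy as np

def local_extrema(curve):
    """Indices of local maxima/minima of a 1-D characteristic curve,
    plus the first and last frame (an assumed convention)."""
    c = np.asarray(curve, dtype=float)
    idx = [0]
    for i in range(1, len(c) - 1):
        if (c[i] - c[i - 1]) * (c[i + 1] - c[i]) < 0:  # slope changes sign
            idx.append(i)
    idx.append(len(c) - 1)
    return idx

curve = np.sin(np.linspace(0, 4 * np.pi, 81))  # two full periods
print(local_extrema(curve))                    # peaks and troughs of the sine
```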
2. The motion key frame extraction method based on distance curve amplitude according to claim 1, characterized in that in step S1, the joint distances are selected according to the logical semantics in the following table:
D1: bending degree of the left leg     D4: bending degree of the left arm     D7: degree of bowing / raising / shaking the head
D2: bending degree of the right leg    D5: bending degree of the right arm    D8: waving degree of the left arm
D3: distance between the two feet      D6: degree of bending over             D9: waving degree of the right arm
A motion frame is then represented as θ = (d1, d2, d3, d4, d5, d6, d7, d8, d9), where d1, ..., d9 are the joint distances selected with logical semantics in step S1.
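For illustration, one plausible way to assemble such a 9-dimensional distance-feature vector from joint positions is sketched below; the joint names and the specific distance definitions are assumptions for the sketch, not the patent's skeleton specification:

```python
import numpy as np

def joint_distances(j):
    """Nine semantic joint distances in the spirit of claim 2.  `j` maps
    joint names to 3-D positions; names and definitions are illustrative."""
    d = np.linalg.norm
    return np.array([
        d(j["lhip"] - j["lfoot"]),       # D1: bending degree of the left leg
        d(j["rhip"] - j["rfoot"]),       # D2: bending degree of the right leg
        d(j["lfoot"] - j["rfoot"]),      # D3: distance between the two feet
        d(j["lshoulder"] - j["lhand"]),  # D4: bending degree of the left arm
        d(j["rshoulder"] - j["rhand"]),  # D5: bending degree of the right arm
        d(j["head"] - j["root"]),        # D6: degree of bending over
        d(j["head"] - j["neck"]),        # D7: head bow/raise/shake (crude proxy)
        d(j["lhand"] - j["root"]),       # D8: waving degree of the left arm
        d(j["rhand"] - j["root"]),       # D9: waving degree of the right arm
    ])

# A toy pose: feet 0.3 units apart, every other joint at the origin.
pose = {k: np.zeros(3) for k in ["lhip", "rhip", "lfoot", "rfoot", "lshoulder",
                                 "rshoulder", "lhand", "rhand", "head", "neck", "root"]}
pose["rfoot"] = np.array([0.3, 0.0, 0.0])
theta = joint_distances(pose)
print(theta.shape)  # one 9-dimensional feature vector per frame
```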
3. The motion key frame extraction method based on distance curve amplitude according to claim 1 or 2, characterized in that the PCA method in step S2 comprises the following five steps:
S21: compute the mean of each of the 9 distance feature samples of S1:
θ̄_i = (1/T) Σ_{j=1}^{T} θ_{ji}   (i = 1..9, j = 1..T)
where i = 1..9 indexes the 9 distance feature samples and T is the total number of frames of the motion;
S22: compute the difference between the original values and the mean of each of the 9 distance features, Δθ_i = θ_i − θ̄_i, and then build the difference matrix of the distance features D = [Δθ_1, ..., Δθ_9];
S23: compute the covariance matrix C = DDᵀ, where Dᵀ is the transpose of D;
S24: compute the eigenvalues λ of the covariance matrix and the corresponding eigenvectors L;
S25: from the eigenvectors, reconstruct a new group of 9-dimensional principal components, arranged by contribution rate from large to small; then extract the first principal component, i.e. the one with the maximum contribution rate, so that the motion M can be represented as M = (θ̂_1, θ̂_2, ..., θ̂_{N_frame}), where θ̂_t is the first-principal-component value of frame t and N_frame is the total number of frames of the motion.
In addition, Lowess smoothing filtering is used in step S2.
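Steps S21–S25 amount to standard PCA. A dependency-light sketch follows; the Lowess smoothing specified for step S2 is omitted here and could be applied to the returned curve afterwards (e.g. with statsmodels' `lowess`):

```python
import numpy as np

def first_principal_component(features):
    """Project T frames of 9 joint-distance features onto the first
    principal component (steps S21-S25).  features: shape (T, 9)."""
    X = np.asarray(features, dtype=float)
    mean = X.mean(axis=0)                 # S21: per-feature mean
    D = X - mean                          # S22: difference matrix
    C = D.T @ D                           # S23: covariance (claim's DD^T, features-major)
    eigvals, eigvecs = np.linalg.eigh(C)  # S24: eigenvalues and eigenvectors
    order = np.argsort(eigvals)[::-1]     # arrange by contribution, large to small
    first = eigvecs[:, order[0]]          # S25: maximum-contribution direction
    return D @ first                      # characteristic curve: one value per frame

# Synthetic check: 9 noisy channels sharing one 1-D structure.
rng = np.random.default_rng(0)
base = np.sin(np.linspace(0, 2 * np.pi, 200))
X = np.outer(base, rng.normal(size=9)) + 0.01 * rng.normal(size=(200, 9))
curve = first_principal_component(X)
corr = abs(np.corrcoef(curve, base)[0, 1])  # curve matches base up to sign/scale
print(round(corr, 3))
```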
4. The motion key frame extraction method based on distance curve amplitude according to claim 3, characterized in that step S4 is implemented as follows:
S41: calculate the amplitude difference of the characteristic curve between adjacent said initial key frames;
S42: set a threshold; if said amplitude difference ≥ said threshold, insert one or more said additional key frames.
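A sketch of S41/S42 under one reading of "insert uniformly": the number of inserted frames grows with the amplitude difference, one extra frame per threshold step (that proportionality rule is an assumption of this sketch):

```python
import numpy as np

def insert_uniform(curve, key_idx, threshold):
    """Between adjacent initial key frames (S41), insert uniformly spaced
    additional key frames when the amplitude difference of the
    characteristic curve reaches the threshold (S42)."""
    curve = np.asarray(curve, dtype=float)
    out = []
    for a, b in zip(key_idx[:-1], key_idx[1:]):
        out.append(a)
        diff = abs(curve[b] - curve[a])
        if diff >= threshold:
            n_extra = int(diff // threshold)             # one frame per threshold step
            extras = np.linspace(a, b, n_extra + 2)[1:-1]
            out.extend(int(round(e)) for e in extras)
    out.append(key_idx[-1])
    return out

curve = np.linspace(0.0, 3.0, 31)  # amplitude rises by 3.0 over 30 frames
print(insert_uniform(curve, [0, 30], 1.5))
```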
5. The motion key frame extraction method based on distance curve amplitude according to claim 4, characterized in that step S5 is implemented as follows:
where said additional key frames and said initial key frames are overly dense, they are merged by limiting the number of frames between adjacent key frames.
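The merging by "limiting the number of frames between adjacent key frames" in claim 5 might look like this; keeping the first frame of each dense run is an assumed tie-break, not specified by the claim:

```python
def merge_dense(key_idx, min_gap):
    """Drop key frames closer than min_gap frames to the previously kept
    one, so adjacent key frames are at least min_gap frames apart."""
    kept = [key_idx[0]]
    for k in key_idx[1:]:
        if k - kept[-1] >= min_gap:
            kept.append(k)
    return kept

print(merge_dense([0, 2, 3, 10, 11, 25], 5))  # → [0, 10, 25]
```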
CN2012105660916A 2012-12-24 2012-12-24 Motion key frame extracting method based on distance curve amplitudes Pending CN103218824A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN2012105660916A CN103218824A (en) 2012-12-24 2012-12-24 Motion key frame extracting method based on distance curve amplitudes

Publications (1)

Publication Number Publication Date
CN103218824A true CN103218824A (en) 2013-07-24

Family

ID=48816567

Family Applications (1)

Application Number Title Priority Date Filing Date
CN2012105660916A Pending CN103218824A (en) 2012-12-24 2012-12-24 Motion key frame extracting method based on distance curve amplitudes

Country Status (1)

Country Link
CN (1) CN103218824A (en)

Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1967525A (en) * 2006-09-14 2007-05-23 浙江大学 Extraction method of key frame of 3d human motion data

Non-Patent Citations (5)

* Cited by examiner, † Cited by third party
Title
吴刚: Implicit Polynomial Curve Theory and Applications (《隐式多项式曲线理论与应用》), 31 December 2011, pages 39-42 *
彭淑娟: "Key-frame extraction of human motion sequences based on center-distance features", Journal of System Simulation, vol. 24, no. 3, 31 March 2012 (2012-03-31), pages 565-569 *
杨跃东 et al.: "Human motion capture data based on geometric features", Journal of System Simulation, vol. 19, no. 10, 31 May 2007 (2007-05-31), pages 2229-2234 *
沈军行 et al.: "Extracting key frames from motion capture data", Journal of Computer-Aided Design & Computer Graphics, vol. 16, no. 5, 31 May 2004 (2004-05-31), pages 719-723 *
肖俊 et al.: "Visualization and interactive segmentation of 3D human motion features", Journal of Software, vol. 19, no. 8, 31 August 2008 (2008-08-31), pages 1995-2003 *

Cited By (27)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103679747A (en) * 2013-11-15 2014-03-26 南昌大学 Key frame extraction method of motion capture data
CN103679747B (en) * 2013-11-15 2016-08-17 南昌大学 A kind of key frame extraction method of motion capture data
CN103942778A (en) * 2014-03-20 2014-07-23 杭州禧颂科技有限公司 Fast video key frame extraction method of principal component characteristic curve analysis
CN103927776A (en) * 2014-03-28 2014-07-16 浙江中南卡通股份有限公司 Animation curve optimization method
CN104200455B (en) * 2014-06-13 2017-09-15 北京工业大学 A kind of key poses extracting method based on movement statistics signature analysis
CN104331904A (en) * 2014-10-30 2015-02-04 大连大学 Three-dimensional human motion key frame extracting method based on fuse of improved LLE and PCA
CN104331911A (en) * 2014-11-21 2015-02-04 大连大学 Improved second-order oscillating particle swarm optimization based key frame extraction method
CN104463788A (en) * 2014-12-11 2015-03-25 西安理工大学 Human motion interpolation method based on motion capture data
CN104463788B (en) * 2014-12-11 2018-02-16 西安理工大学 Human motion interpolation method based on movement capturing data
CN105931270B (en) * 2016-04-27 2018-03-27 石家庄铁道大学 Video key frame extracting method based on gripper path analysis
CN106504267A (en) * 2016-10-19 2017-03-15 东南大学 A kind of motion of virtual human data critical frame abstracting method
CN106504267B (en) * 2016-10-19 2019-05-17 东南大学 A kind of motion of virtual human data critical frame abstracting method
CN107993275A (en) * 2016-10-26 2018-05-04 佳能株式会社 Method and system for the dynamic sampling of the slow-action curve of animation
CN106791480A (en) * 2016-11-30 2017-05-31 努比亚技术有限公司 A kind of terminal and video skimming creation method
CN108121973A (en) * 2017-12-25 2018-06-05 江苏易乐网络科技有限公司 Key frame extraction method of motion capture data based on principal component analysis
CN110532837A (en) * 2018-05-25 2019-12-03 九阳股份有限公司 Image processing method and household appliance in a kind of article fetching process
CN109241956A (en) * 2018-11-19 2019-01-18 Oppo广东移动通信有限公司 Method, apparatus, terminal and the storage medium of composograph
CN109241956B (en) * 2018-11-19 2020-12-22 Oppo广东移动通信有限公司 Method, device, terminal and storage medium for synthesizing images
CN110415336A (en) * 2019-07-12 2019-11-05 清华大学 High-precision human body posture reconstruction method and system
CN110415336B (en) * 2019-07-12 2021-12-14 清华大学 High-precision human body posture reconstruction method and system
CN113407371A (en) * 2020-12-03 2021-09-17 腾讯科技(深圳)有限公司 Data anomaly monitoring method and device, computer equipment and storage medium
CN113407371B (en) * 2020-12-03 2024-05-10 腾讯科技(深圳)有限公司 Data anomaly monitoring method, device, computer equipment and storage medium
CN112698196B (en) * 2021-03-24 2021-06-08 深圳市三和电力科技有限公司 High-voltage switch mechanical characteristic monitoring device
CN112698196A (en) * 2021-03-24 2021-04-23 深圳市三和电力科技有限公司 High-voltage switch mechanical characteristic monitoring device
CN113327228A (en) * 2021-05-26 2021-08-31 Oppo广东移动通信有限公司 Image processing method and device, terminal and readable storage medium
CN113327228B (en) * 2021-05-26 2024-04-16 Oppo广东移动通信有限公司 Image processing method and device, terminal and readable storage medium
CN114550071A (en) * 2022-03-22 2022-05-27 北京壹体科技有限公司 Method, device and medium for automatically identifying and capturing track and field video action key frames

Similar Documents

Publication Publication Date Title
CN103218824A (en) Motion key frame extracting method based on distance curve amplitudes
CN110348330B (en) Face pose virtual view generation method based on VAE-ACGAN
CN102521843B (en) Three-dimensional human body motion analysis and synthesis method based on manifold learning
CN111354017A (en) A Target Tracking Method Based on Siamese Neural Network and Parallel Attention Module
CN107644217B (en) Target tracking method based on convolutional neural network and related filter
CN107609572A (en) Multi-modal emotion identification method, system based on neutral net and transfer learning
CN105427308B (en) A kind of sparse and dense characteristic mates the method for registering images for combining
CN109948573A (en) A Noise Robust Face Recognition Method Based on Cascaded Deep Convolutional Neural Networks
CN106649663B (en) A kind of video copying detection method based on compact video characterization
CN113837959B (en) Image denoising model training method, image denoising method and system
CN110458235A (en) A method for comparison of motion posture similarity in video
CN106886986A (en) Image interfusion method based on the study of self adaptation group structure sparse dictionary
CN103679747B (en) A kind of key frame extraction method of motion capture data
CN116712083A (en) QRS complex wave detection method based on U-Net network
CN104331911A (en) Improved second-order oscillating particle swarm optimization based key frame extraction method
CN108647334B (en) A method for homology analysis of video social network under spark platform
CN104156986A (en) Motion capture data key frame extracting method based on local linear imbedding
CN103345623A (en) Behavior recognition method based on robust relative attributes
CN104200489A (en) Motion capture data key frame extraction method based on multi-population genetic algorithm
CN102819549B (en) Based on the human motion sequences segmentation method of Least-squares estimator characteristic curve
CN104537694A (en) Online learning offline video tracking method based on key frames
CN104463802A (en) Non-convex compressed sensing image reconstruction method based on variable scale over-complete dictionaries
CN110188181A (en) Field keyword determines method, apparatus, electronic equipment and storage medium
CN116883684A (en) Graph characterization method based on point-to-edge conversion and automatic encoder
CN110910310B (en) A face image reconstruction method based on identity information

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C05 Deemed withdrawal (patent law before 1993)
WD01 Invention patent application deemed withdrawn after publication

Application publication date: 20130724