
CN104463788B - Human motion interpolation method based on movement capturing data - Google Patents

Human motion interpolation method based on movement capturing data

Info

Publication number
CN104463788B
CN104463788B CN201410764271.4A
Authority
CN
China
Prior art keywords
motion
interpolation
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201410764271.4A
Other languages
Chinese (zh)
Other versions
CN104463788A (en)
Inventor
赵明华
原永芹
莫瑞阳
丁晓枫
曹慧
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Xi'an Feidie Virtual Reality Technology Co ltd
Original Assignee
Xian University of Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Xian University of Technology filed Critical Xian University of Technology
Priority to CN201410764271.4A priority Critical patent/CN104463788B/en
Publication of CN104463788A publication Critical patent/CN104463788A/en
Application granted granted Critical
Publication of CN104463788B publication Critical patent/CN104463788B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00Geometric image transformations in the plane of the image
    • G06T3/40Scaling of whole images or parts thereof, e.g. expanding or contracting
    • G06T3/4007Scaling of whole images or parts thereof, e.g. expanding or contracting based on interpolation, e.g. bilinear interpolation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/20Analysis of motion
    • G06T7/262Analysis of motion using transform domain methods, e.g. Fourier domain methods
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20048Transform domain processing

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Mathematical Physics (AREA)
  • Multimedia (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The invention discloses a human motion interpolation method based on motion capture data. First, the two loaded motion sequence files are parsed and converted to a common coordinate system. Then, from the motion data expressed in the world coordinate system, two features are extracted: the front-rear positional relationship of the two feet and the temporal variation of the step length. Based on these two features and the periodic regularity of human motion, the long motion sequences are divided into short motion segments. Finally, the most similar frame pair within the same short segment is selected, the key frame pair is determined from it, and the two sequences are connected into a new motion: quaternion spherical interpolation generates the rotation angles of the transition frames, and linear interpolation generates the root-node translations. Because the key frame pair is determined within the same short segment after division, the interpolation transition order is consistent with the visual logic of human perception, yielding a good visual result.

Description

Human motion interpolation method based on motion capture data

Technical Field

The invention belongs to the field of computer vision and specifically relates to a human motion interpolation method based on motion capture data.

Background

With the rapid development of computer graphics and related technologies, motion capture has gradually become a standard means of data acquisition in virtual reality, computer vision, film and television production, games and entertainment, and computer animation. However, capture systems are expensive, data acquisition imposes strict requirements on the recording environment, and animation production demands high-quality performances from the actors. These constraints make the reuse of motion data a very meaningful research direction. Reuse research builds motion networks from data already held in databases through editing, blending, splicing and synthesis, generating rich and varied new motion sequences and virtual actions that meet specific needs. This greatly improves productivity in computer animation, virtual reality and related fields while reducing production cost.

Keyframe interpolation is an important technique widely used in research on the reuse of human motion data. Its basic principle is to first obtain or create several key frames of an animation sequence and then generate the intermediate transition frames by interpolation. Key frame extraction mainly involves two aspects: motion features and quantitative analysis of key frames. Because human motion involves many degrees of freedom and the motion data are high-dimensional, analysing and extracting features directly from the raw data is computationally expensive. Current methods therefore first reduce the dimensionality of the raw data and extract key frames from the reduced representation, which greatly lowers the computational cost. Keyframe interpolation then generates the intermediate frames directly from the given key frames. Common interpolation algorithms include linear interpolation, quaternion interpolation, cubic spline interpolation and bilinear interpolation. Representative work includes that of Ashraf and Wong, who used bilinear interpolation to generate new motions from two or more given motions, and Rose et al., who proposed an inverse kinematics method with motion constraints in which the human model is divided into three parts and new motions are synthesised by interpolating frames for each part. Quaternion spherical interpolation maps quaternion rotations onto the unit four-dimensional sphere and gradually reduces the angle between two quaternions, producing a smooth transition between two given poses; it is frequently used in keyframe interpolation research. In 3D human animation, however, the complexity of the body model, the high dimensionality of the motion data and the sensitivity of human vision to body motion make it very challenging to reduce the error of key-frame-pair extraction and to generate highly realistic motion with pure interpolation, so kinematic or other domain knowledge is usually required.

Summary of the Invention

The purpose of the present invention is to provide a human motion interpolation method based on motion capture data: a new keyframe interpolation method based on feature analysis, motion-cycle segmentation and transition-frame-pair matching.

The technical solution adopted by the present invention is a human motion interpolation method based on motion capture data. First, the two loaded motion sequence files (BVH files) are parsed and converted to a common coordinate system. Then, from the motion data in the world coordinate system, the front-rear positional relationship of the two feet and the step-length timing feature are extracted, and according to the regularity of human motion these two features are used to divide the long motion sequences into short motion segments. Finally, the most similar frame pair within the same short segment is selected and the key frame pair is determined from it; based on the key frame pair, quaternion spherical interpolation generates the rotation angles and linear interpolation generates the root-node translation, connecting the two sequences into a new motion.

The present invention is further characterised as follows.

The method specifically comprises the following steps:

Step 1: load and parse the motion sequence files (BVH files), and convert the relative position information of the motion data, given in local coordinate systems, into absolute position information in the world coordinate system;

Step 2: based on the absolute position information in the world coordinate system obtained in step 1, extract the spatial position relationship and temporal relationship features of the joints and, exploiting the periodic regularity of human motion, use the front-rear foot position feature and the step-length timing feature to divide each long motion sequence into multiple short motion sequences;

Step 3: based on the segmentation result obtained in step 2, align the short segments by their frame sequences in time and, within the same short segment, take the frame pair with the smallest Euclidean distance as the most similar pose, thereby determining the key frame pair;

Step 4: based on the key frame pair from step 3, generate the rotation angles of the intermediate transition frames with the quaternion spherical interpolation algorithm and the root-node translations of the intermediate transition frames with linear interpolation.

In step 1, a motion sequence file consists of two parts: the skeleton and the motion data. The skeleton part is parsed first with a token-passing method, reading each keyword, integer, floating-point value and string in the file in turn; the motion data part is then parsed in the order defined by the skeleton structure.

The absolute position of each joint of the human motion data in the world coordinate system is obtained recursively; the conversion formula is given in (1):

p_i^(j) = T_i^(root) R_i^(root) ... R_i^(k) ... p_0^(j)    (1)

where p_i^(j) denotes the coordinates of joint N_j in the world coordinate system at time i of the motion sequence; T_i^(root) and R_i^(root) denote the translation and rotation transformation matrices of the root node, respectively; R_i^(k) denotes the rotation transformation matrix of joint N_k relative to its direct parent in the skeleton structure; N_k is any node on the path from the root node to node N_j in the tree-shaped human skeleton; and p_0^(j) denotes the initial offset of N_j in the local coordinate system of its parent node.
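
The coordinate conversion of formula (1) can be illustrated with a short forward-kinematics sketch. This is only a hedged illustration, not the patent's implementation: the joint objects and their attributes (offset, parent, rotation(frame), translation(frame)) are assumptions made for the example, and the Z-Y-X Euler convention is the one most commonly found in BVH files.

```python
import numpy as np

def euler_to_matrix(z, y, x):
    """Rotation matrix for Euler angles applied in Z, Y, X order (degrees)."""
    z, y, x = np.radians([z, y, x])
    rz = np.array([[np.cos(z), -np.sin(z), 0.0], [np.sin(z), np.cos(z), 0.0], [0.0, 0.0, 1.0]])
    ry = np.array([[np.cos(y), 0.0, np.sin(y)], [0.0, 1.0, 0.0], [-np.sin(y), 0.0, np.cos(y)]])
    rx = np.array([[1.0, 0.0, 0.0], [0.0, np.cos(x), -np.sin(x)], [0.0, np.sin(x), np.cos(x)]])
    return rz @ ry @ rx

def world_position(joint, frame):
    """Formula (1)-style recursion: walk from joint N_j up to the root,
    applying each ancestor's rotation and, at the root, its translation.

    `joint` is assumed to expose .offset (the local offset p0), .parent,
    .rotation(frame) -> (z, y, x) Euler angles and, for the root node,
    .translation(frame) -> (tx, ty, tz).
    """
    pos = np.asarray(joint.offset, dtype=float)       # p0^(j): offset in the parent's frame
    node = joint.parent
    while node is not None:
        pos = euler_to_matrix(*node.rotation(frame)) @ pos + np.asarray(node.offset, dtype=float)
        if node.parent is None:                        # root node: add T_i^(root)
            pos = pos + np.asarray(node.translation(frame), dtype=float)
        node = node.parent
    return pos
```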

In step 2, human motion is periodic in nature; for locomotion in particular, whatever the style, the two feet alternately move forward. Based on this regularity, the forward spatial relationship of the two feet and the step-length timing feature are used as the basis for dividing the long motion sequences. The two segmentation functions are expressed as follows:

Pace_changed = { 1 if (Dist_feet changed); 0 otherwise }    (2)

Front_foot = { 1 if (right_foot front); 0 otherwise }    (3)

Here the Pace_changed function indicates whether, at a given moment, the forward step length of the two feet changes from increasing to decreasing or from decreasing to increasing. If so, the function value is set to 1, meaning that the step length at that moment is the maximum or minimum step length of the short segment and that this moment is defined as a segmentation point of the motion; otherwise it is set to 0, marking the moment as a non-segmentation point. The Front_foot function indicates whether the right foot is in front of the left foot at a given moment: 1 when the right foot is in front, 0 otherwise.
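
As a rough illustration (not the patented implementation), the two features and the resulting segmentation points could be computed from the per-frame world positions of the two feet as follows; the use of a single forward coordinate per foot and the array layout are assumptions made for the sketch.

```python
import numpy as np

def segmentation_points(left_foot_fwd, right_foot_fwd):
    """Split a locomotion sequence at the extrema of the forward step length.

    left_foot_fwd, right_foot_fwd: per-frame forward coordinates of the two
    feet in the world frame, shape (n_frames,).
    Returns (front_foot, pace_changed, cut_frames).
    """
    left = np.asarray(left_foot_fwd, dtype=float)
    right = np.asarray(right_foot_fwd, dtype=float)

    # Formula (3): 1 when the right foot is in front of the left foot.
    front_foot = (right > left).astype(int)

    # Forward step length between the two feet at every frame.
    dist_feet = np.abs(right - left)

    # Formula (2): 1 when the step length switches from increasing to
    # decreasing or vice versa, i.e. at a local maximum or minimum.
    diff = np.diff(dist_feet)
    pace_changed = np.zeros(len(dist_feet), dtype=int)
    sign_change = np.sign(diff[1:]) * np.sign(diff[:-1]) < 0
    pace_changed[1:-1] = sign_change.astype(int)

    cut_frames = np.nonzero(pace_changed)[0]
    return front_foot, pace_changed, cut_frames
```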

In step 3, based on the segmentation result from step 2, formula (4) aligns the frame sequences in time within a short segment to obtain the matching frame pairs: f_1 and f_2, and f_1′ and f_2′, denote the start and end frames of the two segments, and the matching frame pair (f_i, f_i′) is computed. The most similar frame pair is then selected from the matching pairs as the key frame pair, using the commonly used Euclidean distance. If the minimum Euclidean distance is obtained at D(P_i, P_j), the next frame is taken, so that the key frame pair is (P_i, P_{j+1}). The alignment and minimum-distance formulas are (4) and (5):

f_i′ = f_1′ + ((f_2′ - f_1′) / (f_2 - f_1)) · (f_i - f_1)    (4)

D(P_i, P_j) = min Σ_{k=1}^{n} w_k ||p_i^k - p_j^k||    (5)

where p_i^k and p_j^k denote the positions of the k-th joint in frame i and frame j of the two motion sequences, respectively, and w_k denotes the weight of the k-th joint in the human skeleton; in general, joints closer to the root node have larger weights.
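
A minimal sketch of the alignment of formula (4) and the weighted distance of formula (5): segment boundaries, joint position arrays and joint weights are assumed inputs, and this is illustrative rather than the patent's exact procedure.

```python
import numpy as np

def aligned_frame(fi, f1, f2, f1p, f2p):
    """Formula (4): map frame fi of segment [f1, f2] onto segment [f1p, f2p]."""
    return f1p + (f2p - f1p) / (f2 - f1) * (fi - f1)

def key_frame_pair(pos_a, pos_b, f1, f2, f1p, f2p, weights):
    """Formulas (4)-(5): find the most similar aligned frame pair and return
    the key frame pair (i, j + 1).

    pos_a, pos_b: joint positions in world coordinates, shape (n_frames, n_joints, 3).
    weights: per-joint weights w_k, shape (n_joints,).
    """
    best = (np.inf, None)
    for fi in range(f1, f2 + 1):
        fj = int(round(aligned_frame(fi, f1, f2, f1p, f2p)))
        # Weighted sum of per-joint Euclidean distances, formula (5).
        d = np.sum(weights * np.linalg.norm(pos_a[fi] - pos_b[fj], axis=-1))
        if d < best[0]:
            best = (d, (fi, fj))
    i, j = best[1]
    return i, min(j + 1, f2p)   # the next frame of the second segment becomes the key frame
```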

In step 4, formulas (6) and (7) convert between Euler rotation angles and quaternions: the Euler angles are the rotation angles about Z, Y and X, and the corresponding quaternion after conversion is q = [w a b c];

Formula (8) generates the transition rotation angles by quaternion spherical interpolation: p_0 and p_1 are the rotation quaternions of a joint in the two key frames, Ω is the angle between them, and t is the interpolation parameter that controls the speed of the smooth transition. As t changes, the interpolated angle changes: when t approaches 1, the rotation of the interpolated pose p approaches p_1; when t approaches 0, it approaches p_0;

p = SLERP(p_0; p_1; t) = (sin((1 - t)Ω) / sin Ω) · p_0 + (sin(tΩ) / sin Ω) · p_1,  t ∈ [0, 1]    (8)

Formula (9) performs the linear interpolation of the root-node translation: the root-node coordinates of the start and end frames are C_0 and C_1, and u is the interpolation parameter; the result is close to C_0 when u approaches 0 and close to C_1 when u approaches 1;

C_i(x_i, y_i, z_i) = u·C_0(x_0, y_0, z_0) + (1 - u)·C_1(x_1, y_1, z_1)    (9).
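
The interpolation of formulas (8) and (9) could be sketched as below. This is a generic SLERP/linear-interpolation illustration assuming unit quaternions in [w, a, b, c] order; the shortest-arc and near-parallel handling are common practical additions rather than part of the patent text.

```python
import numpy as np

def slerp(p0, p1, t):
    """Formula (8): spherical linear interpolation between unit quaternions."""
    p0, p1 = np.asarray(p0, dtype=float), np.asarray(p1, dtype=float)
    dot = np.dot(p0, p1)
    if dot < 0.0:                      # take the shorter arc
        p1, dot = -p1, -dot
    if dot > 0.9995:                   # nearly parallel: fall back to normalised lerp
        q = p0 + t * (p1 - p0)
        return q / np.linalg.norm(q)
    omega = np.arccos(np.clip(dot, -1.0, 1.0))
    return (np.sin((1 - t) * omega) * p0 + np.sin(t * omega) * p1) / np.sin(omega)

def lerp_root(c0, c1, u):
    """Formula (9) as written: u weights the start-frame root position C0."""
    c0, c1 = np.asarray(c0, dtype=float), np.asarray(c1, dtype=float)
    return u * c0 + (1 - u) * c1
```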

Beneficial effects: the human motion interpolation method of the present invention, based on motion capture data, exploits the periodic regularity of locomotion to extract features and divide the long motion sequence files (BVH files) into multiple short motion segments. The key frame pair is determined from the divided short segments, and the quaternion spherical interpolation algorithm and linear interpolation algorithm connect the sequences into one long motion sequence. Because the key frame pair is determined within the same short segment after division, the interpolation transition order is consistent with the visual logic of human perception, and the visual result is good.

The invention is of great significance for research on the reuse of motion capture data. Compared with traditional keyframe interpolation methods, the selection of the key frame pair in the present invention avoids errors caused by an unreasonable interpolation order, and the motion at the interpolated junction is more natural and seamless.

Brief Description of the Drawings

Fig. 1 is a structural diagram of the human skeleton;

Fig. 2 is a flow chart of parsing a BVH file;

Fig. 3 shows the human pose of frame 102 of a walking motion in the local coordinate system;

Fig. 4 shows the human pose of frame 102 of a walking motion in the world coordinate system;

Fig. 5 is a pose sequence diagram of one cycle of a walking motion;

Fig. 6 is a schematic timing diagram of a walking motion;

Fig. 7 shows the forward step length of a walking motion over time;

Fig. 8 is a sequence diagram of a walking motion after division at the segmentation points;

Fig. 9 is a schematic timing diagram of the segmented walking motion;

Fig. 10 is a schematic diagram of quaternion spherical interpolation;

Fig. 11 is a sequence diagram of the key frame pair and interpolated poses of the present invention;

Fig. 12 is a sequence diagram of the key frame pair and interpolated poses of the traditional method;

Fig. 13 is a pose timing diagram of an original walking motion;

Fig. 14 is a pose timing diagram of an original striding motion;

Fig. 15 is a timing diagram of the interpolated connection of the walking and striding motions.

Detailed Description

The present invention is described in detail below with reference to the accompanying drawings and specific embodiments.

The present invention is a human motion interpolation method based on feature analysis, motion segmentation and key-frame-pair matching, combining the feature analysis and motion segmentation results to determine the key frame pair. The high dimensionality of human motion sequences is handled by feature extraction and dimensionality reduction. The invention proposes, for the first time, coordinate system conversion as the feature extraction method, accurately extracting human motion features while reducing the dimensionality of the high-dimensional motion sequences. In addition, determining the key frame pair from the divided short segments ensures that the interpolation transition follows the visual order perceived by the human eye.

In the human motion interpolation method of the present invention, the two loaded motion sequence files (BVH files) are first parsed and analysed for features. The long motion sequences are then divided according to the periodic regularity of human locomotion; the resulting short motion segments are aligned frame by frame in time to obtain matching frame pairs, the most similar pose within the same short segment is determined from them, and the key frame pair is chosen. Finally, quaternion spherical interpolation is applied to the rotation data and linear interpolation to the translation data of the motion sequences to generate the transition sequence connecting the key frame pair, producing a new long motion sequence.

The method mainly comprises the following steps:

Step 1: load and parse the two motion sequence files (BVH files). A motion sequence file (BVH file) is stored as text and consists of two parts: the skeleton and the motion data. The skeleton structure of BVH files is not fully standardised but is always similar; Fig. 1 shows a typical example. The skeleton is stored as a tree: the hip node (Hip) at the centre of the body is the root, the upper back node (Upperback) and the left and right hip nodes (L_Hip, R_Hip) are its children, these in turn have their own children, and so on down to the end points. The motion data store, in skeleton order, the translation of the root node and the rotation angles of every joint relative to its parent. The parsing flow is shown in Fig. 2: load the BVH file; if it is not empty, first parse the skeleton section to obtain the skeleton structure, the initial pose, and the frame count and frame rate of the file; then, using the skeleton structure, total frame count and frame rate, parse the data block, computing the offset and rotation of every joint in every frame up to the last one; finally, using the data block, draw the human pose of each frame.
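
For illustration only, a stripped-down BVH header parser in the token-passing style described above might look like this; it reads the hierarchy section and the frame count/time and leaves the per-frame data as raw floats. Class and field names are assumptions, and many BVH details (channel-order checks, error recovery) are omitted.

```python
from dataclasses import dataclass, field

@dataclass
class BvhJoint:
    name: str
    offset: tuple = (0.0, 0.0, 0.0)
    channels: list = field(default_factory=list)
    children: list = field(default_factory=list)

def parse_bvh(path):
    """Token-passing parse of a BVH file: skeleton first, then motion data."""
    tokens = open(path).read().split()
    pos = 0

    def next_tok():
        nonlocal pos
        tok = tokens[pos]
        pos += 1
        return tok

    def parse_joint(name):
        joint = BvhJoint(name)
        assert next_tok() == "{"
        while True:
            tok = next_tok()
            if tok == "OFFSET":
                joint.offset = tuple(float(next_tok()) for _ in range(3))
            elif tok == "CHANNELS":
                n = int(next_tok())
                joint.channels = [next_tok() for _ in range(n)]
            elif tok in ("JOINT", "End"):
                child_name = next_tok()            # "Site" for end effectors
                joint.children.append(parse_joint(child_name))
            elif tok == "}":
                return joint

    assert next_tok() == "HIERARCHY"
    assert next_tok() == "ROOT"
    root = parse_joint(next_tok())

    assert next_tok() == "MOTION"
    next_tok(); frames = int(next_tok())                     # "Frames:" <n>
    next_tok(); next_tok(); frame_time = float(next_tok())   # "Frame" "Time:" <t>
    data = [float(t) for t in tokens[pos:]]                  # flat per-frame channel values
    return root, frames, frame_time, data
```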

However, because each joint in a human motion sequence is stored in its own relative coordinate system, the data are inconvenient to analyse directly. To extract the spatial relationship and timing features of the joints accurately and efficiently, the present invention proposes, for the first time, converting the relative motion information of the joints from the local coordinate systems into absolute position information in the world coordinate system. The formula for converting joint N_j of the human skeleton from relative position information in the local coordinate system into the world coordinate system is given in (1): p_i^(j) denotes the coordinates of joint N_j in the world coordinate system at time i of the motion sequence; T_i^(root) and R_i^(root) denote the translation and rotation transformation matrices of the root node; R_i^(k) denotes the rotation transformation matrix of joint N_k (any node on the path from the root node to node N_j in the tree-shaped skeleton) relative to its direct parent; and p_0^(j) denotes the initial offset of N_j in the local coordinate system of its parent node.

p_i^(j) = T_i^(root) R_i^(root) ... R_i^(k) ... p_0^(j)    (1)

Fig. 3 shows frame 102 of the walking motion before conversion and Fig. 4 shows frame 102 after conversion; the two poses are identical, which shows that the conversion formula adopted by the present invention is effective.

Step 2: using the absolute position information in the world coordinate system obtained in step 1, extract the spatial relationship and time-series features of the joints and, according to the periodic regularity of human motion, use the front-rear positional relationship of the two feet to divide the motion into multiple short segments.

Since most human motion is periodic, locomotion in particular, with the two feet alternately stepping forward, the present invention, after analysing the regularity of human motion, is the first to use the step-length-change relation shown in formula (2) and the front-rear foot relation shown in formula (3) to divide one motion cycle into four short motion segments.

Here, Pace_changed in formula (2) judges whether, at a given moment, the forward step length of the two feet changes from increasing to decreasing or from decreasing to increasing; if so, the function value is set to 1, meaning that the step length at that moment is the maximum or minimum of the short segment and that this moment is defined as a segmentation point of the motion; otherwise it is set to 0, marking the moment as a non-segmentation point. Front_foot in formula (3) indicates whether the right foot is in front of the left foot at a given moment: 1 when the right foot is in front, 0 otherwise.

The segmentation process is illustrated with a concrete walking motion. Fig. 5 is the pose timing diagram of the input walking motion. Figs. 6-8 illustrate the principle of motion segmentation, and Fig. 9 shows the result after segmentation. Fig. 6 is the timing diagram of the walking sequence; for convenience the frame timeline is drawn as a row of squares. Fig. 7 plots the forward step length of the two feet against the frame timeline; the four moments A, B, C and D are the moments of maximum and minimum step length within one cycle of the walking sequence and serve as the segmentation points. The timing diagram after segmentation is shown in Fig. 8: the numbers in the squares are the indices of the segments, and each cycle of the motion sequence is divided into four distinct short segments. Fig. 9 shows the motion sequence ordered by the arrows through the segmentation points; the body poses shown are those at the segmentation points. The figure shows that the segmentation points are located precisely and that the walking motion is divided in a way consistent with human visual logic.

Step 3: because the short segments obtained by division contain different numbers of frames, the present invention first aligns the frame timelines within the same short segment in order to determine the key frame pair precisely, and then uses the distance formula to find the most similar frame pair within that segment. If the most similar frame pair is (P_i, P_j), the transition frame pair is determined as (P_i, P_{j+1}).

Suppose the start and end frames of short segment m are f_1 and f_2, and those of short segment n are f_1′ and f_2′; the matching frame pair (f_i, f_i′) of segments m and n is computed according to formula (4).

Based on the matched frame pairs, the distance of the corresponding frames is computed according to formula (5) to determine the most similar frame pair:

where p_i^k and p_j^k denote the positions of the k-th joint in frame i and frame j of the two motion sequences, respectively, and w_k denotes the weight of the k-th joint in the human skeleton; in general, joints closer to the root node have larger weights.

Fig. 11 is the timing diagram of the transition frames generated by interpolation from the key frame pair determined by the traditional method: in the first two poses the right foot of the body is stepping forward, yet the poses of the traditional key frame pair make the right foot step backwards during the inserted frames, so the interpolation order contradicts the original order of the human motion. Fig. 12 shows the transition frames generated by interpolation from the key frame pair determined by the method of the present invention: the second pose from the left and the last pose are the key frame pair determined by the invention, and the left foot of the body transitions gradually from back to front, in accordance with the visual logic of the human eye. This shows that the key-frame-pair determination method of the present invention avoids errors caused by an unreasonable interpolation order.

Step 4: given the two motions, based on the key frame pair obtained in step 3, use the quaternion spherical interpolation algorithm to generate the rotation transition values and linear interpolation to generate the translation transition values, thereby connecting the sequences into a new long motion sequence.

Quaternion spherical interpolation is widely used in the study of rigid-body rotation because of its simple principle. Its schematic is shown in Fig. 10: let the positions on the sphere of joint k in the start and end frames be P_0 and P_1, with angle Ω between the two frames; inserting frames P with the interpolation formula makes joint k move gradually along the sphere from P_0 to P_1. The quaternion spherical interpolation formula is given in (8): t is the interpolation parameter, used mainly to control the speed of the smooth transition of the inserted frames. When t approaches 1, the rotation angle of the inserted frame P approaches P_1 and the corresponding θ in Fig. 10 approaches Ω; when t approaches 0, the rotation approaches P_0 and θ approaches 0.

Linear interpolation is the most commonly used technique and is computationally efficient. For the translation information of the root node, the present invention adopts linear interpolation, with the formula given in (9). The root-node coordinates of the start and end frames are C_0 and C_1, and u is the interpolation parameter; the result is close to C_0 when u approaches 0 and close to C_1 when u approaches 1.

C_i(x_i, y_i, z_i) = u·C_0(x_0, y_0, z_0) + (1 - u)·C_1(x_1, y_1, z_1)    (9)

Human animation is one of the most challenging areas of animation research. On the one hand, because people are most familiar with human movement, the human eye judges it with great sensitivity and any small flaw becomes conspicuous. On the other hand, the human skeleton is complex and the motion sequences are high-dimensional, so analysing human motion data precisely is very demanding. To date there is no objective standard that can replace the subjective perception of the human eye in measuring the quality of motion interpolation. Fig. 13 is the timing diagram of a normal walking sequence, Fig. 14 that of a striding sequence, and Fig. 15 the new walking sequence obtained by interpolating frames to connect the two. The figures show that after interpolation the walking motion is joined to the striding motion naturally and seamlessly.

Claims (3)

1. A human motion interpolation method based on motion capture data, characterised in that two loaded motion sequence files are first parsed and a coordinate system conversion is performed; then, based on the motion data in the world coordinate system, the front-rear positional relationship of the two feet and the step-length timing feature are extracted, and according to the regularity of human motion these two features are used to divide the long motion sequences into short motion segments; finally, the most similar frame pair within the same short segment is selected and the key frame pair is determined from it, whereupon angle rotation interpolation is performed with the quaternion spherical interpolation algorithm and translation interpolation of the root node with linear interpolation, based on the key frame pair, so as to connect the sequences into a new motion;
the method specifically comprises the following steps:
step 1: load and parse the motion sequence files, and convert the relative position information of the motion data of the BVH files, given in local coordinate systems, into absolute position information in the world coordinate system;
in step 1 a motion sequence file consists of two parts: the skeleton and the motion data; the human skeleton part is parsed first with a token-passing method, reading each keyword, integer, floating-point value and string in the motion sequence file in turn, and the motion data part is then parsed in the order defined by the skeleton structure;
the absolute position of each joint of the human motion data in the world coordinate system is obtained recursively; the conversion formula is given in (1):
p_i^(j) = T_i^(root) R_i^(root) ... R_i^(k) ... p_0^(j)    (1)
where p_i^(j) denotes the coordinates of joint N_j in the world coordinate system at time i of the motion sequence; T_i^(root) and R_i^(root) denote the translation and rotation transformation matrices of the root node, respectively; R_i^(k) denotes the rotation transformation matrix of joint N_k relative to its direct parent in the skeleton structure; N_k is any node on the path from the root node to node N_j in the tree-shaped human skeleton; and p_0^(j) denotes the initial offset of N_j in the local coordinate system of its parent node;
step 2: based on the absolute position information in the world coordinate system obtained in step 1, extract the spatial position relationship and temporal relationship features of the joints and, according to the periodic regularity of human motion, use the front-rear foot position feature and the step-length timing feature to divide each long motion sequence into multiple short motion sequences; in step 2, human motion is periodic in nature, and for locomotion in particular, whatever the style, the two feet alternately move forward; based on this regularity, the forward spatial relationship of the two feet and the step-length timing feature are used as the basis for dividing the long motion sequences; the two segmentation functions are expressed as follows:
Pace_changed = { 1 if (Dist_feet changed); 0 otherwise }    (2)
Front_foot = { 1 if (right_foot front); 0 otherwise }    (3)
where the Pace_changed function indicates whether, at a given moment, the forward step length of the two feet changes from increasing to decreasing or from decreasing to increasing; if so, the function value is set to 1, meaning that the step length at that moment is the maximum or minimum step length of the short segment and that this moment is defined as a segmentation point of the motion; otherwise it is set to 0, marking the moment as a non-segmentation point; the Front_foot function indicates whether the right foot is in front of the left foot at a given moment, taking the value 1 when the right foot is in front and 0 otherwise;
step 3: based on the segmentation result obtained in step 2, align the divided short segments by their frame sequences in time and, within the same short segment, take the frame pair with the smallest Euclidean distance as the most similar pose, thereby determining the key frame pair;
step 4: based on the key frame pair from step 3, generate the angle rotation values of the intermediate transition frames with the quaternion spherical interpolation algorithm and the translation values of the root node in the intermediate transition frames with linear interpolation.
2. The human motion interpolation method based on motion capture data according to claim 1, characterised in that in step 3, based on the segmentation result obtained in step 2, formula (4) performs the frame-sequence alignment within a short segment to obtain the matching frame pairs: f_1 and f_2, and f_1′ and f_2′, denote the start and end frames of the two segments, and the matching frame pair (f_i, f_i′) is computed; the most similar frame pair is then selected from the matching pairs as the key frame pair; the most similar frame pair is determined with the commonly used Euclidean distance, and assuming that the computed minimum Euclidean distance is D(P_i, P_j), the next frame is chosen so that the key frame pair is (P_i, P_{j+1}); the alignment and minimum-distance formulas are (4) and (5):
f_i′ = f_1′ + ((f_2′ - f_1′) / (f_2 - f_1)) · (f_i - f_1)    (4)
D(P_i, P_j) = min Σ_{k=1}^{n} w_k ||p_i^k - p_j^k||    (5)
where p_i^k and p_j^k denote the positions of the k-th joint in frame i and frame j of the two motion sequences, respectively, and w_k denotes the weight of the k-th joint in the human skeleton; in general, joints closer to the root node have larger weights.
3. The human motion interpolation method based on motion capture data according to claim 1, characterised in that in step 4, formulas (6) and (7) convert between Euler rotation angles and quaternions: the Euler angles are the rotation angles about Z, Y and X, and the corresponding quaternion after conversion is q = [w a b c];
formula (8) generates the transition rotation angles by quaternion spherical interpolation: p_0 and p_1 are the rotation quaternions of a joint in the two key frames, Ω is their angular difference and t is the interpolation parameter used to control the speed of the smooth transition during interpolation; as t changes, the interpolated angle changes: when t approaches 1, the rotation of the interpolated pose p approaches p_1, and when t approaches 0 it approaches p_0;
p = SLERP(p_0; p_1; t) = (sin((1 - t)Ω) / sin Ω) · p_0 + (sin(tΩ) / sin Ω) · p_1,  t ∈ [0, 1]    (8)
formula (9) performs the linear interpolation based on the root-node translation information: the root-node coordinates of the start and end frames are C_0 and C_1, and u is the interpolation parameter; the result is close to C_0 when u approaches 0 and close to C_1 when u approaches 1;
C_i(x_i, y_i, z_i) = u·C_0(x_0, y_0, z_0) + (1 - u)·C_1(x_1, y_1, z_1)    (9).
CN201410764271.4A 2014-12-11 2014-12-11 Human motion interpolation method based on movement capturing data Active CN104463788B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201410764271.4A CN104463788B (en) 2014-12-11 2014-12-11 Human motion interpolation method based on movement capturing data

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201410764271.4A CN104463788B (en) 2014-12-11 2014-12-11 Human motion interpolation method based on movement capturing data

Publications (2)

Publication Number Publication Date
CN104463788A CN104463788A (en) 2015-03-25
CN104463788B true CN104463788B (en) 2018-02-16

Family

ID=52909776

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201410764271.4A Active CN104463788B (en) 2014-12-11 2014-12-11 Human motion interpolation method based on movement capturing data

Country Status (1)

Country Link
CN (1) CN104463788B (en)

Families Citing this family (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105550991A (en) * 2015-12-11 2016-05-04 中国航空工业集团公司西安航空计算技术研究所 Image non-polar rotation method
CN109470263B (en) * 2018-09-30 2020-03-20 北京诺亦腾科技有限公司 Motion capture method, electronic device, and computer storage medium
CN109737941B (en) * 2019-01-29 2020-11-13 桂林电子科技大学 A method of human motion capture
CN110197576B (en) * 2019-05-30 2021-04-20 北京理工大学 Large-scale real-time human body action acquisition and reconstruction system
CN112188233B (en) * 2019-07-02 2022-04-19 北京新唐思创教育科技有限公司 Method, device and equipment for generating spliced human body video
CN110992392A (en) * 2019-11-20 2020-04-10 北京影谱科技股份有限公司 Key frame selection method and device based on motion state
CN110942007B (en) * 2019-11-21 2024-03-05 北京达佳互联信息技术有限公司 Method and device for determining hand skeleton parameters, electronic equipment and storage medium
CN110992454B (en) * 2019-11-29 2020-07-17 南京甄视智能科技有限公司 Real-time motion capture and three-dimensional animation generation method and device based on deep learning
CN111681303A (en) * 2020-06-10 2020-09-18 北京中科深智科技有限公司 Method and system for extracting key frame from captured data and reconstructing motion
CN115618155B (en) * 2022-12-20 2023-03-10 成都泰盟软件有限公司 Method and device for generating animation, computer equipment and storage medium
CN118317144B (en) * 2024-06-11 2024-10-01 圆周率科技(常州)有限公司 Video processing method, device, computer equipment and storage medium

Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103218824A (en) * 2012-12-24 2013-07-24 大连大学 Motion key frame extracting method based on distance curve amplitudes

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR20130093994A (en) * 2012-02-15 2013-08-23 한국전자통신연구원 Method for user-interaction with hologram using volumetric object wave field

Patent Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103218824A (en) * 2012-12-24 2013-07-24 大连大学 Motion key frame extracting method based on distance curve amplitudes

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
BVH-driven OGRE skeletal animation; 郭力 et al.; 《计算机应用研究》 (Application Research of Computers); 2009-09-15; vol. 26, no. 9; pp. 3550-3552 *
Research and implementation of motion editing technology based on human motion capture data; 彭伟; China Master's Theses Full-text Database, Information Science and Technology; 2013-06-15; no. 06; I138-1335 *
Research on key technologies of motion-capture-based human motion generation and editing; 瞿师; China Doctoral Dissertations Full-text Database, Information Science and Technology; 2014-04-15; no. 04; I138-71 *

Also Published As

Publication number Publication date
CN104463788A (en) 2015-03-25

Similar Documents

Publication Publication Date Title
CN104463788B (en) Human motion interpolation method based on movement capturing data
CN110599573B (en) Method for realizing real-time human face interactive animation based on monocular camera
CN111950412B (en) Hierarchical dance motion gesture estimation method based on sequence multi-scale depth feature fusion
CN111160164B (en) Action Recognition Method Based on Human Skeleton and Image Fusion
Guerra-Filho et al. The human motion database: A cognitive and parametric sampling of human motion
CN111553968B (en) Method for reconstructing animation of three-dimensional human body
CN102509333B (en) Action-capture-data-driving-based two-dimensional cartoon expression animation production method
CN110472604A (en) A kind of pedestrian based on video and crowd behaviour recognition methods
CN110728220A (en) Gymnastics auxiliary training method based on human body action skeleton information
CN102254336A (en) Method and device for synthesizing face video
CN101833788A (en) A 3D human body modeling method using hand-drawn sketches
CN104504731B (en) Human motion synthetic method based on motion diagram
Xu et al. Scene image and human skeleton-based dual-stream human action recognition
WO2021063271A1 (en) Human body model reconstruction method and reconstruction system, and storage medium
CN108154104A (en) A kind of estimation method of human posture based on depth image super-pixel union feature
CN104461000B (en) A kind of on-line continuous human motion identification method based on a small amount of deleted signal
CN115880724A (en) Light-weight three-dimensional hand posture estimation method based on RGB image
CN101241600A (en) A Chain Skeleton Matching Method in Motion Capture Technology
CN110232727A (en) A kind of continuous posture movement assessment intelligent algorithm
CN114550292A (en) High-physical-reality human body motion capture method based on neural motion control
Kobayashi et al. Motion capture dataset for practical use of AI-based motion editing and stylization
Mao et al. A sketch-based gesture interface for rough 3D stick figure animation
Yadav et al. An efficient deep convolutional neural network model for yoga pose recognition using single images
CN111862276A (en) A method for automatic production of skeletal animation based on formalized action description text
Zuo et al. A simple baseline for spoken language to sign language translation with 3d avatars

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant
TR01 Transfer of patent right
TR01 Transfer of patent right

Effective date of registration: 20210324

Address after: 19 / F, block B, northwest Guojin center, 168 Fengcheng 8th Road, Xi'an, Shaanxi 710000

Patentee after: XI'AN FEIDIE VIRTUAL REALITY TECHNOLOGY Co.,Ltd.

Address before: 710048 No. 5 Jinhua South Road, Shaanxi, Xi'an

Patentee before: XI'AN University OF TECHNOLOGY