
CN111158482B - A method and system for capturing human body gestures - Google Patents


Info

Publication number
CN111158482B
Authority
CN
China
Prior art keywords
node
joint
data acquisition
determining
spatial
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201911394224.4A
Other languages
Chinese (zh)
Other versions
CN111158482A (en)
Inventor
刘谦
曾强
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Huazhong University of Science and Technology
Ezhou Industrial Technology Research Institute of Huazhong University of Science and Technology
Original Assignee
Huazhong University of Science and Technology
Ezhou Industrial Technology Research Institute of Huazhong University of Science and Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Huazhong University of Science and Technology, Ezhou Industrial Technology Research Institute of Huazhong University of Science and Technology filed Critical Huazhong University of Science and Technology
Priority to CN201911394224.4A priority Critical patent/CN111158482B/en
Publication of CN111158482A publication Critical patent/CN111158482A/en
Application granted granted Critical
Publication of CN111158482B publication Critical patent/CN111158482B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/011Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01CMEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C9/00Measuring inclination, e.g. by clinometers, by levels

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Human Computer Interaction (AREA)
  • Measurement Of The Respiration, Hearing Ability, Form, And Blood Characteristics Of Living Organisms (AREA)

Abstract


The present invention provides a method and system for capturing human motion postures. The method includes: determining the spatial orientation and spatial position of each data acquisition node based on the raw data of that node; determining the spatial position and spatial orientation of the joint nodes; calibrating the spatial orientation of each data acquisition node and of each joint node; mapping the spatial positions and spatial orientations of the joint nodes and data acquisition nodes onto a human body model; and using the human body model to capture the motion posture of the human body. The posture of the first joint nodes of the human body is solved first, and the posture of the second joint nodes is then solved from it, so that only six data collection points need to be arranged to capture the human motion posture. This effectively reduces the number of sensors used and thereby the cost of human motion capture.


Description

Human body motion gesture capturing method and system
Technical Field
The invention relates to the technical field of motion capture, in particular to a human body motion gesture capturing method and system.
Background
Motion capture records the movement of an object and maps it onto a digital model. In recent years, with the development of computer data acquisition and sensor technology, motion capture has been widely used in games, entertainment, sports, military applications, motion analysis, dance acquisition, and virtual reality.
Inertial motion capture reproduces the three-dimensional posture of the human body by means of a number of worn inertial motion sensors.
In the prior art, a large number of sensors are needed to capture the human body posture, so the capturing cost is high.
Disclosure of Invention
Aiming at the problems existing in the prior art, the embodiment of the invention provides a human body motion gesture capturing method and system, which are used for solving the technical problems of high capturing cost caused by the fact that a large number of sensors are needed when capturing human body motion gestures in the prior art.
The technical scheme of the invention is realized as follows:
the application provides a human motion gesture capturing method, which comprises the following steps:
acquiring the original data of each data acquisition node, and determining the spatial orientation and the spatial position of each data acquisition node according to a complementary filtering algorithm based on the original data of each data acquisition node, wherein the data acquisition node comprises: a head node, a torso node, a first forearm node, a second forearm node, a first calf node, and a second calf node; the raw data includes: acceleration data, angular velocity data, and magnetic field direction data;
Determining a bending angle of a first joint node, and determining a spatial position and a spatial orientation of the first joint node according to the bending angle of the first joint node, wherein the first joint node comprises: elbow and knee joints;
determining a spatial position of the second joint node according to the spatial position of the data acquisition node, wherein the second joint node comprises: neck, shoulder and hip joints;
determining the spatial orientation of the second joint node according to the spatial position of the first joint node and the spatial position of the second joint node;
acquiring the attitude deviation of each data acquisition node, and correspondingly calibrating the space orientation of each data acquisition node and the space orientation of each joint node by using the attitude deviation;
correspondingly calibrating the spatial position of each joint node, the spatial position of each data acquisition node, the spatial orientation of each joint node after calibration and the spatial orientation of each data acquisition node after calibration with a pre-established human body model;
and capturing the action gesture of the human body by using the calibrated human body model.
Optionally, the calculating the spatial orientation of each data acquisition node according to the complementary filtering algorithm based on the raw data of each data acquisition node includes:
Determining the spatial orientation q of each data acquisition node according to the formula q = q(t) + T·ω(t)·q(t); where q(t) is the spatial orientation with a preset initial value, T is the sampling period, and ω(t) is the angular velocity.
Optionally, the determining, based on the raw data of each data acquisition node, the spatial position of each data acquisition node according to a complementary filtering algorithm includes:
according to the formula

v_t = v_0 + ∫_{t_0}^{t} a(τ) dτ

determining the speed of the data acquisition node;
according to the formula

p_t = p_0 + ∫_{t_0}^{t} v(τ) dτ

determining the spatial position of the data acquisition node; where v_0 is the initial speed of the data acquisition node, a(τ) is the acceleration of the data acquisition node, p_0 is the initial spatial position of the data acquisition node, v(τ) is the speed of the data acquisition node, p_t is the spatial position of the data acquisition node, τ is a time parameter, and t_0 < τ < t.
Optionally, the determining the bending angle of the first joint node includes:
determining an auxiliary node of the first joint node and a target data acquisition node corresponding to the first joint node based on an inverse kinematics (IK) algorithm;
acquiring a first distance between the first joint node and the auxiliary node, a second distance between the first joint node and the target data acquisition node and a third distance between the auxiliary node and the target data acquisition node;
Using the formula

θ = arccos((a² + b² - c²) / (2ab))

determining the bending angle θ of the first joint node; where a is the first distance, b is the second distance, and c is the third distance.
Optionally, when the first joint node is an elbow joint, the auxiliary node is determined according to a root node and a first relative position, and the first relative position is a relative position between the root node and a shoulder joint;
when the first joint node is a knee joint, the auxiliary node is determined according to the root node and a second relative position, wherein the second relative position is the relative position between the root node and the hip joint; wherein the root node is a torso node.
Optionally, the determining the spatial position and the spatial orientation of the first joint node according to the bending angle of the first joint node includes:
determining a bending surface where the first joint node is located, and determining a spatial position of the first joint node in the bending surface based on a first distance between the first joint node and an auxiliary node, a second distance between the first joint node and a target data acquisition node and a bending angle of the first joint node;
And determining the spatial orientation of the first joint node according to the position of the first joint node and the position of the auxiliary node.
Optionally, the calibrating the spatial orientation of each data acquisition node by using the posture deviation includes:
calibrating the spatial orientation of each data acquisition node correspondingly according to the formula q_mod = q_bias * q; where q_mod is the spatial orientation of each data acquisition node after calibration, q is the spatial orientation of each data acquisition node before calibration, and q_bias is the corresponding attitude deviation of each data acquisition node.
The application also provides a human motion gesture capture system, the system comprising:
the data acquisition units are used for acquiring the original data of each data acquisition node; the raw data includes: acceleration data, angular velocity data, and magnetic field direction data;
the terminal is used for acquiring the original data, determining the space orientation and the space position of each data acquisition node according to a complementary filtering algorithm based on the original data, and the data acquisition nodes comprise: a head node, a torso node, a first forearm node, a second forearm node, a first calf node, and a second calf node;
Determining a bending angle of a first joint node, and determining a spatial position and a spatial orientation of the first joint node according to the bending angle of the first joint node, wherein the first joint node comprises: elbow and knee joints;
determining a spatial position of the second joint node according to the spatial position of the data acquisition node, wherein the second joint node comprises: neck, shoulder and hip joints;
determining the spatial orientation of the second joint node according to the spatial position of the first joint node and the spatial position of the second joint node;
acquiring the attitude deviation of each data acquisition node, and correspondingly calibrating the space orientation of each data acquisition node and the space orientation of each joint node by using the attitude deviation;
correspondingly calibrating the spatial position of each joint node, the spatial position of each data acquisition node, the spatial orientation of each joint node after calibration and the spatial orientation of each data acquisition node after calibration with a pre-established human body model;
the method is used for capturing the action gesture of the human body by using the calibrated human body model.
Optionally, the first determining unit is specifically configured to:
Determining the spatial orientation q of each data acquisition node according to the formula q = q(t) + T·ω(t)·q(t);
according to the formula

v_t = v_0 + ∫_{t_0}^{t} a(τ) dτ

determining the speed of the data acquisition node;
according to the formula

p_t = p_0 + ∫_{t_0}^{t} v(τ) dτ

determining the spatial position of the data acquisition node; where q(t) is the spatial orientation with a preset initial value, T is the sampling period, ω(t) is the angular velocity, v_0 is the initial speed of the data acquisition node, a(τ) is the acceleration of the data acquisition node, p_0 is the initial spatial position of the data acquisition node, v(τ) is the speed of the data acquisition node, p_t is the spatial position of the data acquisition node, τ is a time parameter, and t_0 < τ < t.
Optionally, the second determining unit is specifically configured to:
determining an auxiliary node of the first joint node and a target data acquisition point corresponding to the first joint node based on an inverse kinematics IK algorithm, wherein the first joint node comprises: elbow and knee joints;
acquiring a first distance between the first joint node and the auxiliary node, a second distance between the first joint node and the target data acquisition node and a third distance between the auxiliary node and the target data acquisition node;
Using the formula

θ = arccos((a² + b² - c²) / (2ab))

determining the bending angle θ of the first joint node; where a is the first distance, b is the second distance, and c is the third distance.
The invention provides a method and a system for capturing human motion postures. The method comprises: acquiring the raw data of each data acquisition node, and determining the spatial orientation and spatial position of each data acquisition node according to a complementary filtering algorithm based on that raw data, wherein the data acquisition nodes comprise a head node, a torso node, a first forearm node, a second forearm node, a first calf node, and a second calf node, and the raw data includes acceleration data, angular velocity data, and magnetic field direction data; determining the bending angle of a first joint node and, from it, the spatial position and spatial orientation of the first joint node, wherein the first joint nodes comprise the elbow and knee joints; determining the spatial position of a second joint node according to the spatial position of the data acquisition nodes, wherein the second joint nodes comprise the neck, shoulder, and hip joints; determining the spatial orientation of the second joint node from the spatial positions of the first and second joint nodes; acquiring the attitude deviation of each data acquisition node and using it to calibrate the spatial orientation of each data acquisition node and of each joint node; mapping the spatial positions and calibrated spatial orientations of the joint nodes and data acquisition nodes onto a pre-established human body model; and capturing the motion posture of the human body with the calibrated human body model. Because the posture of the first joint nodes is solved first and the posture of the second joint nodes is then solved from it, only six data acquisition points need to be arranged to capture the human motion posture, which effectively reduces the number of sensors used and thereby the cost of human motion capture.
Drawings
FIG. 1 is a schematic flow chart of a method for capturing human motion gestures according to an embodiment of the present invention;
FIG. 2 is a schematic diagram of calibration of a data acquisition node according to an embodiment of the present invention;
FIG. 3 is a schematic diagram illustrating IK algorithm calculation according to the embodiment of the present invention;
fig. 4 is a schematic diagram of human body posture calibration according to an embodiment of the present invention;
FIG. 5 is a schematic illustration of a mannequin provided in an embodiment of the present invention;
fig. 6 is a schematic structural diagram of a human motion gesture capturing system according to an embodiment of the present invention.
Detailed Description
To address the technical problem in the prior art that capturing human motion postures requires a large number of sensors, leading to high capture cost, the application provides a human motion posture capturing method and system.
The technical scheme of the invention is further described in detail through the attached drawings and specific embodiments.
Example 1
The embodiment provides a human motion gesture capturing method, which is characterized by comprising the following steps:
s110, acquiring original data of each data acquisition node, and determining the spatial orientation and the spatial position of each data acquisition node according to a complementary filtering algorithm;
Before the raw data of each data acquisition node is acquired, the data acquisition nodes need to be determined. To reduce the number of sensors used, six data acquisition nodes are arranged: a head node 1, a torso node 2, a first forearm node 3, a second forearm node 4, a first calf node 5, and a second calf node 6; each data acquisition node is shown by its reference numeral in fig. 2.
After the data acquisition nodes are determined, a data acquisition module is bound to each node. Each data acquisition module comprises a six-axis IMU and a three-axis magnetic sensor, and each IMU comprises a three-axis accelerometer and a three-axis gyroscope. After each data acquisition module acquires raw data, the data are transmitted over a data bus to a data summarizing module arranged on the torso node; the data summarizing module then forwards the raw data wirelessly to a terminal, which may be hardware such as an industrial personal computer or a computer.
The data acquisition module is used for acquiring the raw data of the data acquisition node, and the raw data includes: acceleration data, angular velocity data, and magnetic field direction data.
After the original data is obtained, the spatial orientation and the spatial position of each data acquisition node can be determined according to a complementary filtering algorithm based on the original data. As an optional embodiment, the calculating the spatial orientation of each data acquisition node according to the complementary filtering algorithm based on the raw data of each data acquisition node includes:
determining the spatial orientation q of each data acquisition node according to the formula (1):
q = q(t) + T·ω(t)·q(t) (1)
in formula (1), q(t) is the spatial orientation with a preset initial value, T is the sampling period, and ω(t) is the angular velocity.
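As an illustration only, the update of formula (1) can be sketched in Python. The function names (`quat_mul`, `update_orientation`) and the added renormalization step are assumptions of this sketch, not part of the patent; the angular velocity is embedded as a pure quaternion:

```python
import numpy as np

def quat_mul(a, b):
    """Hamilton product of two quaternions in (w, x, y, z) order."""
    w1, x1, y1, z1 = a
    w2, x2, y2, z2 = b
    return np.array([
        w1*w2 - x1*x2 - y1*y2 - z1*z2,
        w1*x2 + x1*w2 + y1*z2 - z1*y2,
        w1*y2 - x1*z2 + y1*w2 + z1*x2,
        w1*z2 + x1*y2 - y1*x2 + z1*w2,
    ])

def update_orientation(q, omega, T):
    """One step of q = q(t) + T*omega(t)*q(t), with the angular-velocity
    vector embedded as the pure quaternion (0, wx, wy, wz), followed by a
    renormalization (an addition of this sketch, not stated in the patent)."""
    omega_q = np.array([0.0, omega[0], omega[1], omega[2]])
    q_new = q + T * quat_mul(omega_q, q)
    return q_new / np.linalg.norm(q_new)

# with zero angular velocity the orientation is unchanged
q0 = np.array([1.0, 0.0, 0.0, 0.0])
q1 = update_orientation(q0, np.zeros(3), 0.01)
```

In a real pipeline this gyroscope integration would be blended with the accelerometer and magnetometer readings by the complementary filter the patent names.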
As an optional embodiment, for each data acquisition node, the determining, based on the raw data of each data acquisition node, the spatial location of each data acquisition node according to a complementary filtering algorithm includes:
determining the speed of the data acquisition node according to formula (2);
v_t = v_0 + ∫_{t_0}^{t} a(τ) dτ (2)
after the speed of the data acquisition node is determined, determining the spatial position of the data acquisition node according to formula (3):
p_t = p_0 + ∫_{t_0}^{t} v(τ) dτ (3)
where v_0 is the initial speed of the data acquisition node, a(τ) is the acceleration of the data acquisition node, p_0 is the initial spatial position of the data acquisition node, v(τ) is the speed of the data acquisition node (v(τ) and v_t denote the same quantity from the point of view of the integration), p_t is the spatial position of the data acquisition node, τ is a time parameter, and t_0 < τ < t.
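A minimal numerical sketch of formulas (2) and (3), replacing the integrals with cumulative sums over uniformly sampled acceleration; the function name `integrate_position` and the rectangle-rule discretization are assumptions of this sketch:

```python
import numpy as np

def integrate_position(accel, dt, v0, p0):
    """Evaluate v_t = v0 + integral(a) and p_t = p0 + integral(v) with
    cumulative rectangle-rule sums over uniformly sampled acceleration."""
    accel = np.asarray(accel, dtype=float)
    v = v0 + np.cumsum(accel, axis=0) * dt   # velocity at each sample
    p = p0 + np.cumsum(v, axis=0) * dt       # position at each sample
    return v, p

# constant 1 m/s^2 along x, sampled at 100 Hz for one second
a = np.tile([1.0, 0.0, 0.0], (100, 1))
v, p = integrate_position(a, dt=0.01, v0=np.zeros(3), p0=np.zeros(3))
```

Double integration of IMU acceleration drifts quickly in practice, which is one reason the patent fuses it with magnetometer data through complementary filtering.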
After the spatial orientation and spatial position of the individual data acquisition nodes are determined, the spatial position and spatial orientation of the root and end nodes of the manikin can be obtained. Here, the root node is a torso node, and the end nodes include: a head node, a first forearm node, a second forearm node, a first calf node, and a second calf node.
S111, determining a bending angle of a first joint node, and determining a spatial position and a spatial orientation of the first joint node according to the bending angle of the first joint node, wherein the first joint node comprises: elbow and knee joints;
the bending angle of a first joint node is determined first, the spatial position and the spatial orientation of the first joint node are determined according to the bending angle of the first joint node, and the first joint node comprises: elbow joint and knee joint.
As an alternative embodiment, determining the bending angle of the first joint node includes:
the auxiliary node of the first joint node and the target data acquisition node corresponding to the first joint node are determined based on the inverse kinematics IK algorithm;
Acquiring a first distance between the first joint node and the auxiliary node, a second distance between the first joint node and the target data acquisition node and a third distance between the auxiliary node and the target data acquisition node; the positional relationship among the first joint node, the auxiliary node and the target data acquisition node is shown in fig. 3, wherein in fig. 3, the first joint node is p, the auxiliary node is p0, and the target data acquisition node is p1.
After the position relation among the first joint node, the auxiliary node and the target data acquisition node is determined, the bending angle theta of the first joint node is determined by using a formula (4):
θ = arccos((a² + b² - c²) / (2ab)) (4)
in equation (4), a is the first distance, b is the second distance, c is the third distance, and the first, second, and third distances may refer to fig. 3.
Here, the first joint nodes are different, and the corresponding auxiliary nodes and the target data acquisition nodes are also different. When the first joint node is an elbow joint, the auxiliary node is determined according to a root node and a first relative position, wherein the first relative position is the relative position between the root node and the shoulder joint;
When the first joint node is a knee joint, the auxiliary node is determined according to the root node and a second relative position, wherein the second relative position is the relative position between the root node and the hip joint; wherein the root node is a torso node.
For example, with continued reference to fig. 2, when the first joint node p is a knee joint, the auxiliary node is a hip joint p0 (the second relative position between p0 and the root node is unchanged), and the target data acquisition node is a first lower leg node 5, then when the knee joint is bent, the first joint node p, the auxiliary node p0, and the target data acquisition node may form a triangle, and at this time, the bending angle of the knee joint may be determined by formula (4).
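The law-of-cosines computation of formula (4) can be sketched as follows; the function name `bend_angle` is a hypothetical label for this sketch:

```python
import math

def bend_angle(a, b, c):
    """Bending angle theta at the first joint node p, from the triangle
    formed with the auxiliary node p0 and the target data acquisition node:
    a and b are the two distances from p, c is the opposite side
    (law of cosines, formula (4))."""
    return math.acos((a*a + b*b - c*c) / (2.0 * a * b))

# a 3-4-5 right triangle gives a 90-degree bend at the joint
theta = bend_angle(3.0, 4.0, 5.0)
```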
After the bending angle of the first joint node is determined, as an optional embodiment, the determining the spatial position and the spatial orientation of the first joint node according to the bending angle of the first joint node includes:
determining a bending surface where the first joint node is located, and determining a spatial position of the first joint node in the bending surface based on a first distance between the first joint node and an auxiliary node, a second distance between the first joint node and a target data acquisition node and a bending angle of the first joint node;
And determining the spatial orientation of the first joint node according to the position of the first joint node and the position of the auxiliary node.
For example, take the knee joint. Since the knee joint moves in three-dimensional space, even after its bending angle is determined its exact position cannot be known (referring to fig. 3, there are infinitely many solutions on the circle centered at p with radius r), so a reasonable bending plane of the knee joint must be determined first; in this application, the plane in which the thigh and the ankle face in a consistent direction is taken as the bending plane of the knee joint.
In the bending plane, the distance between the knee joint and the hip joint is constant, as is the distance between the knee joint and the first calf node 5. Since the bending angle is known, the spatial position of the first calf node 5 is known, the relative position of the hip joint with respect to the root node is fixed, and the spatial position of the root node has been determined, the spatial position of the hip joint is also known; therefore the spatial position of the knee joint can be determined.
When the spatial position of the hip joint and the spatial position of the knee joint are determined, the two points of the hip joint and the knee joint can form a vector, and the direction of the vector is the spatial orientation of the knee joint (the direction from the hip joint to the knee joint).
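The hip-to-knee vector construction described above can be sketched as follows; the function name `joint_orientation` and the sample coordinates are hypothetical:

```python
import numpy as np

def joint_orientation(parent_pos, joint_pos):
    """Spatial orientation of a joint taken as the unit vector pointing
    from its parent joint (e.g. the hip) to the joint itself (e.g. the knee)."""
    d = np.asarray(joint_pos, dtype=float) - np.asarray(parent_pos, dtype=float)
    return d / np.linalg.norm(d)

# hypothetical world-frame positions: hip above and behind the knee
hip = [0.0, 1.0, 0.0]
knee = [0.0, 0.55, 0.1]
direction = joint_orientation(hip, knee)
```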
S112, determining the spatial position of the second joint node according to the spatial position of the data acquisition node, wherein the second joint node comprises: neck, shoulder and hip joints;
Similarly, after the spatial position of the data acquisition nodes is determined, the spatial position of the root node is also obtained. For the same human body, the distance between each second joint node (neck, shoulder, and hip) and the root node is constant, and this relative position (relative offset and relative angle) can be obtained in advance. As an alternative embodiment, determining the spatial position of the second joint node according to the spatial position of the data acquisition node includes:
acquiring a second relative position between the hip joint and the root node, acquiring a third relative position between the neck joint and the root node, and acquiring a fourth relative position between the shoulder joint and the root node;
determining a spatial position of the hip joint based on the second relative position and the spatial position of the root node; determining a spatial position of the neck from the third relative position and the spatial position of the root node; and determining the spatial position of the shoulder joint according to the fourth relative position and the spatial position of the root node.
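One way to realize the root-plus-relative-offset computation above is to rotate the pre-measured body-frame offset by the root orientation. The quaternion-rotation helper and all names here are assumptions of this sketch, not the patent's stated implementation:

```python
import numpy as np

def quat_rotate(q, v):
    """Rotate vector v by the unit quaternion q = (w, x, y, z)."""
    w, x, y, z = q
    u = np.array([x, y, z])
    v = np.asarray(v, dtype=float)
    return v + 2.0 * np.cross(u, np.cross(u, v) + w * v)

def second_joint_position(root_pos, root_q, rel_offset):
    """Spatial position of a second joint node (neck/shoulder/hip) as the
    torso (root) position plus the pre-measured body-frame offset rotated
    into the world frame by the root orientation."""
    return np.asarray(root_pos, dtype=float) + quat_rotate(root_q, rel_offset)

# with an identity root orientation the joint sits at root + offset
hip_pos = second_joint_position([0.0, 0.0, 1.0], [1.0, 0.0, 0.0, 0.0], [0.1, 0.0, -0.4])
```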
S113, determining the spatial orientation of the second joint node according to the spatial position of the first joint node and the spatial position of the second joint node;
similarly, after the spatial position of the first joint node and the spatial position of the second joint node are determined, the spatial orientation of the second joint node may be determined according to the spatial position of the first joint node and the spatial position of the second joint node.
For example, when the second joint node is any one of the shoulder joints, after the spatial position of the shoulder joint is determined, since the spatial position of the forearm joint is also known, the two points may form a vector, and the spatial orientation of the shoulder joint is the direction of the vector (from the shoulder joint to the forearm joint).
S114, acquiring the attitude deviation of each data acquisition node, and correspondingly calibrating the space orientation of each data acquisition node and the space orientation of each joint node by using the attitude deviation;
after the pose information (spatial position and spatial orientation) of each joint node and the pose information of each data acquisition node are determined, the information needs to be converted into pose motion information in a human body model, so that the pose information needs to be correspondingly calibrated with a pre-established human body model.
In practical application, since the coordinate system of the sensor and the limb usually has a posture deviation, in order to improve capturing accuracy, before calibration, the posture deviation of each data acquisition node and each joint node is obtained, and the spatial orientation of each joint node and the spatial orientation of each data acquisition node are correspondingly calibrated by using the posture deviation.
Specifically, referring to fig. 4, the sensor is strapped to the limb; the left X1 axis is the spatial orientation of the joint node, and the right X axis is the spatial orientation of the data acquisition node. In practice, when there is a posture deviation, the X1 axis and the X axis do not fully coincide, and the deviation must be calibrated so that they do.
For each data acquisition node, the human body holds a "T"-shaped posture still for a period of time, and the quaternion q_sensor of the corresponding data acquisition node can be read from the sensor, so the attitude deviation q_bias of the data acquisition node can be determined according to formula (5):
q_bias = q_seg * q_sensor^(-1) (5)
in equation (5), q_seg is the spatial orientation of the corresponding limb when the human body maintains a "T" posture.
After the attitude deviation is determined, correspondingly calibrating the spatial orientation of each data acquisition node according to the formula (6):
q_mod=q_bias*q (6)
In the formula (6), q_mod is the spatial orientation of each data acquisition node after calibration, q is the spatial orientation of each data acquisition node before calibration, and q_bias is the posture deviation corresponding to each data acquisition node.
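Formulas (5) and (6) can be sketched together as follows; for unit quaternions the inverse is the conjugate, and all function names are hypothetical labels of this sketch:

```python
import numpy as np

def quat_mul(a, b):
    """Hamilton product of two quaternions in (w, x, y, z) order."""
    w1, x1, y1, z1 = a
    w2, x2, y2, z2 = b
    return np.array([
        w1*w2 - x1*x2 - y1*y2 - z1*z2,
        w1*x2 + x1*w2 + y1*z2 - z1*y2,
        w1*y2 - x1*z2 + y1*w2 + z1*x2,
        w1*z2 + x1*y2 - y1*x2 + z1*w2,
    ])

def quat_conj(q):
    """Conjugate; for a unit quaternion this equals the inverse."""
    return np.array([q[0], -q[1], -q[2], -q[3]])

def pose_bias(q_seg, q_sensor):
    """Formula (5): q_bias = q_seg * q_sensor^(-1), from the static T-pose."""
    return quat_mul(q_seg, quat_conj(q_sensor))

def calibrate(q_bias, q):
    """Formula (6): q_mod = q_bias * q."""
    return quat_mul(q_bias, q)

# a sensor reading rotated 90 degrees about z is pulled back onto the limb frame
q_seg = np.array([1.0, 0.0, 0.0, 0.0])
q_sensor = np.array([np.sqrt(0.5), 0.0, 0.0, np.sqrt(0.5)])
q_mod = calibrate(pose_bias(q_seg, q_sensor), q_sensor)
```

By construction, applying the bias to the same sensor reading recovers the limb orientation measured during the T-pose.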
After the spatial orientation of the data acquisition node is calibrated, the spatial orientation corresponding to the joint node is also calibrated because the sensor is tied to the limb.
S115, correspondingly calibrating the spatial position of each joint node, the spatial position of each data acquisition node, the spatial orientation of each joint node after calibration and the spatial orientation of each data acquisition node after calibration with a pre-established human body model;
after the spatial orientation of each data acquisition node and the spatial orientation of each joint node are calibrated, the spatial position of each joint node, the spatial position of each data acquisition node, the spatial orientation of each calibrated joint node and the spatial orientation of each calibrated data acquisition node are correspondingly calibrated with a pre-established human body model. Wherein, the human body model is a rigid body three-dimensional model, and can be shown in fig. 5.
S116, capturing the action gesture of the human body by using the calibrated human body model.
After calibration, the motion gesture of the human body can be captured by using the calibrated human body model, and the whole-body motion gesture of the human body can be reproduced in corresponding software.
Based on the same inventive concept, the application also provides a system for capturing human motion gestures, and the details are shown in the second embodiment.
Example two
The present embodiment provides a system for capturing a gesture of a human body, as shown in fig. 6, the system includes: a plurality of data acquisition modules, a data summarization module 51 and a terminal 52.
Before acquiring the raw data of each data acquisition node, the data acquisition modules of the present application need to determine the data acquisition nodes. To reduce the number of sensors used, the present application sets up 6 data acquisition nodes, including: a head node 1, a torso node 2, a first forearm node 3, a second forearm node 4, a first calf node 5, and a second calf node 6; each data acquisition node is shown by the circled marks in fig. 2.
After determining the data acquisition nodes, binding the data acquisition modules to the data acquisition nodes, wherein each data acquisition module comprises a six-axis IMU and a three-axis magnetic sensor, and each IMU comprises: a three-axis accelerometer and a three-axis gyroscope. Each data acquisition module is used for acquiring original data of a data acquisition node, each data acquisition module acquires the original data and transmits the original data to the data summarizing module 51 through a data bus, the data summarizing module is arranged on a trunk node, the data summarizing module 51 receives the original data and then transmits the original data to the terminal 52 in a wireless transmission mode, and the terminal can comprise hardware equipment such as an industrial personal computer and a computer. Wherein the raw data includes: acceleration data, angular velocity data, and magnetic field direction data.
Therefore, the data acquisition modules in the present application include 6 data acquisition modules, which are respectively: the first data acquisition module 53, the second data acquisition module 54, the third data acquisition module 55, the fourth data acquisition module 56, the fifth data acquisition module 57, and the sixth data acquisition module 58.
After the terminal 52 acquires each piece of raw data, the spatial orientation and spatial position of each data acquisition node can be determined according to the complementary filtering algorithm based on the raw data. As an optional embodiment, the calculating the spatial orientation of each data acquisition node according to the complementary filtering algorithm based on the raw data of each data acquisition node includes:
determining the spatial orientation q of each data acquisition node according to the formula (1):
q=q(t)+Tω(t)q(t) (1)
In formula (1), q(t) is the spatial orientation at the current sampling instant (initialized with a preset value), T is the sampling period, and ω(t) is the angular velocity.
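As an illustration (not from the patent), formula (1) can be implemented as a single Euler integration step of the gyroscope reading. Note that the standard quaternion kinematics include a factor of 1/2 that the compact notation of formula (1) omits; the sketch below assumes that convention, with quaternions in (w, x, y, z) order:

```python
import numpy as np

def q_mul(p, q):
    # Hamilton product of two quaternions stored as (w, x, y, z)
    w1, x1, y1, z1 = p
    w2, x2, y2, z2 = q
    return np.array([w1*w2 - x1*x2 - y1*y2 - z1*z2,
                     w1*x2 + x1*w2 + y1*z2 - z1*y2,
                     w1*y2 - x1*z2 + y1*w2 + z1*x2,
                     w1*z2 + x1*y2 - y1*x2 + z1*w2])

def integrate_orientation(q, omega, T):
    """One step of q <- q + T * q_dot, where q_dot = 0.5 * omega ⊗ q
    and omega is the angular velocity (rad/s) as a 3-vector."""
    w = np.array([0.0, *omega])            # angular velocity as a pure quaternion
    q_new = q + T * 0.5 * q_mul(w, q)      # Euler step of formula (1)
    return q_new / np.linalg.norm(q_new)   # renormalize to keep a unit quaternion
```

Iterating this at the IMU sample period T propagates the sensor's spatial orientation from its preset initial value; in a complementary filter, the accelerometer and magnetometer readings would then correct the drift of this gyroscope-only estimate.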
As an optional embodiment, for each data acquisition node, the determining, based on the raw data of each data acquisition node, the spatial location of each data acquisition node according to a complementary filtering algorithm includes:
determining the speed of the data acquisition node according to formula (2):

v_t = v_0 + ∫_{t0}^{t} a(τ)dτ (2)

After the speed of the data acquisition node is determined, determining the spatial position of the data acquisition node according to formula (3):

p_t = p_0 + ∫_{t0}^{t} v(τ)dτ (3)

In formulas (2) and (3), v_0 is the initial speed of the data acquisition node, a(τ) is the acceleration of the data acquisition node, p_0 is the initial spatial position of the data acquisition node, v(τ) is the speed of the data acquisition node (v_t and v(τ) denote the same quantity), p_t is the spatial position of the data acquisition node, and τ is a time parameter with t_0 < τ < t.
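Formulas (2) and (3) discretize naturally into cumulative sums. The sketch below is not from the patent; it assumes evenly spaced, gravity-compensated acceleration samples and a simple rectangle rule, shown in one dimension for brevity:

```python
import numpy as np

def integrate_position(a, T, v0=0.0, p0=0.0):
    """a: acceleration samples at period T (already gravity-compensated).
    Returns the velocity and position series per formulas (2) and (3)."""
    v = v0 + T * np.cumsum(a)   # v_t = v0 + integral of a(tau), formula (2)
    p = p0 + T * np.cumsum(v)   # p_t = p0 + integral of v(tau), formula (3)
    return v, p
```

As general background, double integration of noisy accelerometer data drifts quickly, which is why inertial position estimates are usually fused with additional constraints such as the fixed limb lengths used elsewhere in this application.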
After the spatial orientation and spatial position of the individual data acquisition nodes are determined, the spatial position and spatial orientation of the root and end nodes of the manikin can be obtained. Here, the root node is a torso node, and the end nodes include: a head node, a first forearm node, a second forearm node, a first calf node, and a second calf node.
The terminal 52 of the present application first determines a bending angle of a first joint node, and determines a spatial position and a spatial orientation of the first joint node according to the bending angle of the first joint node, where the first joint node includes: elbow joint and knee joint.
As an alternative embodiment, the determining the bending angle of the first joint node includes:
Determining an auxiliary node of the first joint node and a target data acquisition node corresponding to the first joint node based on an inverse kinematics IK algorithm;
acquiring a first distance between the first joint node and the auxiliary node, a second distance between the first joint node and the target data acquisition node and a third distance between the auxiliary node and the target data acquisition node; the positional relationship among the first joint node, the auxiliary node and the target data acquisition node is shown in fig. 3, wherein in fig. 3, the first joint node is p, the auxiliary node is p0, and the target data acquisition node is p1.
After the position relation among the first joint node, the auxiliary node and the target data acquisition node is determined, the bending angle θ of the first joint node is determined by using formula (4):

θ = arccos((a² + b² − c²)/(2ab)) (4)

In formula (4), a is the first distance, b is the second distance, and c is the third distance; the three distances may refer to fig. 3.
Here, different first joint nodes correspond to different auxiliary nodes and target data acquisition nodes. When the first joint node is an elbow joint, the auxiliary node is determined according to a root node and a first relative position, wherein the first relative position is the relative position between the root node and the shoulder joint;
When the first joint node is a knee joint, the auxiliary node is determined according to the root node and a second relative position, wherein the second relative position is the relative position between the root node and the hip joint; wherein the root node is a torso node.
For example, with continued reference to fig. 2, when the first joint node p is a knee joint, the auxiliary node is a hip joint p0 (the second relative position between p0 and the root node is unchanged), and the target data acquisition node is a first lower leg node 5, then when the knee joint is bent, the first joint node p, the auxiliary node p0, and the target data acquisition node may form a triangle, and at this time, the bending angle of the knee joint may be determined by formula (4).
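The law-of-cosines computation of formula (4) can be sketched as follows (an illustration, not patent text). Here a and b are the fixed limb lengths from the joint to the auxiliary node and to the target data acquisition node, and c is the measured distance between those two nodes:

```python
import numpy as np

def bending_angle(p0, p1, a, b):
    """Formula (4): interior angle at the joint p of the triangle (p, p0, p1).
    p0: auxiliary node position; p1: target data acquisition node position."""
    c = np.linalg.norm(np.asarray(p1, float) - np.asarray(p0, float))
    cos_theta = (a*a + b*b - c*c) / (2.0*a*b)
    return np.arccos(np.clip(cos_theta, -1.0, 1.0))  # clip guards float rounding
```

For example, limb lengths a = 3 and b = 4 with a measured node distance c = 5 give a right-angle bend of π/2.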
After the bending angle of the first joint node is determined, as an optional embodiment, the determining the spatial position and the spatial orientation of the first joint node according to the bending angle of the first joint node includes:
determining a bending surface where the first joint node is located, and determining a spatial position of the first joint node in the bending surface based on a first distance between the first joint node and an auxiliary node, a second distance between the first joint node and a target data acquisition node and a bending angle of the first joint node;
And determining the spatial orientation of the first joint node according to the position of the first joint node and the position of the auxiliary node.
For example, taking the knee joint: since the knee joint moves in three-dimensional space, its exact position is still not known even after its bending angle is determined (referring to fig. 3, there are infinitely many solutions on the circle centered at p with radius r). A reasonable bending plane of the knee joint must therefore be determined first; in this application, the plane in which the thigh and the ankle share a consistent orientation is taken as the bending plane of the knee joint.
On the bending plane, the distance between the knee joint and the hip joint is constant, as is the distance between the knee joint and the first calf node 5. The bending angle is known, the spatial position of the first calf node 5 is known, and the relative position of the hip joint with respect to the root node is fixed; since the spatial position of the root node has been determined, the spatial position of the hip joint is also known, and therefore the spatial position of the knee joint can be determined.
When the spatial position of the hip joint and the spatial position of the knee joint are determined, the two points of the hip joint and the knee joint can form a vector, and the direction of the vector is the spatial orientation of the knee joint (the direction from the hip joint to the knee joint).
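Deriving a joint's spatial orientation from two known positions, as described for the hip-to-knee vector, reduces to a normalized difference (an illustrative sketch, not patent text):

```python
import numpy as np

def segment_orientation(p_from, p_to):
    """Unit vector from one joint to the next, e.g. hip joint -> knee joint,
    giving the spatial orientation of the connecting limb segment."""
    d = np.asarray(p_to, float) - np.asarray(p_from, float)
    return d / np.linalg.norm(d)
```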
After the spatial position of the first joint node is determined, determining the spatial position of the second joint node according to the spatial position of the data acquisition node, wherein the second joint node comprises: neck, shoulder and hip joints;
Similarly, after the spatial positions of the data acquisition nodes are determined, the spatial position of the root node is also obtained. For the same human body, the distance between each second joint node (neck, shoulder and hip) and the root node is constant, and this relative position (relative position and relative angle) can be obtained in advance. As an alternative embodiment, determining the spatial position of the second joint node according to the spatial position of the data acquisition node includes:
acquiring a second relative position between the hip joint and the root node, acquiring a third relative position between the neck joint and the root node, and acquiring a fourth relative position between the shoulder joint and the root node;
determining a spatial position of the hip joint based on the second relative position and the spatial position of the root node; determining a spatial position of the neck from the third relative position and the spatial position of the root node; and determining the spatial position of the shoulder joint according to the fourth relative position and the spatial position of the root node.
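The fixed offsets of the hip, neck and shoulder joints relative to the root (torso) node suggest the following sketch. Rotating the torso-frame offset by the root's orientation quaternion is an assumption made for illustration — the patent only states that the relative positions are obtained in advance:

```python
import numpy as np

def q_mul(p, q):
    # Hamilton product of two quaternions stored as (w, x, y, z)
    w1, x1, y1, z1 = p
    w2, x2, y2, z2 = q
    return np.array([w1*w2 - x1*x2 - y1*y2 - z1*z2,
                     w1*x2 + x1*w2 + y1*z2 - z1*y2,
                     w1*y2 - x1*z2 + y1*w2 + z1*x2,
                     w1*z2 + x1*y2 - y1*x2 + z1*w2])

def second_joint_position(p_root, q_root, offset):
    """Rotate the torso-frame offset by the root orientation (v' = q v q*)
    and add it to the root node's spatial position."""
    v = np.array([0.0, *offset])                       # offset as a pure quaternion
    q_conj = q_root * np.array([1.0, -1.0, -1.0, -1.0])
    rotated = q_mul(q_mul(q_root, v), q_conj)[1:]      # drop the scalar part
    return np.asarray(p_root, float) + rotated
```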
Similarly, after the spatial position of the first joint node and the spatial position of the second joint node are determined, the spatial orientation of the second joint node may be determined according to the spatial position of the first joint node and the spatial position of the second joint node.
For example, when the second joint node is any one of the shoulder joints, after the spatial position of the shoulder joint is determined, since the spatial position of the forearm joint is also known, the two points may form a vector, and the spatial orientation of the shoulder joint is the direction of the vector (from the shoulder joint to the forearm joint).
After determining the posture information (spatial position and spatial orientation) of each joint node and the posture information of the data acquisition node, the terminal 52 needs to convert the information into posture action information in the human body model, so that the posture information needs to be calibrated correspondingly to the pre-established human body model.
In practical application, since the coordinate system of the sensor and the limb usually has a posture deviation, in order to improve capturing accuracy, before calibration, the posture deviation of each data acquisition node and each joint node is obtained, and the spatial orientation of each joint node and the spatial orientation of each data acquisition node are correspondingly calibrated by using the posture deviation.
Specifically, referring to fig. 4, the sensor is tied to the limb; the left X1 axis is the spatial orientation of the joint node, and the right X axis is the spatial orientation of the data acquisition node. In practical application, when there is a posture deviation, the X1 axis and the X axis do not completely coincide, so the deviation needs to be calibrated to make the X1 axis and the X axis coincide.
For each data acquisition node, the human body holds the 'T' pose stationary for a period of time, and the quaternion q_sensor of the corresponding data acquisition node is read from the sensor; the posture deviation q_bias of the data acquisition node can then be determined according to formula (5):
q_bias = q_seg * q_sensor⁻¹ (5)
in equation (5), q_seg is the spatial orientation of the corresponding limb when the human body maintains a "T" posture.
After the attitude deviation is determined, correspondingly calibrating the spatial orientation of each data acquisition node according to the formula (6):
q_mod=q_bias*q (6)
in the formula (6), q_mod is the spatial orientation of each data acquisition node after calibration, q is the spatial orientation of each data acquisition node before calibration, and q_bias is the posture deviation corresponding to each data acquisition node.
After the spatial orientation of the data acquisition node is calibrated, the spatial orientation corresponding to the joint node is also calibrated because the sensor is tied to the limb.
After the spatial orientation of each data acquisition node and the spatial orientation of each joint node are calibrated, the terminal 52 performs corresponding calibration on the spatial position of each joint node, the spatial position of each data acquisition node, the spatial orientation of each joint node after calibration, and the spatial orientation of each data acquisition node after calibration with a pre-established human model. Wherein, the human body model is a rigid body three-dimensional model, and can be shown in fig. 5.
After calibration, the terminal 52 may capture the motion gesture of the human body using the calibrated human body model, and reproduce the whole-body motion gesture of the human body in corresponding software.
The method and the system for capturing the human body action gesture provided by the embodiment of the invention have the following beneficial effects:
the invention provides a method and a system for capturing human motion gestures, wherein the method comprises the following steps: acquiring the original data of each data acquisition node, and determining the spatial orientation and the spatial position of each data acquisition node according to a complementary filtering algorithm based on the original data of each data acquisition node, wherein the data acquisition node comprises: a head node, a torso node, a first forearm node, a second forearm node, a first calf node, and a second calf node; the raw data includes: acceleration data, angular velocity data, and magnetic field direction data; determining a bending angle of a first joint node, and determining a spatial position and a spatial orientation of the first joint node according to the bending angle of the first joint node, wherein the first joint node comprises: elbow and knee joints; determining a spatial position of the second joint node according to the spatial position of the data acquisition node, wherein the second joint node comprises: neck, shoulder and hip joints; determining the spatial orientation of the second joint node according to the spatial position of the first joint node and the spatial position of the second joint node; acquiring the attitude deviation of each data acquisition node, and correspondingly calibrating the space orientation of each data acquisition node and the space orientation of each joint node by using the attitude deviation; correspondingly calibrating the spatial positions of the joint nodes, the spatial positions of the data acquisition nodes, the spatial orientations of the calibrated joint nodes and the spatial orientations of the calibrated data acquisition nodes with a pre-established human body model; capturing the action gesture of the human body by using the calibrated human body model; therefore, the gesture of the first joint node of the human body is calculated firstly, and then the gesture of the second 
joint node is calculated according to the gesture of the first joint node, so that the capture of the human body action gesture can be realized by only setting 6 data acquisition points, the use quantity of sensors is effectively reduced, and the human body action gesture capture cost is reduced; and the IK algorithm converts the three-dimensional space solution into joint space solution, so that the solution time of gesture motion can be reduced, and the real-time performance of motion capture is improved.
The above description is not intended to limit the scope of the invention, but is intended to cover any modifications, equivalents, and improvements within the spirit and principles of the invention.

Claims (4)

1. A human motion gesture capturing method, the method comprising:
acquiring the original data of each data acquisition node, and determining the spatial orientation and the spatial position of each data acquisition node according to a complementary filtering algorithm based on the original data of each data acquisition node, wherein the data acquisition node comprises: a head node, a torso node, a first forearm node, a second forearm node, a first calf node, and a second calf node; the raw data includes: acceleration data, angular velocity data, and magnetic field direction data;
determining a bending angle of a first joint node, and determining a spatial position and a spatial orientation of the first joint node according to the bending angle of the first joint node, wherein the first joint node comprises: elbow and knee joints;
determining a spatial position of a second joint node according to the spatial position of the data acquisition node, wherein the second joint node comprises: neck, shoulder and hip joints;
Determining the spatial orientation of the second joint node according to the spatial position of the first joint node and the spatial position of the second joint node;
acquiring the attitude deviation of each data acquisition node, and correspondingly calibrating the space orientation of each data acquisition node and the space orientation of each joint node by using the attitude deviation;
correspondingly calibrating the spatial position of each joint node, the spatial position of each data acquisition node, the spatial orientation of each joint node after calibration and the spatial orientation of each data acquisition node after calibration with a pre-established human body model;
capturing the action gesture of the human body by using the calibrated human body model; wherein,
the determining the bending angle of the first joint node includes:
determining an auxiliary node of the first joint node and a target data acquisition node corresponding to the first joint node based on an inverse kinematics IK algorithm;
acquiring a first distance between the first joint node and the auxiliary node, a second distance between the first joint node and the target data acquisition node and a third distance between the auxiliary node and the target data acquisition node;
using the formula θ = arccos((a² + b² − c²)/(2ab)), determining the bending angle θ of the first joint node; wherein a is the first distance, b is the second distance, and c is the third distance;
when the first joint node is an elbow joint, the auxiliary node is determined according to a root node and a first relative position, wherein the first relative position is the relative position between the root node and a shoulder joint;
when the first joint node is a knee joint, the auxiliary node is determined according to the root node and a second relative position, wherein the second relative position is the relative position between the root node and the hip joint; wherein the root node is a torso node;
the determining the spatial position of each data acquisition node according to the complementary filtering algorithm based on the original data of each data acquisition node comprises the following steps:
according to the formula v_t = v_0 + ∫_{t0}^{t} a(τ)dτ, determining the speed of the data acquisition node;
according to the formula p_t = p_0 + ∫_{t0}^{t} v(τ)dτ, determining the spatial position of the data acquisition node; wherein v_0 is the initial speed of the data acquisition node, a(τ) is the acceleration of the data acquisition node, p_0 is the initial spatial position of the data acquisition node, v(τ) is the speed of the data acquisition node, p_t is the spatial position of the data acquisition node, and τ is a time parameter with t_0 < τ < t.
2. The method of claim 1, wherein the determining the first joint node spatial position and spatial orientation from the angle of curvature of the first joint node comprises:
determining a bending surface where the first joint node is located, and determining a spatial position of the first joint node in the bending surface based on a first distance between the first joint node and an auxiliary node, a second distance between the first joint node and a target data acquisition node and a bending angle of the first joint node;
and determining the spatial orientation of the first joint node according to the position of the first joint node and the position of the auxiliary node.
3. The method of claim 1, wherein said correspondingly calibrating the spatial orientation of each of said data acquisition nodes with said attitude deviation comprises:
according to the formula q_mod = q_bias * q, correspondingly calibrating the spatial orientation of each data acquisition node; wherein q_mod is the spatial orientation of each data acquisition node after calibration, q is the spatial orientation of each data acquisition node before calibration, and q_bias is the attitude deviation corresponding to each data acquisition node.
4. A human motion gesture capture system, the system comprising:
the data acquisition units are used for acquiring the original data of each data acquisition node; the raw data includes: acceleration data, angular velocity data, and magnetic field direction data;
the terminal is used for acquiring the original data, determining the space orientation and the space position of each data acquisition node according to a complementary filtering algorithm based on the original data, and the data acquisition nodes comprise: a head node, a torso node, a first forearm node, a second forearm node, a first calf node, and a second calf node;
determining a bending angle of a first joint node, and determining a spatial position and a spatial orientation of the first joint node according to the bending angle of the first joint node, wherein the first joint node comprises: elbow and knee joints;
determining a spatial position of a second joint node according to the spatial position of the data acquisition node, wherein the second joint node comprises: neck, shoulder and hip joints;
determining the spatial orientation of the second joint node according to the spatial position of the first joint node and the spatial position of the second joint node;
Acquiring the attitude deviation of each data acquisition node, and correspondingly calibrating the space orientation of each data acquisition node and the space orientation of each joint node by using the attitude deviation;
correspondingly calibrating the spatial position of each joint node, the spatial position of each data acquisition node, the spatial orientation of each joint node after calibration and the spatial orientation of each data acquisition node after calibration with a pre-established human body model;
the method is used for capturing the action gesture of the human body by using the calibrated human body model; wherein,
the terminal is specifically used for:
determining an auxiliary node of the first joint node and a target data acquisition point corresponding to the first joint node based on an inverse kinematics IK algorithm, wherein the first joint node comprises: elbow and knee joints;
acquiring a first distance between the first joint node and the auxiliary node, a second distance between the first joint node and the target data acquisition node and a third distance between the auxiliary node and the target data acquisition node;
using the formula θ = arccos((a² + b² − c²)/(2ab)), determining the bending angle θ of the first joint node; wherein a is the first distance, b is the second distance, and c is the third distance;
When the first joint node is an elbow joint, the auxiliary node is determined according to a root node and a first relative position, wherein the first relative position is the relative position between the root node and a shoulder joint;
when the first joint node is a knee joint, the auxiliary node is determined according to the root node and a second relative position, wherein the second relative position is the relative position between the root node and the hip joint; wherein the root node is a torso node;
the determining the spatial position of each data acquisition node according to the complementary filtering algorithm based on the original data of each data acquisition node comprises the following steps:
according to the formula v_t = v_0 + ∫_{t0}^{t} a(τ)dτ, determining the speed of the data acquisition node;
according to the formula p_t = p_0 + ∫_{t0}^{t} v(τ)dτ, determining the spatial position of the data acquisition node; wherein v_0 is the initial speed of the data acquisition node, a(τ) is the acceleration of the data acquisition node, p_0 is the initial spatial position of the data acquisition node, v(τ) is the speed of the data acquisition node, p_t is the spatial position of the data acquisition node, and τ is a time parameter with t_0 < τ < t.
CN201911394224.4A 2019-12-30 2019-12-30 A method and system for capturing human body gestures Active CN111158482B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911394224.4A CN111158482B (en) 2019-12-30 2019-12-30 A method and system for capturing human body gestures

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201911394224.4A CN111158482B (en) 2019-12-30 2019-12-30 A method and system for capturing human body gestures

Publications (2)

Publication Number Publication Date
CN111158482A CN111158482A (en) 2020-05-15
CN111158482B true CN111158482B (en) 2023-06-27

Family

ID=70559175

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911394224.4A Active CN111158482B (en) 2019-12-30 2019-12-30 A method and system for capturing human body gestures

Country Status (1)

Country Link
CN (1) CN111158482B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113305830B (en) * 2021-04-28 2022-08-16 吉林大学 Humanoid robot action system based on human body posture control and control method
CN113505637A (en) * 2021-05-27 2021-10-15 成都威爱新经济技术研究院有限公司 Real-time virtual anchor motion capture method and system for live streaming

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103279186A (en) * 2013-05-07 2013-09-04 兰州交通大学 Multiple-target motion capturing system integrating optical localization and inertia sensing
CN104856684A (en) * 2015-04-10 2015-08-26 深圳市虚拟现实科技有限公司 Moving object acquisition method and system
CN109000633A (en) * 2017-06-06 2018-12-14 大连理工大学 Human body attitude motion capture algorithm design based on isomeric data fusion
WO2019028650A1 (en) * 2017-08-08 2019-02-14 方超 Gesture acquisition system
FR3074676A1 (en) * 2017-12-07 2019-06-14 Cr Tools ASSEMBLY AND DEVICES FOR MEASURING BODY POSTURE PARAMETERS OF AN INDIVIDUAL

Family Cites Families (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR20150055958A (en) * 2013-11-14 2015-05-22 삼성전자주식회사 Wearable robot and control method for the same
JP2016206081A (en) * 2015-04-24 2016-12-08 株式会社東芝 Operation inference device and operation inference method
US10222450B2 (en) * 2015-10-12 2019-03-05 Xsens Holding B.V. Integration of inertial tracking and position aiding for motion capture
EP3376414B1 (en) * 2015-11-13 2021-02-24 Innomotion Incorporation (Shanghai) Joint movement detection system and method, and dynamic assessment method and system for knee joint
US10888735B2 (en) * 2016-10-07 2021-01-12 William W. Clark Calibration of initial orientation and position of sports equipment and body segments for inertial sensors
CN108621161B (en) * 2018-05-08 2021-03-02 中国人民解放军国防科技大学 State estimation method of footed robot body based on multi-sensor information fusion

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103279186A (en) * 2013-05-07 2013-09-04 兰州交通大学 Multiple-target motion capturing system integrating optical localization and inertia sensing
CN104856684A (en) * 2015-04-10 2015-08-26 深圳市虚拟现实科技有限公司 Moving object acquisition method and system
CN109000633A (en) * 2017-06-06 2018-12-14 大连理工大学 Human body attitude motion capture algorithm design based on isomeric data fusion
WO2019028650A1 (en) * 2017-08-08 2019-02-14 方超 Gesture acquisition system
FR3074676A1 (en) * 2017-12-07 2019-06-14 Cr Tools ASSEMBLY AND DEVICES FOR MEASURING BODY POSTURE PARAMETERS OF AN INDIVIDUAL

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
"Study on the restoration of deep-layer image loss in multiphoton imaging"; Liu Qian; Acta Photonica Sinica; full text *

Also Published As

Publication number Publication date
CN111158482A (en) 2020-05-15

Similar Documents

Publication Publication Date Title
CN108762495B (en) Virtual reality driving method and virtual reality system based on arm motion capture
CN107314778B (en) Calibration method, device and system for relative attitude
CN107330967B (en) Rider motion posture capturing and three-dimensional reconstruction system based on inertial sensing technology
CN107833271B (en) Skeleton redirection method and device based on Kinect
WO2016183812A1 (en) Mixed motion capturing system and method
CN103136912A (en) Moving posture capture system
JP2010534316A (en) System and method for capturing movement of an object
CN201431466Y (en) Human motion capture and thee-dimensional representation system
KR101080078B1 (en) Motion Capture System using Integrated Sensor System
CN106873787A (en) A kind of gesture interaction system and method for virtual teach-in teaching
JP6288858B2 (en) Method and apparatus for estimating position of optical marker in optical motion capture
CN110609621B (en) Gesture calibration method and human motion capture system based on microsensor
CN111158482B (en) A method and system for capturing human body gestures
CN109000633A (en) Human body attitude motion capture algorithm design based on isomeric data fusion
CN108279773B (en) A data glove based on MARG sensor and magnetic field positioning technology
CN106227368B (en) A kind of human body joint angle calculation method and device
KR20120059824A (en) A method and system for acquiring real-time motion information using a complex sensor
CN108621164A (en) Taiji push hands machine people based on depth camera
US20180216959A1 (en) A Combined Motion Capture System
CN109453505B (en) Multi-joint tracking method based on wearable device
CN109343713B (en) Human body action mapping method based on inertial measurement unit
CN112945231B (en) A method, device, equipment and readable storage medium for aligning IMU with rigid body posture
CN115919250A (en) A Human Body Dynamic Joint Angle Measuring System
KR102172362B1 (en) Motion capture apparatus using movement of human centre of gravity and method thereof
CN110928420B (en) Human body motion gesture capturing method and system

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant