Disclosure of Invention
Aiming at the problems existing in the prior art, the embodiments of the invention provide a human body motion gesture capturing method and system, which solve the technical problem in the prior art that capturing human body motion gestures requires a large number of sensors, resulting in high capturing cost.
The technical scheme of the invention is realized as follows:
the application provides a human motion gesture capturing method, which comprises the following steps:
acquiring the original data of each data acquisition node, and determining the spatial orientation and the spatial position of each data acquisition node according to a complementary filtering algorithm based on the original data of each data acquisition node, wherein the data acquisition node comprises: a head node, a torso node, a first forearm node, a second forearm node, a first calf node, and a second calf node; the raw data includes: acceleration data, angular velocity data, and magnetic field direction data;
Determining a bending angle of a first joint node, and determining a spatial position and a spatial orientation of the first joint node according to the bending angle of the first joint node, wherein the first joint node comprises: elbow and knee joints;
determining a spatial position of the second joint node according to the spatial position of the data acquisition node, wherein the second joint node comprises: neck, shoulder and hip joints;
determining the spatial orientation of the second joint node according to the spatial position of the first joint node and the spatial position of the second joint node;
acquiring the attitude deviation of each data acquisition node, and correspondingly calibrating the space orientation of each data acquisition node and the space orientation of each joint node by using the attitude deviation;
correspondingly calibrating the spatial position of each joint node, the spatial position of each data acquisition node, the spatial orientation of each joint node after calibration and the spatial orientation of each data acquisition node after calibration with a pre-established human body model;
and capturing the action gesture of the human body by using the calibrated human body model.
Optionally, the calculating the spatial orientation of each data acquisition node according to the complementary filtering algorithm based on the raw data of each data acquisition node includes:
Determining the spatial orientation q of each data acquisition node according to the formula q = q(t) + Tω(t)q(t); wherein q(t) is a preset initial value of the spatial orientation, T is the sampling period, and ω(t) is the angular velocity.
Optionally, the determining, based on the raw data of each data acquisition node, the spatial position of each data acquisition node according to a complementary filtering algorithm includes:
determining the speed of the data acquisition node according to the formula v(t) = v_0 + ∫_{t_0}^{t} a(τ)dτ;
determining the spatial position of the data acquisition node according to the formula p_t = p_0 + ∫_{t_0}^{t} v(τ)dτ;
wherein v_0 is the initial speed of the data acquisition node, a(τ) is the acceleration of the data acquisition node, p_0 is the initial spatial position of the data acquisition node, v(τ) is the speed of the data acquisition node, p_t is the spatial position of the data acquisition node, τ is a time parameter, and t_0 < τ < t.
Optionally, the determining the bending angle of the first joint node includes:
determining an auxiliary node of the first joint node and a target data acquisition node corresponding to the first joint node based on an inverse kinematics (IK) algorithm;
acquiring a first distance between the first joint node and the auxiliary node, a second distance between the first joint node and the target data acquisition node and a third distance between the auxiliary node and the target data acquisition node;
Using the formula θ = arccos((a² + b² - c²) / (2ab)), determining the bending angle θ of the first joint node; wherein a is the first distance, b is the second distance, and c is the third distance.
Optionally, when the first joint node is an elbow joint, the auxiliary node is determined according to a root node and a first relative position, and the first relative position is a relative position between the root node and a shoulder joint;
when the first joint node is a knee joint, the auxiliary node is determined according to the root node and a second relative position, wherein the second relative position is the relative position between the root node and the hip joint; wherein the root node is a torso node.
Optionally, the determining the spatial position and the spatial orientation of the first joint node according to the bending angle of the first joint node includes:
determining a bending surface where the first joint node is located, and determining a spatial position of the first joint node in the bending surface based on a first distance between the first joint node and an auxiliary node, a second distance between the first joint node and a target data acquisition node and a bending angle of the first joint node;
And determining the spatial orientation of the first joint node according to the position of the first joint node and the position of the auxiliary node.
Optionally, the calibrating the spatial orientation of each data acquisition node by using the posture deviation includes:
calibrating the spatial orientation of each data acquisition node correspondingly according to the formula q_mod = q_bias * q; wherein q_mod is the spatial orientation of each data acquisition node after calibration, q is the spatial orientation of each data acquisition node before calibration, and q_bias is the attitude deviation corresponding to each data acquisition node.
The application also provides a human motion gesture capture system, the system comprising:
the data acquisition units are used for acquiring the original data of each data acquisition node; the raw data includes: acceleration data, angular velocity data, and magnetic field direction data;
the terminal is used for acquiring the original data, determining the space orientation and the space position of each data acquisition node according to a complementary filtering algorithm based on the original data, and the data acquisition nodes comprise: a head node, a torso node, a first forearm node, a second forearm node, a first calf node, and a second calf node;
Determining a bending angle of a first joint node, and determining a spatial position and a spatial orientation of the first joint node according to the bending angle of the first joint node, wherein the first joint node comprises: elbow and knee joints;
determining a spatial position of the second joint node according to the spatial position of the data acquisition node, wherein the second joint node comprises: neck, shoulder and hip joints;
determining the spatial orientation of the second joint node according to the spatial position of the first joint node and the spatial position of the second joint node;
acquiring the attitude deviation of each data acquisition node, and correspondingly calibrating the space orientation of each data acquisition node and the space orientation of each joint node by using the attitude deviation;
correspondingly calibrating the spatial position of each joint node, the spatial position of each data acquisition node, the spatial orientation of each joint node after calibration and the spatial orientation of each data acquisition node after calibration with a pre-established human body model;
and for capturing the action gesture of the human body by using the calibrated human body model.
Optionally, the first determining unit is specifically configured to:
Determining the spatial orientation q of each data acquisition node according to the formula q = q(t) + Tω(t)q(t);
determining the speed of the data acquisition node according to the formula v(t) = v_0 + ∫_{t_0}^{t} a(τ)dτ;
determining the spatial position of the data acquisition node according to the formula p_t = p_0 + ∫_{t_0}^{t} v(τ)dτ;
wherein q(t) is a preset initial value of the spatial orientation, T is the sampling period, ω(t) is the angular velocity, v_0 is the initial speed of the data acquisition node, a(τ) is the acceleration of the data acquisition node, p_0 is the initial spatial position of the data acquisition node, v(τ) is the speed of the data acquisition node, p_t is the spatial position of the data acquisition node, τ is a time parameter, and t_0 < τ < t.
Optionally, the second determining unit is specifically configured to:
determining an auxiliary node of the first joint node and a target data acquisition node corresponding to the first joint node based on an inverse kinematics (IK) algorithm, wherein the first joint node comprises: elbow and knee joints;
acquiring a first distance between the first joint node and the auxiliary node, a second distance between the first joint node and the target data acquisition node and a third distance between the auxiliary node and the target data acquisition node;
Using the formula θ = arccos((a² + b² - c²) / (2ab)), determining the bending angle θ of the first joint node; wherein a is the first distance, b is the second distance, and c is the third distance.
The invention provides a method and a system for capturing human motion gestures, wherein the method comprises the following steps: acquiring the original data of each data acquisition node, and determining the spatial orientation and the spatial position of each data acquisition node according to a complementary filtering algorithm based on the original data of each data acquisition node, wherein the data acquisition node comprises: a head node, a torso node, a first forearm node, a second forearm node, a first calf node, and a second calf node; the raw data includes: acceleration data, angular velocity data, and magnetic field direction data; determining a bending angle of a first joint node, and determining a spatial position and a spatial orientation of the first joint node according to the bending angle of the first joint node, wherein the first joint node comprises: elbow and knee joints; determining a spatial position of the second joint node according to the spatial position of the data acquisition node, wherein the second joint node comprises: neck, shoulder and hip joints; determining the spatial orientation of the second joint node according to the spatial position of the first joint node and the spatial position of the second joint node; acquiring the attitude deviation of each data acquisition node, and correspondingly calibrating the space orientation of each data acquisition node and the space orientation of each joint node by using the attitude deviation; correspondingly calibrating the spatial position of each joint node, the spatial position of each data acquisition node, the spatial orientation of each joint node after calibration and the spatial orientation of each data acquisition node after calibration with a pre-established human body model; capturing the action gesture of the human body by using the calibrated human body model; therefore, the gesture of the first joint node of the human body is calculated firstly, and then the gesture of the 
second joint node is then calculated from the gesture of the first joint node; as a result, capture of the human body action gesture can be realized with only 6 data acquisition nodes, which effectively reduces the number of sensors used and lowers the cost of capturing human body action gestures.
Detailed Description
In order to solve the technical problem in the prior art that capturing human action gestures requires a relatively large number of sensors, leading to high capture cost, the application provides a human motion gesture capturing method and system.
The technical scheme of the invention is further described in detail through the attached drawings and specific embodiments.
Example 1
The embodiment provides a human motion gesture capturing method, which is characterized by comprising the following steps:
s110, acquiring original data of each data acquisition node, and determining the spatial orientation and the spatial position of each data acquisition node according to a complementary filtering algorithm;
Before the original data of each data acquisition node are acquired, the data acquisition node needs to be determined, and in order to reduce the use quantity of the sensors, 6 data acquisition nodes are arranged, and the data acquisition node comprises: a head node 1, a torso node 2, a first forearm node 3, a second forearm node 4, a first calf node 5, and a second calf node 6; each data acquisition node is shown by the reference numeral in fig. 2.
After determining the data acquisition nodes, binding the data acquisition modules to the data acquisition nodes, wherein each data acquisition module comprises a six-axis IMU and a three-axis magnetic sensor, and each IMU comprises: a three-axis accelerometer and a three-axis gyroscope. After each data acquisition module acquires original data, the original data are transmitted to a data summarizing module through a data bus, the data summarizing module is arranged on a trunk node, after the data summarizing module receives the original data, the original data are transmitted to a terminal through a wireless transmission mode, and the terminal can comprise hardware equipment such as an industrial personal computer, a computer and the like.
The data acquisition module is used for acquiring the original data of the data node, and the original data comprises: acceleration data, angular velocity data, and magnetic field direction data.
After the original data is obtained, the spatial orientation and the spatial position of each data acquisition node can be determined according to a complementary filtering algorithm based on the original data. As an optional embodiment, the calculating the spatial orientation of each data acquisition node according to the complementary filtering algorithm based on the raw data of each data acquisition node includes:
determining the spatial orientation q of each data acquisition node according to the formula (1):
q=q(t)+Tω(t)q(t) (1)
in formula (1), q (T) is a preset initial value of spatial orientation, where T is a sampling period, and ω (T) is an angular velocity.
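As an illustrative sketch only (not part of the claimed method), the update of formula (1) can be written in a few lines of Python. The quaternion convention (w, x, y, z), the Hamilton-product helper, and all variable names here are assumptions; note also that textbook attitude kinematics include a factor 1/2 that the patent's formula omits:

```python
import numpy as np

def quat_mul(a, b):
    """Hamilton product of two quaternions in (w, x, y, z) order."""
    w1, x1, y1, z1 = a
    w2, x2, y2, z2 = b
    return np.array([
        w1*w2 - x1*x2 - y1*y2 - z1*z2,
        w1*x2 + x1*w2 + y1*z2 - z1*y2,
        w1*y2 - x1*z2 + y1*w2 + z1*x2,
        w1*z2 + x1*y2 - y1*x2 + z1*w2,
    ])

def orientation_update(q, omega, T):
    """One step of formula (1): q <- q + T * omega ⊗ q, where the angular
    velocity omega (rad/s) is embedded as the pure quaternion (0, wx, wy, wz).
    The result is renormalized so it stays a unit quaternion."""
    omega_q = np.array([0.0, *omega])
    q_new = q + T * quat_mul(omega_q, q)
    return q_new / np.linalg.norm(q_new)

q0 = np.array([1.0, 0.0, 0.0, 0.0])            # identity orientation
q1 = orientation_update(q0, [0.0, 0.0, 0.1], 0.01)  # small yaw-rate step
```

In a real pipeline the gyroscope term would be fused with the accelerometer and magnetometer corrections of the complementary filter; the snippet shows only the integration step named by formula (1).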
As an optional embodiment, for each data acquisition node, the determining, based on the raw data of each data acquisition node, the spatial location of each data acquisition node according to a complementary filtering algorithm includes:
determining the speed of the data acquisition node according to formula (2):

v(t) = v_0 + ∫_{t_0}^{t} a(τ)dτ (2)

after the speed of the data acquisition node is determined, determining the spatial position of the data acquisition node according to formula (3):

p_t = p_0 + ∫_{t_0}^{t} v(τ)dτ (3)

wherein v_0 is the initial speed of the data acquisition node, a(τ) is the acceleration of the data acquisition node, p_0 is the initial spatial position of the data acquisition node, v(τ) is the speed of the data acquisition node (v(τ) and v(t) denote the same quantity, viewed as integrand and as result of the integration, respectively), p_t is the spatial position of the data acquisition node, τ is a time parameter, and t_0 < τ < t.
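For sampled sensor data, the integrals of formulas (2) and (3) reduce to cumulative sums over samples spaced one sampling period apart. A minimal sketch (the rectangle-rule approximation and the function name are assumptions, not part of the patent):

```python
import numpy as np

def integrate_position(accel, T, v0, p0):
    """Discrete-time version of formulas (2) and (3): velocity is the
    cumulative sum of acceleration samples times the sampling period T,
    and position is the cumulative sum of the resulting velocities."""
    accel = np.asarray(accel, dtype=float)
    v = v0 + np.cumsum(accel, axis=0) * T   # formula (2)
    p = p0 + np.cumsum(v, axis=0) * T       # formula (3)
    return v, p

# constant 1 m/s^2 acceleration along x for 1 s at 100 Hz
v, p = integrate_position(np.tile([1.0, 0.0, 0.0], (100, 1)), 0.01,
                          v0=np.zeros(3), p0=np.zeros(3))
```

Double integration of IMU acceleration drifts quickly in practice, which is one reason the method anchors joint positions to the body model rather than relying on this integral alone.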
After the spatial orientation and spatial position of the individual data acquisition nodes are determined, the spatial position and spatial orientation of the root and end nodes of the manikin can be obtained. Here, the root node is a torso node, and the end nodes include: a head node, a first forearm node, a second forearm node, a first calf node, and a second calf node.
S111, determining a bending angle of a first joint node, and determining a spatial position and a spatial orientation of the first joint node according to the bending angle of the first joint node, wherein the first joint node comprises: elbow and knee joints;
the bending angle of a first joint node is determined first, the spatial position and the spatial orientation of the first joint node are determined according to the bending angle of the first joint node, and the first joint node comprises: elbow joint and knee joint.
As an alternative embodiment, determining the bending angle of the first joint node includes:
the auxiliary node of the first joint node and the target data acquisition node corresponding to the first joint node are determined based on the inverse kinematics IK algorithm;
Acquiring a first distance between the first joint node and the auxiliary node, a second distance between the first joint node and the target data acquisition node and a third distance between the auxiliary node and the target data acquisition node; the positional relationship among the first joint node, the auxiliary node and the target data acquisition node is shown in fig. 3, wherein in fig. 3, the first joint node is p, the auxiliary node is p0, and the target data acquisition node is p1.
After the positional relationship among the first joint node, the auxiliary node and the target data acquisition node is determined, the bending angle θ of the first joint node is determined using formula (4):

θ = arccos((a² + b² - c²) / (2ab)) (4)

in formula (4), a is the first distance, b is the second distance, and c is the third distance; the first, second, and third distances may refer to fig. 3.
Here, different first joint nodes correspond to different auxiliary nodes and target data acquisition nodes. When the first joint node is an elbow joint, the auxiliary node is determined according to a root node and a first relative position, wherein the first relative position is the relative position between the root node and the shoulder joint;
When the first joint node is a knee joint, the auxiliary node is determined according to the root node and a second relative position, wherein the second relative position is the relative position between the root node and the hip joint; wherein the root node is a torso node.
For example, with continued reference to fig. 2, when the first joint node p is a knee joint, the auxiliary node is a hip joint p0 (the second relative position between p0 and the root node is unchanged), and the target data acquisition node is a first lower leg node 5, then when the knee joint is bent, the first joint node p, the auxiliary node p0, and the target data acquisition node may form a triangle, and at this time, the bending angle of the knee joint may be determined by formula (4).
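Given the three side lengths of the triangle in fig. 3, the relation consistent with the variables defined above is the law of cosines. A minimal sketch (function and variable names are hypothetical):

```python
import math

def bending_angle(a, b, c):
    """Bending angle at the first joint node from the three side lengths
    of the triangle formed with the auxiliary node and the target data
    acquisition node (law of cosines): theta = arccos((a^2+b^2-c^2)/(2ab))."""
    return math.acos((a*a + b*b - c*c) / (2.0 * a * b))

# right triangle with sides 3-4-5: the angle between sides 3 and 4 is 90 deg
theta = bending_angle(3.0, 4.0, 5.0)
```

In practice the arccos argument should be clamped to [-1, 1] before the call, since sensor noise can push the measured distances slightly outside a valid triangle.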
After the bending angle of the first joint node is determined, as an optional embodiment, the determining the spatial position and the spatial orientation of the first joint node according to the bending angle of the first joint node includes:
determining a bending surface where the first joint node is located, and determining a spatial position of the first joint node in the bending surface based on a first distance between the first joint node and an auxiliary node, a second distance between the first joint node and a target data acquisition node and a bending angle of the first joint node;
And determining the spatial orientation of the first joint node according to the position of the first joint node and the position of the auxiliary node.
For example, taking the knee joint: because the knee joint moves in three-dimensional space, its exact position cannot be determined from the bending angle alone (referring to fig. 3, there are infinitely many solutions on the circle formed by taking p as the center and r as the radius), so a reasonable bending plane of the knee joint must be determined first; in this application, the plane in which the thigh and ankle orientations are consistent is determined as the bending plane of the knee joint.
In the bending plane, the distance between the knee joint and the hip joint is constant, as is the distance between the knee joint and the first calf node 5. The bending angle is known; the spatial position of the first calf node 5 is known; and because the relative position of the hip joint with respect to the root node is fixed and the spatial position of the root node has already been determined, the spatial position of the hip joint is also known. Therefore the spatial position of the knee joint can be determined.
When the spatial position of the hip joint and the spatial position of the knee joint are determined, the two points of the hip joint and the knee joint can form a vector, and the direction of the vector is the spatial orientation of the knee joint (the direction from the hip joint to the knee joint).
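The orientation-from-two-points step above can be sketched as follows (function name and example coordinates are hypothetical):

```python
import numpy as np

def joint_orientation(p_from, p_to):
    """Spatial orientation of a joint as the unit vector pointing from one
    joint position to the other, e.g. from the hip joint to the knee joint."""
    d = np.asarray(p_to, dtype=float) - np.asarray(p_from, dtype=float)
    return d / np.linalg.norm(d)

# hypothetical positions: hip at the origin, knee 0.4 m forward and 0.1 m down
direction = joint_orientation([0.0, 0.0, 0.0], [0.4, 0.0, -0.1])
```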
S112, determining the spatial position of the second joint node according to the spatial position of the data acquisition node, wherein the second joint node comprises: neck, shoulder and hip joints;
Similarly, after the spatial position of the data acquisition node is determined, the spatial position of the root node is also obtained. For the same human body, the distance between each second joint node (neck, shoulder and hip) and the root node is constant, and this relative position (relative position and relative angle) can be obtained in advance. Therefore, as an alternative embodiment, determining the spatial position of the second joint node according to the spatial position of the data acquisition node includes:
acquiring a second relative position between the hip joint and the root node, acquiring a third relative position between the neck joint and the root node, and acquiring a fourth relative position between the shoulder joint and the root node;
determining a spatial position of the hip joint based on the second relative position and the spatial position of the root node; determining a spatial position of the neck from the third relative position and the spatial position of the root node; and determining the spatial position of the shoulder joint according to the fourth relative position and the spatial position of the root node.
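As an illustrative sketch of the step above: the fixed body-frame offset measured in advance is carried into the world frame by the root (torso) orientation and added to the root position. The rotation-matrix representation of the root orientation is an assumption here (the patent stores orientations as quaternions), and all names are hypothetical:

```python
import numpy as np

def second_joint_position(root_pos, root_rotmat, rel_offset):
    """World-frame position of a second joint node (neck, shoulder or hip):
    rotate the pre-measured body-frame offset by the root orientation,
    then translate by the root position."""
    return np.asarray(root_pos, dtype=float) + \
        np.asarray(root_rotmat, dtype=float) @ np.asarray(rel_offset, dtype=float)

# identity root orientation: a hip offset of 0.1 m below a root at (0, 0, 1)
hip = second_joint_position([0.0, 0.0, 1.0], np.eye(3), [0.0, 0.0, -0.1])
```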
S113, determining the spatial orientation of the second joint node according to the spatial position of the first joint node and the spatial position of the second joint node;
similarly, after the spatial position of the first joint node and the spatial position of the second joint node are determined, the spatial orientation of the second joint node may be determined according to the spatial position of the first joint node and the spatial position of the second joint node.
For example, when the second joint node is any one of the shoulder joints, after the spatial position of the shoulder joint is determined, since the spatial position of the forearm joint is also known, the two points may form a vector, and the spatial orientation of the shoulder joint is the direction of the vector (from the shoulder joint to the forearm joint).
S114, acquiring the attitude deviation of each data acquisition node, and correspondingly calibrating the space orientation of each data acquisition node and the space orientation of each joint node by using the attitude deviation;
after the pose information (spatial position and spatial orientation) of each joint node and the pose information of each data acquisition node are determined, the information needs to be converted into pose motion information in a human body model, so that the pose information needs to be correspondingly calibrated with a pre-established human body model.
In practical application, since the coordinate system of the sensor and the limb usually has a posture deviation, in order to improve capturing accuracy, before calibration, the posture deviation of each data acquisition node and each joint node is obtained, and the spatial orientation of each joint node and the spatial orientation of each data acquisition node are correspondingly calibrated by using the posture deviation.
Specifically, referring to fig. 4, the sensor is strapped to the limb: the X1 axis on the left is the spatial orientation of the joint node, and the X axis on the right is the spatial orientation of the data acquisition node. In practice, when a posture deviation exists, the X1 axis and the X axis do not completely coincide, which means the deviation must be calibrated to bring the X1 axis and the X axis into coincidence.
For each data acquisition node, the human body keeps the 'T' -shaped gesture still for a period of time, and the quaternion q_sensor of the corresponding data acquisition node can be obtained from the sensor, so that the gesture deviation q_bias of the data acquisition node can be determined according to the formula (5):
q_bias = q_seg * q_sensor^(-1) (5)
in equation (5), q_seg is the spatial orientation of the corresponding limb when the human body maintains a "T" posture.
After the attitude deviation is determined, correspondingly calibrating the spatial orientation of each data acquisition node according to the formula (6):
q_mod=q_bias*q (6)
In the formula (6), q_mod is the spatial orientation of each data acquisition node after calibration, q is the spatial orientation of each data acquisition node before calibration, and q_bias is the posture deviation corresponding to each data acquisition node.
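Formulas (5) and (6) together can be sketched in Python as below. The (w, x, y, z) quaternion convention and helper names are assumptions; the inverse of a unit quaternion is taken as its conjugate:

```python
import numpy as np

def quat_mul(a, b):
    """Hamilton product of two quaternions in (w, x, y, z) order."""
    w1, x1, y1, z1 = a
    w2, x2, y2, z2 = b
    return np.array([
        w1*w2 - x1*x2 - y1*y2 - z1*z2,
        w1*x2 + x1*w2 + y1*z2 - z1*y2,
        w1*y2 - x1*z2 + y1*w2 + z1*x2,
        w1*z2 + x1*y2 - y1*x2 + z1*w2,
    ])

def quat_inv(q):
    """Inverse of a unit quaternion: its conjugate."""
    w, x, y, z = q
    return np.array([w, -x, -y, -z])

def calibrate(q_seg, q_sensor, q):
    """Formula (5): q_bias = q_seg * q_sensor^-1, measured while the body
    holds the 'T' pose; formula (6): q_mod = q_bias * q applied to each
    subsequent sensor reading q."""
    q_bias = quat_mul(q_seg, quat_inv(q_sensor))   # formula (5)
    return quat_mul(q_bias, q)                     # formula (6)

# if the sensor already agrees with the limb in the T pose, q_bias is the
# identity and every reading passes through unchanged
identity = np.array([1.0, 0.0, 0.0, 0.0])
q_mod = calibrate(identity, identity, np.array([0.7071, 0.7071, 0.0, 0.0]))
```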
After the spatial orientation of the data acquisition node is calibrated, the spatial orientation corresponding to the joint node is also calibrated because the sensor is tied to the limb.
S115, correspondingly calibrating the spatial position of each joint node, the spatial position of each data acquisition node, the spatial orientation of each joint node after calibration and the spatial orientation of each data acquisition node after calibration with a pre-established human body model;
after the spatial orientation of each data acquisition node and the spatial orientation of each joint node are calibrated, the spatial position of each joint node, the spatial position of each data acquisition node, the spatial orientation of each calibrated joint node and the spatial orientation of each calibrated data acquisition node are correspondingly calibrated with a pre-established human body model. Wherein, the human body model is a rigid body three-dimensional model, and can be shown in fig. 5.
S116, capturing the action gesture of the human body by using the calibrated human body model.
After calibration, the motion gesture of the human body can be captured by using the calibrated human body model, and the whole-body motion gesture of the human body can be reproduced in corresponding software.
Based on the same inventive concept, the application also provides a system for capturing human motion gestures, and the details are shown in the second embodiment.
Example two
The present embodiment provides a system for capturing a gesture of a human body, as shown in fig. 6, the system includes: a plurality of data acquisition modules, a data summarization module 51 and a terminal 52.
Before acquiring the raw data of each data acquisition node, the data acquisition nodes need to be determined. In order to reduce the number of sensors used, the application sets up 6 data acquisition nodes, the data acquisition nodes including: a head node 1, a torso node 2, a first forearm node 3, a second forearm node 4, a first calf node 5, and a second calf node 6; each data acquisition node is shown by the circled marks in fig. 2.
After determining the data acquisition nodes, binding the data acquisition modules to the data acquisition nodes, wherein each data acquisition module comprises a six-axis IMU and a three-axis magnetic sensor, and each IMU comprises: a three-axis accelerometer and a three-axis gyroscope. Each data acquisition module is used for acquiring original data of a data acquisition node, each data acquisition module acquires the original data and transmits the original data to the data summarizing module 51 through a data bus, the data summarizing module is arranged on a trunk node, the data summarizing module 51 receives the original data and then transmits the original data to the terminal 52 in a wireless transmission mode, and the terminal can comprise hardware equipment such as an industrial personal computer and a computer. Wherein the raw data includes: acceleration data, angular velocity data, and magnetic field direction data.
Therefore, the data acquisition modules in the present application include 6 data acquisition modules, which are respectively: the first data acquisition module 53, the second data acquisition module 54, the third data acquisition module 55, the fourth data acquisition module 56, the fifth data acquisition module 57, and the sixth data acquisition module 58.
After the terminal 52 acquires each piece of raw data, the spatial orientation and spatial position of each data acquisition node can be determined according to the complementary filtering algorithm based on the raw data. As an optional embodiment, the calculating the spatial orientation of each data acquisition node according to the complementary filtering algorithm based on the raw data of each data acquisition node includes:
determining the spatial orientation q of each data acquisition node according to the formula (1):
q=q(t)+Tω(t)q(t) (1)
in formula (1), q (T) is a preset initial value of spatial orientation, where T is a sampling period, and ω (T) is an angular velocity.
As an optional embodiment, for each data acquisition node, the determining, based on the raw data of each data acquisition node, the spatial location of each data acquisition node according to a complementary filtering algorithm includes:
determining the speed of the data acquisition node according to formula (2):

v(t) = v_0 + ∫_{t_0}^{t} a(τ)dτ (2)

after the speed of the data acquisition node is determined, determining the spatial position of the data acquisition node according to formula (3):

p_t = p_0 + ∫_{t_0}^{t} v(τ)dτ (3)

wherein v_0 is the initial speed of the data acquisition node, a(τ) is the acceleration of the data acquisition node, p_0 is the initial spatial position of the data acquisition node, v(τ) is the speed of the data acquisition node (v(τ) and v(t) denote the same quantity, viewed as integrand and as result of the integration, respectively), p_t is the spatial position of the data acquisition node, τ is a time parameter, and t_0 < τ < t.
After the spatial orientation and spatial position of the individual data acquisition nodes are determined, the spatial position and spatial orientation of the root and end nodes of the manikin can be obtained. Here, the root node is a torso node, and the end nodes include: a head node, a first forearm node, a second forearm node, a first calf node, and a second calf node.
The terminal 52 of the present application first determines a bending angle of a first joint node, and determines a spatial position and a spatial orientation of the first joint node according to the bending angle of the first joint node, where the first joint node includes: elbow joint and knee joint.
As an alternative embodiment, the determining the bending angle of the first joint node includes:
Determining an auxiliary node of the first joint node and a target data acquisition node corresponding to the first joint node based on an inverse kinematics IK algorithm;
acquiring a first distance between the first joint node and the auxiliary node, a second distance between the first joint node and the target data acquisition node and a third distance between the auxiliary node and the target data acquisition node; the positional relationship among the first joint node, the auxiliary node and the target data acquisition node is shown in fig. 3, wherein in fig. 3, the first joint node is p, the auxiliary node is p0, and the target data acquisition node is p1.
After the positional relationship among the first joint node, the auxiliary node and the target data acquisition node is determined, the bending angle θ of the first joint node is determined using formula (4):

θ = arccos((a² + b² - c²) / (2ab)) (4)

in formula (4), a is the first distance, b is the second distance, and c is the third distance; the first, second, and third distances may refer to fig. 3.
Here, different first joint nodes correspond to different auxiliary nodes and different target data acquisition nodes. When the first joint node is an elbow joint, the auxiliary node is determined according to the root node and a first relative position, wherein the first relative position is the relative position between the root node and the shoulder joint;
When the first joint node is a knee joint, the auxiliary node is determined according to the root node and a second relative position, wherein the second relative position is the relative position between the root node and the hip joint; wherein the root node is a torso node.
For example, with continued reference to fig. 2, when the first joint node p is a knee joint, the auxiliary node is the hip joint p0 (the second relative position between p0 and the root node is unchanged), and the target data acquisition node is the first calf node 5. When the knee joint is bent, the first joint node p, the auxiliary node p0, and the target data acquisition node form a triangle, and the bending angle of the knee joint may then be determined by formula (4).
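The bending-angle computation from the three side lengths can be sketched as follows; the function name is illustrative, and the clamping of the cosine guards against floating-point values slightly outside [−1, 1].

```python
import math

def bending_angle(a, b, c):
    """Bending angle at the first joint node p from the triangle of
    fig. 3 (law of cosines): a is the joint-to-auxiliary distance,
    b the joint-to-target distance, c the auxiliary-to-target distance."""
    cos_theta = (a * a + b * b - c * c) / (2.0 * a * b)
    # clamp against floating-point error before acos
    return math.acos(max(-1.0, min(1.0, cos_theta)))
```

For instance, side lengths 3, 4 and 5 form a right triangle, so the angle at the joint is π/2.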
After the bending angle of the first joint node is determined, as an optional embodiment, the determining the spatial position and the spatial orientation of the first joint node according to the bending angle of the first joint node includes:
determining a bending surface where the first joint node is located, and determining a spatial position of the first joint node in the bending surface based on a first distance between the first joint node and an auxiliary node, a second distance between the first joint node and a target data acquisition node and a bending angle of the first joint node;
And determining the spatial orientation of the first joint node according to the position of the first joint node and the position of the auxiliary node.
For example, take the knee joint. Since the knee joint moves in three-dimensional space, its exact position cannot be determined even after its bending angle is known (referring to fig. 3, there are infinitely many solutions on the circle centered at p with radius r), so a reasonable bending plane of the knee joint must be determined first. In this application, the plane in which the thigh and ankle orientations are consistent is taken as the bending plane of the knee joint.
On the bending plane, the distance between the knee joint and the hip joint is constant, as is the distance between the knee joint and the first calf node 5. The bending angle is known and the spatial position of the first calf node 5 is known; moreover, the relative position of the hip joint with respect to the root node is fixed and the spatial position of the root node has been determined, so the spatial position of the hip joint is also known. Therefore, the spatial position of the knee joint can be determined.
When the spatial position of the hip joint and the spatial position of the knee joint are determined, the two points of the hip joint and the knee joint can form a vector, and the direction of the vector is the spatial orientation of the knee joint (the direction from the hip joint to the knee joint).
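The orientation construction just described (a unit vector from the parent joint to the joint, e.g. hip to knee) can be sketched as follows; the function name is our own illustration.

```python
import numpy as np

def joint_orientation(parent_pos, joint_pos):
    """Spatial orientation of a joint as the unit vector pointing from
    its parent joint to the joint itself (e.g. hip -> knee)."""
    d = np.asarray(joint_pos, dtype=float) - np.asarray(parent_pos, dtype=float)
    n = np.linalg.norm(d)
    if n == 0.0:
        raise ValueError("coincident joints have no defined orientation")
    return d / n
```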
After the spatial position of the first joint node is determined, determining the spatial position of the second joint node according to the spatial position of the data acquisition node, wherein the second joint node comprises: neck, shoulder and hip joints;
Similarly, after the spatial positions of the data acquisition nodes are determined, the spatial position of the root node may also be obtained. For the same human body, the distance between each second joint node (neck, shoulder and hip) and the root node is constant, and this relative position (relative displacement and relative angle) may be obtained in advance. Therefore, as an alternative embodiment, determining the spatial position of the second joint node according to the spatial position of the data acquisition node includes:
acquiring a second relative position between the hip joint and the root node, acquiring a third relative position between the neck joint and the root node, and acquiring a fourth relative position between the shoulder joint and the root node;
determining a spatial position of the hip joint based on the second relative position and the spatial position of the root node; determining a spatial position of the neck from the third relative position and the spatial position of the root node; and determining the spatial position of the shoulder joint according to the fourth relative position and the spatial position of the root node.
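The step above (root pose plus a pre-measured fixed offset) can be sketched as follows. We assume, purely for illustration, that each relative position is stored as an offset in the root's local frame and that the root orientation is a unit quaternion in (w, x, y, z) order; these conventions and function names are not specified by the application.

```python
import numpy as np

def quat_rotate(q, v):
    """Rotate vector v by unit quaternion q = (w, x, y, z)."""
    w, x, y, z = q
    u = np.array([x, y, z], dtype=float)
    v = np.asarray(v, dtype=float)
    return v + 2.0 * np.cross(u, np.cross(u, v) + w * v)

def second_joint_position(root_pos, root_quat, relative_offset):
    """Spatial position of a second joint node (neck, shoulder or hip)
    from the root (torso) pose and the joint's fixed offset expressed
    in the root's local frame."""
    return np.asarray(root_pos, dtype=float) + quat_rotate(root_quat, relative_offset)
```

For example, with an unrotated root at (1, 1, 1) and a hip offset of (0, 0.5, 0), the hip lands at (1, 1.5, 1).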
Similarly, after the spatial position of the first joint node and the spatial position of the second joint node are determined, the spatial orientation of the second joint node may be determined according to the spatial position of the first joint node and the spatial position of the second joint node.
For example, when the second joint node is either shoulder joint, after the spatial position of the shoulder joint is determined, the spatial position of the corresponding forearm node is also known, so the two points form a vector, and the spatial orientation of the shoulder joint is the direction of that vector (from the shoulder joint to the forearm node).
After determining the posture information (spatial position and spatial orientation) of each joint node and the posture information of the data acquisition node, the terminal 52 needs to convert the information into posture action information in the human body model, so that the posture information needs to be calibrated correspondingly to the pre-established human body model.
In practical application, since the coordinate system of the sensor and the limb usually has a posture deviation, in order to improve capturing accuracy, before calibration, the posture deviation of each data acquisition node and each joint node is obtained, and the spatial orientation of each joint node and the spatial orientation of each data acquisition node are correspondingly calibrated by using the posture deviation.
Specifically, referring to fig. 4, the sensor is strapped to the limb: the left X1 axis is the spatial orientation of the joint node, and the right X axis is the spatial orientation of the data acquisition node. In practical application, when there is a posture deviation, the X1 axis and the X axis do not completely coincide; this deviation needs to be calibrated so that the X1 axis and the X axis coincide.
For each data acquisition node, the human body keeps the 'T' -shaped gesture still for a period of time, and the quaternion q_sensor of the corresponding data acquisition node can be obtained from the sensor, so that the gesture deviation q_bias of the data acquisition node can be determined according to the formula (5):
q_bias = q_seg * q_sensor^(-1) (5)
in equation (5), q_seg is the spatial orientation of the corresponding limb when the human body maintains a "T" posture.
After the attitude deviation is determined, correspondingly calibrating the spatial orientation of each data acquisition node according to the formula (6):
q_mod=q_bias*q (6)
in the formula (6), q_mod is the spatial orientation of each data acquisition node after calibration, q is the spatial orientation of each data acquisition node before calibration, and q_bias is the posture deviation corresponding to each data acquisition node.
After the spatial orientation of the data acquisition node is calibrated, the spatial orientation corresponding to the joint node is also calibrated because the sensor is tied to the limb.
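Formulas (5) and (6) can be sketched with plain Hamilton-product quaternion arithmetic; the (w, x, y, z) component order and the function names are our own assumptions.

```python
import numpy as np

def quat_mul(q1, q2):
    """Hamilton product of quaternions in (w, x, y, z) order."""
    w1, x1, y1, z1 = q1
    w2, x2, y2, z2 = q2
    return np.array([
        w1*w2 - x1*x2 - y1*y2 - z1*z2,
        w1*x2 + x1*w2 + y1*z2 - z1*y2,
        w1*y2 - x1*z2 + y1*w2 + z1*x2,
        w1*z2 + x1*y2 - y1*x2 + z1*w2,
    ])

def quat_conj(q):
    """Conjugate; equals the inverse for a unit quaternion."""
    w, x, y, z = q
    return np.array([w, -x, -y, -z])

def calibrate(q_seg, q_sensor, q):
    """Formulas (5) and (6): q_bias = q_seg * q_sensor^(-1),
    then q_mod = q_bias * q."""
    q_bias = quat_mul(q_seg, quat_conj(q_sensor))
    return quat_mul(q_bias, q)
```

As a sanity check: if the sensor reading at capture time equals the reading recorded during the "T" pose, the calibrated orientation reduces to q_seg, the known limb orientation in the "T" pose.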
After the spatial orientation of each data acquisition node and the spatial orientation of each joint node are calibrated, the terminal 52 performs corresponding calibration on the spatial position of each joint node, the spatial position of each data acquisition node, the spatial orientation of each joint node after calibration, and the spatial orientation of each data acquisition node after calibration with a pre-established human model. Wherein, the human body model is a rigid body three-dimensional model, and can be shown in fig. 5.
After calibration, the terminal 52 may capture the motion gesture of the human body using the determined human body model, and reproduce the motion gesture of the whole body of the human body in corresponding software.
The method and the system for capturing the human body action gesture provided by the embodiment of the invention have the following beneficial effects:
the invention provides a method and a system for capturing human motion gestures, wherein the method comprises the following steps: acquiring the original data of each data acquisition node, and determining the spatial orientation and the spatial position of each data acquisition node according to a complementary filtering algorithm based on the original data of each data acquisition node, wherein the data acquisition node comprises: a head node, a torso node, a first forearm node, a second forearm node, a first calf node, and a second calf node; the raw data includes: acceleration data, angular velocity data, and magnetic field direction data; determining a bending angle of a first joint node, and determining a spatial position and a spatial orientation of the first joint node according to the bending angle of the first joint node, wherein the first joint node comprises: elbow and knee joints; determining a spatial position of the second joint node according to the spatial position of the data acquisition node, wherein the second joint node comprises: neck, shoulder and hip joints; determining the spatial orientation of the second joint node according to the spatial position of the first joint node and the spatial position of the second joint node; acquiring the attitude deviation of each data acquisition node, and correspondingly calibrating the spatial orientation of each data acquisition node and the spatial orientation of each joint node by using the attitude deviation; correspondingly calibrating the spatial positions of the joint nodes, the spatial positions of the data acquisition nodes, the spatial orientations of the calibrated joint nodes and the spatial orientations of the calibrated data acquisition nodes with a pre-established human body model; and capturing the action gesture of the human body by using the calibrated human body model. In this way, the gesture of the first joint node of the human body is calculated first, and the gesture of the second joint node is then calculated according to the gesture of the first joint node, so that the capture of the human body action gesture can be realized by setting only 6 data acquisition points, which effectively reduces the number of sensors used and thus the cost of human body action gesture capture. Moreover, the IK algorithm converts the three-dimensional space solution into a joint space solution, which reduces the solution time of gesture motion and improves the real-time performance of motion capture.
The above description is not intended to limit the scope of the invention, but is intended to cover any modifications, equivalents, and improvements within the spirit and principles of the invention.