CN109086698A - Human body action recognition method based on multi-sensor data fusion - Google Patents
Human body action recognition method based on multi-sensor data fusion
Info
- Publication number
- CN109086698A (application CN201810803749.8A)
- Authority
- CN
- China
- Prior art keywords
- sensor node
- matrix
- data
- human body
- fusion
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/20—Movements or behaviour, e.g. gesture recognition
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/21—Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
- G06F18/213—Feature extraction, e.g. by transforming the feature space; Summarisation; Mappings, e.g. subspace methods
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/25—Fusion techniques
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/20—Image preprocessing
- G06V10/26—Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion
- G06V10/267—Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion by performing operations on regions, e.g. growing, shrinking or watersheds
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/40—Extraction of image or video features
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Data Mining & Analysis (AREA)
- Multimedia (AREA)
- General Engineering & Computer Science (AREA)
- Evolutionary Computation (AREA)
- Evolutionary Biology (AREA)
- Bioinformatics & Computational Biology (AREA)
- Bioinformatics & Cheminformatics (AREA)
- Artificial Intelligence (AREA)
- Life Sciences & Earth Sciences (AREA)
- Health & Medical Sciences (AREA)
- General Health & Medical Sciences (AREA)
- Psychiatry (AREA)
- Social Psychology (AREA)
- Human Computer Interaction (AREA)
- Measurement Of The Respiration, Hearing Ability, Form, And Blood Characteristics Of Living Organisms (AREA)
Abstract
The present invention relates to the field of human body action recognition and provides a human body action recognition method based on multi-sensor data fusion, comprising: acquiring human body action data using N inertial sensor nodes respectively fixed at different parts of a human body; performing window segmentation on the action data collected by each sensor node using a sliding-window segmentation technique to obtain a plurality of action data segments for each sensor node; performing feature extraction on the action data segments of each sensor node to obtain corresponding feature vectors; performing feature dimension reduction on the obtained feature vectors of each sensor node using the RLDA algorithm; performing parameter training and modeling with the reduced-dimension feature vectors of each sensor node as training data to obtain a corresponding hierarchical fusion model; and recognizing human body actions using the obtained hierarchical fusion model. The present invention can effectively overcome the drawbacks of a single classifier in the recognition process and can effectively improve human body action recognition accuracy.
Description
Technical Field
The invention relates to the field of human body action recognition, in particular to a human body action recognition method based on multi-sensor data fusion.
Background
Human body action recognition technology is a new mode of human-computer interaction that has emerged in recent decades and has become a hot research topic for scholars at home and abroad. Human body actions mainly refer to the ways in which the body moves and reacts to the environment or to objects, with complex actions being described or expressed through limb movement. It can be said that most human actions are reflected in the movement of the limbs, so studying limb movement has become a very effective way of analyzing human behavior. Human action recognition based on inertial sensors is an emerging research direction within pattern recognition: motion signals generated during human movement are acquired by one or more inertial sensors, the data are preprocessed, features are extracted and selected, and the action is finally classified and recognized according to the extracted features.
In research on human action recognition with inertial sensors, a single classification algorithm cannot be applied to the recognition of all human actions. Each single classifier carries a certain decision error, and not every practical problem can be solved with a single classifier; moreover, in practice actions are random and disordered, which greatly increases the difficulty of recognition. When recognizing complex actions in practical applications, many researchers therefore adopt multi-classifier fusion techniques to monitor human activities. Combining multiple classifiers to make a decision (decision fusion) has a large influence on recognition performance: decision fusion can effectively improve the classification performance and the robustness of the recognition system.
Disclosure of Invention
The invention mainly addresses the technical problems that, in the prior art, a single classification algorithm is not suitable for recognizing all human body actions and a single classifier has a certain decision error. It provides a human body action recognition method based on multi-sensor data fusion that can effectively overcome the defects of a single classifier in the recognition process, with a recognition result clearly superior to that of traditional recognition methods.
The invention provides a human body action recognition method based on multi-sensor data fusion, which comprises the following steps:
step 100, acquiring human body action data by using N inertial sensor nodes respectively fixed at different parts of a human body;
200, performing window segmentation on the human body action data acquired by each sensor node by using a sliding window segmentation technology to obtain a plurality of action data segments of each sensor node;
step 300, extracting the characteristics of the action data segments of each sensor node to obtain corresponding characteristic vectors;
step 400, performing feature dimension reduction on the obtained feature vector of each sensor node by using the RLDA algorithm;
step 500, performing parameter training and modeling by using the feature vector after dimension reduction of each sensor node as training data to obtain a corresponding layered fusion model, including steps 501 to 506:
step 501, verifying the reduced-dimension feature vectors of each sensor node by a k-fold cross validation method to obtain the contribution rate of each action to each classifier;
step 502, establishing the evaluation matrix of the classifier fusion layer according to the contribution rates, the matrix having one row per action class and one column per classifier, where Y denotes the evaluation matrix, c denotes the number of action classes, k denotes the number of classifiers, i denotes the i-th inertial sensor node, m_ij denotes the j-th classifier relative to the i-th inertial sensor node, and y_qj denotes the contribution rate of the q-th action under the j-th classifier;
step 503, according to the evaluation matrix obtained in step 502, obtaining the Shannon entropy e_j of each classifier, where e_j denotes the Shannon entropy of the j-th classifier and η is a constant with η = 1/log2(c);
and obtaining the redundancy quantity of each classifier from the Shannon entropy by the formula r_j = 1 - e_j, where r_j denotes the redundancy quantity;
obtaining, from the redundancy quantities, the weight value of the j-th classifier at the i-th sensor node;
obtaining the output result λ_i,q of the i-th sensor node by weighting the classifier decisions, where λ_i,q denotes the test sample x being classified into the q-th class;
step 504, for the reduced-dimension feature vectors of the i-th sensor node, obtaining the recognition rate of the q-th action class;
step 505, establishing the evaluation matrix of the sensor fusion layer according to the recognition rates;
step 506, according to the evaluation matrix obtained in step 505, obtaining the Shannon entropy of each sensor node; obtaining the redundancy quantity of each sensor node from its Shannon entropy; calculating the output weight of each sensor node from the redundancy quantities; and obtaining the hierarchical fusion model, where λ_q denotes the test sample being classified into the q-th class;
and step 600, recognizing the human body action by using the obtained layered fusion model.
Preferably, the window segmentation of the human body motion data acquired by each sensor node by using a sliding window segmentation technology includes:
for the i-th sensor node, the size of the segmentation window is set to l; if the length of the motion data matrix A_i is L, the motion data matrix A_i can be divided into a sequence of data windows, wherein the size of the partitioned data matrix in each window is (l × 6) dimensions and the repetition rate of every two adjacent data windows is 50%.
Preferably, feature extraction is performed on the motion data segment of each sensor node, and the extracted features include: the root mean square, mean absolute deviation, kurtosis, covariance, zero-crossing rate, and energy of the triaxial acceleration data and of the triaxial angular velocity data.
Preferably, the method for performing feature dimension reduction on the obtained feature vector of each sensor node by using the RLDA algorithm includes the following steps:
step 401, for the feature vector space corresponding to the i-th sensor node, obtaining the corresponding intra-class scatter matrix S_ω and inter-class scatter matrix S_b, where S_ω denotes the intra-class scatter matrix, S_b denotes the inter-class scatter matrix, μ_a denotes the mean of all feature vectors in class a, and μ denotes the mean of all feature vectors in the feature vector space X_i;
step 402, solving for an invertible matrix P according to the congruence theorem of matrices and elementary matrix transformations, so that the following holds: P^T S_ω P = I_n, where P denotes the invertible matrix, the diagonal entries of P^T S_b P are the eigenvalues associated with S_b, and I_n denotes the n-dimensional identity matrix;
step 403, according to the result obtained in step 402, using the Fisher criterion to obtain the optimal projection matrix φ_opt = K P^T, where φ_opt denotes the optimal projection matrix, K = φ(P^T)^(-1), and φ denotes the projection matrix to be solved;
and step 404, performing feature dimension reduction by using the optimal projection matrix.
The human body action recognition method based on multi-sensor data fusion provided by the invention mainly uses the congruence theorem from matrix theory to improve the traditional LDA algorithm. When the within-class scatter matrix is inverted, small deviations in its estimated eigenvalues can cause large disturbances; the improved feature dimension reduction algorithm effectively reduces this disturbance and thereby helps to improve algorithm performance. For action recognition, the invention provides a novel hierarchical fusion algorithm consisting of two layers: the first layer is a classifier fusion layer and the second layer is a sensor fusion layer, with the output weights of each layer obtained by the entropy method. The proposed algorithm determines the weights globally through the entropy method, which effectively improves the robustness of the classification model, while the hierarchical structure effectively improves the recognition accuracy of the actions.
Drawings
FIG. 1 is a flow chart of an implementation of the human body motion recognition method based on multi-sensor data fusion according to the present invention;
FIG. 2 is a schematic diagram of a layered fusion model of the present invention.
Detailed Description
In order to make the technical problems solved, technical solutions adopted and technical effects achieved by the present invention clearer, the present invention is further described in detail below with reference to the accompanying drawings and embodiments. It is to be understood that the specific embodiments described herein are merely illustrative of the invention and are not limiting of the invention. It should be further noted that, for the convenience of description, only some but not all of the relevant aspects of the present invention are shown in the drawings.
FIG. 1 is a flow chart of the human body motion recognition method based on multi-sensor data fusion. As shown in fig. 1, the human body motion recognition method based on multi-sensor data fusion provided by the embodiment of the present invention specifically includes the following processes:
and step 100, acquiring human body action data by using N inertial sensor nodes respectively fixed at different parts of a human body.
Specifically, N inertial sensor nodes are first fixed at N parts of a human body; each sensor node contains a three-axis accelerometer and a three-axis gyroscope, and the collected action data are uploaded to an upper-computer data processing platform through a receiving node. The N sensor nodes are then used to collect human body action data, for example data for standing, running, going upstairs and other actions. The human body action data comprise the three-axis acceleration data and three-axis angular velocity data of each sensor node. For the i-th (i ∈ {1, 2, ..., N}) sensor node, the collected data comprise the acceleration data a_i = [a_ix, a_iy, a_iz] of the x, y and z axes and the angular velocity data ang_i = [ang_ix, ang_iy, ang_iz] of the x, y and z axes; the three-axis acceleration data and three-axis angular velocity data of the i-th sensor node therefore form a raw action data matrix with 6 columns, A_i = [a_i, ang_i] = [a_ix, a_iy, a_iz, ang_ix, ang_iy, ang_iz].
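This data layout can be illustrated with a minimal Python/NumPy sketch; the random arrays merely stand in for the samples streamed from a node, and the 10000-sample length follows the example given later in this description.

```python
import numpy as np

# Placeholder per-axis streams for the i-th node: acceleration [a_ix, a_iy, a_iz]
# and angular velocity [ang_ix, ang_iy, ang_iz], 10000 samples each.
a_i = np.random.randn(10000, 3)
ang_i = np.random.randn(10000, 3)

# Raw action data matrix with 6 columns, A_i = [a_i, ang_i].
A_i = np.hstack([a_i, ang_i])
print(A_i.shape)  # (10000, 6)
```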
And 200, performing window segmentation on the human body action data acquired by each sensor node by using a sliding window segmentation technology to obtain a plurality of action data segments of each sensor node.
After the action data collected in step 100 are acquired, window segmentation is performed on them. In this embodiment, the sliding-window segmentation technique is adopted: a fixed window size is selected first, the window is then slid over the action data to divide it, and every two adjacent windows overlap by 50%.
Specifically, for the i-th sensor node, let the size of the segmentation window be l and the length of the motion data matrix be L; the motion data matrix A_i can then be divided into a sequence of data windows, where the size of the partitioned data matrix in each window is (l × 6) dimensions and the repetition rate of every two adjacent data windows is 50%.
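A minimal Python/NumPy sketch of this sliding-window segmentation, assuming the window is advanced by half its length to produce the stated 50% overlap; the patent does not give an explicit window-count formula, so the count printed below is only what this stepping rule yields.

```python
import numpy as np

def sliding_windows(A, window=256, overlap=0.5):
    """Split an action data matrix A of shape (L, 6) into overlapping windows.
    With overlap=0.5 the window advances by window/2 samples, so every two
    adjacent windows share 50% of their samples."""
    step = int(window * (1 - overlap))
    return [A[s:s + window, :] for s in range(0, A.shape[0] - window + 1, step)]

A_i = np.random.randn(10000, 6)          # one node's raw action data matrix
segments = sliding_windows(A_i, window=256)
print(len(segments), segments[0].shape)  # 77 windows, each of dimension (256, 6)
```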
And 300, performing feature extraction on the action data segment of each sensor node to obtain a corresponding feature vector.
After the raw data corresponding to each sensor is divided into a plurality of data windows, feature extraction needs to be performed on the divided data matrix in each window, and the extracted features mainly include the following 6 types:
1. Root mean square (RMS) of the triaxial acceleration data and of the triaxial angular velocity data, where T = {x, y, z} denotes the three axis directions and i denotes the i-th sensor;
2. Mean absolute deviation (MAD) of the triaxial acceleration data and of the triaxial angular velocity data, where T = {x, y, z} denotes the three axis directions and i denotes the i-th sensor;
3. Covariance of the triaxial acceleration data and of the triaxial angular velocity data, computed between pairs of axes T1, T2 ∈ {x, y, z}, where i denotes the i-th sensor;
4. Kurtosis of the triaxial acceleration data and of the triaxial angular velocity data, computed from the mean and variance of the acceleration data and of the angular velocity data within the window, where T = {x, y, z} denotes the three axis directions and i denotes the i-th sensor;
5. Zero-crossing rate of the triaxial acceleration data and of the triaxial angular velocity data, where T = {x, y, z} denotes the three axis directions and i denotes the i-th sensor;
6. Energy of the triaxial acceleration data and of the triaxial angular velocity data, computed from the coefficients obtained after applying the fast Fourier transform to the data matrices a_iT and ang_iT, where T = {x, y, z} denotes the three axis directions and i denotes the i-th sensor.
After the above 6 types of features are extracted, a corresponding feature vector is obtained for the j-th data window of the i-th sensor node.
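A sketch of the per-window feature computation is given below. The text does not fix the exact normalizations of the kurtosis, zero-crossing rate or spectral energy, so the standard definitions used here should be read as assumptions; with five per-axis features over six axes plus the three pairwise covariances of the acceleration axes and the three of the angular-velocity axes, the result is the 36-dimensional feature vector referred to in the example further on.

```python
import numpy as np
from itertools import combinations

def window_features(seg):
    """seg: (l, 6) window, columns = [a_x, a_y, a_z, ang_x, ang_y, ang_z].
    Returns a 36-dimensional feature vector."""
    feats = []
    for col in range(6):
        x = seg[:, col]
        mu, var = x.mean(), x.var()
        feats.append(np.sqrt(np.mean(x ** 2)))                      # root mean square
        feats.append(np.mean(np.abs(x - mu)))                       # mean absolute deviation
        feats.append(np.mean((x - mu) ** 4) / (var ** 2 + 1e-12))   # kurtosis
        feats.append(np.mean(np.diff(np.sign(x - mu)) != 0))        # zero-crossing rate
        feats.append(np.sum(np.abs(np.fft.fft(x)) ** 2) / len(x))   # energy from FFT coefficients
    for block in (seg[:, 0:3], seg[:, 3:6]):                        # acceleration axes, angular-velocity axes
        for a, b in combinations(range(3), 2):
            feats.append(np.cov(block[:, a], block[:, b])[0, 1])    # pairwise covariance
    return np.asarray(feats)                                        # shape (36,)

print(window_features(np.random.randn(256, 6)).shape)               # (36,)
```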
and 400, performing feature dimension reduction on the obtained feature vector of each sensor node by using an RLDA (recursive least squares) algorithm.
Specifically, after the characteristics of the action data collected by each sensor node are extracted, the RLDA algorithm is provided for reducing the dimension of the characteristic space. The method mainly comprises the following steps:
Step 401, for the feature vector space X_i corresponding to the i-th sensor node, calculating the corresponding intra-class scatter matrix S_ω and inter-class scatter matrix S_b, where S_ω denotes the intra-class scatter matrix, S_b denotes the inter-class scatter matrix, μ_a denotes the mean of all feature vectors in class a, and μ denotes the mean of all feature vectors in the feature vector space X_i.
Step 402, solving for an invertible matrix P according to the congruence theorem of matrices and elementary matrix transformations, so that P^T S_ω P = I_n holds, where P denotes the invertible matrix, the diagonal entries of P^T S_b P are the eigenvalues associated with S_b, and I_n denotes the n-dimensional identity matrix.
And step 403, transforming the Fisher criterion according to the result obtained in step 402 to obtain the optimal projection matrix.
Specifically, the Fisher maximization criterion seeks the projection matrix φ that maximizes the ratio of the between-class scatter to the within-class scatter of the projected data. Since S_b and S_ω are both positive semi-definite and, according to linear discriminant analysis (LDA) theory, S_ω is positive definite, it follows from the congruence theorem of matrices that an invertible matrix P must exist such that P^T S_ω P = I_n, where the diagonal entries of P^T S_b P are the eigenvalues associated with S_b and I_n denotes the n-dimensional identity matrix; the Fisher maximization criterion can be rewritten accordingly. Letting K = φ(P^T)^(-1), the Fisher maximization criterion can be expressed in terms of K, and the optimal projection matrix of the linear discriminant analysis is then obtained as φ_opt = K P^T, where φ_opt denotes the optimal projection matrix.
And step 404, performing feature dimension reduction by using the optimal projection matrix.
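A Python/NumPy sketch of the RLDA projection described in steps 401 to 404: the within-class scatter is whitened by a congruence transformation P with P^T S_ω P = I_n, and the leading eigenvectors of the transformed between-class scatter give the projection. The eigenvalue floor guarding against a near-singular S_ω is an assumed regularization detail, not something the patent specifies.

```python
import numpy as np

def rlda_projection(X, y, d):
    """X: (n_samples, n_features) feature vectors, y: class labels, d: target dimension.
    Returns the projection matrix of shape (n_features, d)."""
    classes = np.unique(y)
    mu = X.mean(axis=0)
    S_w = np.zeros((X.shape[1], X.shape[1]))
    S_b = np.zeros_like(S_w)
    for c in classes:
        Xc = X[y == c]
        mu_c = Xc.mean(axis=0)
        S_w += (Xc - mu_c).T @ (Xc - mu_c)              # intra-class scatter
        diff = (mu_c - mu).reshape(-1, 1)
        S_b += Xc.shape[0] * (diff @ diff.T)            # inter-class scatter
    # Congruence transformation: P.T @ S_w @ P = I_n
    w, U = np.linalg.eigh(S_w)
    w = np.maximum(w, 1e-8 * w.max())                   # floor tiny eigenvalues (assumption)
    P = U @ np.diag(1.0 / np.sqrt(w))
    # Diagonalise the transformed inter-class scatter and keep the leading directions
    lam, V = np.linalg.eigh(P.T @ S_b @ P)
    phi = P @ V[:, np.argsort(lam)[::-1][:d]]
    return phi

X = np.random.randn(300, 36)
y = np.repeat(np.arange(10), 30)
Z = X @ rlda_projection(X, y, d=9)                      # 36-dimensional features reduced to 9
print(Z.shape)                                          # (300, 9)
```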
And 500, performing parameter training and modeling by using the feature vector subjected to the dimensionality reduction of each sensor node as training data to obtain a corresponding hierarchical fusion model.
FIG. 2 is a schematic diagram of the layered fusion model of the present invention. Referring to fig. 2, the layered fusion model of this embodiment is used to recognize various human actions, and the fusion algorithm mainly comprises two layers. The first layer is the classifier fusion layer: its basic idea is to fuse the classification results of several classifiers following a majority-voting rule, the fusion strategy combining this rule with weights, and the weights of the corresponding decision results being obtained by the entropy method. The second layer is the sensor fusion layer, which fuses the output results of the sensors bound to multiple parts of the body; its fusion strategy again makes the decision using weight values obtained by the entropy method. The specific steps are as follows:
and step 501, verifying the feature vector of each sensor node after dimensionality reduction through a k-fold cross verification method to obtain the contribution rate of each action to each classifier.
For the reduced-dimension training data set obtained from one sensor, the data set can be randomly divided into k parts by the k-fold cross validation method. From these k groups of data, the recognition accuracies of the k basic classifiers with respect to the c action classes are obtained and denoted y_qj, the recognition accuracy of the q-th action under the j-th classifier. Here y_qj can be regarded as the contribution rate of the q-th action to the j-th classifier.
Step 502, establishing the evaluation matrix Y of the classifier fusion layer according to the contribution rates, the matrix having one row per action class and one column per classifier, where Y denotes the evaluation matrix, c denotes the number of action classes, k denotes the number of classifiers, i (0 ≤ i ≤ N) denotes the i-th inertial sensor node, m_ij denotes the j-th (0 ≤ j ≤ k) classifier relative to the i-th inertial sensor node, and y_qj denotes the contribution rate of the q-th action (0 ≤ q ≤ c) under the j-th classifier.
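As an illustration of steps 501 and 502, the evaluation matrix can be estimated by k-fold cross validation roughly as follows. The choice of k, the scikit-learn base classifiers (the HMM classifier mentioned in the example is omitted here) and the per-class accuracy bookkeeping are assumptions made for this sketch.

```python
import numpy as np
from sklearn.model_selection import StratifiedKFold
from sklearn.neighbors import KNeighborsClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.tree import DecisionTreeClassifier
from sklearn.svm import SVC

def evaluation_matrix(X, y, classifiers, k=5):
    """Return the (c x len(classifiers)) matrix of contribution rates y_qj:
    the k-fold cross-validated recognition accuracy of the q-th action class
    under the j-th classifier, for one sensor node's reduced features."""
    classes = np.unique(y)
    Y = np.zeros((len(classes), len(classifiers)))
    skf = StratifiedKFold(n_splits=k, shuffle=True, random_state=0)
    for j, clf in enumerate(classifiers):
        correct, total = np.zeros(len(classes)), np.zeros(len(classes))
        for tr, te in skf.split(X, y):
            pred = clf.fit(X[tr], y[tr]).predict(X[te])
            for q, c in enumerate(classes):
                mask = y[te] == c
                total[q] += mask.sum()
                correct[q] += (pred[mask] == c).sum()
        Y[:, j] = correct / np.maximum(total, 1)
    return Y

X = np.random.randn(200, 9)                       # reduced features of one node (toy data)
y = np.repeat(np.arange(10), 20)                  # 10 action classes
clfs = [KNeighborsClassifier(), GaussianNB(), DecisionTreeClassifier(), SVC()]
print(evaluation_matrix(X, y, clfs).shape)        # (10, 4): rows = actions, columns = classifiers
```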
step 503, according to the evaluation matrix obtained in step 502, obtaining the shannon entropy of each classifier by using the following formula:
wherein e isjdenotes Shannon entropy, j denotes jth classifier, η is a constant, and η is 1/log2(c)。
Then, according to the Shannon entropy, the redundancy quantity r_j of the classifier is obtained by the formula r_j = 1 - e_j.
The weight value of the j-th classifier at the i-th sensor node is then obtained from the redundancy quantities.
The output result λ_i,q of the i-th sensor node is then obtained by weighting the classifier decisions, where λ_i,q denotes the test sample x being classified into the q-th class.
Step 504, for the reduced-dimension feature vectors (the training data) of the i-th sensor node, the recognition rate of the q-th action class is obtained.
Step 505, establishing the evaluation matrix of the sensor fusion layer according to the recognition rates.
Step 506, according to the evaluation matrix obtained in step 505, the Shannon entropy of each sensor node is obtained; the redundancy quantity of each sensor node is then obtained from its Shannon entropy, and the output weight of each sensor node is calculated from the redundancy quantities.
The hierarchical fusion model is thereby obtained, where λ_q denotes the test sample being classified into the q-th class. For a test sample, the final decision result λ_q is obtained from this hierarchical fusion model.
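A hedged sketch of the two-layer entropy-weighted decision of steps 502 to 506 is given below. The column-wise normalization of the evaluation matrices before taking the Shannon entropy, and the normalization of the redundancies into weights, are assumptions; the text itself only fixes η = 1/log2(c), r_j = 1 - e_j and the weighted-majority-voting idea.

```python
import numpy as np

def entropy_weights(Y):
    """Entropy-weight step used at both fusion layers. Y is an evaluation
    matrix with one row per action class (c rows) and one column per
    classifier or sensor node. Returns one weight per column."""
    c = Y.shape[0]
    eta = 1.0 / np.log2(c)
    p = Y / np.maximum(Y.sum(axis=0, keepdims=True), 1e-12)   # column-wise normalisation (assumption)
    e = -eta * np.sum(p * np.log2(p + 1e-12), axis=0)         # Shannon entropy e_j
    r = 1.0 - e                                                # redundancy r_j
    return r / np.maximum(r.sum(), 1e-12)

def fuse_sample(classifier_votes, node_eval, sensor_eval):
    """Hierarchical decision for one test sample.
    classifier_votes[i][j]: class index predicted by classifier j at node i
    node_eval[i]          : (c, k) evaluation matrix of node i (step 502)
    sensor_eval           : (c, N) evaluation matrix of the sensor layer (step 505)"""
    c, N = sensor_eval.shape
    node_scores = np.zeros((N, c))
    for i in range(N):                                         # first layer: classifier fusion
        w = entropy_weights(node_eval[i])
        for j, vote in enumerate(classifier_votes[i]):
            node_scores[i, vote] += w[j]                       # weighted majority voting
    W = entropy_weights(sensor_eval)                           # second layer: sensor fusion
    return int(np.argmax(W @ node_scores))                     # class maximising lambda_q

rng = np.random.default_rng(0)                                 # toy example: 2 nodes, 3 classifiers, 4 classes
node_eval = [rng.uniform(0.5, 1.0, (4, 3)) for _ in range(2)]
sensor_eval = rng.uniform(0.5, 1.0, (4, 2))
print(fuse_sample([[0, 0, 2], [0, 1, 0]], node_eval, sensor_eval))
```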
And step 600, recognizing the human body action by using the obtained layered fusion model.
When the test data are input into the corresponding layered fusion model, the corresponding classification result can be obtained, and then the human body action recognition is realized.
The invention is further illustrated by the following examples:
For example, human action data are collected with five sensor nodes, each containing a three-axis accelerometer and a three-axis gyroscope, at a sampling frequency of 50 Hz. The subjects were 8 persons between 24 and 34 years of age. The five sensor nodes were placed on the right wrist, left arm, waist, right ankle and left thigh of each subject. The actions designed for the experiment include: walking (performed on a gym treadmill at set speeds of 3 km/h and 5 km/h, for about 3 minutes each time); running (performed on a gym treadmill at set speeds of 6 km/h, 8 km/h and 12 km/h, for about 3 minutes each time); rope skipping (actually performed); cycling (performed on campus, for 3 minutes); going upstairs (performed on campus); going downstairs (performed on campus); and gymnastics (actually performed). The acquired raw data were processed in MATLAB, and the final recognition result was obtained with the compiled recognition algorithm. A total of 400 action sequences (8 × 5 × 10) were collected in this example, each sequence containing approximately 10000 samples of three-axis acceleration data and three-axis angular velocity data.
The acquired action sequences are then divided into windows. For the i-th (i = 1, 2, 3, 4, 5) sensor node, the size of the segmentation window is taken as 256, that is, every 256 sampling points form one data window. Given the length of the motion data matrix, each action data sequence can be divided into a sequence of data windows, where the data matrix in each window has dimension 256 × 6 and every two adjacent data windows overlap by 50%.
After the data windows are acquired, features are extracted from each data window; as described above, the extracted features include the root mean square, mean absolute deviation, kurtosis, covariance, zero-crossing rate, and energy, each computed from the triaxial acceleration data and the triaxial angular velocity data.
the dimension of the features in each data window is (36 × 1), each feature vector is regarded as a data sample, and the data samples are identified and classified.
After feature extraction, both the test data and the training data must undergo dimensionality reduction, using the RLDA algorithm provided by the invention. For each data sample obtained from a sensor, the feature dimension is 36; the RLDA algorithm reduces the feature dimension of each data sample to fewer than 9 dimensions.
For the dimension-reduced action data, the layered fusion algorithm provided by the invention is used to recognize and classify the test data in order to evaluate its performance. The single classifiers used within the fusion algorithm mainly comprise: the nearest neighbor classifier (KNN), the naive Bayes classifier (NB), the C4.5 decision tree classifier (C4.5), the support vector machine (SVM) and the hidden Markov model classifier (HMM); the same base classifiers are used at every sensor node.
This example evaluates the algorithm mainly with a leave-one-subject-out validation method: the data of one subject are taken as test data and the data of the remaining 7 subjects as training data, and this is repeated in turn for every subject. The final experimental result is the average over the 8 test runs.
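The leave-one-subject-out protocol itself is easy to express; in the sketch below, fit_and_score is a hypothetical placeholder for the full pipeline (segmentation, feature extraction, RLDA and hierarchical fusion) described above, so only the evaluation loop is shown.

```python
import numpy as np

def leave_one_subject_out(features, labels, subjects, fit_and_score):
    """Hold out each subject once as test data, train on the rest, and
    average the per-subject scores returned by fit_and_score."""
    scores = []
    for s in np.unique(subjects):
        test = subjects == s
        scores.append(fit_and_score(features[~test], labels[~test],
                                    features[test], labels[test]))
    return float(np.mean(scores))

# Toy usage with a dummy pipeline that always reports 1.0.
acc = leave_one_subject_out(np.zeros((80, 9)), np.zeros(80),
                            np.repeat(np.arange(8), 10),
                            lambda Xtr, ytr, Xte, yte: 1.0)
print(acc)  # 1.0
```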
Table 1 gives the classification accuracy obtained with the different recognition methods. It includes the results obtained when a single classifier is used directly to recognize the test samples, as well as the result of a classical classifier fusion algorithm, the majority voting method (MV). The method provided by the invention achieves the highest recognition accuracy, reaching 96.96%.
TABLE 1 Classification accuracy obtained under different identification methods
Method | KNN | NB | SVM | C4.5 | HMM | MV | The invention
---|---|---|---|---|---|---|---
Average recognition rate | 84.55% | 84.79% | 87.59% | 82.48% | 89.17% | 94.77% | 96.96%
According to the human body action recognition method based on multi-sensor data fusion provided by the invention, the traditional linear discriminant analysis algorithm is first improved by means of the congruence transformation of matrices, giving a new feature selection algorithm, RLDA. This algorithm uses the matrix congruence transformation to re-estimate the eigenvalues of the within-class scatter matrix before it is inverted, which reduces the error disturbance caused by inaccurate estimation of the smaller eigenvalues and improves the accuracy of the algorithm. The invention also provides a layered fusion model for recognizing various human actions, the fusion algorithm comprising two layers: the first layer is a classifier fusion layer and the second layer is a sensor fusion layer, with the decision weight of each layer obtained by the entropy method. The proposed fusion algorithm has two advantages: the layered structure makes the final output more accurate and to some extent exploits the strengths of the stronger classifiers, and the use of the entropy method makes the classification algorithm more robust.
Finally, it should be noted that: the above embodiments are only used to illustrate the technical solution of the present invention, and not to limit the same; while the invention has been described in detail and with reference to the foregoing embodiments, it will be understood by those skilled in the art that: modifications of the technical solutions described in the embodiments or equivalent replacements of some or all technical features may be made without departing from the scope of the technical solutions of the embodiments of the present invention.
Claims (4)
1. A human body action recognition method based on multi-sensor data fusion is characterized by comprising the following steps:
step 100, acquiring human body action data by using N inertial sensor nodes respectively fixed at different parts of a human body;
200, performing window segmentation on the human body action data acquired by each sensor node by using a sliding window segmentation technology to obtain a plurality of action data segments of each sensor node;
step 300, extracting the characteristics of the action data segments of each sensor node to obtain corresponding characteristic vectors;
step 400, performing feature dimension reduction on the obtained feature vector of each sensor node by using the RLDA algorithm;
step 500, performing parameter training and modeling by using the feature vector after dimension reduction of each sensor node as training data to obtain a corresponding layered fusion model, including steps 501 to 506:
step 501, verifying the reduced-dimension feature vectors of each sensor node by a k-fold cross validation method to obtain the contribution rate of each action to each classifier;
step 502, establishing the evaluation matrix of the classifier fusion layer according to the contribution rates, the matrix having one row per action class and one column per classifier, where Y denotes the evaluation matrix, c denotes the number of action classes, k denotes the number of classifiers, i denotes the i-th inertial sensor node, m_ij denotes the j-th classifier relative to the i-th inertial sensor node, and y_qj denotes the contribution rate of the q-th action under the j-th classifier;
step 503, according to the evaluation matrix obtained in step 502, obtaining the Shannon entropy e_j of each classifier, where e_j denotes the Shannon entropy of the j-th classifier and η is a constant with η = 1/log2(c);
and obtaining the redundancy quantity of each classifier from the Shannon entropy by the formula r_j = 1 - e_j, where r_j denotes the redundancy quantity;
obtaining, from the redundancy quantities, the weight value of the j-th classifier at the i-th sensor node;
obtaining the output result λ_i,q of the i-th sensor node by weighting the classifier decisions, where λ_i,q denotes the test sample x being classified into the q-th class;
step 504, for the reduced-dimension feature vectors of the i-th sensor node, obtaining the recognition rate of the q-th action class;
step 505, establishing the evaluation matrix of the sensor fusion layer according to the recognition rates;
step 506, according to the evaluation matrix obtained in step 505, obtaining the Shannon entropy of each sensor node; obtaining the redundancy quantity of each sensor node from its Shannon entropy; calculating the output weight of each sensor node from the redundancy quantities; and obtaining the hierarchical fusion model, where λ_q denotes the test sample being classified into the q-th class;
and step 600, recognizing the human body action by using the obtained layered fusion model.
2. The human body motion recognition method based on multi-sensor data fusion of claim 1, wherein the window segmentation of the human body motion data collected by each sensor node by using a sliding window segmentation technique comprises:
for the i-th sensor node, the size of the segmentation window is set to l; if the length of the motion data matrix A_i is L, the motion data matrix A_i can be divided into a sequence of data windows, wherein the size of the partitioned data matrix in each window is (l × 6) dimensions and the repetition rate of every two adjacent data windows is 50%.
3. The human body motion recognition method based on multi-sensor data fusion as claimed in claim 1, wherein feature extraction is performed on the motion data segment of each sensor node, and the extracted features include: the root mean square, mean absolute deviation, kurtosis, covariance, zero-crossing rate, and energy of the triaxial acceleration data and of the triaxial angular velocity data.
4. The human body motion recognition method based on multi-sensor data fusion of claim 1, wherein the RLDA algorithm is used for performing feature dimension reduction on the obtained feature vector of each sensor node, and the method comprises the following steps:
step 401, for the feature vector space corresponding to the i-th sensor node, obtaining the corresponding intra-class scatter matrix S_ω and inter-class scatter matrix S_b, where S_ω denotes the intra-class scatter matrix, S_b denotes the inter-class scatter matrix, μ_a denotes the mean of all feature vectors in class a, and μ denotes the mean of all feature vectors in the feature vector space X_i;
step 402, solving for an invertible matrix P according to the congruence theorem of matrices and elementary matrix transformations, so that the following holds: P^T S_ω P = I_n, where P denotes the invertible matrix, the diagonal entries of P^T S_b P are the eigenvalues associated with S_b, and I_n denotes the n-dimensional identity matrix;
step 403, according to the result obtained in step 402, using the Fisher criterion to obtain the optimal projection matrix φ_opt = K P^T, where φ_opt denotes the optimal projection matrix, K = φ(P^T)^(-1), and φ denotes the projection matrix to be solved;
and step 404, performing feature dimension reduction by using the optimal projection matrix.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201810803749.8A CN109086698B (en) | 2018-07-20 | 2018-07-20 | Human body action recognition method based on multi-sensor data fusion |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201810803749.8A CN109086698B (en) | 2018-07-20 | 2018-07-20 | Human body action recognition method based on multi-sensor data fusion |
Publications (2)
Publication Number | Publication Date |
---|---|
CN109086698A (en) | 2018-12-25
CN109086698B CN109086698B (en) | 2021-06-25 |
Family
ID=64838379
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201810803749.8A Active CN109086698B (en) | 2018-07-20 | 2018-07-20 | Human body action recognition method based on multi-sensor data fusion |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN109086698B (en) |
Cited By (15)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109784418A (en) * | 2019-01-28 | 2019-05-21 | 东莞理工学院 | A kind of Human bodys' response method and system based on feature recombination |
CN109902565A (en) * | 2019-01-21 | 2019-06-18 | 深圳市烨嘉为技术有限公司 | The Human bodys' response method of multiple features fusion |
CN109919034A (en) * | 2019-01-31 | 2019-06-21 | 厦门大学 | A kind of identification of limb action with correct auxiliary training system and method |
CN110058699A (en) * | 2019-04-28 | 2019-07-26 | 电子科技大学 | A kind of user behavior recognition method based on Intelligent mobile equipment sensor |
CN110377159A (en) * | 2019-07-24 | 2019-10-25 | 张洋 | Action identification method and device |
CN110796188A (en) * | 2019-10-23 | 2020-02-14 | 华侨大学 | Multi-type inertial sensor collaborative construction worker work efficiency monitoring method |
CN111950574A (en) * | 2019-05-15 | 2020-11-17 | 北京荣森利泰科贸有限公司 | Ritual training information pushing method and device based on wearable equipment |
CN112016430A (en) * | 2020-08-24 | 2020-12-01 | 郑州轻工业大学 | Hierarchical action identification method for multi-mobile-phone wearing positions |
CN112434669A (en) * | 2020-12-14 | 2021-03-02 | 武汉纺织大学 | Multi-information fusion human behavior detection method and system |
CN113057628A (en) * | 2021-04-04 | 2021-07-02 | 北京泽桥传媒科技股份有限公司 | Inertial sensor based motion capture method |
CN113065581A (en) * | 2021-03-18 | 2021-07-02 | 重庆大学 | Vibration fault migration diagnosis method for reactance domain adaptive network based on parameter sharing |
CN114241603A (en) * | 2021-12-17 | 2022-03-25 | 中南民族大学 | Shuttlecock action recognition and level grade evaluation method and system based on wearable equipment |
CN114832277A (en) * | 2022-05-20 | 2022-08-02 | 广东沃莱科技有限公司 | Rope skipping mode identification method and rope skipping |
CN114897035A (en) * | 2021-10-09 | 2022-08-12 | 国网浙江省电力有限公司电力科学研究院 | Multi-source data feature fusion method for 10kV cable state evaluation |
CN115731602A (en) * | 2021-08-24 | 2023-03-03 | 中国科学院深圳先进技术研究院 | Human body activity recognition method, device, equipment and storage medium based on topological representation |
Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN104268577A (en) * | 2014-06-27 | 2015-01-07 | 大连理工大学 | Human body behavior identification method based on inertial sensor |
CN105868779A (en) * | 2016-03-28 | 2016-08-17 | 浙江工业大学 | Method for identifying behavior based on feature enhancement and decision fusion |
CN106210269A (en) * | 2016-06-22 | 2016-12-07 | 南京航空航天大学 | A kind of human action identification system and method based on smart mobile phone |
Patent Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN104268577A (en) * | 2014-06-27 | 2015-01-07 | 大连理工大学 | Human body behavior identification method based on inertial sensor |
CN105868779A (en) * | 2016-03-28 | 2016-08-17 | 浙江工业大学 | Method for identifying behavior based on feature enhancement and decision fusion |
CN106210269A (en) * | 2016-06-22 | 2016-12-07 | 南京航空航天大学 | A kind of human action identification system and method based on smart mobile phone |
Non-Patent Citations (3)
Title |
---|
ORESTI BANOS ET AL.: "Multi-sensor Fusion Based on Asymmetric Decision Weighting for Robust Activity Recognition", 《NEURAL PROCESS LETTERS》 * |
姜鸣, 王哲龙: "基于无线传感网络的人体动作识别系统" [Human body action recognition system based on wireless sensor networks], 《2009中国自动化大会暨两化融合高峰会议论文集》 [Proceedings of the 2009 China Automation Congress and Integration of Informatization and Industrialization Summit] *
陈野 等: "基于BSN和神经网络的人体日常动作识别方法" [Human daily action recognition method based on BSN and neural networks], 《大连理工大学学报》 [Journal of Dalian University of Technology] *
Cited By (22)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109902565A (en) * | 2019-01-21 | 2019-06-18 | 深圳市烨嘉为技术有限公司 | The Human bodys' response method of multiple features fusion |
CN109784418A (en) * | 2019-01-28 | 2019-05-21 | 东莞理工学院 | A kind of Human bodys' response method and system based on feature recombination |
CN109784418B (en) * | 2019-01-28 | 2020-11-17 | 东莞理工学院 | Human behavior recognition method and system based on feature recombination |
CN109919034A (en) * | 2019-01-31 | 2019-06-21 | 厦门大学 | A kind of identification of limb action with correct auxiliary training system and method |
CN110058699B (en) * | 2019-04-28 | 2021-04-27 | 电子科技大学 | User behavior identification method based on intelligent mobile device sensor |
CN110058699A (en) * | 2019-04-28 | 2019-07-26 | 电子科技大学 | A kind of user behavior recognition method based on Intelligent mobile equipment sensor |
CN111950574A (en) * | 2019-05-15 | 2020-11-17 | 北京荣森利泰科贸有限公司 | Ritual training information pushing method and device based on wearable equipment |
CN110377159A (en) * | 2019-07-24 | 2019-10-25 | 张洋 | Action identification method and device |
CN110796188A (en) * | 2019-10-23 | 2020-02-14 | 华侨大学 | Multi-type inertial sensor collaborative construction worker work efficiency monitoring method |
CN110796188B (en) * | 2019-10-23 | 2023-04-07 | 华侨大学 | Multi-type inertial sensor collaborative construction worker work efficiency monitoring method |
CN112016430B (en) * | 2020-08-24 | 2022-10-11 | 郑州轻工业大学 | Hierarchical action identification method for multi-mobile-phone wearing positions |
CN112016430A (en) * | 2020-08-24 | 2020-12-01 | 郑州轻工业大学 | Hierarchical action identification method for multi-mobile-phone wearing positions |
CN112434669B (en) * | 2020-12-14 | 2023-09-26 | 武汉纺织大学 | Human body behavior detection method and system based on multi-information fusion |
CN112434669A (en) * | 2020-12-14 | 2021-03-02 | 武汉纺织大学 | Multi-information fusion human behavior detection method and system |
CN113065581A (en) * | 2021-03-18 | 2021-07-02 | 重庆大学 | Vibration fault migration diagnosis method for reactance domain adaptive network based on parameter sharing |
CN113065581B (en) * | 2021-03-18 | 2022-09-16 | 重庆大学 | Vibration fault migration diagnosis method for reactance domain self-adaptive network based on parameter sharing |
CN113057628A (en) * | 2021-04-04 | 2021-07-02 | 北京泽桥传媒科技股份有限公司 | Inertial sensor based motion capture method |
CN115731602A (en) * | 2021-08-24 | 2023-03-03 | 中国科学院深圳先进技术研究院 | Human body activity recognition method, device, equipment and storage medium based on topological representation |
CN114897035A (en) * | 2021-10-09 | 2022-08-12 | 国网浙江省电力有限公司电力科学研究院 | Multi-source data feature fusion method for 10kV cable state evaluation |
CN114241603A (en) * | 2021-12-17 | 2022-03-25 | 中南民族大学 | Shuttlecock action recognition and level grade evaluation method and system based on wearable equipment |
CN114832277A (en) * | 2022-05-20 | 2022-08-02 | 广东沃莱科技有限公司 | Rope skipping mode identification method and rope skipping |
CN114832277B (en) * | 2022-05-20 | 2024-02-06 | 广东沃莱科技有限公司 | Rope skipping mode identification method and rope skipping |
Also Published As
Publication number | Publication date |
---|---|
CN109086698B (en) | 2021-06-25 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN109086698B (en) | Human body action recognition method based on multi-sensor data fusion | |
CN104268577B (en) | Human body behavior identification method based on inertial sensor | |
Siirtola et al. | Personalizing human activity recognition models using incremental learning | |
Dehzangi et al. | IMU-based robust human activity recognition using feature analysis, extraction, and reduction | |
CN108549856A (en) | A kind of human action and road conditions recognition methods | |
Whelan et al. | Leveraging IMU data for accurate exercise performance classification and musculoskeletal injury risk screening | |
Shi et al. | Gait recognition via random forests based on wearable inertial measurement unit | |
Noor et al. | Detection of freezing of gait using unsupervised convolutional denoising autoencoder | |
WO2023035093A1 (en) | Inertial sensor-based human body behaviour recognition method | |
CN110163264B (en) | Walking pattern recognition method based on machine learning | |
Lu et al. | MFE-HAR: multiscale feature engineering for human activity recognition using wearable sensors | |
Wang et al. | A2dio: Attention-driven deep inertial odometry for pedestrian localization based on 6d imu | |
CN115273236A (en) | Multi-mode human gait emotion recognition method | |
CN108717548A (en) | A kind of increased Activity recognition model update method of facing sensing device dynamic and system | |
Zhou et al. | A self-supervised human activity recognition approach via body sensor networks in smart city | |
Liu et al. | MRD-NETS: multi-scale residual networks with dilated convolutions for classification and clustering analysis of spacecraft electrical signal | |
Dohnálek et al. | Human activity recognition on raw sensor data via sparse approximation | |
Kilinc et al. | Inertia based recognition of daily activities with anns and spectrotemporal features | |
Ghobadi et al. | Foot-mounted inertial measurement unit for activity classification | |
CN115273237B (en) | Human body posture and action recognition method based on integrated random configuration neural network | |
CN114171194B (en) | Quantitative assessment method, device, electronic device and medium for Parkinson multiple symptoms | |
Ambati et al. | A comparative study of machine learning approaches for human activity recognition | |
Mai et al. | Human activity recognition of exoskeleton robot with supervised learning techniques | |
Feng et al. | Research on Human Activity Recognition Based on Random Forest Classifier | |
Doroz et al. | Method of signature recognition with the use of the mean differences |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||
GR01 | Patent grant |