
CN109522850B - Action similarity evaluation method based on small sample learning - Google Patents

Action similarity evaluation method based on small sample learning

Info

Publication number
CN109522850B
CN109522850B (application CN201811396297.2A)
Authority
CN
China
Prior art keywords
video
data
sampling
motion
model
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201811396297.2A
Other languages
Chinese (zh)
Other versions
CN109522850A (en)
Inventor
郑伟诗 (Zheng Weishi)
胡康 (Hu Kang)
朱智慧 (Zhu Zhihui)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Sun Yat Sen University
Original Assignee
Sun Yat Sen University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Sun Yat Sen University filed Critical Sun Yat Sen University
Priority to CN201811396297.2A priority Critical patent/CN109522850B/en
Publication of CN109522850A publication Critical patent/CN109522850A/en
Application granted granted Critical
Publication of CN109522850B publication Critical patent/CN109522850B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/20Movements or behaviour, e.g. gesture recognition
    • G06V40/23Recognition of whole body movements, e.g. for sport training
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/21Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/22Matching criteria, e.g. proximity measures

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Physics & Mathematics (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Artificial Intelligence (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Psychiatry (AREA)
  • Social Psychology (AREA)
  • Human Computer Interaction (AREA)
  • Multimedia (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a motion similarity evaluation method based on small sample learning, comprising the steps of establishing a data preprocessing model, a training model and a testing model. A human body posture estimation model is adopted to extract a whole-body skeleton motion video and the positions of all joint points, eliminating background interference, and human motion is split according to the joint-point positions. A sampling pixel value and a sampling interval are set, and sampling videos are intercepted, comprising the whole-body skeleton motion video and the joint motion videos centered on each joint point, so that local information is combined with global information. After data preprocessing, training is performed with a rewritten triplet loss function; video data are mapped to a cosine space, cosine distances are calculated, and the overall similarity of the human motion in the video and the similarity of each joint are output. According to the invention, a good motion feature mapping model can be learned using only a few samples, thereby obtaining good motion similarity results.

Description

Action similarity evaluation method based on small sample learning
Technical Field
The invention relates to the field of computer vision, in particular to an action similarity evaluation method based on small sample learning.
Background
At present, the mainstream motion similarity evaluation method adopts a two-stream architecture, namely an RGB stream and an optical flow stream. The RGB stream extracts spatial features of the person in the video, and the optical flow stream extracts motion features. The two streams are fused, the same operation is applied to both videos to obtain two fused two-stream features, and these are input into a decision network to obtain a similarity score.
A two-stream model generally analyzes the whole frame and, by construction, attends mainly to global information, whereas similarity evaluation depends more on local information. Moreover, because the data contain a large amount of noise, a two-stream model needs a large amount of training data to learn good features, reduce the influence of noise on the final result, and alleviate overfitting.
Disclosure of Invention
In order to overcome the defects of the prior art, the invention provides a method based on small sample learning. According to the result of a human body posture estimation model, a small video within a 100x100-pixel range centered on each human joint point is intercepted; the small videos are combined with the original video and input into a neural network to extract features, so that both local and global information are attended to. The trained model can evaluate the similarity of any two actions well and obtains good evaluation results on a data set shot in real scenes.
In order to achieve the purpose, the invention adopts the following technical scheme:
a motion similarity evaluation method based on small sample learning comprises the following steps:
establishing a data preprocessing model:
extracting a human body whole skeleton motion video and the positions of all joint points by adopting a human body posture estimation model;
setting sampling pixel values and sampling intervals, and intercepting to obtain a sampling video, wherein the sampling video comprises a human body whole skeleton motion sampling video and a joint motion sampling video taking each joint point as a center;
establishing a training model:
selecting training data, and determining template data, positive data and negative data;
training with a triplet loss function: respectively inputting the template data, positive-class data and negative-class data into the feature extractor of a three-dimensional convolutional neural network to obtain feature maps, and obtaining the template feature vector, positive-class feature vector and negative-class feature vector respectively through calculation of a fully connected layer;
calculating a first cosine distance between the template feature vector and the positive-class feature vector;
calculating a second cosine distance between the template feature vector and the negative-class feature vector;
training with the triplet loss function, updating the parameters, and outputting the trained feature extractor;
the calculation formula of the first cosine distance or the second cosine distance is as follows:
Figure BDA0001875271950000021
wherein cos is the cosine value obtained by calculation, and the value range is [0,1 ]]The closer the cosine value is to 1, indicating that the closer the included angle is to 0 degrees, the more similar the two eigenvectors are, x i Is the i-th feature vector, y, of the template data i The ith feature vector of the positive or negative class data;
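As a sketch of this step, the cosine value between two feature vectors can be computed as follows (a minimal NumPy illustration; clipping negative cosines to 0 to obtain the stated [0, 1] range is our assumption about how that range is enforced):

```python
import numpy as np

def cosine_similarity(x, y):
    """Cosine value between two feature vectors (formula above).

    Clipping negative values to 0 is an assumption: the patent only
    states that the resulting range is [0, 1].
    """
    cos = np.dot(x, y) / (np.linalg.norm(x) * np.linalg.norm(y))
    return float(max(cos, 0.0))
```

A value of 1.0 means the two vectors point in the same direction (the included angle is 0 degrees); orthogonal feature vectors score 0.0.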
establishing a test model:
inputting two sections of videos to be detected, and performing data preprocessing through a data preprocessing model;
inputting the two preprocessed videos respectively into the trained feature extractor, and correspondingly calculating the cosine distances of the obtained feature vectors;
and calculating the average value of the cosine distances, and outputting the similarity score of the overall motion of the human in the video.
As a preferred technical solution, the sampling step in establishing the data preprocessing model is specifically as follows:
scaling the pixels of the whole-body skeleton motion video and sampling at soft intervals to obtain a sampled video;
intercepting a joint motion video centered on each joint point and scaling its pixels, then sampling at soft intervals, wherein a video frame is added to the target video if the modulo of its frame number with respect to the sampling interval is less than 1; the scaled pixel size of the joint motion videos is the same as that of the whole-body skeleton motion video.
As a preferred technical solution, the sampling interval calculation method of the soft interval sampling is as described in the following formula:
sample=t/(fps/25)/36;
wherein sample is a sampling interval, t is a total video frame number, and fps is a video frame rate.
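The soft-interval sampling rule above can be sketched as follows (a hypothetical helper; zero-based frame indexing and the keep-rule `frame % interval < 1` follow the description):

```python
def soft_interval_sample(total_frames, fps):
    """Soft-interval sampling per the formula above:
    interval = t / (fps / 25) / 36.

    Frames are read in order, and a frame is kept when its index
    modulo the (possibly fractional) interval is less than 1.
    For a 25 fps input this yields exactly 36 kept frames.
    """
    interval = total_frames / (fps / 25) / 36
    return [f for f in range(total_frames) if f % interval < 1]
```

For example, a 360-frame clip at 25 fps gives an interval of 10, keeping frames 0, 10, ..., 350 — 36 in total; a fractional interval (e.g. 2.5 for a 90-frame clip) also yields 36 frames without dropping whole segments of the motion.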
As a preferred technical scheme, each joint position in the data preprocessing model comprises a left shoulder, a right shoulder, a left elbow, a right elbow, a left wrist, a right wrist, a left hip, a right hip, a left knee, a right knee, a left ankle and a right ankle.
As a preferred technical solution, the pixel values set in the data preprocessing model are 100 × 100 pixel values.
As a preferred technical solution, the triplet loss function in the step of establishing the training model is calculated as follows:

$$L=\sum_{i=1}^{n}\left[\,N_{i}-P_{i}+m\,\right]_{+}$$

wherein N_i is the cosine value obtained by calculating the cosine distance between the i-th template-data action and the i-th negative-class data action, P_i is the cosine value obtained by calculating the cosine distance between the i-th template-data action and the i-th positive-class data action, n = 13, m is the minimum margin between the two cosine distances, m is 0.9, and [·]_+ means the bracketed value is counted toward the loss when it is greater than zero, and the loss is zero otherwise.
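A minimal sketch of the rewritten triplet loss, assuming a sum over the n = 13 part-wise cosine values (the reduction is not stated explicitly in the text, so the summation is our assumption):

```python
import numpy as np

def triplet_loss(P, N, m=0.9):
    """Rewritten triplet loss over n part-wise cosine values.

    P[i]: cosine value between the i-th template part and the positive
    sample; N[i]: the same against the negative sample.  Each term
    [N_i - P_i + m]_+ penalises the model whenever the negative
    similarity plus the margin m exceeds the positive similarity.
    Summing over the parts is an assumption; only the per-term form
    is stated in the source.
    """
    P, N = np.asarray(P, dtype=float), np.asarray(N, dtype=float)
    return float(np.sum(np.maximum(N - P + m, 0.0)))
```

With P_i = 1 and N_i = 0 (a perfectly separated triplet) the loss is 0; identical positive and negative similarities incur the full margin 0.9 per part.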
As a preferred technical solution, the specific steps of establishing the selected training set in the training model are as follows:
selecting, from each of the s types of action videos, the k videos most similar to the template data action, for s × k training samples in total, wherein s > 1 and k > 1;
after data training with the triplet loss function, the training samples have s × k × k combinations in total.
As a preferred technical solution, the three-dimensional convolution network in the training model is established in a parameter sharing manner.
As a preferred technical solution, the feature map information in the training model includes motion information, color information, shape information, and position information.
As a preferred technical solution, the test model further comprises the following steps:
and respectively and independently outputting the cosine distances to obtain the results of the similarity degree of each joint position.
Compared with the prior art, the invention has the following advantages and beneficial effects:
(1) The human body posture estimation model is used to obtain the human skeleton motion video, eliminating the interference of background information and helping the model learn more useful features.
(2) According to the invention, the person's motion is split according to the joint-point positions, and the parts and the whole are combined to obtain the final similarity score, so that local and global information are both effectively attended to and the final result is more reliable.
(3) The method trains on the data with the triplet loss function, fully exploiting the intrinsic value of the data; a good feature mapping model can be learned with only a small number of samples, video data can be mapped to the cosine space, and good motion similarity results are obtained.
Drawings
FIG. 1 is a schematic representation of the human skeleton of the data preprocessing model of the present invention;
FIG. 2 is a schematic diagram of a training model according to the present invention;
FIG. 3 is a flow chart of a test model of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, the present invention is further described in detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the invention and do not limit the invention.
In this embodiment, a method for evaluating motion similarity based on small sample learning is provided, which includes the following specific steps:
Establishing a data preprocessing model: the human skeleton motion video and the joint-point positions are extracted using AlphaPose (a human pose estimation model), as shown in FIG. 1. The background information of the data is discarded, which greatly reduces the influence of background noise on the model result and focuses the learned features on the motion. A 100 × 100-pixel region centered on each joint point is cropped for 12 joint points (excluding the 5 points on the head): left shoulder, right shoulder, left elbow, right elbow, left wrist, right wrist, left hip, right hip, left knee, right knee, left ankle and right ankle. Because the videos differ in length, in order to unify the length without losing the continuity of the motion, this embodiment proposes soft-interval sampling: the sampling interval is first computed (see formula 1), the video frames are read in sequence, and a frame is added to the target video if the modulo of the current frame number with respect to the sampling interval is less than 1. Soft-interval sampling uniformly yields short videos with a frame rate of 25 and a total of 36 frames.
sample=t/(fps/25)/36 (1)
Wherein sample is a sampling interval, t is a total video frame number, and fps is a video frame rate.
In this embodiment, a training model is established: the whole-body skeleton motion video is scaled to 100x100, and soft-interval sampling is likewise applied along the time axis to obtain a 36-frame short video, which is then combined with the cropped joint motion videos to obtain data of dimension 13 x 36 x 100 x 100 x 3 (13 = 12 part videos plus 1 whole video). Triplet loss training is used: the template data, positive-class data (the same action as the template) and negative-class data (a different action from the template) are input respectively into a C3D (three-dimensional convolutional network) feature extractor to obtain feature maps, which are passed through a fully connected layer to obtain the template, positive-class and negative-class feature vectors. The three-dimensional convolutional network uses parameter sharing, and one iteration requires 39 repeated forward computations, yielding 3 x 13 distinct feature vectors. The 13 feature vectors of the template and the 13 feature vectors of the positive class are paired to compute cosine distances (formula 2), giving 13 cosine values whose mean is the similarity score between the template data action and the positive-class action data. Similarly, cosine distances between the template feature vectors and the negative-class feature vectors give the similarity score between the template data action and the negative-class action data. The triplet loss function is given by formula 3. After triplet loss training, the relevant parameters are updated and the trained feature extractor is output for feature extraction in the test stage.
$$\cos=\frac{\sum_{i=1}^{n}x_{i}y_{i}}{\sqrt{\sum_{i=1}^{n}x_{i}^{2}}\,\sqrt{\sum_{i=1}^{n}y_{i}^{2}}}\qquad(2)$$

Wherein cos is the computed cosine value, with value range [0, 1]; the closer the cosine value is to 1, the closer the included angle is to 0 degrees and the more similar the two feature vectors are; x_i is the i-th feature vector of the template data, and y_i is the i-th feature vector of the positive-class or negative-class data; n is 13 in this embodiment.
$$L=\sum_{i=1}^{n}\left[\,N_{i}-P_{i}+m\,\right]_{+}\qquad(3)$$

Wherein N_i is the cosine value obtained by calculating the cosine distance between the i-th template-data action and the i-th negative-class data action, P_i is the cosine value obtained by calculating the cosine distance between the i-th template-data action and the i-th positive-class data action, n = 13, m is the minimum margin between the two cosine distances, m is 0.9, and [·]_+ means the bracketed value is counted toward the loss when it is greater than zero, and the loss is zero otherwise.
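The data assembly and shared per-part feature extraction described in this embodiment can be sketched as follows (the shapes follow the stated 13 x 36 x 100 x 100 x 3 layout; the `feature_extractor` callable merely stands in for the C3D network plus fully connected layer, which is not reproduced here):

```python
import numpy as np

def assemble_sample(whole_clip, joint_clips):
    """Stack the whole-skeleton clip and the 12 joint clips into one
    13 x 36 x 100 x 100 x 3 array (12 part videos + 1 whole video)."""
    assert whole_clip.shape == (36, 100, 100, 3)
    assert len(joint_clips) == 12
    return np.stack([whole_clip] + list(joint_clips))

def extract_features(sample, feature_extractor):
    """Run a shared (parameter-sharing) extractor once per clip: 13
    forward passes per video, hence 39 per triplet, yielding the
    3 x 13 feature vectors described in the embodiment."""
    return np.stack([feature_extractor(clip) for clip in sample])
```

Any callable mapping a (36, 100, 100, 3) clip to a fixed-length feature vector can stand in for the trained extractor when prototyping the surrounding pipeline.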
In this embodiment, the training data set is selected as follows: from each of the s (s > 1) action types, the k (k > 1) videos most similar to the template data motion are selected, for s × k training samples in total. Because the triplet loss function is used for training, the training samples admit s × k × k combinations, k times the number of originally selected samples. In this embodiment s is 5 and k is 20; only 100 samples are used, yet there are 2000 combinations during training, equivalent to 2000 training samples. A selected sample may serve as positive-class or negative-class data: during training, a sample similar to the template data video is positive-class data, and one not similar to it is negative-class data. The template data video is the motion of a fitness trainer.
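The combination count above can be made concrete with a small enumeration (the pairing scheme — each of a type's k videos as the positive against each of the k videos of a contrasting type as the negative — is our reading of the text; only the count s × k × k is stated explicitly):

```python
from itertools import product

def triplet_combinations(s, k):
    """Enumerate the s * k * k (action type, positive index,
    negative index) combinations implied by the embodiment.
    The exact pairing scheme is an assumption; the source only
    states the total count s * k * k."""
    return list(product(range(s), range(k), range(k)))
```

With s = 5 and k = 20 as in the embodiment, the 100 selected samples yield 2000 combinations, which is what makes training from such a small sample set feasible.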
In this embodiment, a test model is established: the feature extractor is obtained after multiple iterations of stochastic gradient descent, and the model can map video data to a cosine space so that cosine distances can be calculated. Any two videos are input; the human skeleton motion video is extracted from each by the human body posture estimation model, the video subregions are cropped, and the remaining data preprocessing is applied. The videos are then input respectively into the trained C3D feature extractor, and the cosine distances of the resulting feature vectors are calculated in one-to-one correspondence, 13 in total. Their mean is the final output, namely the degree of motion similarity of the people in the two videos; if the 13 cosine distances are output separately, it can be determined at which joint positions the actions are similar and at which they are not. The flow of the test stage is shown in FIG. 3.
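The test-stage scoring can be sketched as follows (the part names and their order in `PARTS` are partly assumed — the patent specifies 1 whole-body clip plus the 12 named joints, but not an ordering; clipping negative cosines to 0 matches the stated [0, 1] range):

```python
import numpy as np

PARTS = ["whole body", "left shoulder", "right shoulder", "left elbow",
         "right elbow", "left wrist", "right wrist", "left hip",
         "right hip", "left knee", "right knee", "left ankle",
         "right ankle"]  # ordering of the 13 clips is our assumption

def similarity_report(feats_a, feats_b):
    """Per-part cosine values and their mean (the overall score).

    feats_a, feats_b: (13, d) matrices from the trained extractor,
    one feature vector per body part, for the two input videos.
    """
    sims = {}
    for name, x, y in zip(PARTS, feats_a, feats_b):
        cos = float(np.dot(x, y) / (np.linalg.norm(x) * np.linalg.norm(y)))
        sims[name] = max(cos, 0.0)  # clip to the stated [0, 1] range
    overall = sum(sims.values()) / len(sims)
    return overall, sims
```

Reporting the 13 per-part values alongside their mean shows which joints match and which do not, exactly as described for the separate-output mode of the test stage.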
In this embodiment, a human body posture estimation model is adopted to extract the whole-body skeleton motion video and the positions of all joint points, eliminating background interference; human motion is split according to the joint-point positions, combining local and global information; after this data processing, training uses the rewritten triplet loss function. The framework proposed in this embodiment can learn a good motion feature mapping model using very few samples, thereby obtaining good motion similarity results.
The above embodiments are preferred embodiments of the present invention, but the present invention is not limited to the above embodiments, and any other changes, modifications, substitutions, combinations, and simplifications which do not depart from the spirit and principle of the present invention should be construed as equivalents thereof, and all such modifications are intended to be included in the scope of the present invention.

Claims (7)

1. A motion similarity evaluation method based on small sample learning is characterized by comprising the following steps:
establishing a data preprocessing model:
extracting a human body overall skeleton motion video and the positions of all joint points by adopting a human body posture estimation model;
setting sampling pixel values and sampling intervals, and intercepting to obtain a sampling video, wherein the sampling video comprises a human body whole skeleton motion sampling video and a joint motion sampling video taking each joint point as a center;
the sampling step is specifically as follows:
zooming pixels of the human body overall skeleton motion video, and sampling at soft intervals to obtain a sampled video;
intercepting a joint motion video centered on each joint point and scaling its pixels, then sampling at soft intervals, wherein a video frame is added to the target video if the modulo of its frame number with respect to the sampling interval is less than 1, and the scaled pixel size of the joint motion video is the same as that of the whole-body skeleton motion video;
the sampling interval of the soft interval sampling is calculated according to the following formula:
sample=t/(fps/25)/36;
wherein sample is a sampling interval, t is a total video frame number, and fps is a video frame rate;
establishing a training model:
selecting training data, and determining template data, positive data and negative data;
respectively inputting the template data, the positive class data and the negative class data into a feature extractor of a three-dimensional convolutional neural network to obtain a feature map, and respectively obtaining a template feature vector, a positive class feature vector and a negative class feature vector through calculation of a full connection layer;
the template feature vector and the normal feature vector are corresponding to calculate a first cosine distance;
the template feature vector and the negative class feature vector correspondingly calculate a second cosine distance;
training with a triplet loss function, updating the parameters, and outputting the trained feature extractor;
the calculation formula of the first cosine distance or the second cosine distance is as follows:
$$\cos=\frac{\sum_{i=1}^{n}x_{i}y_{i}}{\sqrt{\sum_{i=1}^{n}x_{i}^{2}}\,\sqrt{\sum_{i=1}^{n}y_{i}^{2}}}$$

wherein cos is the computed cosine value, with value range [0, 1]; the closer the cosine value is to 1, the closer the included angle is to 0 degrees and the more similar the two feature vectors are; x_i is the i-th feature vector of the template data, and y_i is the i-th feature vector of the positive-class or negative-class data;
the formula for calculating the triple loss function is as follows:
Figure FDA0003990878760000022
wherein N is i Calculating cosine distance to obtain cosine value P for ith template data action and ith negative data action i Cosine values obtained by calculating cosine distances for the ith template data action and the ith sine data action, n =13, m is the minimum interval of two cosine distances, m is 0.9, + denotes [, ]]When the internal value is larger than zero, the value is taken for loss calculation, otherwise, the loss is zero;
establishing a test model:
inputting two sections of videos to be detected, and performing data preprocessing through a data preprocessing model;
inputting the two preprocessed videos respectively into the trained feature extractor, and correspondingly calculating the cosine distances of the obtained feature vectors;
and calculating the average value of the cosine distances, and outputting the similarity score of the overall motion of the human in the video.
2. The method for motion similarity assessment based on small sample learning according to claim 1, wherein the joint positions in the data preprocessing model comprise left and right shoulders, left and right elbows, left and right wrists, left and right hips, left and right knees and left and right ankles.
3. The method for evaluating motion similarity based on small sample learning according to claim 1 or 2, wherein the pixel values set in the establishing of the data preprocessing model are 100x100 pixel values.
4. The method for evaluating action similarity based on small sample learning according to claim 1, wherein the specific steps of establishing the selected training set in the training model are as follows:
selecting, from each of the s types of action videos, the k videos most similar to the template data action, for s × k training samples in total, wherein s > 1 and k > 1;
after data training with the triplet loss function, the training samples have s × k × k combinations in total.
5. The method for evaluating action similarity based on small sample learning according to claim 1, wherein a three-dimensional convolution network in the training model is established in a parameter sharing manner.
6. The small sample learning-based motion similarity evaluation method according to claim 1, wherein the feature map information in the training model is established to include motion information, color information, shape information and position information.
7. The small sample learning-based action similarity evaluation method according to claim 1, wherein the test model further comprises the following steps:
and respectively and independently outputting the cosine distances to obtain the results of the similarity degree of each joint position.
CN201811396297.2A 2018-11-22 2018-11-22 Action similarity evaluation method based on small sample learning Active CN109522850B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811396297.2A CN109522850B (en) 2018-11-22 2018-11-22 Action similarity evaluation method based on small sample learning

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201811396297.2A CN109522850B (en) 2018-11-22 2018-11-22 Action similarity evaluation method based on small sample learning

Publications (2)

Publication Number Publication Date
CN109522850A CN109522850A (en) 2019-03-26
CN109522850B true CN109522850B (en) 2023-03-10

Family

ID=65778341

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811396297.2A Active CN109522850B (en) 2018-11-22 2018-11-22 Action similarity evaluation method based on small sample learning

Country Status (1)

Country Link
CN (1) CN109522850B (en)

Families Citing this family (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110378362A (en) * 2019-04-22 2019-10-25 浙江师范大学 Concept learning method based on concept invariant feature and its differentiation network
KR102194282B1 (en) * 2019-05-17 2020-12-23 네이버 주식회사 Method for generating pose similarity measurement model and apparatus for the same
CN110298279A (en) * 2019-06-20 2019-10-01 暨南大学 A kind of limb rehabilitation training householder method and system, medium, equipment
CN110717554B (en) * 2019-12-05 2023-02-28 广东虚拟现实科技有限公司 Image recognition method, electronic device, and storage medium
CN113128283A (en) * 2019-12-31 2021-07-16 沸腾时刻智能科技(深圳)有限公司 Evaluation method, model construction method, teaching machine, teaching system and electronic equipment
GB2600922B (en) * 2020-11-05 2024-04-10 Thales Holdings Uk Plc One shot learning for identifying data items similar to a query data item
CN112508105B (en) * 2020-12-11 2024-03-19 南京富岛信息工程有限公司 Fault detection and retrieval method for oil extraction machine
CN113033622B (en) * 2021-03-05 2023-02-03 北京百度网讯科技有限公司 Training method, device, equipment and storage medium for cross-modal retrieval model
CN113052138B (en) * 2021-04-25 2024-03-15 广海艺术科创(深圳)有限公司 Intelligent contrast correction method for dance and movement actions
CN114492363B (en) * 2022-04-15 2022-07-15 苏州浪潮智能科技有限公司 Small sample fine adjustment method, system and related device
CN115331154B (en) * 2022-10-12 2023-01-24 成都西交智汇大数据科技有限公司 Method, device and equipment for scoring experimental steps and readable storage medium

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2012141881A (en) * 2011-01-05 2012-07-26 Kddi Corp Human body motion estimation device, human body motion estimation method and computer program
CN107392097A (en) * 2017-06-15 2017-11-24 中山大学 A kind of 3 D human body intra-articular irrigation method of monocular color video
WO2017206005A1 (en) * 2016-05-30 2017-12-07 中国石油大学(华东) System for recognizing postures of multiple people employing optical flow detection and body part model
CN107832672A (en) * 2017-10-12 2018-03-23 北京航空航天大学 A kind of pedestrian's recognition methods again that more loss functions are designed using attitude information
CN108009528A (en) * 2017-12-26 2018-05-08 广州广电运通金融电子股份有限公司 Face authentication method, device, computer equipment and storage medium based on Triplet Loss


Also Published As

Publication number Publication date
CN109522850A (en) 2019-03-26

Similar Documents

Publication Publication Date Title
CN109522850B (en) Action similarity evaluation method based on small sample learning
CN110135375B (en) Multi-person attitude estimation method based on global information integration
CN104167016B (en) A kind of three-dimensional motion method for reconstructing based on RGB color and depth image
CN107103613B (en) A kind of three-dimension gesture Attitude estimation method
CN103854283B (en) A kind of mobile augmented reality Tracing Registration method based on on-line study
CN113205595B (en) Construction method and application of 3D human body posture estimation model
CN113762133A (en) Self-weight fitness auxiliary coaching system, method and terminal based on human body posture recognition
CN109508661B (en) Method for detecting hand lifter based on object detection and posture estimation
CN110688929B (en) Human skeleton joint point positioning method and device
CN112597814A (en) Improved Openpos classroom multi-person abnormal behavior and mask wearing detection method
CN112258555A (en) Real-time attitude estimation motion analysis method, system, computer equipment and storage medium
CN110751100A (en) Auxiliary training method and system for stadium
CN111507184B (en) Human body posture detection method based on parallel cavity convolution and body structure constraint
CN114038062B (en) Examinee abnormal behavior analysis method and system based on joint key point characterization
CN111524183A (en) Target row and column positioning method based on perspective projection transformation
CN113989928B (en) Motion capturing and redirecting method
CN112001217A (en) Multi-person human body posture estimation algorithm based on deep learning
CN107895145A (en) Method based on convolutional neural networks combination super-Gaussian denoising estimation finger stress
WO2024060978A1 (en) Key point detection model training method and apparatus and virtual character driving method and apparatus
CN111178201A (en) Human body sectional type tracking method based on OpenPose posture detection
CN114092863A (en) Human body motion evaluation method for multi-view video image
CN112633083A (en) Method for detecting abnormal behaviors of multiple persons and wearing of mask based on improved Openpos examination
CN112329571B (en) Self-adaptive human body posture optimization method based on posture quality evaluation
CN116721468A (en) Intelligent guided broadcast switching method based on multi-person gesture estimation action amplitude detection
CN117152829A (en) Industrial boxing action recognition method of multi-view self-adaptive skeleton network

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant