
CN105809144B - Gesture recognition system and method using motion segmentation - Google Patents

Gesture recognition system and method using motion segmentation

Info

Publication number
CN105809144B
CN105809144B
Authority
CN
China
Prior art keywords
gesture
segmentation
data
module
motion
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201610177623.5A
Other languages
Chinese (zh)
Other versions
CN105809144A (en)
Inventor
李红波 (Li Hongbo)
张少波 (Zhang Shaobo)
欧阳文 (Ouyang Wen)
范张群 (Fan Zhangqun)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Chongqing University of Post and Telecommunications
Original Assignee
Chongqing University of Post and Telecommunications
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Chongqing University of Post and Telecommunications
Priority to CN201610177623.5A
Publication of CN105809144A
Application granted
Publication of CN105809144B
Legal status: Active
Anticipated expiration

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/20 Movements or behaviour, e.g. gesture recognition
    • G06V40/28 Recognition of hand or arm movements, e.g. recognition of deaf sign language
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/011 Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • G06F3/012 Head tracking input arrangements
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/20 Movements or behaviour, e.g. gesture recognition
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V2201/00 Indexing scheme relating to image or video recognition or understanding
    • G06V2201/07 Target detection

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Health & Medical Sciences (AREA)
  • General Engineering & Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Health & Medical Sciences (AREA)
  • Psychiatry (AREA)
  • Social Psychology (AREA)
  • Multimedia (AREA)
  • Image Analysis (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

The present invention discloses a gesture recognition system and method using motion segmentation, relating to the fields of machine vision and human-computer interaction. The method comprises the following steps: first, detect head movement and compute the change in head pose; then, issue a segmentation signal according to the pose estimate and determine the start and end points of gesture segmentation. If the signal indicates active gesture segmentation, the gesture video frame sequence is captured within the time interval in which the gesture is performed, and preprocessing and feature extraction are applied to the gesture frame images. If the signal indicates automatic gesture segmentation, the video frame sequence is acquired in real time and the segmentation points are determined automatically by analyzing the motion pattern between adjacent gestures. View-invariant features are then extracted from the effective gesture primitives obtained by segmentation, and the gesture class is obtained with a recognition algorithm that eliminates spatio-temporal variation. The invention substantially reduces the redundancy of continuous gestures and the computational cost of the recognition algorithm, and improves the accuracy and real-time performance of gesture recognition.

Description

Gesture recognition system and method using motion segmentation
Technical field
The invention belongs to the fields of digital image processing and human-computer interaction, and in particular relates to a gesture recognition system and method using motion segmentation.
Background art
With the development of touch interaction on mobile phones and of human body tracking and recognition, it has become clear that gesture interaction offers human-centered naturalness, conciseness, and directness; interactive interfaces that take the human hand as an intelligent input device are becoming a new technological trend. In particular, with the rise of new immersive virtual reality devices, a variety of interaction schemes are being used to improve the sense of immersion, among which gesture interaction is the most concise, direct, and natural.
As a means of human-computer interaction, gesture recognition is widely applied in augmented reality, virtual reality, motion-sensing games, and similar scenarios. In these applications, operating gestures are embedded at random positions in a continuous action stream, yet many current vision-based gesture recognition systems assume that the input actions are independent gestures separated by pauses or explicit segmentation, and research on application under real-time conditions is relatively scarce.
Chinese invention patent CN102789568B discloses a gesture recognition method based on depth information: hand contour information is detected on each independent human body region to obtain the motion trajectory of the tracked hand, and a hidden Markov model is then used to model the trajectory and recognize the gesture, analyzing a fixed trajectory length per time window to filter gesture trajectories in real time. The method also allows the user to mark the start and end points of a gesture by clicking a button while performing it.
The hand is a non-rigid, flexible object, and a moving hand changes shape constantly: the same action performed by the same person can differ considerably in form and duration under different viewing angles or at different moments of observation, and it is difficult for different people to reproduce the same action exactly. The choice of gesture feature description therefore has a large influence on the effectiveness and generality of vision-based gesture recognition. Current gesture models fall into appearance-based models and 3D models. Chinese invention patent CN102880865B provides a dynamic gesture recognition method based on skin color and morphological features: the image is thresholded according to the distribution of human skin color in the YCrCb color space, the hand skin-color blob is obtained by ellipse fitting, the centroid of the blob is marked, and the hand movement is judged from the centroid motion to achieve recognition.
In conclusion, existing gesture interaction methods have the following shortcomings: (1) under practical conditions it is difficult to locate, within a complex gesture stream, the start and end key points that define a functional gesture; (2) the same gesture exhibits unavoidable spatio-temporal variation due to differences in execution speed and range of movement, which severely affects recognition accuracy and robustness.
Summary of the invention
To address the deficiencies of the above techniques, a gesture recognition system and method using motion segmentation is proposed that can improve the accuracy and efficiency of gesture recognition. The technical solution of the invention is as follows. A gesture recognition system using motion segmentation comprises: a data interface module, which includes a gesture data stream interface component and a head pose data stream interface, and is used to obtain and manage the raw head pose and hand pose data;
a head pose estimation module, including a feature calculation module and a pose predefinition module, which processes the raw head pose data sent by the head pose data stream interface with an estimation algorithm and outputs the estimation result to the motion segmentation module;
a motion segmentation module, comprising an automatic segmentation module, an active segmentation module, and a buffer, which segments the gesture data with a segmentation algorithm according to the estimation result output by the head pose estimation module;
a gesture recognition module, which performs gesture recognition on the gesture sequences obtained by the motion segmentation module.
Further, the data interface module is responsible for managing the head pose data stream and the gesture video frame data stream. The gesture acquisition device comprises a depth camera or a data glove; the head pose acquisition device comprises a depth camera or a wearable sensor, has a programmable application programming interface, and can carry out acquisition work accordingly.
Further, the feature used by the head pose estimation module is described by the head pose angle. The data acquired by the head pose acquisition device, after computation of the head orientation vector, yield the Euler rotation angles or face orientation vector that express the head pose. The head or face orientation o(x, y, z) is found by the following formula: o = z(α) x(β) z(γ) P, where z(α) denotes rotation of the head about the reference-frame z-axis by angle α, z(γ) denotes rotation of the head about the z-axis by angle γ, x(β) denotes rotation of the head about the reference-frame x-axis by angle β, and P(1, 0, 0) is a point in the reference frame.
Further, the active segmentation module uses head-pose-based determination, where head pose refers to a predefined face orientation or head action: the head pose that represents the start of a gesture serves as the segmentation start point, and the head pose that represents the end of a gesture serves as the segmentation end point. The automatic segmentation module uses a template matching method based on a non-gesture (empty-hand) model, and judges the inflection points between different gestures from changes in the rate of gesture motion. The relevant parameters of automatic segmentation are the segmentation start threshold SPOTTING_START, the segmentation end threshold SPOTTING_END, and the gesture length threshold DURATION: when the rate α(t) characterizing hand movement at some moment t satisfies α(t) > SPOTTING_START, a segmentation start point is marked; when α(t) < SPOTTING_END, a segmentation end point is marked. The length L(g) of the segmented gesture sequence is compared with DURATION, and if the duration of the segmented gesture sequence is too short, the sequence is discarded.
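A minimal Python sketch of this spotting rule follows; the concrete threshold values and the frame rate are illustrative assumptions, since the text only names the parameters.

    SPOTTING_START = 0.8  # rate threshold that marks a segmentation start point
    SPOTTING_END = 0.2    # rate threshold that marks a segmentation end point
    DURATION = 10         # minimum gesture length L(g), in frames

    def spot_gestures(rates):
        """Scan the hand-motion rates alpha(t), mark start/end points, and
        keep only segments at least DURATION frames long."""
        segments, start = [], None
        for t, alpha in enumerate(rates):
            if start is None and alpha > SPOTTING_START:
                start = t                      # alpha(t) > SPOTTING_START
            elif start is not None and alpha < SPOTTING_END:
                if t - start >= DURATION:      # compare L(g) with DURATION
                    segments.append((start, t))
                start = None                   # too-short sequences are discarded
        return segments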
Further, the gesture recognition module includes a feature extraction module, a classification model module, and a matching recognition module. The feature extraction module performs feature extraction and computation on the data obtained by motion segmentation; the classification model module establishes the gesture classification model, which is trained on the computed features; the matching recognition module performs matching recognition between the gesture input features and the features learned by the classification model, and the quantified recognition result is mapped to an operation instruction.
Further, the matching recognition module of the gesture recognition module processes the data and outputs the recognition result using a discrete hidden Markov model (HMM) or a dynamic time warping (DTW) classification algorithm.
Further, the feature extraction module uses the local quaternions of the hand joints as the feature vector, calculated as follows: first find the bone direction vector in the local coordinate system and normalize it; then find the angles between the direction vector and the three axis unit vectors; after obtaining the three axis angles, compute the local quaternion expression of the gesture joint point feature.
A gesture recognition method using motion segmentation based on the above system comprises the following steps:
1) first select a data acquisition device and establish a data interface to provide a continuous gesture data source in real time;
2) detect whether the head motion pose is a head gesture and match it against the predefined head gesture types;
3) if an active motion segmentation head gesture is detected, call the data interface to acquire gesture data upon the start-segmentation head gesture signal, and close the data interface to stop data acquisition upon the end-segmentation head gesture signal;
4) if an automatic motion segmentation head gesture is detected, call the data interface to start acquiring data, and at the same time perform gesture motion segmentation according to the hand motion state or the non-gesture model;
5) finally, perform gesture recognition on the segmented gesture sequences with a recognition algorithm.
Further, the automatic segmentation of step 4) comprises:
Perform preliminary matching between the continuous data and the abstract templates. Let G_t denote the input gesture of duration t; it is essentially a feature matrix whose number of rows is the number of samples within time t and whose rows are gesture feature vectors computed from the quaternion features described above. The template library (g_1, g_2, …, g_n) denotes the n known abstract gesture templates, and S(G_t, g_i) measures the similarity between the current data stream and a gesture template. When the hand motion stops, the features of the gesture sequence G_t are compared with the templates (g_1, g_2, …, g_n), and a local optimum is sought with a hill-climbing algorithm: if a fragment of a known gesture template is detected in G_t, the fragment is compared with that template gesture, and as long as similarity keeps being detected the similarity of the pair increases, until a similarity drop point, i.e. an inflection point, appears and is detected as the end point of the motion fragment, which is placed into the candidate set. The same detection step is applied to the other templates with nonzero similarity, and the fragment with the largest similarity measure in the candidate set is taken as the start and end points of the segmentation, so that the input gesture is converted into a sequence of gesture primitives.
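One plausible reading of this matching procedure is sketched below in Python; the cosine similarity over mean feature rows and the greedy growth loop are assumptions made for illustration, since neither the similarity measure S(G_t, g_i) nor the exact hill-climbing variant is fixed by the text.

    import numpy as np

    def similarity(fragment, template):
        # illustrative stand-in for S(G_t, g_i): cosine similarity of the
        # mean feature vectors of the two sequences
        a, b = fragment.mean(axis=0), template.mean(axis=0)
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-9))

    def segment_by_templates(G, templates, min_len=5):
        """Grow a candidate fragment of the feature matrix G while its
        similarity to a template keeps rising (hill climbing), and record the
        inflection point where similarity first drops as the end point."""
        candidates = []
        for tpl in templates:
            best = None
            for end in range(min_len, len(G)):
                s = similarity(G[:end], tpl)
                if best is None or s > best:
                    best = s                               # still climbing
                else:
                    candidates.append((best, 0, end - 1))  # inflection point
                    break
        # the fragment with the largest similarity measure marks the cut
        return max(candidates) if candidates else None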
Advantages and beneficial effects of the invention:
The invention uses a user-independent, view-invariant feature description of gestures, so that different users can use the same gesture recognition model without additionally training a new gesture template library. By effectively segmenting complex gesture sequences with the motion segmentation method based on head pose estimation, redundant data are reduced and the accuracy and efficiency of gesture recognition are improved. The proposed method and system can effectively solve, under real-time conditions, both the spatio-temporal variation problem of continuous dynamic gestures and the problem of segmenting gestures at their start and end points.
Brief description of the drawings
Fig. 1 is a structural block diagram of the gesture recognition system of the preferred embodiment provided by the present invention;
Fig. 2 is a schematic diagram of the head pose angle model and embodiment referenced by the present invention;
Fig. 3 is a flowchart of the corresponding head pose estimation method of the present invention;
Fig. 4 is a flowchart of the active segmentation method within the motion segmentation method of the present invention;
Fig. 5 is a flowchart of the automatic segmentation method within the motion segmentation method of the present invention;
Fig. 6 is a flowchart of the gesture recognition method using motion segmentation of the present invention.
Specific embodiment
The invention is further described below in conjunction with the accompanying drawings:
As shown in Fig. 1, the gesture recognition method and system using motion segmentation mainly comprise modules A1~A14: A1 is the data interface module adapted to the data acquisition devices, A2 is the head pose estimation module, A3 is the motion segmentation module, and A4 is the gesture recognition module. The data interface module contains the head pose data stream interface component A5 and the gesture data stream interface component A6; the head pose estimation module contains the feature calculation component A7 and the pose predefinition template component A8; the motion segmentation module contains the automatic segmentation component A9 and the active segmentation component A10, with the buffer A11 used to store gesture primitive fragments; the gesture recognition module contains the gesture feature extraction component A12, the gesture-type to operation-instruction mapping component A13, and the classification model component A14 obtained by training.
Taking an immersive virtual reality scenario as an example, the head pose angle embodiment of the invention is shown schematically in Fig. 2b. The device corresponding to the head pose data stream interface is a head-mounted display containing an inertial sensor; such products are already on the market and essentially all include an attitude-angle sensing function. The head pose angle model is shown in Fig. 2a: three Euler angles of the head coordinate system relative to the reference coordinate system reflect the pose of the head. The angle between the head coordinate system and the horizontal plane of the reference frame is the pitch angle θ (pitch); when the positive half-axis of the head coordinate system lies above the horizontal plane through the origin of the reference frame (head raised), the pitch angle is positive, otherwise negative. The angle between the projection of the head x_b axis onto the horizontal plane and the reference x_g axis is the yaw angle ψ (yaw); when the x_g axis rotates counterclockwise to the projection of the head x_b axis, the yaw angle is positive, otherwise negative. The angle between the head z_b axis and the vertical plane through the head x_b axis is the roll angle φ (roll); tilting the head to the right is positive, otherwise negative.
The head pose estimation algorithm flow is shown in Fig. 3.
B1~B3: for the wearable sensor described in [0023], B1 detects and captures head pose changes through the inertial sensor and manages the attitude angle data with the constructed data interface; B2 performs feature extraction on the acquired attitude angle data, calculated as follows:
o = z(α) x(β) z(γ) P    (1)

where o(x, y, z) denotes the head or face orientation vector, z(α) denotes rotation of the head about the reference-frame z-axis by angle α, z(γ) denotes a further rotation about the z-axis by angle γ, and x(β) denotes rotation about the x-axis by angle β, expressed with the standard rotation matrices:

z(α) = [[cos α, -sin α, 0], [sin α, cos α, 0], [0, 0, 1]],  x(β) = [[1, 0, 0], [0, cos β, -sin β], [0, sin β, cos β]]

The rotation angles α, β, γ of the head about each axis can be obtained through the data interface of the wearable inertial sensor; substituting these rotation matrices into formula (1) yields the head orientation vector o, which is used as the head pose feature.
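A minimal numeric sketch of formula (1) in Python follows, assuming the standard right-handed rotation matrices written above; the sample angle values are illustrative, whereas in the described system α, β, γ would be read from the inertial sensor's data interface.

    import numpy as np

    def rot_z(a):
        c, s = np.cos(a), np.sin(a)
        return np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]])

    def rot_x(b):
        c, s = np.cos(b), np.sin(b)
        return np.array([[1, 0, 0], [0, c, -s], [0, s, c]])

    def head_orientation(alpha, beta, gamma):
        P = np.array([1.0, 0.0, 0.0])            # reference point P(1, 0, 0)
        return rot_z(alpha) @ rot_x(beta) @ rot_z(gamma) @ P  # formula (1)

    # illustrative angles: a slight head turn and nod
    print(head_orientation(np.radians(30), np.radians(10), np.radians(0)))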
B4~B6: the template ν_i in the predefined head pose vector set O(ν_i) is matched against the feature vector o extracted in real time, the pose type of o is judged according to the distance similarity principle and the quantized result is output, and finally it is decided, according to the current pose type, whether to enter segmentation or to end segmentation.
B7: a timer records the moments at which a segmentation pose occurs, and a counter counts the number of segmentations within a threshold time period; these two processing blocks allow the segmentation signal to be extended, i.e. the segmentation basis is generated from head pose changes or head actions.
Motion segmentation is a necessary means for applying gesture recognition effectively in real-time scenarios such as immersive virtual reality. Unlike isolated gesture recognition under laboratory conditions, real-time conditions are far more complex: the continuous data stream contains transition gestures and non-operating gestures. The invention therefore designs two segmentation methods with regard to validity and real-time performance. The active segmentation method is based on head pose estimation; it lets the user's subjective initiative act as the gesture delimiter without interfering with the normal operation of the hands, and is characterized by being user-centered and highly accurate. Automatic segmentation supplements active segmentation and handles the simple case in which the neighborhood of an effective gesture, i.e. a gesture primitive, is a non-gesture (empty hand) or an obvious pause.
Fig. 4 shows the implementation flowchart of the active segmentation method:
C1~C5: when the raw continuous gesture stream is introduced into the system through the data interface, the head pose is determined according to the method above so as to determine the segmentation points; the data after segmentation are exactly the gesture primitives, i.e. the sequence fragments of effective gestures. Specifically, when the segmentation start signal (the corresponding head pose) is detected, the data interface is called and the currently arriving data are introduced into the pending-data buffer; when the segmentation end signal is detected, the relevant interface is closed and data stop being introduced into the buffer. At the end of segmentation the pending gesture primitive data are obtained and delivered to the feature extraction module for the next processing step, while the active segmentation module continues to monitor segmentation signals from the head pose estimation module.
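The buffering flow of C1~C5 can be pictured as a small state machine, sketched below; the signal names, the frame source, and the emit_primitive callback are invented for the example and are not the patent's concrete interface.

    from enum import Enum

    class Signal(Enum):
        START = 1   # head pose representing the start of a gesture
        END = 2     # head pose representing the end of a gesture
        NONE = 3

    def active_segmentation(frames_with_signals, emit_primitive):
        """Open the pending-data buffer on a start signal, close it on an end
        signal, and hand the buffered gesture primitive to feature extraction."""
        buffer, recording = [], False
        for frame, signal in frames_with_signals:
            if signal is Signal.START:
                recording, buffer = True, []       # open the buffer
            elif signal is Signal.END and recording:
                recording = False
                emit_primitive(buffer)             # primitive ready for features
            if recording:
                buffer.append(frame)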
Fig. 5 shows the flowchart of the automatic segmentation method:
D1~D7: the continuous data are preliminarily matched against the abstract templates. G_t denotes the dynamic gesture of duration t, the template library (g_1, g_2, …, g_n) denotes the n known abstract gesture templates, and S(G_t, g_i) measures the similarity between the current data stream and a gesture template. When the hand motion stops, the whole data string is compared with the templates; once similarity is detected it increases step by step, and when the similarity declines an inflection point appears and is detected as the end point of the motion fragment. The string with the largest global similarity measure is then taken as the start and end points of the segmentation, converting the gesture into a sequence of gesture primitives.
Fig. 6 shows the flowchart of the gesture recognition method using motion segmentation:
E1~E3: taking an immersive virtual reality scenario as an example, the head pose input device used is a head-mounted display with an inertial sensor that can detect head pose angles, and the gesture input device is a depth camera capable of skeleton tracking. The head pose is detected in real time through the head pose data interface and the estimation result is converted into a segmentation signal, with reference to which the gesture data interface performs active segmentation of the gestures. For example, when performing a gesture the user looks toward the arm, and the resulting head rotation change generates the start segmentation signal; when the user ends the gesture and at the same time turns the head back, the end segmentation signal is generated. This process corresponds to E9~E12. If the user does not want to use the active mode, the automatic gesture segmentation mode E8 can be chosen, which judges the segmentation positions by motion inflection points and, with reference to the segmentation points, calls the data interface to introduce the gesture primitive data stream into the data buffer.
E4~E5: the gesture primitive data obtained in [0031] are first preprocessed, and feature extraction and calculation are then performed on the gesture-related joint point data obtained by skeleton tracking with a depth camera such as the Kinect. This embodiment uses the local quaternions of the hand joints as the feature vector, calculated in the following three steps, sketched in code after the list:
(1) First find the bone direction vector in the local coordinate system, then normalize it.
(2) Then find the angles between the direction vector and the three axis unit vectors.
(3) After obtaining the three axis angles, compute the local quaternion q.
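Because the text does not spell out how the three axis angles are combined into the quaternion, the Python sketch below makes one plausible choice: it computes the direction cosines of step (2) and, for step (3), builds the quaternion that rotates the local x-axis onto the normalized bone direction.

    import numpy as np

    def bone_quaternion(bone_vec):
        d = np.asarray(bone_vec, dtype=float)
        d /= np.linalg.norm(d)                    # step (1): normalize
        angles = np.arccos(np.clip(d, -1, 1))     # step (2): angles to x, y, z
        ref = np.array([1.0, 0.0, 0.0])           # local reference axis
        axis = np.cross(ref, d)
        if np.linalg.norm(axis) < 1e-9:           # bone parallel to reference
            return np.array([1.0, 0.0, 0.0, 0.0]), angles
        axis /= np.linalg.norm(axis)
        theta = np.arccos(np.clip(ref @ d, -1.0, 1.0))
        q = np.concatenate([[np.cos(theta / 2)], np.sin(theta / 2) * axis])
        return q, angles                          # step (3): local quaternion q

    q, angles = bone_quaternion([0.5, 0.5, 0.7])
    print(q, np.degrees(angles))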
E6~E7: to counter the loss of recognition accuracy caused by spatio-temporal variation, this embodiment uses the dynamic time warping (DTW) algorithm for model training and matching recognition, which handles time series of unequal length very well. Specifically, given two time series R = <r_1, r_2, …, r_m> and T = <t_1, t_2, …, t_n>, DTW computes the optimal matching φ = (φ^R, φ^T) between R and T such that the total distance between matched elements is minimized. For the optimal matching φ, φ^R = (φ_1^R, φ_2^R, …, φ_K^R) with 1 ≤ φ_i^R ≤ m for 1 ≤ i ≤ K, and φ^T = (φ_1^T, φ_2^T, …, φ_K^T) with 1 ≤ φ_i^T ≤ n for 1 ≤ i ≤ K. The DTW distance between the time series R and T is defined as DTW(R, T) = min_φ Σ_{k=1}^{K} d(φ_k^R, φ_k^T), where d(i, j) denotes the distance between the i-th element of R and the j-th element of T; this embodiment takes d(i, j) = (L_i + L_j)/2, where L_i and L_j denote the gesture primitive lengths. Finally the gestures are classified by a similarity degree based on this distance measure; this embodiment classifies with a nearest-neighbor (NN) classifier.
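A compact sketch of DTW matching with nearest-neighbor classification follows; the Euclidean element distance and the toy templates are illustrative assumptions standing in for the embodiment's d(i, j) and its trained template library.

    import numpy as np

    def dtw_distance(R, T):
        """Classic dynamic-programming DTW: D[i, j] holds the cost of the best
        warping path matching the first i elements of R with the first j of T."""
        m, n = len(R), len(T)
        D = np.full((m + 1, n + 1), np.inf)
        D[0, 0] = 0.0
        for i in range(1, m + 1):
            for j in range(1, n + 1):
                cost = np.linalg.norm(np.asarray(R[i - 1]) - np.asarray(T[j - 1]))
                D[i, j] = cost + min(D[i - 1, j - 1], D[i - 1, j], D[i, j - 1])
        return D[m, n]

    def nn_classify(sequence, labeled_templates):
        # nearest neighbor: the label of the template with the smallest DTW distance
        return min(labeled_templates,
                   key=lambda lt: dtw_distance(sequence, lt[0]))[1]

    templates = [([[0], [1], [2]], "swipe"), ([[2], [1], [0]], "wave")]
    print(nn_classify([[0], [0.9], [2.1]], templates))  # -> swipe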
The above embodiments should be understood as merely illustrating the present invention, not as limiting its scope. After reading the content recorded in the present invention, those skilled in the art can make various changes or modifications to it, and such equivalent changes and modifications likewise fall within the scope of the claims of the present invention.

Claims (7)

1. A gesture recognition system using motion segmentation, characterized by comprising: a data interface module, which includes a gesture data stream interface component and a head pose data stream interface, and is used to obtain and manage the raw head pose and hand pose data;
a head pose estimation module, including a feature calculation module and a pose predefinition module, which processes the raw head pose data sent by the head pose data stream interface with an estimation algorithm and outputs the estimation result to the motion segmentation module;
a motion segmentation module, comprising an automatic segmentation module, an active segmentation module, and a buffer, which segments the gesture data with a segmentation algorithm according to the estimation result output by the head pose estimation module; the active segmentation module uses head-pose-based determination, where head pose refers to a predefined face orientation or head action: the head pose representing the start of a gesture serves as the segmentation start point, and the head pose representing the end of a gesture serves as the segmentation end point; the automatic segmentation module uses a template matching method based on a non-gesture model and judges the inflection points between different gestures from changes in the rate of gesture motion; the relevant parameters of automatic segmentation are the segmentation start threshold SPOTTING_START, the segmentation end threshold SPOTTING_END, and the gesture length threshold DURATION: when the rate α(t) characterizing hand movement at some moment t satisfies α(t) > SPOTTING_START, a segmentation start point is marked, and when α(t) < SPOTTING_END, a segmentation end point is marked; the length L(g) of the segmented gesture sequence is compared with DURATION, and if the duration of the segmented gesture sequence is too short, the sequence is discarded;
a gesture recognition module, which performs gesture recognition on the gesture sequences obtained by the motion segmentation module.
2. The gesture recognition system using motion segmentation according to claim 1, characterized in that the data interface module is responsible for managing the head pose data stream and the gesture video frame data stream; the gesture acquisition device comprises a depth camera or a data glove, and the head pose acquisition device comprises a depth camera or a wearable sensor, has a programmable application programming interface, and can carry out acquisition work accordingly.
3. The gesture recognition system using motion segmentation according to claim 1, characterized in that the gesture recognition module includes a feature extraction module, a classification model module, and a matching recognition module; the feature extraction module performs feature extraction and computation on the data obtained by motion segmentation; the classification model module establishes the gesture classification model, which is trained on the computed features; the matching recognition module performs matching recognition between the gesture input features and the features learned by the classification model, and the quantified recognition result is mapped to an operation instruction.
4. The gesture recognition system using motion segmentation according to claim 3, characterized in that the matching recognition module of the gesture recognition module processes the data and outputs the recognition result using a discrete hidden Markov model (HMM) or a dynamic time warping (DTW) classification algorithm.
5. The gesture recognition system using motion segmentation according to claim 3, characterized in that the feature extraction module uses the local quaternions of the hand joints as the feature vector, calculated as follows: first find the bone direction vector in the local coordinate system and normalize it; then find the angles between the direction vector and the three axis unit vectors; after obtaining the three axis angles, compute the local quaternion expression of the gesture joint point feature.
6. A gesture recognition method using motion segmentation based on the system according to claim 1, characterized by comprising the following steps:
1) first select a data acquisition device and establish a data interface to provide a continuous gesture data source in real time;
2) detect whether the head motion pose is a head gesture and match it against the predefined head gesture types;
3) if an active motion segmentation head gesture is detected, call the data interface to acquire gesture data upon the start-segmentation head gesture signal, and close the data interface to stop data acquisition upon the end-segmentation head gesture signal;
4) if an automatic motion segmentation head gesture is detected, call the data interface to start acquiring data, and at the same time perform gesture motion segmentation according to the hand motion state or the non-gesture model;
5) finally, perform gesture recognition on the segmented gesture sequences with a recognition algorithm.
7. The gesture recognition method using motion segmentation according to claim 6, characterized in that the automatic segmentation of step 4) comprises:
performing preliminary matching between the continuous data and the abstract templates, where G_t denotes the input gesture data sequence of duration t and is essentially a feature matrix whose number of rows is the number of samples within time t and whose rows are gesture feature vectors computed from the quaternion features, the template library (g_1, g_2, …, g_n) denotes the n known abstract gesture templates, and S(G_t, g_i) measures the similarity between the current data stream and a gesture template; when the hand motion stops, the whole data string G_t of the gesture sequence is compared with the templates (g_1, g_2, …, g_n), and a local optimum is sought with a hill-climbing algorithm: if a fragment of a known gesture template is detected in G_t, the fragment is compared with that template gesture, and as long as similarity keeps being detected the similarity of the pair increases, until a similarity drop point appears and is detected as the end point of the motion fragment, which is placed into the candidate set; the same detection step is applied to the other templates with nonzero similarity, and the fragment with the largest similarity measure in the candidate set is taken as the start and end points of the segmentation, so that the input gesture can be converted into a sequence of gesture primitives.
CN201610177623.5A 2016-03-24 2016-03-24 Gesture recognition system and method using motion segmentation Active CN105809144B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201610177623.5A CN105809144B (en) 2016-03-24 2016-03-24 Gesture recognition system and method using motion segmentation

Publications (2)

Publication Number Publication Date
CN105809144A CN105809144A (en) 2016-07-27
CN105809144B true CN105809144B (en) 2019-03-08

Family

ID=56453890

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201610177623.5A Active CN105809144B (en) 2016-03-24 2016-03-24 Gesture recognition system and method using motion segmentation

Country Status (1)

Country Link
CN (1) CN105809144B (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107909042B (en) * 2017-11-21 2019-12-10 华南理工大学 continuous gesture segmentation recognition method

Families Citing this family (25)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108427194A * 2017-02-14 2018-08-21 深圳梦境视觉智能科技有限公司 Display method and device based on augmented reality
CN107272878B (en) * 2017-02-24 2020-06-16 广州幻境科技有限公司 Identification method and device suitable for complex gesture
CN107092349A * 2017-03-20 2017-08-25 重庆邮电大学 Sign language recognition system and method based on RealSense
CN106998409B (en) * 2017-03-21 2020-11-27 华为技术有限公司 Image processing method, head-mounted display and rendering equipment
CN106991398B (en) * 2017-04-01 2020-03-27 北京工业大学 Gesture recognition method based on image recognition and matched with graphical gloves
CN107092882B (en) * 2017-04-19 2020-04-28 南京大学 Behavior recognition system based on sub-action perception and working method thereof
CN108510525B (en) 2018-03-30 2019-03-12 百度在线网络技术(北京)有限公司 Template method for tracing, device, augmented reality system and storage medium
CN108596150A * 2018-05-10 2018-09-28 南京大学 Activity recognition system excluding abnormal actions and its working method
CN109101879B (en) * 2018-06-29 2022-07-01 温州大学 Posture interaction system for VR virtual classroom teaching and implementation method
CN110472396B (en) * 2018-08-17 2022-12-30 中山叶浪智能科技有限责任公司 Somatosensory gesture touch method, system, platform and storage medium
CN110866864B (en) * 2018-08-27 2024-10-22 阿里巴巴华东有限公司 Face pose estimation/three-dimensional face reconstruction method and device and electronic equipment
CN110913279B (en) * 2018-09-18 2022-11-01 中科海微(北京)科技有限公司 Processing method for augmented reality and augmented reality terminal
CN109598229B (en) * 2018-11-30 2024-06-21 李刚毅 Monitoring system and method based on action recognition
CN109751998A * 2019-01-14 2019-05-14 重庆邮电大学 Action recognition model method based on dynamic time warping
CN109966739B (en) * 2019-01-17 2022-11-11 珠海金山数字网络科技有限公司 Method and system for optimizing game operation
CN109977890B (en) * 2019-03-30 2021-08-17 绵阳硅基智能科技有限公司 Action recognition method and recognition system thereof
CN110163086B (en) * 2019-04-09 2021-07-09 有品国际科技(深圳)有限责任公司 Body-building action recognition method, device, equipment and medium based on neural network
CN110163113B (en) * 2019-04-25 2023-04-07 上海师范大学 Human behavior similarity calculation method and device
CN110059661B (en) * 2019-04-26 2022-11-22 腾讯科技(深圳)有限公司 Action recognition method, man-machine interaction method, device and storage medium
CN110263743B (en) * 2019-06-26 2023-10-13 北京字节跳动网络技术有限公司 Method and device for recognizing images
CN110263742A * 2019-06-26 2019-09-20 北京字节跳动网络技术有限公司 Method and apparatus for recognizing images
CN112181148A (en) * 2020-09-29 2021-01-05 中国人民解放军军事科学院国防科技创新研究院 Multimodal man-machine interaction method based on reinforcement learning
CN112464847B (en) * 2020-12-07 2021-08-31 北京邮电大学 Human body action segmentation method and device in video
CN113840177B (en) * 2021-09-22 2024-04-30 广州博冠信息科技有限公司 Live interaction method and device, storage medium and electronic equipment
CN114533039B (en) * 2021-12-27 2023-07-25 重庆邮电大学 Human joint position and angle resolving method based on redundant sensor

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102508547A (en) * 2011-11-04 2012-06-20 哈尔滨工业大学深圳研究生院 Computer-vision-based gesture input method construction method and system
CN102982315A (en) * 2012-11-05 2013-03-20 中国科学院计算技术研究所 Gesture segmentation recognition method capable of detecting non-gesture modes automatically and gesture segmentation recognition system
CN103649967A (en) * 2011-06-23 2014-03-19 阿尔卡特朗讯 Dynamic gesture recognition process and authoring system
CN104933408A (en) * 2015-06-09 2015-09-23 深圳先进技术研究院 Hand gesture recognition method and system
CN105320937A (en) * 2015-09-25 2016-02-10 北京理工大学 Kinect based traffic police gesture recognition method

Also Published As

Publication number Publication date
CN105809144A (en) 2016-07-27

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant