
CN103824051A - Local region matching-based face search method - Google Patents

Local region matching-based face search method

Info

Publication number
CN103824051A
CN103824051A
Authority
CN
China
Prior art keywords
image
face
vector
organ
middle level
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201410053334.5A
Other languages
Chinese (zh)
Other versions
CN103824051B (en)
Inventor
姜宇宁
印奇
曹志敏
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Megvii Technology Co Ltd
Original Assignee
Beijing Megvii Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Megvii Technology Co Ltd filed Critical Beijing Megvii Technology Co Ltd
Priority to CN201410053334.5A
Publication of CN103824051A
Application granted
Publication of CN103824051B
Legal status: Active
Anticipated expiration

Landscapes

  • Image Analysis (AREA)

Abstract

The invention discloses a face search method based on local region matching. The method includes the following steps: 1) the face in each image of a face image set A is aligned to a standard-format face, and the regions of the facial organs are delimited; 2) low-level feature vectors of each organ are extracted and clustered; 3) any two classes are selected from the clustering result of each organ and used as positive and negative samples to train a support vector machine classifier; training is performed for every pairwise combination of classes, so that a classifier set is obtained for each organ, and the results of applying every classifier in the set to a low-level feature vector are combined into a new feature vector, namely the mid-level feature vector of the organ; 4) the ratio of the distance from each key point on the facial contour to the left and right eyes to the inter-eye distance is computed and used as the mid-level feature vector of the facial contour; the above mid-level feature vectors are combined to obtain Vr; 5) a mid-level feature vector Vq is generated for the face image q to be searched, Vq is matched against the Vr of the images in A, and the query results are returned. With the local region matching-based face search method of the invention, the quality of similar-face search is improved.

Description

A face search method based on local region matching
Technical field
The present invention relates to a face search method, and in particular to a face search method based on local region matching, belonging to the field of image recognition technology.
Background art
Face recognition and detection techniques are now widely used in many fields and have become a current research hotspot; see, for example, the patent document of application No. 201210313721.9, entitled "Face recognition method", and the patent document of application No. 201210310643.7, entitled "A face recognition method and system".
Similar-face search (face search) means that, given a query face, the system finds the faces that look similar to it in an image database containing hundreds of thousands or even more faces, and returns the pictures ranked by their degree of similarity. With the explosive growth of Internet images and the increasing prevalence of security surveillance equipment, massive amounts of face image data are produced every day, and these data need to be effectively indexed, organized, searched and analyzed. Against this background, similar-face search technology, especially for large-scale databases, is in urgent demand.
In a traditional similar-face search system, when the similarity between any two faces is computed, each face is treated as a whole: each face is represented by a single feature vector, and the global similarity between two faces is obtained by computing the distance between the two feature vectors. However, because different users define "similar-looking" differently, results retrieved by whole-face similarity often fail to satisfy users (for example, some users prefer to search for faces with similar eyes, while others prefer faces with a similar facial contour). Moreover, whole-face similarity is easily disturbed by local variations and is especially sensitive to partial occlusion (such as wearing sunglasses). In such cases the overall similarity is strongly affected by these noisy regions, so faces that should be very similar have their overall similarity dragged down by these "bad" local regions and cannot be retrieved.
Summary of the invention
In view of the technical problems in the prior art, the object of the present invention is to provide a face search method based on local region matching.
The technical solution of the present invention is as follows:
A face search method based on local region matching, comprising the following steps:
1) performing face detection and key point detection on each image in a face image set A, and outputting the face rectangle and the key point positions of each facial organ and of the facial contour;
2) using the key point positions of each image, aligning the face in the image to a standard-format face, and delimiting the region of each facial organ;
3) for each organ region of each standard-format face, extracting the low-level feature vector of the organ;
4) clustering the low-level feature vectors of each organ, dividing them into several classes, and recording the center of each class;
5) for each organ whose clustering result contains k classes, selecting any two classes, taking the low-level feature vectors of one class as positive samples and those of the other class as negative samples, and training a support vector machine classifier; traversing all pairwise combinations of the k classes of the organ in this way to obtain a set of support vector machine classifiers for the organ; then using every classifier in the set to classify the low-level feature vectors described in step 4), and combining the classification results into a new feature vector, namely the mid-level feature vector of the organ (a sketch of this procedure is given after this list);
6) for each standard-format face obtained in step 2), computing the distance from each key point on the face contour to the left eye and to the right eye, and taking the ratio of these distances to the inter-eye distance as the mid-level feature vector of the facial contour of the corresponding image;
7) combining the mid-level feature vectors of the organs of each face image with the mid-level feature vector of its facial contour to obtain the mid-level feature vector Vr of the face image;
8) for any face image q to be searched, generating its mid-level feature vector Vq;
9) computing the similarity between Vq and the mid-level feature vectors of the face images in the image set A, and returning the matching query results.
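As a worked illustration of step 5), the following Python sketch builds the mid-level feature of one organ from its low-level feature matrix. It assumes scikit-learn and NumPy are available; the cluster count k, the choice of k-means as the clustering algorithm and of a linear SVM are illustrative assumptions not prescribed by the patent, and `features` is a hypothetical (n_samples, n_dims) array of the organ's low-level descriptors.

```python
import numpy as np
from itertools import combinations
from sklearn.cluster import KMeans
from sklearn.svm import LinearSVC

def midlevel_features(features, k=8):
    """Build mid-level feature vectors for one organ from its low-level features.

    features: (n_samples, n_dims) array of low-level descriptors (e.g. HOG+LBP).
    Returns an (n_samples, k*(k-1)//2) array of pairwise-classifier scores.
    """
    # 1) cluster the low-level vectors into k classes and keep the class labels
    labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(features)

    # 2) train one SVM per pair of classes (positive = class i, negative = class j)
    classifiers = []
    for i, j in combinations(range(k), 2):
        mask = np.isin(labels, [i, j])
        X, y = features[mask], (labels[mask] == i).astype(int)
        classifiers.append(LinearSVC(C=1.0).fit(X, y))

    # 3) the mid-level feature of a sample is the concatenation of the decision
    #    values of all k*(k-1)/2 pairwise classifiers
    return np.column_stack([clf.decision_function(features) for clf in classifiers])
```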
Further, the low-level feature vectors extracted in step 3) are each reduced in dimensionality by projection, and step 4) then clusters the dimension-reduced low-level feature vectors of each organ.
Further, the principal component analysis (PCA) algorithm and then the linear discriminant analysis (LDA) algorithm are applied in turn to project all low-level features extracted in step 3) to a lower dimension.
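A minimal sketch of this two-stage projection, assuming scikit-learn; the PCA output dimension (128) and the person-identity labels used to fit the supervised LDA step are illustrative assumptions not specified in the patent.

```python
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

def project_low_level(features, identities, pca_dim=128):
    """Reduce low-level descriptors with PCA followed by LDA.

    features:   (n_samples, n_dims) low-level vectors of one organ;
                pca_dim must not exceed min(n_samples, n_dims).
    identities: (n_samples,) person labels used to fit the supervised LDA step.
    """
    pca = PCA(n_components=pca_dim).fit(features)
    reduced = pca.transform(features)
    # LDA keeps at most (n_classes - 1) output dimensions
    lda = LinearDiscriminantAnalysis().fit(reduced, identities)
    return lda.transform(reduced)
```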
Further, the low-level feature vectors are histogram of oriented gradients (HOG) feature vectors and local binary pattern (LBP) feature vectors.
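A minimal sketch of extracting a concatenated HOG+LBP descriptor for one organ region, assuming scikit-image and NumPy; the HOG cell sizes, the LBP radius and neighbourhood, and the histogram binning are illustrative parameters that the patent does not fix.

```python
import numpy as np
from skimage.feature import hog, local_binary_pattern

def low_level_descriptor(organ_patch):
    """organ_patch: 2-D grayscale array cropped from the aligned (standard-format) face."""
    # HOG part of the descriptor
    hog_vec = hog(organ_patch, orientations=9,
                  pixels_per_cell=(8, 8), cells_per_block=(2, 2))
    # LBP part: uniform patterns, summarised as a normalised histogram
    lbp = local_binary_pattern(organ_patch, P=8, R=1, method="uniform")
    hist, _ = np.histogram(lbp, bins=np.arange(0, 11), density=True)
    return np.concatenate([hog_vec, hist])
```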
Further, the key point positions of each image are used to rotate and scale the face in the image so that it is aligned to the standard format.
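One common way to realize this rotation-and-scale alignment is a similarity transform estimated from the two eye centers; the OpenCV sketch below is an assumption-laden illustration (the target eye coordinates and the output size are hypothetical, and the patent does not mandate this particular construction).

```python
import cv2
import numpy as np

def _third_point(a, b):
    # a point perpendicular to the eye axis, so the 3-point affine transform
    # encodes only rotation, scale and translation (no shear)
    ax, ay = a
    bx, by = b
    return (ax + (by - ay), ay - (bx - ax))

def align_face(image, left_eye, right_eye,
               out_size=(128, 128), target_left=(38, 48), target_right=(90, 48)):
    """Rotate and scale `image` so that its eye centers land on fixed target positions."""
    src = np.float32([left_eye, right_eye, _third_point(left_eye, right_eye)])
    dst = np.float32([target_left, target_right, _third_point(target_left, target_right)])
    M = cv2.getAffineTransform(src, dst)   # similarity transform from the 3 point pairs
    return cv2.warpAffine(image, M, out_size)
```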
Further, the organs include the eyebrows, eyes, nose and mouth, and the mid-level feature of each face image is Vr = (Vr_contour, Vr_eyebrow, Vr_eye, Vr_nose, Vr_mouth).
Further, the similarity to Vq is determined from the distance dist computed between Vq and the mid-level feature vector of each face image in the image set A.
Further, the similarity is computed as dist = w_contour * d_contour + w_eyebrow * d_eyebrow + w_eye * d_eye + w_nose * d_nose + w_mouth * d_mouth, where d_contour is the distance between the contour mid-level feature vector in Vq and that of a face image in the image set A; d_eyebrow, d_eye, d_nose and d_mouth are the corresponding distances for the eyebrow, eye, nose and mouth mid-level feature vectors; and w_contour, w_eyebrow, w_eye, w_nose and w_mouth are the weights of the respective regions.
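A minimal sketch of this weighted per-region matching, assuming each face's mid-level feature is stored as a dictionary of per-region NumPy vectors; the use of Euclidean distance as the per-region distance d_region and the top-n cutoff are illustrative assumptions.

```python
import numpy as np

REGIONS = ("contour", "eyebrow", "eye", "nose", "mouth")

def face_distance(Vq, Vr, weights):
    """Weighted sum of per-region distances between two mid-level feature sets.

    Vq, Vr:  dicts mapping region name -> 1-D feature vector.
    weights: dict mapping region name -> user-chosen weight w_region.
    """
    return sum(weights[r] * np.linalg.norm(Vq[r] - Vr[r]) for r in REGIONS)

def search(Vq, database, weights, top_n=10):
    """Return the top_n most similar faces (smallest distance first)."""
    ranked = sorted(database.items(),
                    key=lambda kv: face_distance(Vq, kv[1], weights))
    return ranked[:top_n]
```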
Compared with the prior art, the beneficial effects of the present invention are:
1) A similar-face search system is built around the different face regions (eyebrows, eyes, nose, mouth, facial contour). Face similarity is no longer computed from a whole-face feature, but from a combination of the similarities of the different local regions. Users can define a personalized notion of a "similar face" by adjusting the weight of each region, so the distance computation is more flexible and the whole system offers a better user experience.
2) Mid-level semantic features replace low-level texture features for describing the local regions. Such features are more descriptive and generalize better, so face matching is no longer sensitive to changes in illumination, expression or pose, making the whole system more robust and stable. On a search test set of 30,000 pictures of 5,000 celebrities, the average top-10 accuracy is 42% when only low-level features are used, and 51% when mid-level features are used.
3) The mid-level semantic layer is obtained by automatic clustering with machine learning on top of the low-level image texture features, giving the features stronger descriptive and generalization power; the contour feature is invariant to in-plane rotation and scaling and describes the facial contour well.
For the above reasons, the present invention significantly improves the effectiveness and user experience of similar-face search.
Brief description of the drawings
Fig. 1. Similarity computation method based on local region matching;
Fig. 2. Computation of the mid-level features of a facial organ;
Fig. 3. Computation of the facial contour feature.
Embodiment
The invention discloses a similar-face search system based on local region matching. The system flow is as follows:
a) build a face image set A as the search/training database;
b) apply face detection and facial key point detection algorithms to each image in A, and output the face rectangle and the key point positions of each facial organ (eyebrows, eyes, nose, mouth) and of the facial contour;
c) use the key point positions to rotate and scale all faces detected in A so that they are aligned to a standard format, and at the same time delimit the region of each organ of every face;
d) for the eyebrow, eye, nose and mouth regions of each standard-format face generated in c), extract Histogram of Oriented Gradients (HOG) feature vectors and Local Binary Patterns (LBP) feature vectors as low-level features;
e) on the low-level features of all the pictures obtained in step d), apply the classical Principal Component Analysis (PCA) algorithm and then the Linear Discriminant Analysis (LDA) algorithm to reduce the dimensionality of the low-level feature vectors of each organ by projection;
f) use a machine-learning clustering algorithm to divide the dimension-reduced feature vectors of each organ obtained in e) into k classes (different organs may use different values of k; for convenience of description the same k is assumed for all organs below), and record the center of each class;
g) for any two of the k classes of an organ, take the feature vectors of one class as positive samples and those of the other class as negative samples, and train a Support Vector Machine (SVM) classifier; traversing all pairwise combinations in this way finally yields, for each organ, a set of k*(k-1)/2 support vector machine classifiers;
h) for a given organ (taking the eyes as an example), classify every feature vector of that organ produced for the set A in step e) with the organ's classifier set, obtaining k*(k-1)/2 classification results; combine these k*(k-1)/2 results into a new feature vector, namely the mid-level feature of the organ, denoted Vr_eye (see Fig. 2);
i) for the facial contour of each standard-format face generated in b), compute the distance from each contour key point to the left eye and to the right eye, and take the ratio of these distances to the inter-eye distance as the facial contour feature Vr_contour (see Fig. 3; a sketch of this computation is given after this list);
j) combining the outputs of steps h) and i), represent any face image r in the database as the combination of 5 feature vectors for the different regions, i.e. Vr = (Vr_contour, Vr_eyebrow, Vr_eye, Vr_nose, Vr_mouth);
k) for any face image q to be searched, repeat steps a) b) c) d) e) h) i) j) to obtain its representation Vq = (Vq_contour, Vq_eyebrow, Vq_eye, Vq_nose, Vq_mouth);
l) compute the distance between Vq and the feature vector Vr of every face in the image library with the formula dist = w_contour * d_contour + w_eyebrow * d_eyebrow + w_eye * d_eye + w_nose * d_nose + w_mouth * d_mouth, where (w_contour, w_eyebrow, w_eye, w_nose, w_mouth) are the region weights entered by the user according to his or her own definition of similarity, as shown in Fig. 1;
m) sort the face pictures in the database in ascending order of their distance to the query face q, and return the first few pictures of this ranking as the search result.
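A minimal sketch of the contour feature of step i), assuming the aligned face provides the two eye centers and an ordered array of contour key points; the use of Euclidean distance and NumPy is an implementation assumption.

```python
import numpy as np

def contour_feature(contour_points, left_eye, right_eye):
    """Rotation- and scale-invariant contour descriptor (Vr_contour).

    contour_points: (n, 2) array of facial-contour key points.
    left_eye, right_eye: (2,) eye-center coordinates.
    Returns a 2n-vector of [distance-to-left-eye, distance-to-right-eye] values,
    each normalised by the inter-eye distance.
    """
    pts = np.asarray(contour_points, dtype=float)
    le, re = np.asarray(left_eye, float), np.asarray(right_eye, float)
    eye_dist = np.linalg.norm(re - le)
    d_left = np.linalg.norm(pts - le, axis=1) / eye_dist
    d_right = np.linalg.norm(pts - re, axis=1) / eye_dist
    return np.concatenate([d_left, d_right])
```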

Claims (8)

1. A face search method based on local region matching, comprising the steps of:
1) performing face detection and key point detection on each image in a face image set A, and outputting the face rectangle and the key point positions of each facial organ and of the facial contour;
2) using the key point positions of each image, aligning the face in the image to a standard-format face, and delimiting the region of each facial organ;
3) for each organ region of each standard-format face, extracting the low-level feature vector of the organ;
4) clustering the low-level feature vectors of each organ, dividing them into several classes, and recording the center of each class;
5) for each organ whose clustering result contains k classes, selecting any two classes, taking the low-level feature vectors of one class as positive samples and those of the other class as negative samples, and training a support vector machine classifier; traversing all pairwise combinations of the k classes of the organ in this way to obtain a set of support vector machine classifiers for the organ; then using every classifier in the set to classify the low-level feature vectors described in step 4), and combining the classification results into a new feature vector, namely the mid-level feature vector of the organ;
6) for each standard-format face obtained in step 2), computing the distance from each key point on the face contour to the left eye and to the right eye, and taking the ratio of these distances to the inter-eye distance as the mid-level feature vector of the facial contour of the corresponding image;
7) combining the mid-level feature vectors of the organs of each face image with the mid-level feature vector of its facial contour to obtain the mid-level feature vector Vr of the face image;
8) for any face image q to be searched, generating its mid-level feature vector Vq;
9) computing the similarity between Vq and the mid-level feature vectors of the face images in the image set A, and returning the matching query results.
2. The method of claim 1, characterized in that the low-level feature vectors extracted in step 3) are each reduced in dimensionality by projection, and step 4) then clusters the dimension-reduced low-level feature vectors of each organ.
3. The method of claim 2, characterized in that the principal component analysis algorithm and then the linear discriminant analysis algorithm are applied in turn to project all low-level features extracted in step 3) to a lower dimension.
4. The method of claim 1, 2 or 3, characterized in that the low-level feature vectors are histogram of oriented gradients feature vectors and local binary pattern feature vectors.
5. The method of claim 1, 2 or 3, characterized in that the key point positions of each image are used to rotate and scale the face in the image so that it is aligned to the standard format.
6. The method of claim 1, characterized in that the organs include the eyebrows, eyes, nose and mouth, and the mid-level feature of each face image is Vr = (Vr_contour, Vr_eyebrow, Vr_eye, Vr_nose, Vr_mouth).
7. The method of claim 6, characterized in that the similarity to Vq is determined from the distance dist computed between Vq and the mid-level feature vector of each face image in the image set A.
8. The method of claim 7, characterized in that the similarity is computed as dist = w_contour * d_contour + w_eyebrow * d_eyebrow + w_eye * d_eye + w_nose * d_nose + w_mouth * d_mouth, where d_contour is the distance between the contour mid-level feature vector in Vq and that of a face image in the image set A; d_eyebrow, d_eye, d_nose and d_mouth are the corresponding distances for the eyebrow, eye, nose and mouth mid-level feature vectors; and w_contour, w_eyebrow, w_eye, w_nose and w_mouth are the weights of the respective regions.
CN201410053334.5A 2014-02-17 2014-02-17 Local region matching-based face search method Active CN103824051B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201410053334.5A CN103824051B (en) 2014-02-17 2014-02-17 Local region matching-based face search method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201410053334.5A CN103824051B (en) 2014-02-17 2014-02-17 Local region matching-based face search method

Publications (2)

Publication Number Publication Date
CN103824051A 2014-05-28
CN103824051B (en) 2017-05-03

Family

ID=50759103

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201410053334.5A Active CN103824051B (en) 2014-02-17 2014-02-17 Local region matching-based face search method

Country Status (1)

Country Link
CN (1) CN103824051B (en)


Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1959702A (en) * 2006-10-10 2007-05-09 南京搜拍信息技术有限公司 Method for positioning feature points of human face in human face recognition system
CN101964064A (en) * 2010-07-27 2011-02-02 上海摩比源软件技术有限公司 Human face comparison method
WO2014001610A1 (en) * 2012-06-25 2014-01-03 Nokia Corporation Method, apparatus and computer program product for human-face features extraction

Cited By (22)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104584030A (en) * 2014-11-15 2015-04-29 深圳市三木通信技术有限公司 Verification application method and device based on face recognition
CN104584030B (en) * 2014-11-15 2017-02-22 深圳市三木通信技术有限公司 Verification application method and device based on face recognition
CN104850600A (en) * 2015-04-29 2015-08-19 百度在线网络技术(北京)有限公司 Method and device for searching images containing faces
CN104850600B (en) * 2015-04-29 2019-05-28 百度在线网络技术(北京)有限公司 A kind of method and apparatus for searching for the picture comprising face
CN106203242B (en) * 2015-05-07 2019-12-24 阿里巴巴集团控股有限公司 Similar image identification method and equipment
CN106203242A (en) * 2015-05-07 2016-12-07 阿里巴巴集团控股有限公司 A kind of similar image recognition methods and equipment
CN106372068A (en) * 2015-07-20 2017-02-01 中兴通讯股份有限公司 Method and device for image search, and terminal
CN105260702A (en) * 2015-09-15 2016-01-20 重庆智韬信息技术中心 Auxiliary evaluation authorization method based on face recognition
CN106919884A (en) * 2015-12-24 2017-07-04 北京汉王智远科技有限公司 Human facial expression recognition method and device
CN105825224A (en) * 2016-03-10 2016-08-03 浙江生辉照明有限公司 Obtaining method and apparatus of classifier family
CN105844267A (en) * 2016-06-14 2016-08-10 皖西学院 Face recognition algorithm
CN106650798A (en) * 2016-12-08 2017-05-10 南京邮电大学 Indoor scene recognition method combining deep learning and sparse representation
CN106650798B (en) * 2016-12-08 2019-06-21 南京邮电大学 A kind of indoor scene recognition methods of combination deep learning and rarefaction representation
CN106874861A (en) * 2017-01-22 2017-06-20 北京飞搜科技有限公司 A kind of face antidote and system
CN106980819A (en) * 2017-03-03 2017-07-25 竹间智能科技(上海)有限公司 Similarity judgement system based on human face five-sense-organ
CN108804988A (en) * 2017-05-04 2018-11-13 上海荆虹电子科技有限公司 A kind of remote sensing image scene classification method and device
CN108804988B (en) * 2017-05-04 2020-11-20 深圳荆虹科技有限公司 Remote sensing image scene classification method and device
CN108932456A (en) * 2017-05-23 2018-12-04 北京旷视科技有限公司 Face identification method, device and system and storage medium
CN110019905A (en) * 2017-10-13 2019-07-16 北京京东尚科信息技术有限公司 Information output method and device
CN108509880A (en) * 2018-03-21 2018-09-07 南京邮电大学 A kind of video personage behavior method for recognizing semantics
CN109034055A (en) * 2018-07-24 2018-12-18 北京旷视科技有限公司 Portrait plotting method, device and electronic equipment
CN109766754A (en) * 2018-12-04 2019-05-17 平安科技(深圳)有限公司 Human face five-sense-organ clustering method, device, computer equipment and storage medium

Also Published As

Publication number Publication date
CN103824051B (en) 2017-05-03

Similar Documents

Publication Publication Date Title
CN103824051A (en) Local region matching-based face search method
Xian et al. Latent embeddings for zero-shot classification
Yang et al. Visual sentiment prediction based on automatic discovery of affective regions
US11416710B2 (en) Feature representation device, feature representation method, and program
Amor et al. 4-D facial expression recognition by learning geometric deformations
CN103824052B (en) Multilevel semantic feature-based face feature extraction method and recognition method
CN104616316B (en) Personage's Activity recognition method based on threshold matrix and Fusion Features vision word
WO2016110005A1 (en) Gray level and depth information based multi-layer fusion multi-modal face recognition device and method
CN105205501B (en) A kind of weak mark image object detection method of multi classifier combination
CN103810500B (en) A kind of place image-recognizing method based on supervised learning probability topic model
CN104850825A (en) Facial image face score calculating method based on convolutional neural network
CN105956560A (en) Vehicle model identification method based on pooling multi-scale depth convolution characteristics
CN105095884B (en) A kind of pedestrian's identifying system and processing method based on random forest support vector machines
US20160342828A1 (en) Method and apparatus for recognising expression using expression-gesture dictionary
Ravì et al. Real-time food intake classification and energy expenditure estimation on a mobile device
CN103246891A (en) Chinese sign language recognition method based on kinect
Misra et al. Development of a hierarchical dynamic keyboard character recognition system using trajectory features and scale-invariant holistic modeling of characters
CN110232331B (en) Online face clustering method and system
Li et al. Facial expression classification using salient pattern driven integrated geometric and textual features
Rafiq et al. Wearable sensors-based human locomotion and indoor localization with smartphone
Samara et al. Sensing affective states using facial expression analysis
Li et al. Automatic affect classification of human motion capture sequences in the valence-arousal model
Wang et al. Hand motion and posture recognition in a network of calibrated cameras
Niaz et al. Fusion methods for multi-modal indexing of web data
Li et al. A novel art gesture recognition model based on two channel region-based convolution neural network for explainable human-computer interaction understanding

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant