CN107066932A - Detection and localization method for key feature points in face recognition - Google Patents
- Publication number: CN107066932A
- Application number: CN201710028277.9A
- Authority: CN (China)
- Prior art keywords: key feature, face, feature points, image, point
- Prior art date
- Legal status: Pending (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/168—Feature extraction; Face representation
- G06V40/169—Holistic features and representations, i.e. based on the facial image taken as a whole
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/21—Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
- G06F18/214—Generating training patterns; Bootstrap methods, e.g. bagging or boosting
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/161—Detection; Localisation; Normalisation
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- General Physics & Mathematics (AREA)
- Oral & Maxillofacial Surgery (AREA)
- Health & Medical Sciences (AREA)
- Physics & Mathematics (AREA)
- Data Mining & Analysis (AREA)
- Multimedia (AREA)
- Human Computer Interaction (AREA)
- Computer Vision & Pattern Recognition (AREA)
- General Health & Medical Sciences (AREA)
- Bioinformatics & Cheminformatics (AREA)
- General Engineering & Computer Science (AREA)
- Evolutionary Computation (AREA)
- Evolutionary Biology (AREA)
- Bioinformatics & Computational Biology (AREA)
- Artificial Intelligence (AREA)
- Life Sciences & Earth Sciences (AREA)
- Image Processing (AREA)
Abstract
The invention discloses a detection and localization method for key feature points in face recognition, comprising the steps of: preprocessing a face image; presetting 22 key feature points; manually annotating the key feature points on training samples in a preset annotation order to form a set of shape-vector training samples; building a global shape model and a local texture model; searching for and locating the key feature points; refining the located key feature points; and automatically calibrating the key feature points and outputting their normalized coordinates. Because the invention narrows the sampling range, it effectively improves the sampling efficiency of the subsequent key-feature-point extraction.
Description
Technical field
The present invention relates to biometric recognition, and more particularly to a detection and localization method for key feature points in face recognition.
Background art
Face recognition technology generally comprises four parts: face image acquisition, face image preprocessing, face image feature extraction, and matching and identification. Specifically:
Face image acquisition and detection refer to capturing video or image data containing faces with a camera or similar video-capture device; the data may be still or dynamic images and may cover different positions and different expressions of the acquisition target.
Face image preprocessing refers to determining the face region in the captured image data and applying operations such as gray-level correction and noise filtering, so that the subsequent face-feature extraction can be more accurate and efficient.
In the prior art, preprocessing of a face image mainly includes light compensation, gray-scale transformation, histogram equalization, normalization, geometric correction, filtering, and sharpening.
The inventor has found that, with face images preprocessed in the prior-art manner, errors occur relatively often when key feature points are subsequently extracted, which degrades the overall efficiency and effect of face recognition.
Summary of the invention
The technical problem to be solved by the invention is how to improve the efficiency and effect of face recognition. Specifically:
An embodiment of the invention provides a detection and localization method for key feature points in face recognition, comprising the steps of:
S11, preprocessing a face image;
S12, presetting 22 key feature points, the 22 key feature points specifically comprising: two corner points of each eyebrow, two corner points of each eye, the topmost and bottommost eyelid points of each eye, the nose tip, the two nose-wing points, the two mouth corners, the topmost and bottommost points of the upper lip, the topmost and bottommost points of the lower lip, and the chin point; manually annotating the key feature points on training samples in a preset annotation order; and forming a set of shape-vector training samples from the feature-point coordinate data generated for each training-sample face image;
S13, building a global shape model and a local texture model from the shape-vector training samples;
S14, searching for and locating the key feature points, including: iteratively locating the key feature points in the face image to be detected according to the global shape model and the local texture model;
S15, refining the located key feature points, including: taking the localization result of step S14 as an initial result and optimizing the key feature points with a preset image-processing algorithm;
S16, automatically calibrating the key feature points and outputting their normalized coordinates, including: marking the optimized key feature points on the face image to be detected, then converting and outputting the coordinates of the key feature points according to a set reference point and reference distance.
Preferably, in an embodiment of the invention, preprocessing the face image includes the steps of:
S21, determining the face image of the identification subject from the captured image data;
S22, recognizing the two pupils in the face image and determining the position of each pupil;
S23, correcting the face image into a frontal view of the identification subject's face according to the positional relationship of the two pupils;
S24, determining the sampling range of each key feature point to be located according to the frontal view.
Preferably, in an embodiment of the invention, correcting the face image into a frontal view of the identification subject's face according to the positional relationship of the two pupils includes:
S31, leveling the face image by adjusting the positions of the two pupils to be horizontal;
S32, obtaining the interpupillary distance between the two pupils;
S33, obtaining the chin position in the face image, and computing the distance from the chin position to the midpoint of the line connecting the two pupils;
S34, estimating the side-rotation (yaw) angle of the face image from the ratio of the interpupillary distance to the distance from the chin position to the midpoint of the pupil line;
S35, correcting the face image according to the side-rotation angle so that the face image is adjusted to a frontal view.
Preferably, in an embodiment of the invention, correcting the face image into a frontal view of the identification subject's face according to the positional relationship of the two pupils includes:
scaling the frontal view to an image of preset size.
Preferably, in an embodiment of the invention, estimating the side-rotation angle of the face image from the ratio of the interpupillary distance to the distance from the chin position to the midpoint of the pupil line includes:
computing the side-rotation angle of the face image according to an estimation model containing the correspondence between that ratio and the side-rotation angle;
revising the side-rotation angle according to a correction parameter, the correction parameter including age group and/or ethnic group.
Preferably, in an embodiment of the invention, building the global shape model and the local texture model from the shape-vector training samples includes:
S41, aligning the shape-vector training samples by affine transformation;
S42, reducing dimensionality with a PCA algorithm to decompose the main deformation modes, thereby obtaining the global shape model;
S43, finding, for each key feature point, the best candidate position near its current location according to the local gray-level distribution around that point.
Preferably, in an embodiment of the invention, aligning the shape-vector training samples by affine transformation includes:
aligning the shape-vector training samples by rotation, scaling, and/or translation.
Preferably, in an embodiment of the invention, building the global shape model and the local texture model from the shape-vector training samples includes:
eliminating noise and pixel-adhesion effects from the obtained binary image through erosion and dilation operations.
Preferably, in an embodiment of the invention, the erosion and dilation operations use a 2×3 rectangular window.
Preferably, in an embodiment of the invention, building the global shape model and the local texture model from the shape-vector training samples includes:
extracting the contour by finding the largest connected region of the binarized face-region image, and traversing the contour points to find the leftmost, rightmost, topmost, and bottommost points of the contour.
In the embodiments of the invention, on top of prior-art image preprocessing such as gray-level correction and noise filtering, a step of determining the sampling range of the key feature points is added: after the face image is corrected to a frontal view, a preliminary range is delimited around the position where each key feature point of a human face should lie. Because the embodiments of the invention narrow the sampling range, they effectively improve the sampling efficiency of the subsequent key-feature-point extraction.
Further, because the sampling of each key feature point does not stray outside its corresponding sampling range, the probability of errors during sampling is also effectively reduced, which improves the efficiency and effect of face recognition.
In addition, in the embodiments of the invention, correcting the face image according to the positions of the two pupils yields a frontal view of the face image with very little computation, so the face image can be corrected efficiently.
Further, the embodiments of the invention also propose a method of estimating the side-rotation angle of the face image from the ratio of the interpupillary distance to the distance from the chin position to the midpoint of the pupil line. The principle is as follows: for the same identification subject, the more the head is turned to the side, the smaller the interpupillary distance measured in the original face image, while the distance from the chin position to the midpoint of the pupil line does not change with the side-rotation angle. The ratio of the two therefore conveniently estimates the subject's side-rotation angle, providing an accurate basis for correcting the image, so that the corrected image comes closer to the subject's true frontal view.
Further, to improve the correction effect of the face image, a standard size of the face figure can also be preset in the embodiments of the invention, so that face figures are unified and the sampling ranges of the key feature points become more accurate.
Further, in the embodiments of the invention, a correction parameter is additionally provided to further revise the side-rotation angle. Because the ratio of the interpupillary distance to the chin-to-midpoint distance in a true frontal view differs across age groups and ethnic groups, the correction parameter further revises the side-rotation angle and yields a more accurate result, again bringing the corrected frontal view closer to the subject's true frontal view.
In addition, the positional trajectories of the key feature points selected in the embodiments of the invention characterize changes of facial emotion more accurately, so the accuracy of facial emotion recognition can be effectively improved.
Brief description of the drawings
To illustrate the embodiments of the present application or the technical solutions in the prior art more clearly, the accompanying drawings needed in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings in the following description are only some embodiments described in the present application; those of ordinary skill in the art can obtain other drawings from these drawings without creative effort.
Fig. 1 is a schematic step diagram of the face-image preprocessing method in face recognition described herein;
Fig. 2 is another schematic step diagram of the face-image preprocessing method in face recognition described herein;
Fig. 3 is another schematic step diagram of the face-image preprocessing method in face recognition described herein;
Fig. 4 is another schematic step diagram of the face-image preprocessing method in face recognition described herein.
Detailed description of the embodiments
To help those skilled in the art better understand the solution of the present invention, the technical solutions in the embodiments of the present invention are described clearly and completely below with reference to the accompanying drawings. Obviously, the described embodiments are only some, not all, of the embodiments of the invention. All other embodiments obtained by those of ordinary skill in the art based on the embodiments of the invention without creative effort fall within the protection scope of the invention.
To improve the efficiency and effect of face recognition, a detection and localization method for key feature points in face recognition is provided, as shown in Fig. 1, comprising the steps of:
S11, preprocessing a face image;
In practical applications, the detailed preprocessing procedure may, as shown in Fig. 2, comprise the following steps:
S21, determining the face image of the identification subject from the captured image data;
In the embodiments of the invention, the identification subject is the person to be identified. The image captured by the acquisition device often contains many other people or nearby objects unrelated to the identification, so the face image must first be singled out.
S22, recognizing the two pupils in the face image and determining the position of each pupil;
After the face image is obtained, the two pupils in it are recognized first, so that the position data of both pupils can be determined.
S23, correcting the face image into a frontal view of the identification subject's face according to the positional relationship of the two pupils;
A frontal view of a face is characterized, first of all, by the two pupils lying on a horizontal line; the face image is therefore adjusted according to the pupil positions obtained in step S22, so that a face image captured while the identification subject's head was tilted can be straightened.
In addition, the identification subject may also have turned the head, or the captured image may simply be a side view at some angle. In that case, after the pupil-based leveling, the side-rotation angle must also be corrected; the specific steps may be as shown in Fig. 3, including:
S31, leveling the face image by adjusting the positions of the two pupils to be horizontal;
First, the face image is rotated so that the line between the two pupils is level.
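The leveling of step S31 reduces to a plane rotation about the pupil midpoint. The following Python sketch is illustrative only — the function name and the point representation are assumptions, not part of the patent:

```python
import math

def level_by_pupils(points, left_pupil, right_pupil):
    """Rotate a set of (x, y) points about the pupil midpoint so the
    two pupils end up on a horizontal line (step S31)."""
    (lx, ly), (rx, ry) = left_pupil, right_pupil
    angle = math.atan2(ry - ly, rx - lx)        # current roll of the eye line
    cx, cy = (lx + rx) / 2.0, (ly + ry) / 2.0   # rotate about the midpoint
    cos_a, sin_a = math.cos(-angle), math.sin(-angle)
    out = []
    for x, y in points:
        dx, dy = x - cx, y - cy
        out.append((cx + dx * cos_a - dy * sin_a,
                    cy + dx * sin_a + dy * cos_a))
    return out
```

Applied to the pupil coordinates themselves, the function returns two points with equal y-coordinates and an unchanged interpupillary distance; the same rotation would be applied to the image or the remaining landmarks.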
S32, obtaining the interpupillary distance between the two pupils;
Then, from the position data of the two pupils, the interpupillary distance between them can be computed.
S33, obtaining the chin position in the face image, and computing the distance from the chin position to the midpoint of the line connecting the two pupils;
The chin position data can likewise be recognized easily. The chin in the embodiments of the invention refers specifically to the apex of the lower jaw, i.e. the jaw position on the center line of the frontal view of the face image.
S34, estimating the side-rotation (yaw) angle of the face image from the ratio of the interpupillary distance to the distance from the chin position to the midpoint of the pupil line;
The principle is as follows: for the same identification subject, the more the head is turned to the side, the smaller the interpupillary distance measured in the original face image, while the distance from the chin position to the midpoint of the pupil line does not change with the side-rotation angle. The ratio of the two therefore conveniently estimates the subject's side-rotation angle, providing an accurate basis for correcting the image.
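Under this principle, the measured interpupillary distance shrinks roughly as the cosine of the yaw angle while the chin-to-midpoint distance stays fixed, so the angle can be recovered from the ratio. A minimal Python sketch follows; the frontal-view ratio `FRONTAL_RATIO` is a hypothetical prior (in the patent's terms, the value that a correction parameter such as age group or ethnic group would select), and the cosine model is an assumed geometric simplification:

```python
import math

# Assumed frontal-view ratio of interpupillary distance to the
# chin-to-pupil-midpoint distance; a correction parameter (age group
# and/or ethnic group) would select this prior. Value is hypothetical.
FRONTAL_RATIO = 0.62

def estimate_yaw_deg(ipd, chin_to_mid, frontal_ratio=FRONTAL_RATIO):
    """Estimate the side-rotation (yaw) angle in degrees (step S34).

    Under a simple projection model the measured interpupillary distance
    shrinks roughly as cos(yaw), while the chin-to-midpoint distance is
    unchanged, so their ratio reveals the yaw angle."""
    ratio = ipd / chin_to_mid
    # Clamp against measurement noise pushing the ratio above the prior.
    cos_yaw = min(1.0, ratio / frontal_ratio)
    return math.degrees(math.acos(cos_yaw))
```

For example, a face whose measured ratio is half the frontal prior would be estimated as turned 60 degrees, since cos(60°) = 0.5.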
S35, correcting the face image according to the side-rotation angle so that the face image is adjusted to a frontal view.
Once the ratio of the interpupillary distance to the chin-to-midpoint distance is obtained, the subject's side-rotation angle is easily estimated, which in turn provides an accurate basis for correcting the image, so that the corrected image comes closer to the subject's true frontal view.
S24, determining the sampling range of each key feature point to be located according to the frontal view.
In the embodiments of the invention, on top of prior-art image preprocessing such as gray-level correction and noise filtering, a step of determining the sampling range of the key feature points is added: after the face image is corrected to a frontal view, a preliminary range is delimited around the position where each key feature point of a human face should lie. Because the embodiments of the invention narrow the sampling range, they effectively improve the sampling efficiency of the subsequent key-feature-point extraction.
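The preliminary range delimitation can be pictured as a table of per-landmark windows on the normalized frontal view. The sketch below is purely illustrative — the patent does not give concrete window values, so the fractions and landmark names here are invented for demonstration:

```python
# Hypothetical sampling ranges, as fractions (x0, y0, x1, y1) of the
# normalized frontal view, for a few of the 22 key feature points.
# The windows are illustrative assumptions, not values from the patent.
SAMPLING_RANGES = {
    "nose_tip":          (0.35, 0.40, 0.65, 0.65),
    "left_mouth_corner": (0.25, 0.60, 0.45, 0.85),
    "chin":              (0.35, 0.80, 0.65, 1.00),
}

def sampling_window(name, width, height):
    """Scale a fractional range to pixel coordinates of a frontal view,
    so the later search (step S14) only samples inside this window."""
    x0, y0, x1, y1 = SAMPLING_RANGES[name]
    return (int(x0 * width), int(y0 * height),
            int(x1 * width), int(y1 * height))
```

Because every frontal view is corrected and scaled to the same preset size first, one fixed fractional table suffices for all subjects.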
Further, because the sampling of each key feature point does not stray outside its corresponding sampling range, the probability of errors during sampling is also effectively reduced, which improves the efficiency and effect of face recognition.
In addition, in the embodiments of the invention, correcting the face image according to the positions of the two pupils yields a frontal view of the face image with very little computation, so the face image can be corrected efficiently.
Further, the embodiments of the invention also propose a method of estimating the side-rotation angle of the face image from the ratio of the interpupillary distance to the distance from the chin position to the midpoint of the pupil line. The principle is as follows: for the same identification subject, the more the head is turned to the side, the smaller the interpupillary distance measured in the original face image, while the distance from the chin position to the midpoint of the pupil line does not change with the side-rotation angle. The ratio of the two therefore conveniently estimates the subject's side-rotation angle, providing an accurate basis for correcting the image, so that the corrected image comes closer to the subject's true frontal view.
Further, to improve the correction effect of the face image, a standard size of the face figure can also be preset in the embodiments of the invention, so that face figures are unified and the sampling ranges of the key feature points become more accurate.
Further, in the embodiments of the invention, a correction parameter is additionally provided to further revise the side-rotation angle. Because the ratio of the interpupillary distance to the chin-to-midpoint distance in a true frontal view differs across age groups and ethnic groups, the correction parameter further revises the side-rotation angle and yields a more accurate result, again bringing the corrected frontal view closer to the subject's true frontal view.
S12, presetting 22 key feature points, the 22 key feature points specifically comprising: two corner points of each eyebrow, two corner points of each eye, the topmost and bottommost eyelid points of each eye, the nose tip, the two nose-wing points, the two mouth corners, the topmost and bottommost points of the upper lip, the topmost and bottommost points of the lower lip, and the chin point; manually annotating the key feature points on training samples in a preset annotation order; and forming a set of shape-vector training samples from the feature-point coordinate data generated for each training-sample face image;
In the prior art, key feature points are usually selected as nine facial feature points whose distribution is angle-invariant: the 2 eyeball centers, the 4 eye-corner points, the midpoints of the two nostrils, and the 2 mouth corners. The positions of other feature points of the facial organs relevant to recognition can easily be derived from these for further recognition algorithms. The inventor has found that with the prior-art selection of key feature points, the accuracy of emotion recognition is relatively low. To further improve the accuracy of emotion recognition, the embodiments of the invention also optimize the feature modeling in face recognition. First, the selection of the key feature points is optimized: more key feature points are chosen, 22 in total. These 22 key feature points in the face image are closely related to changes of mood; from their trajectories, the emotional changes of the identification subject can be judged more accurately.
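For concreteness, the 22 preset key feature points of step S12 can be enumerated as a checklist. The identifier names below are illustrative inventions; only the grouping and the total of 22 come from the description:

```python
# The 22 key feature points enumerated in step S12, grouped as in the
# description: eyebrow corners, eye corners, eyelid extremes, nose,
# mouth corners, lip extremes, and the chin. Names are illustrative.
KEY_FEATURE_POINTS = (
    ["%s_eyebrow_%s_corner" % (s, c) for s in ("left", "right")
                                     for c in ("inner", "outer")] +  # 4
    ["%s_eye_%s_corner" % (s, c) for s in ("left", "right")
                                 for c in ("inner", "outer")] +      # 4
    ["%s_eyelid_%s" % (s, p) for s in ("left", "right")
                             for p in ("top", "bottom")] +           # 4
    ["nose_tip", "left_nose_wing", "right_nose_wing"] +              # 3
    ["left_mouth_corner", "right_mouth_corner"] +                    # 2
    ["upper_lip_top", "upper_lip_bottom"] +                          # 2
    ["lower_lip_top", "lower_lip_bottom"] +                          # 2
    ["chin"]                                                         # 1
)
assert len(KEY_FEATURE_POINTS) == 22  # sanity check of the grouping
```

The list doubles as the preset annotation order of step S12, since the manual annotation must follow a fixed ordering.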
S13, building a global shape model and a local texture model from the shape-vector training samples;
In an embodiment of the invention, as shown in Fig. 4, the detailed procedure for building the global shape model and the local texture model from the shape-vector training samples may comprise the following steps:
S41, aligning the shape-vector training samples by affine transformation;
S42, reducing dimensionality with a PCA algorithm to decompose the main deformation modes, thereby obtaining the global shape model;
S43, finding, for each key feature point, the best candidate position near its current location according to the local gray-level distribution around that point.
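Steps S41 and S42 correspond to the classic shape-model construction: Procrustes-style alignment followed by PCA. A compact NumPy sketch, under the assumption that the affine transform is restricted to similarity (rotation, scale, translation) as the text's "rotation, scaling and/or translation" suggests:

```python
import numpy as np

def align_shape(shape, reference):
    """Similarity-align one shape (N x 2 array) to a reference by
    translation, scale, and rotation (step S41)."""
    s = shape - shape.mean(axis=0)
    r = reference - reference.mean(axis=0)
    # Optimal rotation via the Procrustes/Kabsch SVD.
    u, _, vt = np.linalg.svd(s.T @ r)
    rot = u @ vt
    scale = np.linalg.norm(r) / np.linalg.norm(s)
    return scale * s @ rot + reference.mean(axis=0)

def build_shape_model(shapes, n_modes=3):
    """PCA over flattened shape vectors (step S42): returns the mean
    shape and the main deformation modes as rows."""
    data = np.stack([s.ravel() for s in shapes])   # (samples, 2N)
    mean = data.mean(axis=0)
    _, _, vt = np.linalg.svd(data - mean, full_matrices=False)
    return mean, vt[:n_modes]
```

A shape is then approximated as the mean plus a small number of mode coefficients, which is what constrains the iterative search of step S14 to plausible face shapes.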
In addition, in practical applications, building the global shape model and the local texture model from the shape-vector training samples may specifically include:
eliminating noise and pixel-adhesion effects from the obtained binary image through erosion and dilation operations. Specifically, the preferred choice for the erosion and dilation operations is a 2×3 rectangular window.
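A minimal pure-NumPy sketch of the erosion-then-dilation (an opening) with the preferred 2×3 rectangular window; for simplicity the window is anchored at its top-left corner rather than centered, which shifts the result slightly relative to a centered implementation:

```python
import numpy as np

def _morph(img, op, win=(2, 3)):
    """Binary erosion (op=np.min) or dilation (op=np.max) with a
    rectangular window, zero-padded on the bottom/right edges to match
    the top-left window anchor."""
    wh, ww = win
    padded = np.pad(img, ((0, wh - 1), (0, ww - 1)))
    out = np.zeros_like(img)
    for r in range(img.shape[0]):
        for c in range(img.shape[1]):
            out[r, c] = op(padded[r:r + wh, c:c + ww])
    return out

def open_close_denoise(img):
    """Erode then dilate with the 2x3 window, removing speckle noise
    and thin pixel bridges from a 0/1 binary image."""
    return _morph(_morph(img, np.min), np.max)
```

Isolated speckles smaller than the window vanish under the erosion and are not restored by the dilation, while solid regions survive.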
In addition, building the global shape model and the local texture model from the shape-vector training samples may also include:
extracting the contour by finding the largest connected region of the binarized face-region image, and traversing the contour points to find the leftmost, rightmost, topmost, and bottommost points of the contour.
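The largest-connected-region step can be sketched with a breadth-first flood fill over the binary image, then scanning the retained region for its extreme points (a simplification: it scans all region pixels rather than only the contour, which yields the same four extremes):

```python
from collections import deque

def largest_region_extremes(img):
    """Find the largest 4-connected region of 1-pixels in a binary
    image (list of rows) and return its leftmost, rightmost, topmost,
    and bottommost points as (row, col) tuples."""
    h, w = len(img), len(img[0])
    seen = [[False] * w for _ in range(h)]
    best = []
    for r0 in range(h):
        for c0 in range(w):
            if img[r0][c0] and not seen[r0][c0]:
                region, queue = [], deque([(r0, c0)])
                seen[r0][c0] = True
                while queue:                      # BFS flood fill
                    r, c = queue.popleft()
                    region.append((r, c))
                    for nr, nc in ((r-1, c), (r+1, c), (r, c-1), (r, c+1)):
                        if 0 <= nr < h and 0 <= nc < w and img[nr][nc] \
                                and not seen[nr][nc]:
                            seen[nr][nc] = True
                            queue.append((nr, nc))
                if len(region) > len(best):
                    best = region
    return (min(best, key=lambda p: p[1]),   # leftmost
            max(best, key=lambda p: p[1]),   # rightmost
            min(best, key=lambda p: p[0]),   # topmost
            max(best, key=lambda p: p[0]))   # bottommost
```

The four extreme points bound the face region and can seed both the sampling ranges and the initial shape placement.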
As can be seen from the above, the positional trajectories of the key feature points selected in the embodiments of the invention characterize changes of facial emotion more accurately, so the accuracy of facial emotion recognition can be effectively improved.
S14, searching for and locating the key feature points, including: iteratively locating the key feature points in the face image to be detected according to the global shape model and the local texture model;
S15, refining the located key feature points, including: taking the localization result of step S14 as an initial result and optimizing the key feature points with a preset image-processing algorithm. In practical applications, the preset algorithm may include binarization, erosion and dilation, contour extraction, and the like.
S16, automatically calibrating the key feature points and outputting their normalized coordinates, including: marking the optimized key feature points on the face image to be detected, then converting and outputting the coordinates of the key feature points according to a set reference point and reference distance.
In this step, the midpoint of the two inner eye-corner points (denoted point A) and the midpoint of the two outer eye-corner points (denoted point B) are chosen, and the midpoint of A and B is set as the reference point; that is, the abscissa and ordinate of the reference point are respectively the average abscissa and average ordinate of the 4 points consisting of the two inner and the two outer eye corners.
In this step, the reference length is chosen as the distance between the two inner eye-corner points, and this length is set to a fixed value; in practical applications, the fixed length is preferably 30 pixels.
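The normalization of step S16 follows directly from the two definitions above: the reference point is the coordinate average of the four eye-corner points, and the scale maps the inner-corner distance to the 30-pixel reference length. A Python sketch (function and variable names are assumptions):

```python
import math

def _midpoint(p, q):
    return ((p[0] + q[0]) / 2.0, (p[1] + q[1]) / 2.0)

def normalize_landmarks(points, inner_corners, outer_corners, fixed_len=30.0):
    """Re-express landmark coordinates relative to the reference point
    (midpoint of A and B, where A/B are the midpoints of the inner and
    outer eye corners), scaled so the inner-corner distance equals the
    fixed reference length (preferably 30 pixels)."""
    a = _midpoint(*inner_corners)       # midpoint of the inner corners
    b = _midpoint(*outer_corners)       # midpoint of the outer corners
    ref = _midpoint(a, b)               # reference point
    scale = fixed_len / math.dist(*inner_corners)
    return [((x - ref[0]) * scale, (y - ref[1]) * scale) for x, y in points]
```

Because every face is expressed in the same eye-anchored frame with the same fixed length, the output coordinates are directly comparable across images.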
The principles and embodiments of the present invention have been set forth above through specific examples; the explanation of the above embodiments is only intended to help understand the method of the invention and its core idea. It should be pointed out that those of ordinary skill in the art can make several improvements and modifications to the invention without departing from its principles, and these improvements and modifications also fall within the protection scope of the claims of the invention.
Claims (10)
1. A detection and localization method for key feature points in face recognition, characterized by comprising the steps of:
S11, preprocessing a face image;
S12, presetting 22 key feature points, the 22 key feature points specifically comprising: two corner points of each eyebrow, two corner points of each eye, the topmost and bottommost eyelid points of each eye, the nose tip, the two nose-wing points, the two mouth corners, the topmost and bottommost points of the upper lip, the topmost and bottommost points of the lower lip, and the chin point; manually annotating the key feature points on training samples in a preset annotation order; and forming a set of shape-vector training samples from the feature-point coordinate data generated for each training-sample face image;
S13, building a global shape model and a local texture model from the shape-vector training samples;
S14, searching for and locating the key feature points, comprising: iteratively locating the key feature points in the face image to be detected according to the global shape model and the local texture model;
S15, refining the located key feature points, comprising: taking the localization result of step S14 as an initial result and optimizing the key feature points with a preset image-processing algorithm;
S16, automatically calibrating the key feature points and outputting their normalized coordinates, comprising: marking the optimized key feature points on the face image to be detected, then converting and outputting the coordinates of the key feature points according to a set reference point and reference distance.
2. The detection and localization method for key feature points in face recognition according to claim 1, wherein preprocessing the face image comprises the steps of:
S21, determining the face image of the identification subject from the captured image data;
S22, recognizing the two pupils in the face image and determining the position of each pupil;
S23, correcting the face image into a frontal view of the identification subject's face according to the positional relationship of the two pupils;
S24, determining the sampling range of each key feature point to be located according to the frontal view.
3. The method for detecting and locating key feature points in face recognition according to claim 2, characterized in that said correcting the face image into a frontal view of the identification object's face according to the positional relationship of the two eyes includes:
S31, leveling the face image by adjusting the positions of the two eyes to be horizontal;
S32, obtaining the interpupillary distance between the two eyes;
S33, obtaining the chin position in the face image and calculating the distance from the chin position to the midpoint of the line connecting the two eyes;
S34, estimating the side-turn (yaw) angle of the face image from the ratio of the interpupillary distance to the distance from the chin position to the midpoint of the eye line;
S35, correcting the face image according to the side-turn angle and adjusting the face image to a frontal view.
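The claim does not disclose the estimation model of step S34. The sketch below assumes a simple geometric model in which the apparent interpupillary distance shrinks with the cosine of the yaw angle while the vertical chin-to-eye-midpoint distance stays roughly constant; the frontal ratio of 0.9 is a hypothetical placeholder, not a value from the patent:

```python
import math

# Hypothetical frontal ratio (interpupillary distance divided by the
# chin-to-eye-midpoint distance) for a face seen head-on. The patent's
# "estimation model" would supply this, possibly revised by age group
# and/or ethnic group as in claim 5.
FRONTAL_RATIO = 0.9

def estimate_yaw(interpupillary_px, chin_to_eye_mid_px, frontal_ratio=FRONTAL_RATIO):
    """Estimate the side-turn (yaw) angle in degrees from the measured ratio.

    Under yaw, the eyes foreshorten horizontally (roughly by cos(yaw))
    while the chin-to-eye-midpoint distance is nearly unchanged, so
    yaw is approximately arccos(observed_ratio / frontal_ratio).
    """
    observed = interpupillary_px / chin_to_eye_mid_px
    c = max(-1.0, min(1.0, observed / frontal_ratio))  # clamp for acos
    return math.degrees(math.acos(c))
```

A measured ratio equal to the frontal ratio yields 0 degrees; a ratio at half the frontal value yields 60 degrees under this cosine assumption.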
4. The method for detecting and locating key feature points in face recognition according to claim 2, characterized in that said correcting the face image into a frontal view of the identification object's face according to the positional relationship of the two eyes further includes:
scaling the frontal view to an image of a preset size.
5. The method for detecting and locating key feature points in face recognition according to claim 3, characterized in that said estimating the side-turn angle of the face image from the ratio of the interpupillary distance to the distance from the chin position to the midpoint of the eye line includes:
calculating the side-turn angle of the face image according to an estimation model containing the correspondence between the side-turn angle and the ratio of the interpupillary distance to the distance from the chin position to the midpoint of the eye line;
revising the side-turn angle according to correction parameters, the correction parameters including age group and/or ethnic group.
6. The method for detecting and locating key feature points in face recognition according to claim 1, characterized in that said building a global shape model and a local texture model from the shape-vector training samples includes:
S41, aligning the shape-vector training samples by affine transformation;
S42, reducing dimensionality with the PCA algorithm to decompose the principal deformation modes, thereby obtaining the global shape model;
S43, finding the optimal candidate position near the current location of each key feature point according to the local gray-level distribution around that point.
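Steps S41 and S42 follow the standard Active Shape Model recipe. A sketch of the PCA step S42, assuming the shape vectors are already aligned; the 95% variance cutoff is an illustrative choice, not a figure from the patent:

```python
import numpy as np

def build_shape_model(shapes, var_keep=0.95):
    """Build a PCA global shape model from aligned shape vectors (S42 sketch).

    shapes: (n_samples, 2 * n_points) array of aligned landmark coordinates.
    Returns the mean shape and the principal deformation modes covering
    var_keep of the total variance, so any shape is approximately
    mean + P @ b for a small parameter vector b.
    """
    X = np.asarray(shapes, dtype=float)
    mean = X.mean(axis=0)
    cov = np.cov(X - mean, rowvar=False)
    eigvals, eigvecs = np.linalg.eigh(cov)        # ascending eigenvalues
    order = np.argsort(eigvals)[::-1]             # sort descending
    eigvals, eigvecs = eigvals[order], eigvecs[:, order]
    cum = np.cumsum(eigvals) / eigvals.sum()
    k = int(np.searchsorted(cum, var_keep)) + 1   # modes kept
    return mean, eigvecs[:, :k], eigvals[:k]
```

During the search of step S14, the parameter vector b is clamped to a few standard deviations of each retained eigenvalue, which is what keeps the iterated shape face-like.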
7. The method for detecting and locating key feature points in face recognition according to claim 6, characterized in that said aligning the shape-vector training samples by affine transformation includes:
aligning the shape-vector training samples by rotation, scaling and/or translation.
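The rotation, scaling, and translation of claim 7 amount to a similarity (Procrustes) fit; the sketch below aligns one landmark set to another and is a standard construction, not text from the patent:

```python
import numpy as np

def align_shape(src, dst):
    """Align src landmarks to dst by rotation, scaling and translation.

    src, dst: (n_points, 2) arrays. Returns src mapped onto dst by the
    least-squares similarity transform (orthogonal Procrustes).
    """
    src = np.asarray(src, dtype=float)
    dst = np.asarray(dst, dtype=float)
    src_c, dst_c = src.mean(axis=0), dst.mean(axis=0)
    A, B = src - src_c, dst - dst_c                 # remove translation
    U, S, Vt = np.linalg.svd(A.T @ B)               # cross-covariance SVD
    R = U @ Vt                                      # optimal rotation
    if np.linalg.det(R) < 0:                        # forbid reflections
        U[:, -1] *= -1
        R = U @ Vt
    scale = S.sum() / (A ** 2).sum()                # optimal isotropic scale
    return (A @ R) * scale + dst_c
```

Repeating this fit against the running mean shape until the mean stops changing is the usual way the whole training set is brought into a common frame before PCA.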
8. The method for detecting and locating key feature points in face recognition according to claim 1, characterized in that said building a global shape model and a local texture model from the shape-vector training samples includes:
eliminating noise and pixel-adhesion effects from the obtained binary image through erosion and dilation operations.
9. The method for detecting and locating key feature points in face recognition according to claim 8, characterized in that the erosion and dilation operations use a 2*3 rectangular window.
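A pure-NumPy sketch of the 2*3 opening (erosion followed by dilation) described in claims 8 and 9. In practice OpenCV's cv2.morphologyEx with cv2.MORPH_OPEN and a rectangular structuring element would be used; this standalone version just makes the windowed min/max explicit:

```python
import numpy as np
from numpy.lib.stride_tricks import sliding_window_view

def binary_open(img, kh=2, kw=3):
    """Opening with a kh x kw rectangular window on a 0/1 image.

    Erosion removes foreground speckle smaller than the window; the
    following dilation restores the size of regions that survived.
    """
    img = np.asarray(img)

    def erode(a):
        # Pad with 1s so border pixels are judged only on real values.
        p = np.pad(a, ((0, kh - 1), (0, kw - 1)), constant_values=1)
        return sliding_window_view(p, (kh, kw)).min(axis=(2, 3))

    def dilate(a):
        # Pad on the opposite sides (reflected window) so that the
        # composed opening is anchor-consistent with the erosion.
        p = np.pad(a, ((kh - 1, 0), (kw - 1, 0)), constant_values=0)
        return sliding_window_view(p, (kh, kw)).max(axis=(2, 3))

    return dilate(erode(img))
```

Isolated single-pixel noise disappears, while any blob at least 2 pixels tall and 3 wide survives with its original extent.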
10. The method for detecting and locating key feature points in face recognition according to claim 6, characterized in that said building a global shape model and a local texture model from the shape-vector training samples includes:
extracting the contour by finding the largest connected region of the face-region binary image, and traversing each point of the contour to find its leftmost, rightmost, topmost and bottommost points.
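The traversal at the end of claim 10 reduces to four argmin/argmax passes over the contour points. The contour itself is assumed to come from the largest connected region, e.g. via cv2.findContours; the helper below only covers the extreme-point step:

```python
import numpy as np

def contour_extremes(contour):
    """Find the leftmost, rightmost, topmost and bottommost contour points.

    contour: (n, 2) array-like of (x, y) points. With image coordinates
    (y growing downward), "top" is the minimum y and "bottom" the maximum.
    """
    c = np.asarray(contour)
    return {
        "left":   tuple(c[c[:, 0].argmin()]),
        "right":  tuple(c[c[:, 0].argmax()]),
        "top":    tuple(c[c[:, 1].argmin()]),
        "bottom": tuple(c[c[:, 1].argmax()]),
    }
```

These four extremes bound the face region and give the rough eye/chin search windows used by the earlier preprocessing claims.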
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201710028277.9A CN107066932A (en) | 2017-01-16 | 2017-01-16 | The detection of key feature points and localization method in recognition of face |
Publications (1)
Publication Number | Publication Date |
---|---|
CN107066932A true CN107066932A (en) | 2017-08-18 |
Family
ID=59599274
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201710028277.9A Pending CN107066932A (en) | 2017-01-16 | 2017-01-16 | The detection of key feature points and localization method in recognition of face |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN107066932A (en) |
Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101783026A (en) * | 2010-02-03 | 2010-07-21 | 北京航空航天大学 | Method for automatically constructing three-dimensional face muscle model |
CN101877055A (en) * | 2009-12-07 | 2010-11-03 | 北京中星微电子有限公司 | Method and device for positioning key feature point |
CN102136069A (en) * | 2010-01-25 | 2011-07-27 | 华晶科技股份有限公司 | Object image correcting device and method for identification |
CN102360421A (en) * | 2011-10-19 | 2012-02-22 | 苏州大学 | Face identification method and system based on video streaming |
CN104318603A (en) * | 2014-09-12 | 2015-01-28 | 上海明穆电子科技有限公司 | Method and system for generating 3D model by calling picture from mobile phone photo album |
CN104951743A (en) * | 2015-03-04 | 2015-09-30 | 苏州大学 | Active-shape-model-algorithm-based method for analyzing face expression |
2017-01-16: CN CN201710028277.9A patent/CN107066932A/en active Pending
Cited By (13)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN108388141A (en) * | 2018-03-21 | 2018-08-10 | 特斯联(北京)科技有限公司 | A kind of wisdom home control system and method based on recognition of face |
CN108833721B (en) * | 2018-05-08 | 2021-03-12 | 广东小天才科技有限公司 | Emotion analysis method based on call, user terminal and system |
CN108833721A (en) * | 2018-05-08 | 2018-11-16 | 广东小天才科技有限公司 | Emotion analysis method based on call, user terminal and system |
CN109323753A (en) * | 2018-06-22 | 2019-02-12 | 田梅 | Lifting policy-making body based on big data storage |
CN109323753B (en) * | 2018-06-22 | 2019-06-18 | 南京微云信息科技有限公司 | Lifting policy-making body based on big data storage |
CN109635659A (en) * | 2018-11-12 | 2019-04-16 | 东软集团股份有限公司 | Face key independent positioning method, device, storage medium and electronic equipment |
CN109508700A (en) * | 2018-12-28 | 2019-03-22 | 广州粤建三和软件股份有限公司 | A kind of face identification method, system and storage medium |
CN111598867A (en) * | 2020-05-14 | 2020-08-28 | 国家卫生健康委科学技术研究所 | Method, apparatus, and computer-readable storage medium for detecting specific facial syndrome |
CN112417947A (en) * | 2020-09-17 | 2021-02-26 | 重庆紫光华山智安科技有限公司 | Method and device for optimizing key point detection model and detecting face key points |
CN112417947B (en) * | 2020-09-17 | 2021-10-26 | 重庆紫光华山智安科技有限公司 | Method and device for optimizing key point detection model and detecting face key points |
CN112528977A (en) * | 2021-02-10 | 2021-03-19 | 北京优幕科技有限责任公司 | Target detection method, target detection device, electronic equipment and storage medium |
CN112528977B (en) * | 2021-02-10 | 2021-07-02 | 北京优幕科技有限责任公司 | Target detection method, target detection device, electronic equipment and storage medium |
CN113361643A (en) * | 2021-07-02 | 2021-09-07 | 人民中科(济南)智能技术有限公司 | Deep learning-based universal mark identification method, system, equipment and storage medium |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN107066932A (en) | The detection of key feature points and localization method in recognition of face | |
CN108921100B (en) | Face recognition method and system based on visible light image and infrared image fusion | |
CN104680121B (en) | Method and device for processing face image | |
CN108898125A (en) | One kind being based on embedded human face identification and management system | |
CN104091155B (en) | The iris method for rapidly positioning of illumination robust | |
CN101923645B (en) | Iris splitting method suitable for low-quality iris image in complex application context | |
WO2015067084A1 (en) | Human eye positioning method and apparatus | |
CN113920568B (en) | Face and human body posture emotion recognition method based on video image | |
CN111291701B (en) | Sight tracking method based on image gradient and ellipse fitting algorithm | |
CN102654903A (en) | Face comparison method | |
CN106980845B (en) | Face key point positioning method based on structured modeling | |
CN110796101A (en) | Face recognition method and system of embedded platform | |
CN106650574A (en) | Face identification method based on PCANet | |
CN112784712B (en) | Missing child early warning implementation method and device based on real-time monitoring | |
CN105069745A (en) | face-changing system based on common image sensor and enhanced augmented reality technology and method | |
WO2015165227A1 (en) | Human face recognition method | |
WO2022110917A1 (en) | Method for determining driving state of driver, computer storage medium, and electronic device | |
CN106919898A (en) | Feature modeling method in recognition of face | |
Yu et al. | Improvement of face recognition algorithm based on neural network | |
JP7531168B2 (en) | Method and system for detecting a child's sitting posture based on child's face recognition | |
US8971592B2 (en) | Method for determining eye location on a frontal face digital image to validate the frontal face and determine points of reference | |
CN116128814A (en) | Standardized acquisition method and related device for tongue diagnosis image | |
CN106909880A (en) | Facial image preprocess method in recognition of face | |
CN109255293A (en) | Model's showing stage based on computer vision walks evaluation method | |
Rafik et al. | Application of metaheuristic for optimization of iris Image segmentation by using evaluation Hough Transform and methods Daugman |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
RJ01 | Rejection of invention patent application after publication | Application publication date: 20170818 ||