CN109376717A - Personal identification method, device, electronic equipment and the storage medium of face comparison - Google Patents
- Publication number
- CN109376717A CN109376717A CN201811538951.9A CN201811538951A CN109376717A CN 109376717 A CN109376717 A CN 109376717A CN 201811538951 A CN201811538951 A CN 201811538951A CN 109376717 A CN109376717 A CN 109376717A
- Authority
- CN
- China
- Prior art keywords
- picture
- face region
- training sample
- module
- feature
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/21—Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
- G06F18/214—Generating training patterns; Bootstrap methods, e.g. bagging or boosting
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/22—Matching criteria, e.g. proximity measures
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/40—Extraction of image or video features
- G06V10/44—Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
- G06V10/443—Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components by matching or filtering
Abstract
The invention discloses an identity recognition method, apparatus, electronic device and storage medium based on face comparison. The identity recognition method comprises the following steps: obtaining training pictures; based on the R-CNN idea, performing weak-classification learning on the training pictures using Haar features combined with a CART classifier to screen out candidate face regions, then inputting the candidate face regions into a CNN convolutional neural network for training to generate a training sample library; inputting a picture to be tested into the CNN convolutional neural network for detection and localization to obtain a face region; and traversing the training sample library to determine whether the face region in the picture to be tested matches a training sample in the training sample library, so as to identify the identity of the face in the picture to be tested. By combining Haar features with the CNN convolutional neural network, the present invention can rapidly detect and determine face regions from training pictures; and by further determining whether the face region in the picture to be tested matches a training sample in the training sample library, the comparison results of the identity recognition method achieve higher accuracy.
Description
Technical field
The invention belongs to the technical field of data recognition, and more particularly relates to an identity recognition method, apparatus, electronic device and storage medium for face comparison.
Background technique
In business fields such as banking and insurance, authenticating a client's true identity is a necessary step in handling many kinds of business; for example, service steps such as underwriting and claim settlement all require verifying the client's identity.
In the prior art, the database of face recognition technology supporting identity-document comparison can only interface with ID-card pictures, and the identity information that can be extracted is extremely limited. Therefore, besides improving the precision of face recognition, how to establish a comprehensive biometric information bank has become the key to the development and application of the technology.
The prior art includes the following methods for extracting face features, each of which has certain problems:
1. Methods based on local face feature extraction
This kind of method trains a sample library of face information using neural network algorithms and produces a complete recognition and comparison system through cascading. It solves basic face-comparison authentication, but because multiple neural networks are cascaded, the efficiency of the overall system is low and machine learning carries a serious computational burden;
2. Face recognition and comparison methods based on the PCA (Principal Component Analysis) algorithm
This kind of method performs two-dimensional preprocessing on an image, generates a feature sequence, applies dimensionality reduction to rank the features, and thereby produces a special feature matrix for classification and comparison. Its disadvantage is that the input image is limited to frontal-face pictures, so there are many limitations: factors such as pixel count, size, background and illumination are constrained in the front-end image acquisition step, and if one or more of these factors does not meet the requirements, it is difficult to obtain the face features.
Summary of the invention
The object of the present invention is to provide an identity recognition method, apparatus, electronic device and storage medium for face comparison. Weak-classification learning is performed on training pictures using Haar features combined with a CART classifier to screen out candidate face regions and generate a training sample library; a picture to be tested is then compared with the training samples in the training sample library to identify the identity of the face in the picture to be tested, thereby reducing computational cost.
The technical scheme of the invention is as follows:
An identity recognition method for face comparison, comprising the following steps:
Step S100: obtaining training pictures;
Step S200: performing weak-classification learning on the training pictures using Haar features combined with a CART classifier, to screen out candidate face regions;
Step S300: inputting the candidate face regions into a CNN convolutional neural network for training, to generate a training sample library;
Step S400: inputting a picture to be tested into the CNN convolutional neural network for detection and localization, to obtain a face region;
Step S500: traversing the training sample library, and determining whether the face region in the picture to be tested matches a training sample in the training sample library, to identify the identity of the face in the picture to be tested.
Further, the step S200 includes the following sub-steps:
traversing the training pictures using an integral image and a sliding window, to obtain the Haar features of the training pictures;
inputting the Haar features of the training pictures into a CART classifier, determining whether a training picture contains a candidate face region, and screening out the candidate face regions.
Further, the step S500 includes the following sub-steps:
preprocessing the face region, to obtain two-dimensional image data of the face region;
extracting features of the two-dimensional image data using the eigenface method, and applying LDA to reduce the dimensionality of the features of the two-dimensional image data along multiple direction vectors, to obtain a feature vector of the face region of the picture to be tested;
traversing the training sample library, to obtain the feature vector of each training sample;
comparing the feature vector of the face region of the picture to be tested with the feature vectors of the training samples, to determine whether the face region in the picture to be tested matches a training sample in the training sample library.
Further, comparing the feature vector of the face region of the picture to be tested with the feature vector of a training sample includes the following sub-steps:
obtaining the cosine similarity between the feature vector of the face region of the picture to be tested and the feature vector of the training sample;
if the cosine similarity is greater than a preset threshold, determining that the face region in the picture to be tested has high similarity to the training sample.
An identity recognition apparatus for face comparison, the identity recognition apparatus comprising a training picture acquisition module, a screening module, a training module, a face region determination module and a matching module;
the training picture acquisition module is used for obtaining training pictures;
the screening module performs weak-classification learning on the training pictures using Haar features combined with a CART classifier, to screen out candidate face regions;
the training module inputs the candidate face regions into a CNN convolutional neural network for training, to generate a training sample library;
the face region determination module inputs a picture to be tested into the CNN convolutional neural network for detection and localization, to obtain a face region;
the matching module traverses the training sample library and determines whether the face region in the picture to be tested matches a training sample in the training sample library, to identify the identity of the face in the picture to be tested.
Further, the screening module includes a first feature acquisition module and a first determination module;
the first feature acquisition module traverses the training pictures using an integral image and a sliding window, to obtain the Haar features of the training pictures;
the first determination module inputs the Haar features of the training pictures into a CART classifier, determines whether a training picture contains a candidate face region, and screens out the candidate face regions.
Further, the matching module includes a preprocessing module, a second feature acquisition module, a dimensionality reduction module, a traversal module and a comparison module;
the preprocessing module preprocesses the face region, to obtain two-dimensional image data of the face region;
the second feature acquisition module extracts features of the two-dimensional image data using the eigenface method, and the dimensionality reduction module applies LDA to reduce the dimensionality of the features of the two-dimensional image data along multiple direction vectors, to obtain a feature vector of the face region of the picture to be tested;
the traversal module traverses the training sample library, to obtain the feature vector of each training sample;
the comparison module is used for comparing the feature vector of the face region of the picture to be tested with the feature vectors of the training samples, to determine whether the face region in the picture to be tested matches a training sample in the training sample library.
Further, the comparison module includes a similarity acquisition module and a second determination module;
the similarity acquisition module is used for obtaining the cosine similarity between the feature vector of the face region of the picture to be tested and the feature vector of a training sample;
if the cosine similarity is greater than a preset threshold, the second determination module determines that the face region in the picture to be tested has high similarity to the training sample.
An electronic device, comprising: a processor and a memory, the memory storing computer-readable instructions which, when executed by the processor, implement the identity recognition method according to any one of claims 1-4.
A computer-readable storage medium, on which a computer program is stored, the computer program, when run by a processor or computer, executing the identity recognition method according to any one of claims 1-4.
The beneficial effects of the invention are as follows:
The present invention acquires training pictures and combines Haar features with the CNN convolutional network algorithm to rapidly detect and determine face regions from the training pictures, realizing a model in which R-CNN quickly detects faces; it then uses the eigenface method and cosine similarity to extract the features of the determined face region and compare the similarity between the picture to be tested and the training samples in the training sample library. As a result, the comparison results of the identity recognition method for face comparison of the present invention achieve higher accuracy.
Description of the drawings
Fig. 1 is a flowchart of one embodiment of the identity recognition method for face comparison of the invention;
Fig. 2 is a structural schematic diagram of one embodiment of the identity recognition apparatus for face comparison of the invention;
Fig. 3 is a structural schematic diagram of one embodiment of the electronic device of the invention.
Detailed description of the embodiments
The present invention is described in detail with reference to the embodiments shown in the accompanying drawings. It should be stated, however, that these embodiments do not limit the present invention; any functional, methodological or structural equivalent transformations or substitutions made by those of ordinary skill in the art according to these embodiments fall within the scope of protection of the present invention.
Embodiment one
1. Identity recognition method for face comparison
Fig. 1 is a flowchart of one embodiment of the identity recognition method for face comparison of the invention. Referring to Fig. 1, the method comprises:
Step S100: obtaining training pictures;
Step S200: performing weak-classification learning on the training pictures using Haar features combined with a CART classifier, to screen out candidate face regions;
Step S300: inputting the candidate face regions into a CNN convolutional neural network for training, to generate a training sample library;
Step S400: inputting a picture to be tested into the CNN convolutional neural network for detection and localization, to obtain a face region;
Step S500: traversing the training sample library, and determining whether the face region in the picture to be tested matches a training sample in the training sample library, to identify the identity of the face in the picture to be tested.
2. Specific working process of the identity recognition method for face comparison
The working process of the identity recognition method for face comparison in embodiment one is described in detail below.
S100: obtaining training pictures;
In this embodiment, the training pictures and the pictures to be tested can come from the following sources:
(1) pictures of different people collected by the user, with multiple pictures per person;
(2) pictures of different people collected from the open-source ORL database on the network.
S200: performing weak-classification learning on the training pictures using Haar features combined with a CART classifier, to screen out candidate face regions;
The Haar features in this embodiment are features obtained based on the Haar wavelet transform;
CART (Classification And Regression Tree) is the classification and regression tree algorithm, abbreviated as the CART algorithm, which is one implementation of a decision tree.
Step S200 includes steps S210 and S220, as follows:
S210: traversing the training pictures using an integral image and a sliding window, to obtain the Haar features of the training pictures;
In this embodiment, a training picture can be scanned with a fixed sliding window at a preset step length, thereby obtaining the Haar features of the training picture, which are then saved.
The Haar features in this embodiment can be divided into the following classes: (1) edge features, (2) linear features, (3) center features and diagonal features, as well as combined feature templates. A feature template contains white rectangles and black rectangles, and the feature value of the template is the absolute value of the difference between the pixel sum of the white rectangles and the pixel sum of the black rectangles. This feature value reflects the gray-level variation of the image; that is, some features of a face can be represented by rectangular features, e.g. the gray level of the eyes is darker than that of the cheeks.
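As a rough illustrative sketch (not the patented implementation), the integral-image lookup and the white-minus-black rectangle feature described above can be written with NumPy; the function names and the toy 6x6 "image" are invented for the example:

```python
import numpy as np

def integral_image(img):
    """Summed-area table: ii[y, x] = sum of img[:y, :x].
    A zero row/column is prepended so any rectangle sum needs
    only four lookups, regardless of position."""
    ii = np.zeros((img.shape[0] + 1, img.shape[1] + 1), dtype=np.int64)
    ii[1:, 1:] = np.cumsum(np.cumsum(img, axis=0), axis=1)
    return ii

def rect_sum(ii, y, x, h, w):
    """Pixel sum of the h x w rectangle with top-left corner (y, x)."""
    return ii[y + h, x + w] - ii[y, x + w] - ii[y + h, x] + ii[y, x]

def haar_edge_feature(ii, y, x, h, w):
    """Two-rectangle 'edge' feature: |white half sum - black half sum|,
    splitting the window into a top (white) and bottom (black) half."""
    white = rect_sum(ii, y, x, h // 2, w)
    black = rect_sum(ii, y + h // 2, x, h // 2, w)
    return abs(white - black)

# Toy 6x6 image: bright top half, dark bottom half (like eyes vs cheeks).
img = np.vstack([np.full((3, 6), 200), np.full((3, 6), 50)])
ii = integral_image(img)
print(haar_edge_feature(ii, 0, 0, 6, 6))   # 2700: a strong edge response
```

Because every rectangle sum is four table lookups, sliding such a window over the picture at any step length costs the same per position, which is what makes the traversal in S210 fast.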
S220: inputting the Haar features of the training pictures into the CART classifier, determining whether a training picture contains a candidate face region, and screening out the candidate face regions;
In this embodiment, the Haar features of a training picture can be input into the CART classifier to obtain a classification result;
if the classification result is greater than a preset first threshold, the training picture is determined to contain a candidate face region; further, the position of the candidate face region is marked and recorded;
if the classification result is less than or equal to the preset first threshold, the training picture is determined not to contain a face region.
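A minimal sketch of the weak-classification idea: a single CART split (a decision stump, the node type CART trees are built from) trained on Gini impurity over Haar feature values. The data and helper names are invented for illustration and do not come from the patent:

```python
import numpy as np

def gini(labels):
    """Gini impurity of a binary label array."""
    if len(labels) == 0:
        return 0.0
    p = np.bincount(labels, minlength=2) / len(labels)
    return 1.0 - np.sum(p ** 2)

def train_stump(X, y):
    """Fit a one-split CART node: pick the (feature, threshold) pair
    that minimises the weighted Gini impurity of the two children."""
    best = (np.inf, 0, 0.0)          # (impurity, feature index, threshold)
    for f in range(X.shape[1]):
        for t in np.unique(X[:, f]):
            left, right = y[X[:, f] <= t], y[X[:, f] > t]
            score = (len(left) * gini(left) + len(right) * gini(right)) / len(y)
            if score < best[0]:
                best = (score, f, t)
    return best[1], best[2]

def predict_stump(X, f, t, y, Xtr):
    """Each leaf predicts the majority class of its training points."""
    left_major = int(np.round(np.mean(y[Xtr[:, f] <= t])))
    right_major = int(np.round(np.mean(y[Xtr[:, f] > t])))
    return np.where(X[:, f] <= t, left_major, right_major)

# Toy data: column 1 plays the role of a Haar response that separates
# face windows (label 1) from non-face windows (label 0).
X = np.array([[3.0, 2700], [2.0, 2500], [1.0, 300], [4.0, 100]])
y = np.array([1, 1, 0, 0])
f, t = train_stump(X, y)
print(f, predict_stump(X, f, t, y, X))   # splits on the Haar-response column
```

A full CART classifier recurses this split on each child; comparing the stump's output against the preset first threshold corresponds to the accept/reject decision in S220.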
Prior-art methods of obtaining candidate regions include:
1. In the prior art, the method of obtaining candidate face regions using R-CNN (Region Convolutional Neural Network) usually relies on RPN: each region of the training picture is first divided, each region is then fed into a simple CNN convolutional-layer network for feature extraction and scoring, and the higher-scoring regions are chosen as candidate face regions. However, the feature-extraction process of this method is very computationally expensive, places a great burden on the machine when training on pictures, and is unfavorable to efficiently detecting the candidate face regions of a picture;
2. Another method of obtaining candidate face regions uses selective search to obtain the candidate frames (bounding boxes) of the effective regions of a picture. The logic of this method is relatively disordered, and it inputs more than 2000 candidate frames into the CNN network, increasing the computational burden of machine-learning training on the pictures, so it is likewise unfavorable to efficiently detecting the candidate face regions of training pictures.
In this embodiment, by contrast, Haar features can rapidly extract the features of a picture, and a further classification with the CART classifier yields more accurate and fewer candidate frames as candidate face regions, making the process of generating training samples much more efficient and facilitating subsequent face detection.
In other embodiments, HOG features can also be used to perform preliminary detection on the training pictures, i.e. to determine candidate face regions on a training picture.
The above HOG feature (Histogram of Oriented Gradients) is a feature descriptor for object detection. The method counts the oriented gradients occurring in local regions of the image; it is similar to the edge orientation histogram, the difference being that the HOG feature is computed on a dense grid of uniformly spaced cells, which improves accuracy.
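The HOG counting step can be sketched for a single cell: compute gradients, then accumulate a magnitude-weighted histogram over unsigned orientations. This is a simplified illustration (no block normalisation); the function name and toy cell are invented:

```python
import numpy as np

def hog_cell_histogram(cell, n_bins=9):
    """Unsigned-gradient orientation histogram for one cell,
    magnitude-weighted, with bins covering [0, 180) degrees."""
    gy, gx = np.gradient(cell.astype(float))
    mag = np.hypot(gx, gy)
    ang = np.degrees(np.arctan2(gy, gx)) % 180.0
    hist = np.zeros(n_bins)
    bin_idx = (ang / (180.0 / n_bins)).astype(int) % n_bins
    np.add.at(hist, bin_idx.ravel(), mag.ravel())
    return hist

# A vertical edge: all gradients point horizontally (angle 0), so the
# histogram mass lands in the first orientation bin.
cell = np.tile(np.array([0., 0., 0., 255., 255., 255.]), (6, 1))
h = hog_cell_histogram(cell)
print(np.argmax(h))   # 0
```

A full HOG descriptor concatenates such histograms over all cells of the dense grid and normalises them within overlapping blocks.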
S300: inputting the candidate face regions into the CNN convolutional neural network for training, to generate the training sample library;
In this embodiment, the feedforward and feedback (backpropagation) passes of the CNN convolutional neural network can be used to iterate over the candidate face regions repeatedly, distinguishing face from non-face parts so as to gradually fit the ideal face features. It can be understood that in other embodiments an RNN recurrent neural network can also be selected to train on the candidate face regions to generate the training sample library.
S400: inputting the picture to be tested into the CNN convolutional neural network, and detecting and localizing the sample to be tested, to obtain the face region;
Localizing the picture to be tested reduces the restrictions on front-end user input pictures: under complex backgrounds or different illumination conditions, face image information is extracted as precisely as possible and identity verification is completed, so that face image information can be extracted even under complex backgrounds. Unconstrained face images can therefore be better detected and compared, reducing the amount of computation.
In this embodiment, the candidate face regions screened out in step S200 can be input one by one into the CNN neural network model to extract the features of the picture to be tested; then, using a SoftMax regression layer as the classifier, the features of the picture to be tested are compared with the features of the training samples in the training sample library generated in step S300, and the score and probability of each candidate face region are computed; the candidate face region with the highest probability is then output and determined as the face region, and the determined face region can be used in the matching process of the following step S500.
It can be understood that in other embodiments an RNN recurrent neural network can also be selected to detect and localize the training pictures containing candidate face regions.
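The score-and-pick-the-most-probable-candidate step can be sketched as follows. The matching score here is a plain dot product against stored sample features, an assumption made for the example (the patent does not fix the scoring function); only the softmax-then-argmax structure mirrors the text:

```python
import numpy as np

def softmax(z):
    """Numerically stable softmax over the last axis."""
    z = z - np.max(z, axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def pick_face_region(candidate_features, sample_features):
    """Score each candidate by its best match against the training-sample
    features, convert scores to probabilities with softmax, and return
    the index of the most probable candidate region."""
    scores = candidate_features @ sample_features.T   # (n_cand, n_samples)
    best = scores.max(axis=1)                         # best match per candidate
    probs = softmax(best)
    return int(np.argmax(probs)), probs

# Three hypothetical candidate-region feature vectors; the second one
# matches the stored sample features most closely.
cands = np.array([[0.1, 0.2], [0.9, 0.8], [0.3, 0.1]])
samples = np.array([[1.0, 1.0], [0.8, 0.9]])
idx, probs = pick_face_region(cands, samples)
print(idx)   # 1
```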
S500: traversing the training sample library, and determining whether the face region in the picture to be tested matches a training sample in the training sample library, to identify the identity of the face in the picture to be tested.
Step S500 includes sub-steps S510-S540, as follows:
S510: preprocessing the face region, to obtain the two-dimensional image data of the face region;
In this embodiment, the face region determined in step S400 is converted to two-dimensional data. Since the image data of the determined face region is three-channel color data, while the image data needed in this embodiment is grayscale data without color features, the image of the determined face region is first converted to grayscale, and the sizes of all input determined face regions are unified by normalization; a feature matrix of fixed dimensions can thus be generated, obtaining the two-dimensional image data of the face region for use in the subsequent step S520.
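The preprocessing in S510 can be sketched as: grayscale conversion with the usual luminance weights, size unification, and value normalization. The nearest-neighbour resampling and the 4x4 target size are simplifying assumptions for the example:

```python
import numpy as np

def preprocess_face(region, size=(4, 4)):
    """Convert a 3-channel (H, W, 3) face region to grayscale, resize it
    to a fixed shape by nearest-neighbour sampling, and scale to [0, 1]."""
    gray = region[..., 0] * 0.299 + region[..., 1] * 0.587 + region[..., 2] * 0.114
    rows = np.linspace(0, gray.shape[0] - 1, size[0]).round().astype(int)
    cols = np.linspace(0, gray.shape[1] - 1, size[1]).round().astype(int)
    fixed = gray[np.ix_(rows, cols)]
    return fixed / 255.0

# Face regions of different sizes all come out as fixed 4x4 matrices.
a = preprocess_face(np.full((8, 8, 3), 255.0))
b = preprocess_face(np.full((12, 10, 3), 255.0))
print(a.shape, b.shape)   # (4, 4) (4, 4)
```

In practice the fixed size would be much larger (e.g. the eigenface literature commonly uses sizes like 92x112 from the ORL database), but the fixed-dimension feature matrix is the point.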
S520: extracting the features of the two-dimensional image data using the eigenface method (based mainly on PCA, Principal Component Analysis), and applying LDA (Linear Discriminant Analysis) to reduce the dimensionality of the features of the two-dimensional image data along multiple direction vectors;
Because using PCA combined with the LDA method allows the features of the two-dimensional image data to be reduced in dimensionality along different dimensional directions, the features in different directions can be projected separately, yielding features of the two-dimensional image data in multiple classes and making the feature differences between the classes more obvious. If only PCA or only LDA were used to project the picture onto a single direction for dimensionality reduction, then classes whose differences are not obvious, or whose features are extremely similar, would overlap in the projection direction, which is unfavorable to distinguishing the different classes within the features of the two-dimensional image data; therefore, in this embodiment, the combination of PCA and LDA is used to distinguish the different classes of features of the two-dimensional image data.
In other embodiments, texture-based features or image histogram features can also be used to extract the features of the two-dimensional image data.
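The PCA-then-LDA pipeline can be sketched with NumPy: PCA (the core of the eigenface method) via SVD, followed by a two-class Fisher LDA discriminant. The synthetic two-identity data, the 10-dimensional feature space, and the two-class restriction are assumptions for the example; a real system would handle many identities:

```python
import numpy as np

def pca(X, k):
    """Eigenface-style PCA: centre the data and project onto the
    top-k right singular vectors."""
    mean = X.mean(axis=0)
    Xc = X - mean
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:k].T, Vt[:k], mean

def fisher_lda_direction(X, y):
    """Two-class Fisher LDA: w = Sw^-1 (mu1 - mu0), the direction that
    maximises between-class over within-class scatter."""
    X0, X1 = X[y == 0], X[y == 1]
    m0, m1 = X0.mean(axis=0), X1.mean(axis=0)
    Sw = (X0 - m0).T @ (X0 - m0) + (X1 - m1).T @ (X1 - m1)
    w = np.linalg.solve(Sw + 1e-6 * np.eye(X.shape[1]), m1 - m0)
    return w / np.linalg.norm(w)

rng = np.random.default_rng(0)
# Two synthetic "identities" in a 10-dimensional feature space.
X = np.vstack([rng.normal(0, 1, (20, 10)), rng.normal(3, 1, (20, 10))])
y = np.array([0] * 20 + [1] * 20)
Z, _, _ = pca(X, 3)                 # PCA reduces the dimensionality first
w = fisher_lda_direction(Z, y)      # LDA then finds the discriminant axis
proj = Z @ w
print(proj[y == 1].mean() > proj[y == 0].mean())   # classes separate: True
```

Running PCA first keeps the within-class scatter matrix well conditioned, which is the usual reason the two methods are combined rather than used alone.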
S530: traversing the training sample library, to obtain the feature vector of each training sample;
Specifically, in this embodiment all training samples of the training sample library can first be traversed to obtain the feature vector of each training sample;
Further, each obtained training-sample feature vector is annotated with a label, where the number of labels equals the number of people who provided training pictures, so that identity comparison can be performed on the picture to be tested.
S540: comparing the feature vector of the face region of the picture to be tested with the feature vectors of the training samples, to determine whether the face region in the picture to be tested matches a training sample in the training sample library.
In this embodiment, step S540 includes the following sub-steps:
S541: obtaining the cosine similarity between the feature vector of the face region of the picture to be tested and the feature vector of a training sample;
S542: if the cosine similarity is greater than a preset second threshold, determining that the face region in the picture to be tested has high similarity to the training sample.
The second threshold can be determined from empirical values. In this embodiment the cosine of the angle between the vectors is used as the basis of judgment: the closer the cosine value is to 1, the more similar the person in the candidate picture of the sample to be tested is to the person in the training sample, or the more likely they are the same person. In this embodiment the value range of the second threshold is (0.9, 1); it can be understood that in other embodiments the second threshold can take other value ranges according to the actual situation.
In other embodiments, the Euclidean distance between the feature vector of the face region of the picture to be tested and the feature vector of a training sample can also be compared to determine whether the face region in the picture to be tested matches a training sample in the training sample library, i.e. whether the person in the picture to be tested is highly similar to, or the same as, the person in the training sample.
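The cosine-similarity match in S541/S542 reduces to a few lines; the probe and sample vectors below are invented for illustration, and the 0.9 threshold follows the embodiment's stated range:

```python
import numpy as np

def cosine_similarity(a, b):
    """cos(theta) between two feature vectors; 1.0 means identical direction."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def matches(probe, sample, threshold=0.9):
    """Declare a match when cosine similarity exceeds the preset
    second threshold (taken from the (0.9, 1) range in the embodiment)."""
    return cosine_similarity(probe, sample) > threshold

probe = np.array([0.8, 0.6, 0.1])
same_person = np.array([0.82, 0.58, 0.12])    # nearly parallel -> match
other_person = np.array([0.1, 0.2, 0.97])     # different direction -> no match
print(matches(probe, same_person), matches(probe, other_person))   # True False
```

For the Euclidean-distance alternative mentioned above, `np.linalg.norm(probe - sample)` would replace the cosine, with a match declared when the distance falls *below* an empirically chosen threshold.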
From the foregoing description it can be seen that this embodiment combines Haar features with the CNN convolutional network algorithm to rapidly detect and determine face regions from training pictures, realizing a model in which R-CNN quickly detects faces; uses the eigenface method combined with LDA to reduce the dimensionality of the features of the two-dimensional image data along different dimensional directions, better distinguishing the differences between those features; and uses cosine similarity to compare the feature vector of the face region of the picture to be tested with the feature vectors of the training samples, judging the similarity between the face region of the picture to be tested and the training samples. As a result, the comparison results of the identity recognition method for face comparison of this embodiment achieve higher accuracy.
Embodiment two
Fig. 2 is a structural schematic diagram of one embodiment of the identity recognition apparatus for face comparison of the invention. The identity recognition apparatus comprises a training picture acquisition module, a screening module, a training module, a face region determination module and a matching module;
the training picture acquisition module is used for obtaining training pictures;
the screening module performs weak-classification learning on the training pictures using Haar features combined with a CART classifier, to screen out candidate face regions;
the training module inputs the candidate face regions into a CNN convolutional neural network for training, to generate a training sample library;
the face region determination module inputs a picture to be tested into the CNN convolutional neural network for detection and localization, to obtain a face region;
the matching module traverses the training sample library and determines whether the face region in the picture to be tested matches a training sample in the training sample library, to identify the identity of the face in the picture to be tested.
Further, the screening module in this embodiment includes a first feature acquisition module and a first determination module;
the first feature acquisition module traverses the training pictures using an integral image and a sliding window, to obtain the Haar features of the training pictures;
the first determination module inputs the Haar features of the training pictures into a CART classifier, determines whether a training picture contains a candidate face region, and screens out the candidate face regions.
Further, the matching module in this embodiment includes a preprocessing module, a second feature acquisition module, a dimensionality reduction module, a traversal module and a comparison module;
the preprocessing module preprocesses the face region, to obtain two-dimensional image data of the face region;
the second feature acquisition module extracts features of the two-dimensional image data using the eigenface method, and the dimensionality reduction module applies LDA to reduce the dimensionality of the features of the two-dimensional image data along multiple direction vectors, to obtain a feature vector of the face region of the picture to be tested;
the traversal module traverses the training sample library, to obtain the feature vector of each training sample;
the comparison module is used for comparing the feature vector of the face region of the picture to be tested with the feature vectors of the training samples, to determine whether the face region in the picture to be tested matches a training sample in the training sample library.
Further, the comparison module in this embodiment includes a similarity acquisition module and a second determination module;
the similarity acquisition module is used for obtaining the cosine similarity between the feature vector of the face region of the picture to be tested and the feature vector of a training sample;
if the cosine similarity is greater than a preset threshold, the second determination module determines that the face region in the picture to be tested has high similarity to the training sample.
The specific implementation of each of the above modules is consistent with the specific implementation of the corresponding method steps in embodiment one, and is not described again here.
Embodiment 3
This embodiment provides an electronic device, including but not limited to a smartphone, a fixed-line telephone, a tablet computer, a laptop, a wearable device, and similar electronic equipment. The electronic device includes a processor and a memory; the memory stores computer-readable instructions which, when executed by the processor, implement the aforementioned face-comparison identity recognition method of the present invention.
Embodiment 4
This embodiment provides a computer-readable storage medium, which may be a ROM (for example, read-only memory, FLASH memory, a transfer device, and the like), an optical storage medium (for example, CD-ROM, DVD-ROM, a paper card, and the like), a magnetic storage medium (for example, magnetic tape, a disk drive, and the like), or another type of program storage. The computer-readable storage medium stores a computer program which, when run by a processor or computer, performs the aforementioned face-comparison identity recognition method of the present invention.
Based on artificial-intelligence machine learning, the present invention can not only compensate for the shortcomings of discrimination by the human eye, but can also identify and compare biometric pictures other than certificate photos, overcoming some of the limitations that existing person-certificate comparison technologies place on image sources.
The present invention is mainly intended for application scenarios such as identity verification and anti-fraud in the banking and insurance industries, and supports multiple service links such as underwriting and claims settlement, so that user identity can be verified conveniently and effectively while labor costs are reduced and work efficiency is improved. The present invention may also be applied in other fields that require face comparison and image recognition to determine identity; no limitation on the specific field is made here.
The present invention has the following advantages:
The present invention collects training pictures and combines Haar features with a CNN convolutional network algorithm to rapidly detect and determine face regions in the training pictures, realizing an RCNN-style model for fast face detection. It then extracts features of the determined face regions using eigenfaces and compares the similarity between the picture to be tested and the training samples in the training sample database using cosine similarity. The comparison results of the face-comparison identity recognition method of the present invention are therefore more accurate.
Those of ordinary skill in the art will appreciate that the units and algorithm steps of the examples described in connection with the embodiments disclosed herein can be implemented in electronic hardware, or in a combination of computer software and electronic hardware. Whether these functions are performed in hardware or software depends on the specific application and design constraints of the technical solution. Skilled artisans may implement the described functions differently for each particular application, but such implementations should not be considered beyond the scope of the present invention.
In the embodiments provided in this application, it should be understood that the disclosed device and method may be implemented in other ways. For example, the device embodiments described above are merely illustrative; the division into units is only a logical functional division, and other divisions are possible in actual implementation. For example, multiple units or components may be combined or integrated into another system, or some features may be ignored or not executed. In addition, the mutual coupling, direct coupling, or communication connection shown or discussed may be an indirect coupling or communication connection through some interfaces, devices, or units, and may be electrical, mechanical, or in other forms.
Units described as separate components may or may not be physically separate, and components shown as units may or may not be physical units; they may be located in one place or distributed across multiple network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, the functional units in the various embodiments of the present invention may be integrated into one processing unit, or each unit may exist alone physically, or two or more units may be integrated into one unit.
If the functions are implemented in the form of software functional units and sold or used as an independent product, they may be stored in a computer-readable storage medium. Based on this understanding, the technical solution of the present invention in essence, or the part that contributes to the prior art, or a part of the technical solution, may be embodied in the form of a software product. The computer software product is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) to perform all or some of the steps of the methods described in the various embodiments of the present invention. The aforementioned storage medium includes various media that can store program code, such as a USB flash drive, a removable hard disk, a ROM, a RAM, a magnetic disk, or an optical disk.
The above is only a specific embodiment of the present invention, but the protection scope of the present invention is not limited thereto. Any change or replacement that can readily be conceived by those familiar with the art within the technical scope disclosed by the present invention shall be covered by the protection scope of the present invention. Therefore, the protection scope of the present invention shall be subject to the protection scope of the claims.
Claims (10)
1. A face-comparison identity recognition method, characterized by comprising the following steps:
Step S100: obtaining training pictures;
Step S200: performing weak-classification learning on the training pictures using Haar features combined with a CART classifier, to select candidate face regions;
Step S300: inputting the candidate face regions into a CNN convolutional neural network for training, to generate a training sample database;
Step S400: inputting a picture to be tested into the CNN convolutional neural network for detection and positioning, to obtain a face region;
Step S500: traversing the training sample database to determine whether the face region in the picture to be tested matches a training sample in the training sample database, so as to identify the identity of the face in the picture to be tested.
2. The identity recognition method according to claim 1, characterized in that step S200 comprises the following sub-steps:
traversing the training picture using an integral image and a sliding window to obtain the Haar features of the training picture;
inputting the Haar features of the training picture into the CART classifier to determine whether the training picture contains a candidate face region, and selecting the candidate face region.
3. The identity recognition method according to claim 1, characterized in that step S500 comprises the following sub-steps:
preprocessing the face region to obtain two-dimensional image data of the face region;
extracting features of the two-dimensional image data using eigenfaces, and performing dimensionality reduction along multiple direction vectors on the features of the two-dimensional image data in combination with LDA, to obtain the feature vector of the face region in the picture to be tested;
traversing the training sample database to obtain the feature vector of each training sample;
comparing the feature vector of the face region in the picture to be tested with the feature vectors of the training samples, to determine whether the face region in the picture to be tested matches a training sample in the training sample database.
4. The identity recognition method according to claim 3, characterized in that comparing the feature vector of the face region in the picture to be tested with the feature vector of the training sample comprises the following sub-steps:
obtaining the cosine similarity between the feature vector of the face region in the picture to be tested and the feature vector of the training sample;
if the cosine similarity is greater than a preset threshold, determining that the face region in the picture to be tested and the training sample are highly similar.
5. A face-comparison identity recognition device, characterized in that the identity recognition device comprises a training picture acquisition module, a screening module, a training module, a face region determination module, and a matching module;
the training picture acquisition module is configured to obtain training pictures;
the screening module performs weak-classification learning on the training pictures using Haar features combined with a CART classifier, to select candidate face regions;
the training module inputs the candidate face regions into a CNN convolutional neural network for training, to generate a training sample database;
the face region determination module inputs a picture to be tested into the CNN convolutional neural network for detection and positioning, to obtain a face region;
the matching module traverses the training sample database to determine whether the face region in the picture to be tested matches a training sample in the training sample database, so as to identify the identity of the face in the picture to be tested.
6. The identity recognition device according to claim 5, characterized in that the screening module comprises a first feature acquisition module and a first determination module;
the first feature acquisition module traverses the training picture using an integral image and a sliding window to obtain the Haar features of the training picture;
the first determination module inputs the Haar features of the training picture into the CART classifier to determine whether the training picture contains a candidate face region, and selects the candidate face region.
7. The identity recognition device according to claim 5, characterized in that the matching module comprises a preprocessing module, a second feature acquisition module, a dimensionality reduction module, a traversal module, and a comparison module;
the preprocessing module preprocesses the face region to obtain two-dimensional image data of the face region;
the second feature acquisition module extracts features of the two-dimensional image data using eigenfaces, and the dimensionality reduction module uses LDA to perform dimensionality reduction along multiple direction vectors on the features of the two-dimensional image data, to obtain the feature vector of the face region in the picture to be tested;
the traversal module traverses the training sample database to obtain the feature vector of each training sample;
the comparison module compares the feature vector of the face region in the picture to be tested with the feature vectors of the training samples, to determine whether the face region in the picture to be tested matches a training sample in the training sample database.
8. The identity recognition device according to claim 7, characterized in that the comparison module comprises a similarity acquisition module and a second determination module;
the similarity acquisition module is configured to obtain the cosine similarity between the feature vector of the face region in the picture to be tested and the feature vector of the training sample;
if the cosine similarity is greater than a preset threshold, the second determination module determines that the face region in the picture to be tested and the training sample are highly similar.
9. An electronic device, characterized by comprising a processor and a memory, wherein the memory stores computer-readable instructions which, when executed by the processor, implement the identity recognition method according to any one of claims 1-4.
10. A computer-readable storage medium, characterized in that a computer program is stored on the computer-readable storage medium, and the computer program, when run by a processor or computer, performs the identity recognition method according to any one of claims 1-4.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201811538951.9A CN109376717A (en) | 2018-12-14 | 2018-12-14 | Personal identification method, device, electronic equipment and the storage medium of face comparison |
Publications (1)
Publication Number | Publication Date |
---|---|
CN109376717A true CN109376717A (en) | 2019-02-22 |
Family
ID=65373989
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201811538951.9A Pending CN109376717A (en) | 2018-12-14 | 2018-12-14 | Personal identification method, device, electronic equipment and the storage medium of face comparison |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN109376717A (en) |
Cited By (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109993061A (en) * | 2019-03-01 | 2019-07-09 | 珠海亿智电子科技有限公司 | A kind of human face detection and tracing method, system and terminal device |
CN110329856A (en) * | 2019-07-09 | 2019-10-15 | 日立楼宇技术(广州)有限公司 | A kind of elevator selects layer method, device, elevator device and storage medium |
CN110458097A (en) * | 2019-08-09 | 2019-11-15 | 软通动力信息技术有限公司 | A kind of face picture recognition methods, device, electronic equipment and storage medium |
CN110472509A (en) * | 2019-07-15 | 2019-11-19 | 中国平安人寿保险股份有限公司 | Fat or thin recognition methods and device, electronic equipment based on facial image |
CN111160094A (en) * | 2019-11-26 | 2020-05-15 | 苏州方正璞华信息技术有限公司 | Method and device for identifying hand selection in running snapshot photo |
CN111680622A (en) * | 2020-06-05 | 2020-09-18 | 上海一由科技有限公司 | Identity recognition method based on fostering environment |
CN111797763A (en) * | 2020-07-02 | 2020-10-20 | 北京灵汐科技有限公司 | Scene recognition method and system |
CN112256906A (en) * | 2020-10-23 | 2021-01-22 | 安徽启新明智科技有限公司 | Method, device and storage medium for marking annotation on display screen |
Citations (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101599121A (en) * | 2009-06-30 | 2009-12-09 | 徐勇 | The authenticating colorized face images system and method |
CN102831447A (en) * | 2012-08-30 | 2012-12-19 | 北京理工大学 | Method for identifying multi-class facial expressions at high precision |
CN104504365A (en) * | 2014-11-24 | 2015-04-08 | 闻泰通讯股份有限公司 | System and method for smiling face recognition in video sequence |
CN105701466A (en) * | 2016-01-13 | 2016-06-22 | 杭州奇客科技有限公司 | Rapid all angle face tracking method |
CN106228142A (en) * | 2016-07-29 | 2016-12-14 | 西安电子科技大学 | Face verification method based on convolutional neural networks and Bayesian decision |
CN106462736A (en) * | 2014-08-07 | 2017-02-22 | 华为技术有限公司 | A processing device and method for face detection |
CN106503687A (en) * | 2016-11-09 | 2017-03-15 | 合肥工业大学 | The monitor video system for identifying figures of fusion face multi-angle feature and its method |
CN108108677A (en) * | 2017-12-12 | 2018-06-01 | 重庆邮电大学 | One kind is based on improved CNN facial expression recognizing methods |
Non-Patent Citations (1)
Title |
---|
Cao Erkui (曹二奎): "Research on a Face Detection Algorithm Based on Gentle Adaboost" (《基于Gentle Adaboost的人脸检测算法研究》), China Master's Theses Full-text Database, Information Science and Technology |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US10565433B2 (en) | Age invariant face recognition using convolutional neural networks and set distances | |
CN109376717A (en) | Personal identification method, device, electronic equipment and the storage medium of face comparison | |
US8064653B2 (en) | Method and system of person identification by facial image | |
CN110008909B (en) | Real-name system business real-time auditing system based on AI | |
Pirlo et al. | Verification of static signatures by optical flow analysis | |
KR20080033486A (en) | Automatic biometric identification based on face recognition and support vector machines | |
Wati et al. | Security of facial biometric authentication for attendance system | |
US10423817B2 (en) | Latent fingerprint ridge flow map improvement | |
Kamboj et al. | CED-Net: context-aware ear detection network for unconstrained images | |
Haji et al. | Real time face recognition system (RTFRS) | |
WO2013181695A1 (en) | Biometric verification | |
He et al. | Aggregating local context for accurate scene text detection | |
Hannan et al. | Analysis of detection and recognition of Human Face using Support Vector Machine | |
Barbosa et al. | Transient biometrics using finger nails | |
Aggarwal et al. | Face Recognition System Using Image Enhancement with PCA and LDA | |
Gowda | Fiducial points detection of a face using RBF-SVM and adaboost classification | |
Muthukumaran et al. | Face and Iris based Human Authentication using Deep Learning | |
Almutiry | Efficient iris segmentation algorithm using deep learning techniques | |
Khan | LAFIN: A convolutional neural network-based technique for singular point extraction and classification of latent fingerprints | |
Sehgal | Palm recognition using LBP and SVM | |
Hahmann et al. | Combination of facial landmarks for robust eye localization using the Discriminative Generalized Hough Transform | |
Fekete et al. | Examination of technologies that can be used for the development of an identity verification application | |
Suhas et al. | SIFR-signature fraud recognition | |
Wimmer et al. | Deep Learning Based Age and Gender Recognition Applied to Finger Vein Images | |
Alex et al. | Local alignment of gradient features for face sketch recognition |
Legal Events
Code | Title | Description |
---|---|---|
PB01 | Publication | |
SE01 | Entry into force of request for substantive examination | |
RJ01 | Rejection of invention patent application after publication | Application publication date: 20190222 |