CN104933414B - Living body face detection method based on WLD-TOP
Living body face detection method based on WLD-TOP
- Publication number: CN104933414B
- Application number: CN201510350814.2A
- Authority: CN (China)
- Legal status: Active
Classifications
- G06V40/162 — Human faces: detection; localisation; normalisation using pixel segmentation or colour matching
- G06V40/167 — Human faces: detection; localisation; normalisation using comparisons between temporally consecutive images
- G06F18/2411 — Classification techniques relating to the classification model based on the proximity to a decision surface, e.g. support vector machines
- G06V2201/07 — Indexing scheme relating to image or video recognition or understanding: target detection
Abstract
The invention discloses a living body face detection method based on WLD-TOP, comprising the following steps: (1) training stage: read the training set videos, perform face region detection on each frame and convert it into a sequence of grayscale face image frames; construct a three-dimensional image matrix, then construct the filtering templates and compute the WLD features; generate the WLD-TOP feature vectors; finally feed the feature vectors into an SVM classifier for training, so as to build the SVM model; (2) test stage: for the test image sequence, perform face detection on each frame and convert it into a sequence of grayscale face images; then construct the three-dimensional image matrix and filtering templates, compute the WLD features and generate the WLD-TOP feature vectors; finally feed them into the trained SVM model to obtain the living body face detection result. Building on LBP-TOP, the present invention not only captures the ordering relation between the neighbourhood pixels and the centre pixel, but also quantifies their difference using Weber's law, so that the descriptor is more comprehensive.
Description
Technical field
The present invention relates to the field of face detection research, and in particular to a living body face detection method based on WLD-TOP.
Background technology
Face recognition technology identifies a person by comparing and analysing the biometric features of the face. Over the past few decades face recognition has made remarkable progress, and face recognition products are now used in access control, surveillance of important sites, border control and many other settings. One advantage of face recognition is that it identifies the target automatically, without human supervision, but this also leaves a security risk: if a criminal can easily deceive a face recognition system with a photo or even a video of a legitimate user, serious harm may result, threatening the safety and stability of society.
Common face spoofing attacks include photo attacks and video attacks. A photo attack carries the facial appearance of the user, while a video attack additionally carries the dynamic behaviour of the legitimate user, such as blinking and facial expressions; it is therefore more deceptive and seriously affects the discrimination accuracy of a face recognition system.
Existing living body face detection methods fall mainly into the following categories. First, methods based on texture structure analysis, which discriminate by analysing the imaging differences between a three-dimensional living face and a recaptured face and extracting the relevant texture features. Second, methods based on facial motion analysis: the essential difference between a living face and a recaptured face is that the former is a three-dimensional object while the latter is a two-dimensional planar structure produced by a secondary capture of the face, so the motion effects they generate are entirely different. Third, methods based on liveness characteristic analysis, which examine liveness cues of the face such as thermal infrared images, blinking and lip movement; such methods may require additional detection hardware, which limits their deployment.
The realization of all three approaches relies on a suitable image descriptor, which can greatly improve the accuracy of living body face detection. Since face spoofing techniques are becoming more and more varied, and video-based spoofing in particular carries the dynamic behaviour of a living face (for example, blinking and expression changes can be obtained from a dynamic video of a legitimate user to mount the attack), a descriptor that incorporates both temporal and spatial information is needed to tell them apart.
Summary of the invention
The primary object of the present invention is to overcome the shortcomings and deficiencies of the prior art by providing a living body face detection method based on WLD-TOP, which extracts the WLD descriptor and incorporates the time-axis information of the video frames to form the WLD-TOP (Weber Local Descriptor from Three Orthogonal Planes) descriptor. It fuses the spatial features of the WLD descriptor with the temporal features of the video frames, improving the accuracy of living body face detection.
In order to achieve the above object, the present invention adopts the following technical scheme:
A living body face detection method based on the WLD-TOP descriptor, comprising the following steps:
S1, training stage: read the training set videos, perform face region detection on each frame and convert it into a sequence of grayscale face image frames; construct a three-dimensional image matrix, then construct the filtering templates and compute the WLD features; generate the WLD-TOP feature vectors; finally feed the feature vectors into an SVM classifier for training, so as to build the SVM model;
S2, test stage: for the test image sequence, perform face detection on each frame and convert it into a sequence of grayscale face images; then construct the three-dimensional image matrix and filtering templates, compute the WLD features and generate the WLD-TOP feature vectors; finally feed them into the trained SVM model to obtain the living body face detection result.
Preferably, in step S1, the training set videos are living body face videos, recorded photo faces, replay attacks or printed-picture attacks.
Preferably, in step S1, after the video frames are read in, haar features are extracted and face region detection is performed with the AdaBoost algorithm; the colour face image therein is extracted and converted to a grayscale image of uniform size.
Preferably, in step S1, the method for constructing the three-dimensional image matrix is: set the video frame length R read in at a time and choose the boundary threshold L_T of the T coordinate; the video frame length actually processed as centre pixels is then R_t = R − 2L_T; the grayscale values of this group of video frames are then read into a three-dimensional matrix I(x, y, t) with X, Y and T coordinates.
Preferably, in step S1, the method for constructing the filtering templates and computing the WLD features is:
Choose the X and Y coordinate boundary thresholds L_X and L_Y respectively, determine the sliding filter template length p of the WLD descriptor, and form a p×p filtering template. On the three orthogonal planes XY, XT and YT of I, use the p×p template of the WLD method to compute the differential excitation ξ and the direction gradient Φ_t of each centre pixel after removing the boundary thresholds. The computation is as follows: suppose the centre point being computed is x_c and its eight neighbouring points are x_i, i = 0, …, p²−2. Define the differential excitation ξ'(x_c) = arctan( Σ_{i=0}^{p²−2} (x_i − x_c)/x_c ). Let v1 = x_5 − x_1 and v2 = x_7 − x_3, take θ = arctan(v1/v2), and define θ' = arctan2(v1, v2) + π, so that θ' ∈ [0, 2π). With S the dimension of the direction gradient feature, Φ_t = mod( floor( θ'/(2π/S) + 1/2 ), S ), t = 0, 1, …, S−1. The above steps yield the WLD descriptor {ξ'(x_c), Φ_t}.
Preferably, in step S1, the WLD-TOP computation process is: first normalize ξ'(x_c) as ξ(x_c) = floor( N·(ξ'(x_c) + π/2)/π ), so that ξ(x_c) takes the N integer values 0 to N−1; Φ_t is normalized to S directions represented by the integers 0 to S−1. Taking the XY plane as an example, the two-dimensional WLD histogram {ξ(x_c), Φ_t} is reduced in dimension: fix Φ_t and compute the corresponding sub-histogram of ξ(x_c); according to the S dimensions of Φ_t this gives S groups of sub-histograms, which are concatenated in increasing order of Φ_t. Define f(x, y) = N × Φ_t + ξ(x_c); then the histogram of the XY plane is h_{i,XY} = Σ_{x,y} M{f(x, y) = i}, i = 0, 1, …, N·S − 1, where M{A} = 1 if A is true and 0 otherwise, forming the N·S-dimensional WLD histogram H_XY. The same method is used to obtain the histograms of the three orthogonal planes (n = 0: XY, n = 1: XT, n = 2: YT), h_{i,n} = Σ_{x,y,t} M{f(x, y, t) = i}, i = 0, 1, …, N·S − 1; they are converted into N·S-dimensional row vectors H_n and concatenated in order, generating the 3·N·S-dimensional WLD-TOP feature row vector H_WT = [H_0 H_1 H_2].
Preferably, in step S1, the SVM classifier uses an SVM implementation based on LIBSVM. All k feature vectors obtained from the training set are stacked into the training feature matrix H_train; training is performed with the SVM, and the trained model is used to classify the test feature matrix H_test containing j feature vectors, producing a label indicating whether each sample is a living face; comparing the predicted labels with the ground-truth labels gives the accuracy of the liveness discrimination.
Preferably, in step S1, the SVM classifier is trained on the input training sample set using cross-validation, and a grid search is used to find the optimal SVM parameter set {C, γ}.
Compared with the prior art, the present invention has the following advantages and beneficial effects:
1. The WLD-TOP method proposed by the present invention applies Weber's law on top of LBP-TOP: it not only captures the ordering relation between the neighbourhood pixels and the centre pixel, but also quantifies their difference and uses this difference, together with the direction gradient feature, as part of the descriptor, so the descriptor is more comprehensive.
2. The present invention extends the WLD descriptor to three dimensions, adding time-axis information and fusing temporal and spatial information, which improves detection accuracy against video attacks that carry dynamic behaviour.
3. The present invention simplifies the mathematical procedure by which the traditional WLD computes the direction gradient and the histogram: the proposed computation only changes the ordering of the feature vector elements and, without altering the values of the traditional WLD feature vector elements, constructs the same features with a more intuitive mathematical expression.
4. The present invention performs training and testing on different data sets: a SYSU living body face detection data set was constructed and combined with the CASIA data set for cross-dataset experiments, improving the generalization performance of WLD-TOP.
Description of the drawings
Fig. 1 is the flow chart of the method of the present invention;
Fig. 2(a)-Fig. 2(c) are example visualizations of a face described by the WLD-TOP of the present invention;
Fig. 3 is a schematic diagram of the WLD differential excitation and direction gradient of the present invention;
Fig. 4 is the histogram of the WLD-TOP descriptor of the present invention;
Fig. 5(a)-Fig. 5(d) are CASIA face region images used for training and testing in the present invention;
Fig. 6(a)-Fig. 6(d) are SYSU face region images used for training and testing in the present invention.
Specific embodiment
The present invention is described in further detail below with reference to the embodiment and the accompanying drawings, but the embodiments of the present invention are not limited thereto.
Embodiment
As shown in Fig. 1, the living body face detection method based on WLD-TOP of the present invention comprises the following steps:
(1) Training stage: read the training set videos, perform face region detection on each frame and convert it into a sequence of grayscale face image frames; construct a three-dimensional image matrix, then construct the filtering templates and compute the WLD features; generate the WLD-TOP feature vectors; finally feed the feature vectors into an SVM classifier for training, so as to build the SVM model.
The details of stage (1) are as follows:
(1.1) Read the training set videos: the training set videos may be living body face videos, or recorded photo faces, replay attacks, printed-picture attacks and the like. The video frames are read in, haar features are extracted and face region detection is performed with the AdaBoost algorithm; the colour face image therein is extracted and converted to a grayscale image of uniform size;
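The following is a minimal sketch of step (1.1), assuming OpenCV's bundled Haar cascade (trained with AdaBoost) as the face detector; the function name, the 62 × 62 crop size and the choice of keeping only the largest detection per frame are illustrative assumptions rather than part of the claimed method.

```python
import cv2

def extract_gray_faces(video_path, size=(62, 62)):
    """Read a video, detect the face region in every frame with a Haar
    cascade, and return same-sized grayscale face crops."""
    cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    faces = []
    cap = cv2.VideoCapture(video_path)
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        boxes = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
        if len(boxes) == 0:
            continue                                   # skip frames with no face
        x, y, w, h = max(boxes, key=lambda b: b[2] * b[3])  # largest detection
        faces.append(cv2.resize(gray[y:y + h, x:x + w], size))
    cap.release()
    return faces                                       # list of 62x62 uint8 crops
```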
(1.2) Construct the three-dimensional image matrix: set the video frame length R read in at a time and choose the boundary threshold L_T of the T coordinate; the video frame length actually processed as centre pixels is then R_t = R − 2L_T. The grayscale values of this group of video frames are read into a three-dimensional matrix I(x, y, t) with X, Y and T coordinates.
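A short sketch of step (1.2), assuming the grayscale face crops produced by the previous sketch; `build_image_cube` is a hypothetical helper name.

```python
import numpy as np

def build_image_cube(gray_faces, L_T=1):
    """Stack R grayscale face frames into a 3-D matrix I(x, y, t).
    With boundary threshold L_T, only R_t = R - 2*L_T frames along the
    T axis can serve as centre pixels."""
    I = np.stack(gray_faces, axis=-1).astype(np.float64)  # shape (X, Y, R)
    R_t = I.shape[-1] - 2 * L_T
    return I, R_t
```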
(1.3) Compute the WLD features: choose the X and Y coordinate boundary thresholds L_X and L_Y respectively, determine the sliding filter template length p of the WLD descriptor, and form a p×p filtering template (in the experiments p = 3, L_X = L_Y = L_T = 1). On the three orthogonal planes XY, XT and YT of I, use the p×p template of the WLD method to compute, for each centre pixel after removing the boundary thresholds, the differential excitation ξ'(x_c) and the direction gradient Φ_t. The computation is as follows: suppose the centre point being computed is x_c and its eight neighbouring points are x_i, i = 0, …, p²−2. Define ξ'(x_c) = arctan( Σ_{i=0}^{p²−2} (x_i − x_c)/x_c ). Let v1 = x_5 − x_1 and v2 = x_7 − x_3, take θ = arctan(v1/v2), and define θ' = arctan2(v1, v2) + π, where θ' ∈ [0, 2π). With S the dimension of the direction gradient feature, Φ_t = mod( floor( θ'/(2π/S) + 1/2 ), S ), t = 0, 1, …, S−1. The above steps yield the WLD descriptor {ξ'(x_c), Φ_t}, as shown in Fig. 3.
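A sketch of the per-plane WLD computation in step (1.3) under the reconstruction above (p = 3). The clockwise neighbour ordering starting at the top-left corner and the small epsilon guarding against division by zero are implementation assumptions.

```python
import numpy as np

def wld_plane(plane, N=256, S=8, eps=1e-6):
    """Quantized WLD differential excitation xi and direction gradient phi
    for every interior pixel of a 2-D plane, using a 3x3 template."""
    plane = plane.astype(np.float64)
    c = plane[1:-1, 1:-1]                              # centre pixels x_c
    # eight neighbours x_0..x_7, ordered clockwise from the top-left corner
    n = [plane[:-2, :-2], plane[:-2, 1:-1], plane[:-2, 2:],
         plane[1:-1, 2:], plane[2:, 2:],    plane[2:, 1:-1],
         plane[2:, :-2],  plane[1:-1, :-2]]
    # differential excitation: xi' = arctan(sum_i (x_i - x_c) / x_c)
    xi_raw = np.arctan(sum(a - c for a in n) / (c + eps))
    xi = np.clip(np.floor((xi_raw + np.pi / 2) / np.pi * N), 0, N - 1).astype(int)
    # direction gradient from v1 = x_5 - x_1 and v2 = x_7 - x_3
    theta = np.arctan2(n[5] - n[1], n[7] - n[3]) + np.pi   # theta' in [0, 2*pi)
    phi = (np.floor(theta / (2 * np.pi / S) + 0.5) % S).astype(int)
    return xi, phi
```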
(1.4) Generate the WLD-TOP feature vectors: first normalize ξ'(x_c) as ξ(x_c) = floor( N·(ξ'(x_c) + π/2)/π ), so that ξ(x_c) takes the N integer values 0 to N−1; Φ_t is normalized to S directions represented by the integers 0 to S−1. Taking the XY plane as an example, the two-dimensional WLD histogram {ξ(x_c), Φ_t} is reduced in dimension: fix Φ_t and compute the corresponding sub-histogram of ξ(x_c); according to the S dimensions of Φ_t this gives S groups of sub-histograms, which are concatenated in increasing order of Φ_t. Define f(x, y) = N × Φ_t + ξ(x_c); then the histogram of the XY plane is h_{i,XY} = Σ_{x,y} M{f(x, y) = i}, i = 0, 1, …, N·S − 1, where M{A} = 1 if A is true and 0 otherwise, forming the N·S-dimensional WLD histogram H_XY. The same method is used to obtain the histograms of the three orthogonal planes (n = 0: XY, n = 1: XT, n = 2: YT), h_{i,n} = Σ_{x,y,t} M{f(x, y, t) = i}, i = 0, 1, …, N·S − 1; they are converted into N·S-dimensional row vectors H_n and concatenated in order, generating the 3·N·S-dimensional WLD-TOP feature row vector H_WT = [H_0 H_1 H_2], as shown in Fig. 4.
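A sketch of step (1.4) that slices the cube into its XY, XT and YT plane stacks and accumulates the joint index f = N·Φ_t + ξ(x_c) into one histogram per plane; it reuses the hypothetical `wld_plane` helper from the previous sketch.

```python
import numpy as np

def wld_top_histogram(I, N=256, S=8, L=1):
    """3*N*S-dimensional WLD-TOP feature row vector of a cube I of shape
    (X, Y, T): the WLD histograms of the XY, XT and YT plane stacks,
    concatenated in that order (H_WT = [H_0 H_1 H_2])."""
    X, Y, T = I.shape
    plane_stacks = [
        [I[:, :, t] for t in range(L, T - L)],   # XY planes (t fixed)
        [I[:, y, :] for y in range(L, Y - L)],   # XT planes (y fixed)
        [I[x, :, :] for x in range(L, X - L)],   # YT planes (x fixed)
    ]
    parts = []
    for planes in plane_stacks:
        hist = np.zeros(N * S)
        for p in planes:
            xi, phi = wld_plane(p, N=N, S=S)
            f = N * phi + xi                     # f = N * Phi_t + xi(x_c)
            hist += np.bincount(f.ravel(), minlength=N * S)
        parts.append(hist)
    return np.concatenate(parts)
```

With N = 256 and S = 8 this yields 3 × 2048 = 6144 dimensions, matching the experiments below.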
(1.5) Feed the feature vectors into the SVM classifier for training: the SVM classifier uses an SVM implementation based on LIBSVM. All k feature vectors obtained from the training set are stacked into the training feature matrix H_train; training is performed with the SVM, and the trained model is used to classify the test feature matrix H_test containing j feature vectors, producing a label indicating whether each sample is a living face; comparing the predicted labels with the ground-truth labels gives the accuracy of the liveness discrimination.
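As an illustration of step (1.5), the sketch below uses scikit-learn's SVC, which wraps LIBSVM internally, instead of the svmtrain command-line tool; the function name and the RBF kernel default are assumptions.

```python
import numpy as np
from sklearn.svm import SVC

def train_and_evaluate(H_train, y_train, H_test, y_test, C=1.0, gamma='scale'):
    """Train an SVM on the k x (3*N*S) training feature matrix and report the
    liveness-discrimination accuracy on the j-sample test feature matrix."""
    clf = SVC(kernel='rbf', C=C, gamma=gamma)
    clf.fit(H_train, y_train)                 # y: 1 = living face, 0 = spoof
    pred = clf.predict(H_test)
    return clf, float(np.mean(pred == y_test))
```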
(1.6) Build the SVM model: the SVM classifier is trained on the input training sample set using cross-validation, and a grid search is used to find the optimal SVM parameter set {C, γ}.
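A sketch of the cross-validated grid search over {C, γ} in step (1.6); the exponential grid values and 5-fold cross-validation are illustrative choices, not taken from the patent.

```python
from sklearn.model_selection import GridSearchCV
from sklearn.svm import SVC

def grid_search_svm(H_train, y_train):
    """Find the optimal SVM parameter set {C, gamma} by grid search with
    cross-validation on the training sample set."""
    param_grid = {'C': [2.0 ** k for k in range(-5, 11, 2)],
                  'gamma': [2.0 ** k for k in range(-15, 4, 2)]}
    search = GridSearchCV(SVC(kernel='rbf'), param_grid, cv=5)
    search.fit(H_train, y_train)
    return search.best_estimator_, search.best_params_
```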
(2) Test stage: for the test image sequence, perform face detection on each frame and convert it into a sequence of grayscale face images; then construct the three-dimensional image matrix and filtering templates, compute the WLD features and generate the WLD-TOP feature vectors; finally feed them into the trained SVM model to obtain the living body face detection result.
In step (2), the methods for constructing the three-dimensional image matrix, computing the WLD features, generating the WLD-TOP feature vectors and feeding the feature vectors into the SVM classifier are the same as in step (1).
The effect of the present invention is illustrated by the following experiments. (1) The experimental data sets are the CASIA living body face detection data set (see Fig. 5(a)-Fig. 5(d); Fig. 5(a) and Fig. 5(b) are a live and a non-live picture of the training set respectively, Fig. 5(c) and Fig. 5(d) are a live and a non-live picture of the test set) and the applicant's self-built SYSU living body face detection data set (see Fig. 6(a)-Fig. 6(d); Fig. 6(a) and Fig. 6(b) are a live and a non-live picture of the training set, Fig. 6(c) and Fig. 6(d) are a live and a non-live picture of the test set). The CASIA data set collects face videos of 50 users, of which 20 are used as the training set and 30 as the test set; for each user, videos of four types were recorded: live, cut-photo attack, hand-warped photo attack and video attack, each type having 3 videos at low, normal and high resolution, so a single user has 3 living face videos and 9 spoof face videos. The SYSU data set collects face videos of 29 users, of which 20 are used for training and 9 for testing; the attack type is replayed face video recorded on an iPhone and an iPad, and each user has 3 capture scenes, so for each scene a single user has a total of 3 living face videos and 6 spoof face videos.
Within-dataset experiments on CASIA and SYSU: for each video of the experimental data set, the face region is extracted from every frame and normalized to grayscale face pictures of the same size (62 × 62 pixels). The sliding filter template length of the WLD descriptor is p = 3 and the boundary thresholds are set to L_X = L_Y = L_T = 1. For each user in the training set, R_t = 3 face video frames are read as one three-dimensional image matrix (example renderings are shown in Fig. 2(a)-Fig. 2(c)). Then, for each orthogonal plane XY, XT and YT, WLD {ξ(x_c), Φ_t} is computed in turn as illustrated in Fig. 3, with differential excitation dimension N = 256 and number of direction gradients S = 8, and the WLD two-dimensional histogram is computed following the flow shown in Fig. 4. The WLD two-dimensional histogram of each orthogonal plane is reduced to 8 one-dimensional sub-histograms according to the different values of Φ_t; stitching them together in order of Φ_t gives the WLD one-dimensional histogram of that plane, a 2048-dimensional feature row vector, and concatenating the one-dimensional histograms of the XY, XT and YT planes in turn gives the 6144-dimensional WLD-TOP feature row vector of that sample. Proceeding in this way, the face frames under the different videos of every user in the training set are all processed to obtain the training feature matrix H_train; the different users of the test set are processed through steps (1)-(2) in the same way to obtain the feature matrix H_test. H_train is trained with the svmtrain function of the LIBSVM software package to obtain the training model; SVM prediction with the model gives the discrimination accuracy. The -t, -c, -g and other parameters of svmtrain are tuned to find the optimal parameters that maximize the accuracy, giving the final discrimination accuracy of WLD-TOP on the CASIA (or SYSU) data set.
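To make the experimental pipeline concrete, the sketch below strings together the hypothetical helpers from the earlier sketches with the settings used here (62 × 62 faces, L = 1, N = 256, S = 8, R_t = 3, i.e. R = R_t + 2L = 5 frames read per cube); how frames are grouped per user is an assumption.

```python
import numpy as np

def video_to_feature(video_path, R=5, L=1, N=256, S=8):
    """Per-video pipeline: face frames -> 3-D cube -> 6144-dim WLD-TOP vector."""
    faces = extract_gray_faces(video_path, size=(62, 62))[:R]
    I, _ = build_image_cube(faces, L_T=L)
    return wld_top_histogram(I, N=N, S=S, L=L)

def dataset_to_matrix(video_paths, labels):
    """Stack per-video WLD-TOP vectors into a feature matrix and label vector."""
    H = np.vstack([video_to_feature(p) for p in video_paths])
    return H, np.asarray(labels)

# Illustrative usage:
# H_train, y_train = dataset_to_matrix(train_videos, train_labels)
# model, best_params = grid_search_svm(H_train, y_train)
# H_test, y_test = dataset_to_matrix(test_videos, test_labels)
# _, accuracy = train_and_evaluate(H_train, y_train, H_test, y_test, **best_params)
```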
Cross-dataset experiments between CASIA and SYSU: in this application, training is done on the CASIA data set and testing on SYSU, and then training on the SYSU data set and testing on CASIA. For each video of the two data sets, the face region is extracted from every frame and normalized to grayscale face pictures of the same size (62 × 62 pixels); the sliding filter template length of the WLD descriptor is p = 3 and the boundary thresholds are set to L_X = L_Y = L_T = 1. For each user in the training set, R_t = 3 face video frames are read as one three-dimensional image matrix; then, for each orthogonal plane XY, XT and YT, WLD {ξ(x_c), Φ_t} is computed in turn as illustrated in Fig. 3, with differential excitation dimension N = 256 and number of direction gradients S = 8, and the WLD two-dimensional histogram is computed following the flow shown in Fig. 4. The WLD two-dimensional histogram of each orthogonal plane is reduced to 8 one-dimensional sub-histograms according to the different values of Φ_t, which are stitched together in order of Φ_t to obtain the WLD one-dimensional histogram; concatenating the one-dimensional histograms of the XY, XT and YT planes in turn gives the 6144-dimensional WLD-TOP feature row vector. Proceeding in this way, the face frames under the different videos of every user in the training set are processed to obtain the training feature matrix H_train, and the different users of the test set are processed through steps (1)-(2) to obtain the feature matrix H_test. H_train is trained with the svmtrain function of the LIBSVM software package to obtain the training model; SVM prediction with the model gives the discrimination accuracy. The -t, -c, -g and other parameters of svmtrain are tuned to find the optimal parameters that maximize the accuracy, giving the final discrimination accuracy of WLD-TOP on the cross-dataset experiments.
In the experiments, the application is also compared with three other methods: WLD, LBP-TOP and LBP. The specific comparison results are shown in Tables 1 to 4:
Table 1: CASIA within-dataset experiment
Method | Accuracy (%) |
---|---|
WLD-TOP | 90.78 |
LBP-TOP | 79.06 |
WLD | 87.08 |
LBP | 78.35 |
Table 2: SYSU within-dataset experiment
Method | Accuracy (%) |
---|---|
WLD-TOP | 93.44 |
LBP-TOP | 81.97 |
WLD | 85.27 |
LBP | 81.42 |
Table 3: CASIA/SYSU cross-dataset experiment
Method | Accuracy (%) |
---|---|
WLD-TOP | 74.62 |
LBP-TOP | 53.96 |
WLD | 64.81 |
LBP | 53.85 |
Table 4: average time for each descriptor to discriminate one video (R_t = 3)
From the experimental results, the liveness detection performance of the WLD-TOP descriptor of the present invention is better than that of the existing LBP-TOP and WLD descriptors. The present invention not only gains some accuracy overall; in real, complex video face attack scenarios its accuracy is significantly higher than that of the LBP-TOP and WLD based descriptors. Since WLD-TOP computes WLD features on three planes, its speed is slightly slower than the other two methods.
The above embodiment is a preferred embodiment of the present invention, but the embodiments of the present invention are not limited by the above embodiment. Any other change, modification, substitution, combination or simplification made without departing from the spirit and principle of the present invention shall be regarded as an equivalent replacement and is included within the protection scope of the present invention.
Claims (6)
1. A living body face detection method based on the WLD-TOP descriptor, characterized in that it comprises the following steps:
S1, training stage: read the training set videos, perform face region detection on each frame and convert it into a sequence of grayscale face image frames; construct a three-dimensional image matrix, then construct the filtering templates and compute the WLD features; generate the WLD-TOP feature vectors; finally feed the feature vectors into an SVM classifier for training, so as to build the SVM model;
S2, test stage: for the test image sequence, perform face region detection on each frame and convert it into a sequence of grayscale face image frames; then construct the three-dimensional image matrix and filtering templates, compute the WLD features and generate the WLD-TOP feature vectors; finally feed them into the trained SVM model to obtain the living body face detection result;
in step S1, the method for constructing the filtering templates and computing the WLD features is: choose the X and Y coordinate boundary thresholds L_X and L_Y respectively, determine the sliding filter template length p of the WLD descriptor, and form a p×p filtering template; on the three orthogonal planes XY, XT and YT of the three-dimensional image matrix I, use the p×p filtering template of the WLD method to compute the differential excitation ξ and direction gradient Φ_t of each centre pixel after removing the boundary thresholds; the computation is as follows: suppose the centre point being computed is x_c and its eight neighbouring points are x_i, i = 0, …, p²−2; define ξ'(x_c) = arctan( Σ_{i=0}^{p²−2} (x_i − x_c)/x_c ); let v1 = x_5 − x_1 and v2 = x_7 − x_3, take θ = arctan(v1/v2), and define θ' = arctan2(v1, v2) + π, where θ' ∈ [0, 2π); with S the dimension of the direction gradient feature, Φ_t = mod( floor( θ'/(2π/S) + 1/2 ), S ), t = 0, 1, …, S−1; the above steps yield the WLD descriptor {ξ'(x_c), Φ_t};
in step S1, the WLD-TOP computation process is: first normalize ξ'(x_c) as ξ(x_c) = floor( N·(ξ'(x_c) + π/2)/π ), so that ξ(x_c) takes the N integer values 0 to N−1; Φ_t is normalized to S directions represented by the integers 0 to S−1; taking the XY plane as an example, the two-dimensional WLD histogram {ξ(x_c), Φ_t} is reduced in dimension: fix Φ_t and compute the corresponding sub-histogram of ξ(x_c), giving S groups of sub-histograms according to the S dimensions of Φ_t, which are concatenated in increasing order of Φ_t; define f(x, y) = N × Φ_t + ξ(x_c); then the histogram of the XY plane is h_{i,XY} = Σ_{x,y} M{f(x, y) = i}, i = 0, 1, …, N·S − 1, where M{A} = 1 if A is true and 0 otherwise, forming the N·S-dimensional WLD histogram H_XY; the same method is used to obtain the histograms of the three orthogonal planes (n = 0: XY, n = 1: XT, n = 2: YT), h_{i,n} = Σ_{x,y,t} M{f(x, y, t) = i}, i = 0, 1, …, N·S − 1; they are converted into N·S-dimensional row vectors H_n and concatenated in order, generating the 3·N·S-dimensional WLD-TOP feature row vector H_WT = [H_0 H_1 H_2].
2. The living body face detection method based on the WLD-TOP descriptor according to claim 1, characterized in that in step S1 the training set videos are living body face videos, recorded photo faces, replay attacks or printed-picture attacks.
3. The living body face detection method based on the WLD-TOP descriptor according to claim 1, characterized in that in step S1, after the video frames are read in, haar features are extracted and face region detection is performed with the AdaBoost algorithm; the colour face image therein is extracted and converted to a grayscale image of uniform size.
4. The living body face detection method based on the WLD-TOP descriptor according to claim 1, characterized in that in step S1 the method for constructing the three-dimensional image matrix is: set the video frame length R read in at a time and choose the boundary threshold L_T of the T coordinate; the video frame length actually processed as centre pixels is R_t = R − 2L_T; the grayscale values of this group of video frames are then read into a three-dimensional image matrix I(x, y, t) with X, Y and T coordinates.
5. The living body face detection method based on the WLD-TOP descriptor according to claim 1, characterized in that in step S1 the SVM classifier uses an SVM implementation based on LIBSVM; all k feature vectors obtained from the training set are stacked into the training feature matrix H_train; training is performed with the SVM, and the trained model is used to classify the test feature matrix H_test containing j feature vectors, producing a label indicating whether each sample is a living face; comparison with the ground-truth labels gives the accuracy of the liveness discrimination.
6. The living body face detection method based on the WLD-TOP descriptor according to claim 1, characterized in that in step S1 the SVM classifier is trained on the input training sample set using cross-validation, and a grid search is used to find the optimal SVM parameter set {C, γ}.
Priority Application (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201510350814.2A | 2015-06-23 | 2015-06-23 | Living body face detection method based on WLD-TOP |
Publications (2)
Publication Number | Publication Date |
---|---|
CN104933414A CN104933414A (en) | 2015-09-23 |
CN104933414B true CN104933414B (en) | 2018-06-05 |
Patent Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
KR20040048753A (en) * | 2002-12-04 | 2004-06-10 | 삼성전자주식회사 | Apparatus and method for distinguishing photograph in face recognition system |
CN101216887A (en) * | 2008-01-04 | 2008-07-09 | 浙江大学 | An automatic computer authentication method for photographic faces and living faces |
Non-Patent Citations (2)
Title |
---|
LBP-TOP based countermeasure against face spoofing attacks; Tiago de Freitas Pereira et al.; ACCV Workshops 2012; 2013-12-31; pp. 1-12 * |
WLD: A Robust Local Image Descriptor; Chu He et al.; IEEE Transactions on Software Engineering; 2014-06-30; pp. 1-17 * |