
CN103268484A - A Classifier Design Method for High Accuracy Face Recognition - Google Patents

A Classifier Design Method for High Accuracy Face Recognition

Info

Publication number
CN103268484A
CN103268484A CN2013102267116A CN201310226711A
Authority
CN
China
Prior art keywords
face image
samples
sample
face
function
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN2013102267116A
Other languages
Chinese (zh)
Inventor
张笑钦
樊明宇
王迪
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Wenzhou University
Original Assignee
Wenzhou University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Wenzhou University filed Critical Wenzhou University
Priority to CN2013102267116A priority Critical patent/CN103268484A/en
Publication of CN103268484A publication Critical patent/CN103268484A/en
Pending legal-status Critical Current

Landscapes

  • Image Analysis (AREA)
  • Image Processing (AREA)

Abstract

The invention provides a design method of a classifier for high-precision face recognition. The design method comprises the following steps: (1) inputting a face image data set in vector form and standardizing it to obtain a face image sample set, wherein the face image data set must contain face image samples of known classes and may contain face image samples of unknown classes; (2) using an L1-minimization algorithm to compute the sparse representation or sparse coding of each face image sample reconstructed from the face image samples other than itself; and (3) building an optimization model for the classifier using the sparse representations or sparse codes of the face image samples and the class information of the classified face image samples, and obtaining the classification function by solving a regularized optimization problem. Because the design method provided by the invention has an explicit expression, its real-time performance in face recognition applications is significantly improved, and high-precision face recognition is achieved even when the image contains a great number of noisy pixels.

Description

Classifier design method for high-precision face recognition
Technical field
The invention belongs to the field of machine vision, and in particular relates to a classifier design method for high-precision face recognition.
Background technology
Face recognition is one of the most important research topics in the field of pattern recognition and remains a very active research direction both in China and abroad. As a typical biometric identification technology, it has particular advantages in terms of usability, naturalness, and cost, and has gained wide acceptance. For example, the People's Bank of China requires that face recognition systems be installed for national treasury security protection, and the Zhongke Aosen face recognition system developed by a team at the Chinese Academy of Sciences was used at the opening ceremony of the Olympic Games.
At present, the best-performing face recognition systems in the world can meet the requirements of normal applications when users are cooperative and the image acquisition conditions are relatively stable. This does not mean that face recognition technology has matured. On the contrary, with the development of closed-circuit television (CCTV) monitoring and the needs of public safety, a growing number of face recognition applications must operate with large-scale face databases, uncontrolled image capture environments, and uncooperative users. Under these conditions, the performance of existing recognition systems degrades rapidly and cannot reach a practical level. The main causes of this phenomenon are: (1) accurate face detection and localization are difficult under uncontrolled environmental conditions; (2) recognition performance drops markedly for face samples not covered by the training set; (3) complex illumination causes drastic changes in image appearance; (4) pose variation of the identified target; (5) low-quality images: face images obtained in special settings (such as anti-terrorism and criminal investigation) are often very poor, showing blur, noise interference, low resolution, missing regions, and occlusion; and (6) the classification of massive, high-dimensional data.
Traditional semi-supervised classification methods can construct a classifier using a small number of samples with class information together with many samples without class information (see M. Belkin, P. Niyogi, and V. Sindhwani, Manifold Regularization: A Geometric Framework for Learning from Labeled and Unlabeled Examples, Journal of Machine Learning Research, 2006, 7:2399-2434). When using such a classifier, the user only needs to specify a few positive and negative samples; many samples without class information are then obtained by random sampling and used in the construction of the classifier, which can significantly improve its performance. However, existing classification techniques of this kind rest on a manifold assumption about the data distribution, and the validity of this assumption on face recognition data sets has not been fully verified experimentally. Recently, face recognition algorithms based on sparse representation have shown excellent stability and accuracy, as well as good robustness to image noise (see J. Wright, A. Yang, A. Ganesh, S. S. Sastry, Y. Ma, Robust Face Recognition via Sparse Representation, IEEE Transactions on Pattern Analysis and Machine Intelligence 31 (2009): 210-227, and J. Wright, Y. Ma, J. Mairal, G. Sapiro, T. Huang, S. Yan, Sparse Representation for Computer Vision and Pattern Recognition, submitted to the Proceedings of the IEEE (2009)). However, previous sparse-representation algorithms are all offline methods that perform a complex optimization for every data point, and therefore cannot meet the real-time requirements of visual information recognition.
Content of the invention
The invention provides a classifier design method for high-precision face recognition. Its purpose is to solve the problems that existing face recognition techniques cannot simultaneously make good use of the geometric information of classified and unclassified face data, and that existing sparse-representation face recognition algorithms have no explicit expression and therefore cannot perform online face recognition in real time.
The technical solution adopted by the present invention is:
A classifier design method for high-precision face recognition, characterized by comprising the following steps:
(1) Input a face image data set in vector form and standardize each face image sample to obtain a face image sample set; the face image data set must contain face image samples of known classes and may contain face image samples of unknown classes;
(2) Use an L1-norm minimization algorithm to compute, for each face image sample, the sparse representation or sparse coding that reconstructs it from the face image samples other than itself;
(3) Build an optimization model for the classifier using the sparse representations or sparse codes of the face image samples and the class information of the classified face image samples, and obtain the classification function f(x) by solving a regularized optimization problem;
(4) For a face image with arbitrary unknown class information, first convert the face image to vector form by the same method as in step (1) and standardize it, then classify it using the explicit classifier g(x) obtained from the classification function f(x), where

$g(x) = \arg\max_{i=1,\ldots,C} \{f_i(x)\}$

and C is the number of classes in the face image data set.
Further, step (1) includes the following sub-steps:
(2.1) For each face image sample, convert its corresponding digital image matrix into a column vector composed of the pixel values of the image, by uniformly stacking either the rows or the columns of pixels, and normalize the sample vector to unit norm;
(2.2) For a data set composed of multiple face images with known class information, the previous step yields a $D \times (l+u)$ matrix $X = [x_1, \ldots, x_l, x_{l+1}, \ldots, x_{l+u}]$ composed of the image sample vectors, where $D$ is the number of pixels of a single image in the set, $l$ is the number of classified samples ($l > 0$), $u$ is the number of unclassified samples ($u \geq 0$), and $x_i$ is the sample vector of an image.
Further, step (2) is realized by the following steps:
Let $\alpha_i = (\alpha_{1i}, \ldots, \alpha_{i-1,i}, 0, \alpha_{i+1,i}, \ldots, \alpha_{u+l,i})^T$ be the reconstruction coefficient vector to be solved, corresponding to sample $x_i$ being sparsely represented by the other samples, and compute $\alpha_i^*$ by the following L1-norm minimization:

$\alpha_i^* = \arg\min_{\alpha_i, e_i} \{\|\alpha_i\|_1 + \|e_i\|_1\}, \quad \text{s.t. } x_i = X\alpha_i + I e_i,$   (1)

where $I$ is the identity matrix. Denote by $\alpha_i^*$ the reconstruction coefficient vector of sample $x_i$ computed by formula (1); then the matrix formed by the sparse representation coefficient vectors of the training samples can be written as $A = [\alpha_1^*, \alpha_2^*, \ldots, \alpha_{u+l}^*]^T$.
Further, step (3) includes the following sub-steps:
(3.1) Assume that the face recognition classification function has the general expression

$f(x) = \sum_{i=1}^{u+l} b_i k_\sigma(x, x_i) + b e,$   (2)

where $b_i = (b_{1i}, \ldots, b_{Ci})^T$ is a coefficient vector to be solved, $b$ is the bias to be solved, $e = (1, \ldots, 1)^T$ is a C-dimensional column vector, $k_\sigma(x, y)$ is a kernel function, and $\sigma$ is a parameter of the kernel function;

(3.2) Denote the classification function coefficient matrix to be solved as $B = [b_1, b_2, \ldots, b_{u+l}]$, and set up the following optimization problem for $B$:

$B^* = \arg\min_{B \in \mathbb{R}^{C \times (u+l)}} \left\{ \frac{1}{l}\sum_{i=1}^{l} (f(x_i) - y_i)^2 + \gamma_A \|f\|_K^2 + \gamma_S \|f\|_S^2 \right\}$   (3)

where $\|f\|_K^2$ is the complexity measure of the function $f(x)$ in the function space, $\|f\|_S^2$ is the error incurred when the function $f(x)$ preserves the sparse representation of the training samples, and $\gamma_A$ and $\gamma_S$ are given positive real numbers. The solution for the discriminant function coefficients is $B^*$; the bias $b^*$ is then computed, and the classification function is finally obtained as $f(x) = \sum_{i=1}^{u+l} b_i^* k_\sigma(x, x_i) + b^* e$.
The present invention has the following advantages:
(1) Because the present invention uses sparse coding to describe the relations between samples, it does not require the manual selection of a neighborhood-size parameter, unlike previous manifold learning algorithms;
(2) The present invention computes reconstruction coefficients from the sparse representations of the digital image training samples and defines a sparse-reconstruction-preserving regularization term for the classifier within the regularization framework, so that the discriminative information contained in the sparse representation of the face image data is used more effectively by our algorithm;
(3) Compared with previous classifier methods based on sparse representation, our invention has an explicit expression, so its real-time performance in face recognition applications is significantly improved, and high-precision face recognition can be achieved even when the image contains many noisy pixels.
Brief description of the drawings
Fig. 1 is a flow chart of the classifier design method for high-precision face recognition of the present invention;
Fig. 2 shows some sample images from the YaleB face recognition database;
Fig. 3 is a comparison chart of experimental results;
Fig. 4a shows face images without noise pollution;
Fig. 4b shows face image samples after pollution by noise with variance 0.05σ;
Fig. 4c shows face image samples after pollution by noise with variance 0.1σ.
Embodiment
The technical solution of the invention and the details involved will now be described with reference to the accompanying drawings. It should be noted that the described embodiment is intended merely to facilitate understanding of the invention and does not limit it in any way.
As shown in Fig. 1, the invention provides a classifier design method for high-precision face recognition, comprising the following steps:
(1) Input the face image data and standardize it. The face image data must include face image samples of known identities and may include a large number of face image samples of unknown identities;
Specifically:
Input data: $\{(x_i, y_i)\}_{i=1}^{l} \cup \{x_{l+j}\}_{j=1}^{u}$.
Any sample $x_i$ or $x_{l+j}$ is data in column-vector form converted from a digital image, and $y_i$ is the class information of sample $x_i$. Here $l$ (which cannot be 0) is the number of classified samples and $u$ (which may be 0) is the number of unclassified samples. The concrete forms of the sample data $x_i$ and the corresponding class vector $y_i$ are explained as follows:
For a digital face image with resolution m × n, the image contains m × n pixels, and each pixel carries 3 numerical values representing color (R red, G green, B blue). A vector is formed by extracting each column of the image matrix in turn (from left to right, in the order R, G, B) and tiling all columns end to end, so a color digital image sample of size m × n is converted into a column vector with 3 × m × n elements. Each stacked sample $x_i$ is then standardized to unit norm, i.e. reassigned as $x_i \leftarrow x_i / \|x_i\|_2$. Hence every sample point $x_i$ used by our classifier is a column vector with 3 × m × n elements and norm 1. Denote by D = 3 × m × n the input dimension of the data.
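As an illustration of this step, the following is a minimal NumPy sketch of the column-wise stacking and unit-norm standardization described above; the function names and the exact handling of the colour channels are assumptions made for illustration, not part of the patent text.

```python
import numpy as np

def image_to_sample(img):
    """Convert an m x n x 3 colour image into a unit-norm column vector.

    Columns of the image are stacked end to end (left to right), keeping the
    R, G, B values of each pixel together, giving D = 3 * m * n elements.
    """
    x = img.transpose(1, 0, 2).reshape(-1).astype(float)  # column-wise stacking
    norm = np.linalg.norm(x)
    if norm > 0:
        x /= norm                                          # x <- x / ||x||_2
    return x

def build_data_matrix(images):
    """Stack the sample vectors as columns of the D x (l+u) matrix X."""
    return np.column_stack([image_to_sample(img) for img in images])
```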
Assume that the number of classes in the face image data is C, i.e., the data include face images of C known identities. When sample $x_i$ belongs to the p-th class, the class information $y_i = (0, \ldots, 0, 1, 0, \ldots, 0)^T$ is a column vector with C elements whose p-th element is 1 and whose remaining elements are all 0.
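A short sketch of how such a class vector might be built (the helper name is hypothetical):

```python
import numpy as np

def class_vector(p, C):
    """One-hot class vector y_i for a sample of the p-th class (1-indexed)."""
    y = np.zeros(C)
    y[p - 1] = 1.0
    return y
```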
(2) Use an L1-norm minimization algorithm to compute, for each face image sample, the sparse representation or sparse coding that reconstructs it from the face image samples other than itself;
Specifically:
For each sample $x_i$, let $D_i = [x_1, \ldots, x_{i-1}, 0, x_{i+1}, \ldots, x_{u+l}, I]$, where $I$ is the identity matrix of order D, so that $D_i$ is a data matrix with D rows and (u+l+D) columns; $\alpha_i = (\alpha_{1i}, \ldots, \alpha_{i-1,i}, 0, \alpha_{i+1,i}, \ldots, \alpha_{u+l,i})^T$ is the sparse linear representation of $x_i$ to be solved. Let the vector to be solved be $\theta_i = (\alpha_{1i}, \ldots, \alpha_{i-1,i}, 0, \alpha_{i+1,i}, \ldots, \alpha_{u+l,i}, e_{1i}, \ldots, e_{Di})^T$; then the L1-norm minimization problem for the sparse representation of sample $x_i$ is formulated as

$\theta_i^* = \arg\min_{\theta_i} \|\theta_i\|_1, \quad \text{s.t. } x_i = D_i \theta_i$

(for the solution method see E. Candes and J. Romberg, L1-magic: Recovery of sparse signals via convex programming, http://www.acm.caltech.edu/11magic/, 2005). The first u+l elements of the solution $\theta_i^*$ form the sparse representation of sample $x_i$, $\alpha_i^* = (\alpha_{1i}^*, \ldots, \alpha_{i-1,i}^*, 0, \alpha_{i+1,i}^*, \ldots, \alpha_{u+l,i}^*)^T$; denote the matrix $A = [\alpha_1^*, \alpha_2^*, \ldots, \alpha_{u+l}^*]^T$.
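The patent refers to the l1-magic solver cited above for this step; the sketch below illustrates the same per-sample L1-minimization using CVXPY as an assumed stand-in solver, with the i-th coefficient constrained to zero so a sample is never represented by itself.

```python
import numpy as np
import cvxpy as cp

def sparse_representation_matrix(X):
    """Compute the (u+l) x (u+l) matrix A of sparse representation coefficients.

    X is the D x (u+l) matrix of unit-norm sample vectors. For each sample x_i
    the problem  min ||alpha||_1 + ||e||_1  s.t.  x_i = X alpha + e  is solved
    with alpha_i forced to 0, so row i of A is the vector alpha_i^*.
    """
    D, n = X.shape
    A = np.zeros((n, n))
    for i in range(n):
        alpha = cp.Variable(n)
        e = cp.Variable(D)
        prob = cp.Problem(cp.Minimize(cp.norm1(alpha) + cp.norm1(e)),
                          [X @ alpha + e == X[:, i], alpha[i] == 0])
        prob.solve()
        A[i, :] = alpha.value
    return A
```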
(3) Use the sparse representations of the training data and the class information of the classified training data to build the optimization model for the classifier, and obtain the classification function f(x) by solving a regularized optimization problem;
Specifically:
The face image data set with class information (i.e. identity information), $\{(x_i, y_i)\}_{i=1}^{l} \cup \{x_{l+j}\}_{j=1}^{u}$, is obtained by step (1), and the sparse representation vectors of the training sample data, $\{\alpha_i^*\}_{i=1}^{u+l}$, are obtained by step (2). The following process constructs the discriminant function f(x) using these sample data and sparse representations.
Assume that our discriminant function has the general expression

$f(x) = \sum_{i=1}^{u+l} b_i k_\sigma(x, x_i) + b e,$

where $b_i = (b_{1i}, \ldots, b_{Ci})^T$ is a coefficient vector to be solved, $b$ is the bias to be solved, $e = (1, \ldots, 1)^T$ is a C-dimensional column vector, $k_\sigma(x_1, x_2) = \exp(-\|x_1 - x_2\|^2/\sigma^2)$ is a given Gaussian kernel function, and $\sigma$ is an empirically chosen parameter of the kernel function whose specific value can also be selected by the well-known cross-validation method. Denote by $B = [b_1, \ldots, b_{u+l}]$ the classifier coefficient matrix to be solved.
In order to use the face samples with class information and, at the same time, the sparse representation information of the face samples without class information to construct a classifier with high recognition performance, we propose solving the following problem to construct the classifier:
$B^* = \arg\min_{B \in \mathbb{R}^{C \times (u+l)}} \left\{ \frac{1}{l}\sum_{i=1}^{l} (f(x_i) - y_i)^2 + \gamma_A \|f\|_K^2 + \gamma_S \|f\|_S^2 \right\}$   (1)

where $\|f\|_K^2$ is the complexity measure of the function f(x) in the reproducing kernel Hilbert space, $\|f\|_S^2$ is the error measure incurred when the function f(x) preserves the sparse representation of the training samples, and $\gamma_A$ and $\gamma_S$ are positive real numbers given empirically, whose specific values can also be selected by the well-known cross-validation method.
Here K is the (l+u) × (l+u) matrix whose element in row i, column j is $k_\sigma(x_i, x_j)$; $A = [\alpha_1^*, \alpha_2^*, \ldots, \alpha_{u+l}^*]^T$ is the sparse representation matrix; $Y = [y_1, \ldots, y_l, 0, \ldots, 0]$ is the class information matrix with C rows and (u+l) columns; and $J = \mathrm{diag}(1, \ldots, 1, 0, \ldots, 0)$ is the diagonal matrix of size (u+l) × (u+l) whose first l diagonal elements are 1 and whose remaining elements are 0. We define the sparse-coding-preserving error measure as
$\|f\|_S^2 = \frac{1}{(u+l)^2} \sum_{i=1}^{u+l} \left\| f(x_i) - \sum_{j=1}^{u+l} \alpha_{ji}^* f(x_j) \right\|^2 = \frac{1}{(u+l)^2}\,\mathrm{tr}\left(BK(I-A)^T(I-A)KB^T\right).$   (2)

According to the theory of reproducing kernel Hilbert spaces, the complexity measure of the function in the space is

$\|f\|_K^2 = \mathrm{tr}(BKB^T).$   (3)
The optimization problem solved for our discriminant function, formula (1), is then converted into the following matrix-form problem:

$B^* = \arg\min_{B \in \mathbb{R}^{C \times (u+l)}} \left\{ \frac{1}{l}\,\mathrm{tr}\left((Y - BKJ)(Y - BKJ)^T\right) + \gamma_A\,\mathrm{tr}(BKB^T) + \frac{\gamma_S}{(u+l)^2}\,\mathrm{tr}\left(BK(I-A)^T(I-A)KB^T\right) \right\}$   (4)

Problem (4) can be solved by taking the partial derivative of the objective function with respect to B and setting the derivative equal to 0, which yields the least-squares solution of optimization problem (4):

$B^* = Y\left(KJ + \gamma_A\, l\, I + \frac{\gamma_S\, l}{(u+l)^2}\, K(I-A)^T(I-A)\right)^{-1}$   (5)
The bias $b^*$ is then computed, where the Frobenius norm of a matrix is obtained by first summing the squares of all matrix elements and then taking the square root. The discriminant function finally obtained is expressed as

$f(x) = \sum_{i=1}^{u+l} b_i^* k_\sigma(x, x_i) + b^* e.$
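As an illustration, the following is a minimal NumPy sketch of this training step under the Gaussian kernel given above and the closed-form solution of formula (5). The bias b* is passed in as an optional argument rather than computed, because its formula appears only as an image in the original text; all function and variable names are assumptions for illustration.

```python
import numpy as np

def gaussian_kernel(X, Z, sigma):
    """Kernel matrix with entries k_sigma(x, z) = exp(-||x - z||^2 / sigma^2)."""
    d2 = (np.sum(X**2, axis=0)[:, None] + np.sum(Z**2, axis=0)[None, :]
          - 2.0 * X.T @ Z)
    return np.exp(-d2 / sigma**2)

def train_classifier(X, Y, A, l, gamma_A, gamma_S, sigma):
    """Closed-form solution of formula (5) for the coefficient matrix B*.

    X : D x (u+l) training matrix whose first l columns are the labeled samples.
    Y : C x (u+l) class-information matrix (zero columns for unlabeled samples).
    A : (u+l) x (u+l) sparse representation matrix from step (2).
    """
    n = X.shape[1]
    K = gaussian_kernel(X, X, sigma)
    J = np.diag([1.0] * l + [0.0] * (n - l))
    IA = np.eye(n) - A
    M = K @ J + gamma_A * l * np.eye(n) + (gamma_S * l / n**2) * K @ IA.T @ IA
    B = Y @ np.linalg.inv(M)                               # formula (5)
    return B, K

def discriminant(B, X_train, x, sigma, b=None):
    """Evaluate f(x) = sum_i b_i* k_sigma(x, x_i) + b* e for one test vector x."""
    k = gaussian_kernel(X_train, x[:, None], sigma)[:, 0]
    f = B @ k
    if b is not None:
        f = f + b          # bias b*; its formula is given only as an image above
    return f
```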
(4) For a face image sample with arbitrary unknown identity information, first convert the image to vector form by the same method as the first step and standardize it, then classify it using the explicit classifier g(x) obtained from the function f(x).
Specifically:
After the discriminant function f(x) is obtained, for a face image sample x with arbitrary unknown identity information, first convert it to vector form according to the method of the first step, then normalize the sample vector to unit norm, $x \leftarrow x/\|x\|_2$. Substitute the sample x into the discriminant function f(x); the output of the discriminant function is a multi-valued vector, written as $f(x) = (f_1(x), \ldots, f_C(x))^T$. The class g(x) of x is then determined as

$g(x) = \arg\max_{i=1,\ldots,C} \{f_i(x)\}.$
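Reusing the discriminant sketch above, classification of a new sample might look as follows (again an illustrative sketch under the same assumptions, not the patent's reference implementation):

```python
import numpy as np

def classify(B, X_train, x, sigma, b=None):
    """Return g(x) = argmax_i f_i(x) as a 1-indexed class label."""
    x = x / np.linalg.norm(x)                   # unit-norm standardization
    f = discriminant(B, X_train, x, sigma, b)   # from the training sketch above
    return int(np.argmax(f)) + 1
```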
A dynamic vision system using the method of the invention can be widely applied wherever face recognition is needed, such as intelligent transportation, the military, customs, banks, hotels, enterprises, government department entrances, and other places requiring automatic face recognition. Especially when the quality of the acquired face images is low, its recognition performance is markedly better than that of similar algorithms.
We used the YaleB face recognition database. As shown in Fig. 2, the database contains 2114 face photographs of 38 people; each photograph is a 32 × 32 grayscale image taken under different illumination conditions, and each person corresponds to roughly 60 face photographs. We randomly selected 15% of the data from the database as test data unseen during training; in the remaining 85% of training data, we gave class information (i.e. identity information) to m samples in each class, and the rest served as training data without class information. The specific classifier parameters of the invention were set to, but are not limited to, σ = 0.5 in the kernel function $k_\sigma(x_1, x_2) = \exp(-\|x_1 - x_2\|^2/\sigma^2)$, and regularization parameters $\gamma_A = 0.005$ and $\gamma_S = 0.01$.
Fig. 3 shows the experimental results of the invention and the world's leading face recognition algorithms on this database, where the abscissa is the number of face images with class information in each class and the ordinate is the algorithm's accuracy on the test data unseen during the training stage. In the figure, our algorithm is denoted S-RSLC; 1-NN denotes the common nearest-neighbor classification method for face recognition; GFHF is the method in X. Zhu, Z. Ghahramani, J. Lafferty, Semi-supervised learning using Gaussian fields and harmonic functions, in: The 20th International Conference on Machine Learning (ICML), 2003, pp. 912-919; SRC is the method in J. Wright, Y. Ma, J. Mairal, G. Sapiro, T. Huang, S. Yan, Sparse Representation for Computer Vision and Pattern Recognition, submitted to the Proceedings of the IEEE (2009); and LapRLSC is the method in M. Belkin, P. Niyogi, and V. Sindhwani, Manifold Regularization: A Geometric Framework for Learning from Labeled and Unlabeled Examples, Journal of Machine Learning Research, 2006, 7:2399-2434.
To verify the algorithm's stability against image pixel noise, we added independent, identically distributed Gaussian noise to every pixel of every image in the database. We generated two noisy databases with two kinds of noise: one database was produced by adding zero-mean Gaussian noise with variance 0.05σ to each pixel, and the other by adding zero-mean Gaussian noise with variance 0.1σ, where σ is the variance of the distances between all samples. Fig. 4a shows face images without noise pollution, Fig. 4b shows face image samples after pollution by noise with variance 0.05σ, and Fig. 4c shows face image samples after pollution by noise with variance 0.1σ. It can be seen that a person can hardly recognize the noisy face images. However, the results given in Tables 1 and 2 show that our method can recognize the identities in noisy face images very stably, which is significant for the face recognition problem.
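A sketch of how such noise could be generated, assuming σ is taken as the variance of the pairwise distances between the vectorized samples (SciPy's pdist is used here as an assumed way to compute those distances):

```python
import numpy as np
from scipy.spatial.distance import pdist

def add_pixel_noise(X, level, rng=None):
    """Add i.i.d. zero-mean Gaussian noise to every pixel of every sample.

    level : 0.05 or 0.1 as in the experiment; the noise variance is
            level * sigma, where sigma is the variance of the pairwise
            distances between all samples.
    X     : D x N matrix of vectorized images (one sample per column).
    """
    rng = np.random.default_rng() if rng is None else rng
    sigma = np.var(pdist(X.T))
    return X + rng.normal(0.0, np.sqrt(level * sigma), size=X.shape)
```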
Table 1: Face recognition performance of all compared algorithms on the noise-contaminated data sets (classification results on the samples without class information in the training set).
Table 2: Face recognition performance of all compared algorithms on the noise-contaminated data sets (recognition results on the test data unseen during the training stage).
The above is only an embodiment of the present invention, but the protection scope of the present invention is not limited thereto. Any change or replacement that a person familiar with the art can readily conceive within the technical scope disclosed herein shall be covered by the scope of the present invention. Therefore, the protection scope of the present invention shall be defined by the protection scope of the claims.

Claims (4)

1. A classifier design method for high-precision face recognition, characterized by comprising the following steps:
(1) inputting a face image data set in vector form and standardizing each face image sample to obtain a face image sample set, wherein the face image data set must contain face image samples of known classes and may contain face image samples of unknown classes;
(2) using an L1-norm minimization algorithm to compute, for each face image sample, the sparse representation or sparse coding that reconstructs it from the face image samples other than itself;
(3) building an optimization model for the classifier using the sparse representations or sparse codes of the face image samples and the class information of the classified face image samples, and obtaining the classification function $f(x)$ by solving a regularized optimization problem;
(4) for any face image with unknown class information, first converting the face image to vector form by the same method as in step (1) and standardizing it, then classifying it using the explicit classifier $g(x)$ obtained from the classification function $f(x)$, wherein $g(x) = \arg\max_{i=1,\ldots,C} \{f_i(x)\}$ and $C$ is the number of classes in the face image data set.
2. The classifier design method for high-precision face recognition according to claim 1, characterized in that step (1) comprises the following sub-steps:
(2.1) for each face image sample, converting its corresponding digital image matrix into a column vector composed of the pixel values of the image, by uniformly stacking either the rows or the columns of pixels, and normalizing the sample vector to unit norm;
(2.2) for a data set composed of multiple face images with known class information, obtaining through the previous step a $D \times (l+u)$ matrix $X = [x_1, \ldots, x_l, x_{l+1}, \ldots, x_{l+u}]$ composed of the image sample vectors, wherein $D$ is the number of pixels of a single image in the set, $l$ is the number of classified samples, $l > 0$; $u$ is the number of unclassified samples, $u \geq 0$; and $x_i$ is the sample vector of an image.
3. The classifier design method for high-precision face recognition according to claim 2, characterized in that step (2) is realized by the following steps:
letting $\alpha_i = (\alpha_{1i}, \ldots, \alpha_{i-1,i}, 0, \alpha_{i+1,i}, \ldots, \alpha_{u+l,i})^T$ be the reconstruction coefficient vector to be solved, corresponding to sample $x_i$ being sparsely represented by the other samples, and computing $\alpha_i^*$ by the following L1-norm minimization:
$\alpha_i^* = \arg\min_{\alpha_i, e_i} \{\|\alpha_i\|_1 + \|e_i\|_1\}, \quad \text{s.t. } x_i = X\alpha_i + I e_i,$   (1)
wherein $I$ is the identity matrix; denoting by $\alpha_i^*$ the reconstruction coefficient vector of sample $x_i$ computed by formula (1), the matrix formed by the sparse representation coefficient vectors of the training samples can be written as $A = [\alpha_1^*, \alpha_2^*, \ldots, \alpha_{u+l}^*]^T$.
4. The classifier design method for high-precision face recognition according to claim 3, characterized in that step (3) comprises the following sub-steps:
(3.1) assuming that the face recognition classification function has the general expression
$f(x) = \sum_{i=1}^{u+l} b_i k_\sigma(x, x_i) + b e,$   (2)
wherein $b_i = (b_{1i}, \ldots, b_{Ci})^T$ is the coefficient vector to be solved, $b$ is the bias to be solved, $e = (1, \ldots, 1)^T$ is a $C$-dimensional column vector, $k_\sigma(x, y)$ is a kernel function, and $\sigma$ is a parameter of the kernel function;
(3.2) denoting the classification function coefficient matrix to be solved as $B = [b_1, b_2, \ldots, b_{u+l}]$, and establishing the following optimization problem to solve for $B$:
$B^* = \arg\min_{B \in \mathbb{R}^{C \times (u+l)}} \left\{ \frac{1}{l}\sum_{i=1}^{l} (f(x_i) - y_i)^2 + \gamma_A \|f\|_K^2 + \gamma_S \|f\|_S^2 \right\}$   (3)
wherein $\|f\|_K^2$ is the complexity measure of the function $f(x)$ in the function space, $\|f\|_S^2$ is the error incurred when the function $f(x)$ preserves the sparse representation of the training samples, and $\gamma_A$ and $\gamma_S$ are given positive real numbers; obtaining the solution for the discriminant function coefficients $B^*$, then computing the bias $b^*$, and finally obtaining the classification function $f(x) = \sum_{i=1}^{u+l} b_i^* k_\sigma(x, x_i) + b^* e$.
CN2013102267116A 2013-06-06 2013-06-06 A Classifier Design Method for High Accuracy Face Recognition Pending CN103268484A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN2013102267116A CN103268484A (en) 2013-06-06 2013-06-06 A Classifier Design Method for High Accuracy Face Recognition

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN2013102267116A CN103268484A (en) 2013-06-06 2013-06-06 A Classifier Design Method for High Accuracy Face Recognition

Publications (1)

Publication Number Publication Date
CN103268484A true CN103268484A (en) 2013-08-28

Family

ID=49012111

Family Applications (1)

Application Number Title Priority Date Filing Date
CN2013102267116A Pending CN103268484A (en) 2013-06-06 2013-06-06 A Classifier Design Method for High Accuracy Face Recognition

Country Status (1)

Country Link
CN (1) CN103268484A (en)

Cited By (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103632138A (en) * 2013-11-20 2014-03-12 南京信息工程大学 Low-rank partitioning sparse representation human face identifying method
CN105069424A (en) * 2014-08-01 2015-11-18 TCL Corporation Quick recognition system and method for face
CN106127246A (en) * 2016-06-23 2016-11-16 金维度信息科技(北京)有限公司 The information fusion of a kind of general related sparse space of matrices and adaptive matching method
CN109214307A (en) * 2018-08-15 2019-01-15 长安大学 Low resolution illumination robust human face recognition methods based on probability rarefaction representation
CN111667400A (en) * 2020-05-30 2020-09-15 温州大学大数据与信息技术研究院 Human face contour feature stylization generation method based on unsupervised learning
CN112368708A (en) * 2018-07-02 2021-02-12 斯托瓦斯医学研究所 Facial image recognition using pseudo-images
CN112417986A (en) * 2020-10-30 2021-02-26 四川天翼网络服务有限公司 Semi-supervised online face recognition method and system based on deep neural network model
CN113111780A (en) * 2021-04-13 2021-07-13 谢爱菊 Regional alarm monitoring system and method based on block chain

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20110081053A1 (en) * 2009-10-02 2011-04-07 Qualcomm Incorporated Methods and systems for occlusion tolerant face recognition
CN102129570A (en) * 2010-01-19 2011-07-20 中国科学院自动化研究所 Method for designing manifold based regularization based semi-supervised classifier for dynamic vision
CN102799870A (en) * 2012-07-13 2012-11-28 复旦大学 Single-training sample face recognition method based on blocking consistency LBP (Local Binary Pattern) and sparse coding

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20110081053A1 (en) * 2009-10-02 2011-04-07 Qualcomm Incorporated Methods and systems for occlusion tolerant face recognition
CN102129570A (en) * 2010-01-19 2011-07-20 中国科学院自动化研究所 Method for designing manifold based regularization based semi-supervised classifier for dynamic vision
CN102799870A (en) * 2012-07-13 2012-11-28 复旦大学 Single-training sample face recognition method based on blocking consistency LBP (Local Binary Pattern) and sparse coding

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
MINGYU FAN et al.: "Sparse regularization for semi-supervised classification", 《PATTERN RECOGNITION》 *
MINGYU FAN et al.: "Sparse regularization for semi-supervised classification", 《PATTERN RECOGNITION》, 22 February 2011 (2011-02-22), pages 1777 - 1782 *
WEI LI et al.: "Occlusion Handling with l1-Regularized Sparse Reconstruction", 《ACCV》 *

Cited By (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103632138A (en) * 2013-11-20 2014-03-12 南京信息工程大学 Low-rank partitioning sparse representation human face identifying method
CN103632138B (en) * 2013-11-20 2016-09-28 南京信息工程大学 A kind of face identification method of low-rank piecemeal rarefaction representation
CN105069424A (en) * 2014-08-01 2015-11-18 Tcl集团股份有限公司 Quick recognition system and method for face
CN106127246A (en) * 2016-06-23 2016-11-16 金维度信息科技(北京)有限公司 The information fusion of a kind of general related sparse space of matrices and adaptive matching method
CN106127246B (en) * 2016-06-23 2021-05-28 金维度信息科技(北京)有限公司 Information fusion and self-adaptive matching method for general correlation sparse matrix space
CN112368708A (en) * 2018-07-02 2021-02-12 斯托瓦斯医学研究所 Facial image recognition using pseudo-images
CN112368708B (en) * 2018-07-02 2024-04-30 斯托瓦斯医学研究所 Facial image recognition using pseudo-images
CN109214307A (en) * 2018-08-15 2019-01-15 长安大学 Low resolution illumination robust human face recognition methods based on probability rarefaction representation
CN111667400A (en) * 2020-05-30 2020-09-15 温州大学大数据与信息技术研究院 Human face contour feature stylization generation method based on unsupervised learning
CN112417986A (en) * 2020-10-30 2021-02-26 四川天翼网络服务有限公司 Semi-supervised online face recognition method and system based on deep neural network model
CN112417986B (en) * 2020-10-30 2023-03-10 四川天翼网络股份有限公司 Semi-supervised online face recognition method and system based on deep neural network model
CN113111780A (en) * 2021-04-13 2021-07-13 谢爱菊 Regional alarm monitoring system and method based on block chain
CN113111780B (en) * 2021-04-13 2023-03-28 苏州鱼得水电气科技有限公司 Regional alarm supervision system and method based on block chain

Similar Documents

Publication Publication Date Title
Zhu et al. A deep learning approach to patch-based image inpainting forensics
CN103268484A (en) A Classifier Design Method for High Accuracy Face Recognition
Chen et al. Convolutional neural network based dem super resolution
CN108537742A (en) A kind of panchromatic sharpening method of remote sensing images based on generation confrontation network
CN105138973A (en) Face authentication method and device
CN103699874B (en) Crowd abnormal behavior identification method based on SURF (Speed-Up Robust Feature) stream and LLE (Locally Linear Embedding) sparse representation
CN111738044A (en) A school violence assessment method based on deep learning behavior recognition
CN107977661A (en) The region of interest area detecting method decomposed based on full convolutional neural networks and low-rank sparse
CN117292274B (en) Hyperspectral wetland image classification method based on zero-shot deep semantic dictionary learning
CN118799619A (en) A method for batch recognition and automatic classification and archiving of image content
CN110210321A (en) Deficient sample face recognition method based on multi-dimentional scale converting network Yu divided group method
Fu et al. Personality trait detection based on ASM localization and deep learning
CN111652265A (en) A Robust Semi-Supervised Sparse Feature Selection Method Based on Self-Adjusting Graphs
Yao [Retracted] Application of Higher Education Management in Colleges and Universities by Deep Learning
CN105787045A (en) Precision enhancing method for visual media semantic indexing
CN106203368A (en) A kind of traffic video frequency vehicle recognition methods based on SRC and SVM assembled classifier
CN117409206B (en) Small sample image segmentation method based on self-adaptive prototype aggregation network
CN107085700A (en) A Face Recognition Method Based on the Combination of Sparse Representation and Single Hidden Layer Neural Network Technology
CN106960225A (en) A kind of sparse image classification method supervised based on low-rank
Wu et al. Research and application of Lasso regression model based on prior coefficient framework
CN117218494A (en) Abnormal flow detection method based on loop generation countermeasure network and multi-head self-attention mechanism
You et al. A novel trajectory-vlad based action recognition algorithm for video analysis
Xi et al. Identifying local useful information for attribute graph anomaly detection
Lv et al. Non-local sparse attention based swin transformer V2 for image super-resolution
Lei et al. Student action recognition based on multiple features

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C02 Deemed withdrawal of patent application after publication (patent law 2001)
WD01 Invention patent application deemed withdrawn after publication

Application publication date: 20130828