
CN108681725A - A kind of weighting sparse representation face identification method - Google Patents

A kind of weighting sparse representation face identification method Download PDF

Info

Publication number
CN108681725A
CN108681725A
Authority
CN
China
Prior art keywords
face image
formula
training
tested
face
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201810549661.8A
Other languages
Chinese (zh)
Inventor
王林
邓芳娟
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Xian University of Technology
Original Assignee
Xian University of Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Xian University of Technology filed Critical Xian University of Technology
Priority to CN201810549661.8A priority Critical patent/CN108681725A/en
Publication of CN108681725A publication Critical patent/CN108681725A/en
Pending legal-status Critical Current

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V40/172 Classification, e.g. identification
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/21 Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214 Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/21 Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/217 Validation; Performance evaluation; Active pattern learning techniques
    • G06F18/2193 Validation; Performance evaluation; Active pattern learning techniques based on specific statistical tests
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/40 Extraction of image or video features
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/40 Extraction of image or video features
    • G06V10/513 Sparse representations

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • General Physics & Mathematics (AREA)
  • Evolutionary Computation (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Evolutionary Biology (AREA)
  • Artificial Intelligence (AREA)
  • General Engineering & Computer Science (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Multimedia (AREA)
  • Probability & Statistics with Applications (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Human Computer Interaction (AREA)
  • Image Analysis (AREA)
  • Image Processing (AREA)

Abstract

The invention discloses a weighted sparse representation face recognition method, which proceeds as follows. Step 1: input the training face images and obtain the dictionary matrix A. Step 2: apply principal component analysis to the dictionary matrix A and the test face image y for feature dimensionality reduction, and normalize the columns so that A and y have unit l2 norm. Step 3: compute the training face image weights w_{i,j} with a Gaussian kernel function. Step 4: introduce the weights w_{i,j} from step 3 and construct the weighted training dictionary matrix A'. Step 5: solve for the sparse coefficients x and obtain the reconstructed test face image y*. Step 6: from the reconstruction y* of step 5, compute the residual of the test face image with respect to each class. Step 7: output the class of the test face image y as determined by formula (15), thereby realizing face recognition. The method improves the recognition accuracy and robustness of face recognition under large intra-class variations such as pose and expression.

Description

A Weighted Sparse Representation Face Recognition Method

Technical Field

The invention belongs to the technical field of image processing and pattern recognition, and in particular relates to a weighted sparse representation face recognition method.

Background Art

As one of the hot research topics in computer vision and pattern recognition, face recognition technology is widely used in ID card information systems, bank surveillance, customs entry-exit inspection, tracking of criminal suspects, campus security, access control systems and other fields because of its strong adaptability, high security and intelligent interactivity, and it has broad application prospects.

A face recognition system usually comprises three parts: face detection, feature extraction and the recognition algorithm. Traditional face recognition research has focused on feature extraction and recognition algorithms, and has produced a number of classic methods such as principal component analysis, linear discriminant analysis, elastic matching and neural networks. Many real-world signals share the general property of sparsity. In face recognition, if the image samples of each class are sufficient, the samples of a class span a face subspace, and every image of that class can be linearly represented or approximated by this subspace. Based on this idea, Wright et al. argued in 2009 that sparsity exists not only within an image but also across image patterns, and proposed the Sparse Representation Classification (SRC) face recognition method: exploiting the linear correlation of face images of the same class, all training samples are assembled into an over-complete dictionary, each test sample is sparsely reconstructed over this dictionary, and classification is finally carried out according to the sparse reconstruction error. This method effectively alleviates the poor robustness of face recognition. Subsequently, a series of studies based on SRC made considerable progress, notably on optimization strategies for sparse reconstruction, the construction of over-complete dictionaries, and the combination of SRC with other algorithms. Although SRC has achieved excellent results in face recognition and attracted extensive research, some technical difficulties remain. First, face images are collected in uncontrolled natural environments and often contain intra-class variations in pose, illumination and expression; in addition, the SRC algorithm is time-consuming and cannot meet the real-time requirements of practical applications. How to obtain good recognition results efficiently in face recognition problems with large intra-class variations has therefore become a central concern of face recognition research.

Summary of the Invention

The purpose of the present invention is to provide a weighted sparse representation face recognition method, which solves the problem that existing face recognition methods perform poorly when face images exhibit large intra-class variations such as pose and expression.

The technical solution adopted by the present invention is a weighted sparse representation face recognition method, carried out according to the following steps:

Step 1, input the training face images and obtain the dictionary matrix A;

Step 2, apply principal component analysis to the dictionary matrix A and the test face image y for feature dimensionality reduction, and normalize the columns so that the dictionary matrix A and the test face image y have unit l2 norm;

Step 3, use a Gaussian kernel function to compute the distance, i.e. similarity, between each training face image and the test face image y, which gives the training face image weight w_{i,j};

Step 4, introduce the training face image weights w_{i,j} from step 3 and construct the weighted training dictionary matrix A':

A' = [w_{1,1}v_{1,1}, w_{1,2}v_{1,2}, …, w_{k,n_k}v_{k,n_k}]  (4)

In formula (4), w_{k,n_k}v_{k,n_k} denotes the n_k-th image of the k-th class of samples after weighting;

Step 5, solve for the sparse coefficients and obtain the reconstructed test face image;

Step 6, from the reconstructed test face image y* obtained in step 5, compute the residual of the test face image with respect to each class of persons:

r_i(y) = ||y - y*||_2,  i = 1, 2, …, k  (14);

Step 7, output: determine the class of the test face image y and compare it with the classes of the training face images; when the two agree, face recognition is achieved;

The class of the test face image y is given by:

identity(y) = argmin_i r_i(y)  (15).

The present invention is further characterized as follows.

In step 1, the dictionary matrix A is obtained according to the following steps:

Assume there are k classes of training face images and that class i consists of n_i training face images, so that there are N = n_1 + n_2 + … + n_k training face images in total;

Let each face image have w×h pixels and stack it into a column vector v of dimension m = w*h; then v_{i,j} ∈ R^m denotes the j-th training face image of the i-th class, where m is the dimension of the feature vector;

All column vectors of the i-th class of training face images are combined into the sample set A_i = [v_{i,1}, v_{i,2}, …, v_{i,n_i}] ∈ R^{m×n_i}, and the sample sets A_i of the k classes are then combined to obtain the dictionary matrix A:

A = [A_1, A_2, …, A_k] ∈ R^{m×N}  (1).

In step 2, the dimensionality reduction processes all original training face images and the test face image into 384-dimensional feature vectors.

In step 3, the training face image weight w_{i,j} is computed by formula (2):

w_{i,j} = exp(-||y - v_{i,j}||_2^2 / (2σ^2))  (2)

In formula (2), v_{i,j} denotes the j-th training face image of the i-th class, y denotes the test face image, and σ is the width parameter of the Gaussian kernel function, taken as the average Euclidean distance between all training face images, i.e.:

σ = (1/M) Σ ||v_p - v_q||_2  (3)

where the sum runs over all pairs of training face images and, in formula (3), M is the number of Euclidean distances between all samples.

In step 5, the reconstructed test face image is obtained according to the following steps:

Step 5.1, formulate the l0 minimization problem:

min ||x||_0  subject to  Ax = y  (5)

In formula (5), x is the vector of sparse coefficients;

Step 5.2, the dual augmented Lagrange multiplier method is used to solve the l1 relaxation of formula (5); the corresponding Lagrange multiplier (augmented Lagrangian) function is:

L_μ(x, γ) = ||x||_1 + γ^T(y - Ax) + (μ/2)||y - Ax||_2^2  (6)

In formula (6), μ > 0 is a constant representing the compensation factor that converts the equality constraint into an unconstrained problem, and γ is the Lagrange multiplier vector to be determined;

If γ* is a Lagrange multiplier vector satisfying the second-order sufficient conditions of the optimization problem, then, for a sufficiently large compensation factor μ, the sparse coefficient optimization problem can be solved through formula (7), i.e.:

x = argmin_x L_μ(x, γ*)  (7)

From formula (7), solving for the sparse coefficients x requires the values of the Lagrange multiplier vector γ* and the compensation factor μ, so the values of x and γ are computed simultaneously by an iterative method, i.e.:

x_{l+1} = argmin_x L_{μ_l}(x, γ_l),  γ_{l+1} = γ_l + μ_l(y - Ax_{l+1})  (8)

In formula (8), {μ_l} is a positive monotonically increasing sequence and l denotes the iteration index.

Step 5.3, in order to reconstruct the test face image y accurately, the ALM algorithm is applied to the dual problem, giving the DALM algorithm; formula (5) is then transformed into formula (9):

max_{y*}  y^T y*  subject to  A^T y* ∈ B_1^∞  (9)

In formula (9), the region in which the sparse coefficients x take their values is B_1^∞ = {x : ||x||_∞ ≤ 1}; the problem in Lagrangian form corresponding to formula (9) can then be expressed as:

min_{y*, z ∈ B_1^∞}  -y^T y* - x^T(z - A^T y*) + (β/2)||z - A^T y*||_2^2  (10)

In formula (10), β is a constant greater than zero, acting as the compensation factor that converts the constraint into an equality term, and z is the sparse coefficient obtained during the reconstruction process;

Step 5.4, a stepwise iterative update is used to solve for the initialization problem x and the dual problem variables y* and z. Let x = x_l and y* = y_l; from this, z_l is updated to z_{l+1}, i.e.:

z_{l+1} = P_{B_1^∞}(A^T y*_l + x_l / β)  (11)

In formula (11), P_{B_1^∞} is the operator projecting onto B_1^∞. Once x = x_l and y* = y_l are fixed, y* can be computed from the following equation, i.e.:

βAA^T y* = βAz_{l+1} - (Ax_l - y)  (12)

The DALM algorithm can then be expressed as:

z_{l+1} = P_{B_1^∞}(A^T y*_l + x_l / β),
βAA^T y*_{l+1} = βAz_{l+1} - (Ax_l - y),
x_{l+1} = x_l - β(z_{l+1} - A^T y*_{l+1})  (13)

The reconstructed test face image y* is obtained through formula (13).

The beneficial effects of the present invention are:

(1) The weighted sparse representation face recognition method of the present invention combines a weighted training dictionary with the DALM algorithm into the sparse representation classification algorithm WSRC_DALM. The weighted training dictionary describes the differences between all training face images and the test face image under large intra-class variations, thereby improving the recognition accuracy and robustness of the face recognition algorithm under large intra-class variations such as pose and expression;

(2) The DALM algorithm adopted by the weighted sparse representation face recognition method of the present invention effectively reduces the time complexity of the WSRC algorithm, achieves accurate reconstruction of the test samples, and yields a robust recognition result.

Brief Description of the Drawings

Fig. 1 is a flow chart of the weighted sparse representation face recognition method of the present invention;

Fig. 2 shows some of the face images of the FEI face database used in an embodiment of the weighted sparse representation face recognition method of the present invention.

Detailed Description of the Embodiments

The present invention is described in detail below with reference to the accompanying drawings and specific embodiments.

The weighted sparse representation face recognition method of the present invention is carried out according to the following steps:

Step 1, input the training face images and obtain the dictionary matrix A:

Assume there are k classes of training face images and that class i consists of n_i training face images, so that there are N = n_1 + n_2 + … + n_k training face images in total;

Let each face image have w×h pixels (w is the width, h is the height) and stack it into a column vector v of dimension m = w*h; then v_{i,j} ∈ R^m denotes the j-th training face image of the i-th class, where m is the dimension of the feature vector;

All column vectors of the i-th class of training face images are combined into the sample set A_i = [v_{i,1}, v_{i,2}, …, v_{i,n_i}] ∈ R^{m×n_i}, and the sample sets A_i of the k classes are then combined to obtain the dictionary matrix A:

A = [A_1, A_2, …, A_k] ∈ R^{m×N}  (1);

Step 2, apply principal component analysis to the dictionary matrix A and the test face image y for feature dimensionality reduction, and normalize the columns so that the dictionary matrix A and the test face image y have unit l2 norm. The dimensionality reduction adopted here processes all original training face images and the test face image into 384-dimensional feature vectors, whereas the traditional WSRC algorithm mainly rescales the images.
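
A sketch of this step is given below; the use of an SVD-based PCA fitted on the training columns, and the small constant added to avoid division by zero, are implementation assumptions not specified in the patent.

```python
import numpy as np

def pca_project(A, y, dim=384):
    """Reduce the dictionary columns and the test vector to `dim` principal components (step 2)."""
    mean = A.mean(axis=1, keepdims=True)
    U, _, _ = np.linalg.svd(A - mean, full_matrices=False)   # left singular vectors give the PCA basis
    P = U[:, :dim]                                            # m x dim projection matrix
    return P.T @ (A - mean), (P.T @ (y.reshape(-1, 1) - mean)).ravel()

def normalize_columns(A, y):
    """Scale the dictionary columns and the test vector to unit l2 norm."""
    A = A / (np.linalg.norm(A, axis=0, keepdims=True) + 1e-12)
    return A, y / (np.linalg.norm(y) + 1e-12)
```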

Step 3, use a Gaussian kernel function to compute the distance between each training face image and the test face image y, i.e. the training face image weight w_{i,j}:

w_{i,j} = exp(-||y - v_{i,j}||_2^2 / (2σ^2))  (2)

In formula (2), v_{i,j} denotes the j-th training face image of the i-th class, y denotes the test face image, and σ is the width parameter of the Gaussian kernel function, taken as the average Euclidean distance between all training face images, i.e.:

σ = (1/M) Σ ||v_p - v_q||_2  (3)

where the sum runs over all pairs of training face images and, in formula (3), M is the number of Euclidean distances between all samples;
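
The weights of formulas (2) and (3) can be computed as in the sketch below; the explicit Gaussian form exp(-||y - v||_2^2 / (2σ^2)) follows the standard kernel definition and is an assumption about the exact expression used.

```python
import numpy as np
from itertools import combinations

def gaussian_weights(A, y):
    """Formulas (2)-(3): Gaussian-kernel weight of each training column with respect to y."""
    n = A.shape[1]
    # sigma = mean Euclidean distance over all pairs of training images (formula (3))
    sigma = np.mean([np.linalg.norm(A[:, p] - A[:, q]) for p, q in combinations(range(n), 2)])
    diff = A - y[:, None]
    return np.exp(-np.sum(diff ** 2, axis=0) / (2.0 * sigma ** 2))   # formula (2)
```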

Step 4, introduce the training face image weights w_{i,j} from step 3 and construct the weighted training dictionary matrix A':

A' = [w_{1,1}v_{1,1}, w_{1,2}v_{1,2}, …, w_{k,n_k}v_{k,n_k}]  (4)

In formula (4), w_{k,n_k}v_{k,n_k} denotes the n_k-th image of the k-th class of samples after weighting;
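
Given those weights, building A' in formula (4) amounts to a per-column scaling; a one-line sketch:

```python
import numpy as np

def weighted_dictionary(A, w):
    """Formula (4): scale each training column v_{i,j} by its weight w_{i,j}."""
    return A * np.asarray(w)[None, :]
```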

Step 5, solve for the sparse coefficients and obtain the reconstructed test face image.

Step 5.1, formulate the l0 minimization problem:

min ||x||_0  subject to  Ax = y  (5)

In formula (5), x is the vector of sparse coefficients;

Step 5.2, since the l0-norm problem is NP-hard, it is usually relaxed to the convex l1-norm problem min ||x||_1 subject to Ax = y, which is solved with the dual augmented Lagrange multiplier method; the Lagrange multiplier (augmented Lagrangian) function corresponding to formula (5) is then:

L_μ(x, γ) = ||x||_1 + γ^T(y - Ax) + (μ/2)||y - Ax||_2^2  (6)

In formula (6), μ > 0 is a constant representing the compensation factor that converts the equality constraint into an unconstrained problem, and γ is the Lagrange multiplier vector to be determined;

If γ* is a Lagrange multiplier vector satisfying the second-order sufficient conditions of the optimization problem, then, for a sufficiently large compensation factor μ, the sparse coefficient optimization problem can be solved through formula (7), i.e.:

x = argmin_x L_μ(x, γ*)  (7)

From formula (7), solving for the sparse coefficients x requires the values of the Lagrange multiplier vector γ* and the compensation factor μ, so the values of x and γ are computed simultaneously by an iterative method, i.e.:

x_{l+1} = argmin_x L_{μ_l}(x, γ_l),  γ_{l+1} = γ_l + μ_l(y - Ax_{l+1})  (8)

In formula (8), {μ_l} is a positive monotonically increasing sequence and l denotes the iteration index.

Step 5.3, in order to reconstruct the test face image y accurately, the ALM algorithm is applied to the dual problem, giving the DALM algorithm; formula (5) is then transformed into formula (9):

max_{y*}  y^T y*  subject to  A^T y* ∈ B_1^∞  (9)

In formula (9), the region in which the sparse coefficients x take their values is B_1^∞ = {x : ||x||_∞ ≤ 1}; the problem in Lagrangian form corresponding to formula (9) can then be expressed as:

min_{y*, z ∈ B_1^∞}  -y^T y* - x^T(z - A^T y*) + (β/2)||z - A^T y*||_2^2  (10)

In formula (10), β is a constant greater than zero, acting as the compensation factor that converts the constraint into an equality term, and z is the sparse coefficient obtained during the reconstruction process;
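
For readers checking the dual form in formula (9), the following short derivation applies the standard Lagrangian-duality argument to the equality-constrained l1 problem; it is a supporting sketch, not text from the patent.

```latex
% Dual of  min_x ||x||_1  s.t.  Ax = y  (standard argument, assumed here)
\begin{aligned}
\min_{x:\,Ax=y} \|x\|_1
  &= \min_{x}\,\max_{y^{*}} \Big( \|x\|_1 + (y^{*})^{\top}(y - Ax) \Big) \\
  &= \max_{y^{*}} \Big( y^{\top}y^{*} + \min_{x}\big( \|x\|_1 - (A^{\top}y^{*})^{\top}x \big) \Big) \\
  &= \max_{y^{*}} \; y^{\top}y^{*} \quad \text{s.t.} \quad \|A^{\top}y^{*}\|_{\infty} \le 1 ,
\end{aligned}
% because the inner minimum equals 0 when A^T y* lies in B_1^infty and -infinity otherwise,
% which is exactly the constraint set of formula (9).
```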

Step 5.4, a stepwise iterative update is used to solve for the initialization problem x and the dual problem variables y* and z. Let x = x_l and y* = y_l; from this, z_l is updated to z_{l+1}, i.e.:

z_{l+1} = P_{B_1^∞}(A^T y*_l + x_l / β)  (11)

In formula (11), P_{B_1^∞} is the operator projecting onto B_1^∞. Once x = x_l and y* = y_l are fixed, y* can be computed from the following equation, i.e.:

βAA^T y* = βAz_{l+1} - (Ax_l - y)  (12)

The DALM algorithm can then be expressed as:

z_{l+1} = P_{B_1^∞}(A^T y*_l + x_l / β),
βAA^T y*_{l+1} = βAz_{l+1} - (Ax_l - y),
x_{l+1} = x_l - β(z_{l+1} - A^T y*_{l+1})  (13)

Formula (13) accurately yields the reconstructed test face image y*, and the convergence of this dual algorithm is guaranteed.
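
A compact NumPy sketch of iterations (11)-(13) is shown below; the penalty value beta, the iteration cap and the stopping tolerance are illustrative choices, not parameters given in the patent.

```python
import numpy as np

def dalm_l1(A, y, beta=1.0, iters=500, tol=1e-6):
    """DALM iterations (11)-(13) for  min ||x||_1  s.t.  Ax = y."""
    m, n = A.shape
    x = np.zeros(n)                       # primal sparse coefficients (multiplier of the dual problem)
    y_star = np.zeros(m)                  # dual variable y*
    G = beta * (A @ A.T)                  # fixed left-hand matrix of the y* update, formula (12)
    for _ in range(iters):
        z = np.clip(A.T @ y_star + x / beta, -1.0, 1.0)             # (11): projection onto B_1^inf
        y_star = np.linalg.solve(G, beta * (A @ z) - (A @ x - y))   # (12)
        x_next = x - beta * (z - A.T @ y_star)                      # (13): multiplier update
        if np.linalg.norm(x_next - x) <= tol * max(1.0, np.linalg.norm(x)):
            x = x_next
            break
        x = x_next
    return x
```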

Step 6, from the reconstructed test face image y* obtained in step 5, compute the residual of the test face image with respect to each class of persons:

r_i(y) = ||y - y*||_2,  i = 1, 2, …, k  (14);

Step 7, output: determine the class of the test face image y and compare it with the classes of the training face images; when the two agree, i.e. when it is determined to which of the k classes of training face images the test face image y belongs, face recognition is achieved;

The class of the test face image y is given by:

identity(y) = argmin_i r_i(y)  (15).
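
A sketch of the decision stage is given below. The patent states the residual against the reconstruction y*; this sketch assumes, as in standard SRC, that each class's reconstruction keeps only the coefficients of that class.

```python
import numpy as np

def classify(A, labels, x, y):
    """Formulas (14)-(15): per-class residuals and the argmin identity decision."""
    labels = np.asarray(labels)
    classes = np.unique(labels)
    residuals = []
    for c in classes:
        mask = labels == c
        y_star = A[:, mask] @ x[mask]                   # class-c reconstruction of the test image
        residuals.append(np.linalg.norm(y - y_star))    # (14): r_i(y) = ||y - y*||_2
    return classes[int(np.argmin(residuals))]           # (15): identity(y) = argmin_i r_i(y)
```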

The intra-class variation of face images refers to the differences between images of the same person taken from different viewpoints. The training dictionary (dictionary matrix) is the collection of face images describing all training samples under pose disturbance, and each column of the matrix describes an image of the same class of person.

The training face images of the same person are of the same size and differ only in pose deflection, without the influence of factors such as illumination or occlusion.

The simulation setup and results of the embodiment are as follows:

The face images used in the experiments of this embodiment come from the FEI face database, which contains 2800 color images of 200 people, the images of each person covering variations in pose and illumination. In the embodiment of the present invention, 100 people are randomly selected from the FEI face database, 11 images with different poses are selected for each person, and the size of each image is 480×640.

As shown in Fig. 2, the simulation experiment first converts all images to grayscale; then 7 images of each person are randomly selected to construct the training dictionary matrix A, with the remaining images serving as test face images. The PCA method is then used to reduce the training dictionary matrix and the test samples to 384 dimensions, the weight of each training sample with respect to the test sample is computed, and the weighted training dictionary matrix is constructed. Next, the sparse coefficients of the test image are solved according to the l1-norm minimization method of implementation step 5 (the l1_ls method) and step 6 (the DALM method), and finally the class is decided by computing the residual r_i(y) between the original test image y and the reconstructed test image y'. The software platform of the simulation experiment is MATLAB 7.0.
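
The experimental pipeline described above can be assembled from the earlier sketches; the helper names reused here (build_dictionary, pca_project and so on) are the hypothetical ones introduced in the previous code blocks, not functions defined by the patent.

```python
import numpy as np

def wsrc_dalm_predict(train_images, labels, test_image):
    """End-to-end WSRC_DALM prediction for one test image, reusing the sketches above."""
    A, labels = build_dictionary(train_images, labels)   # step 1: dictionary matrix A
    y = np.asarray(test_image, dtype=np.float64).reshape(-1)
    A, y = pca_project(A, y, dim=384)                    # step 2: PCA to 384 dimensions
    A, y = normalize_columns(A, y)                       #         unit l2 columns
    w = gaussian_weights(A, y)                           # step 3: Gaussian-kernel weights
    A_w = weighted_dictionary(A, w)                      # step 4: weighted dictionary A'
    x = dalm_l1(A_w, y)                                  # step 5: sparse coefficients via DALM
    return classify(A_w, labels, x, y)                   # steps 6-7: residuals and decision
```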

The experiments of the simulation embodiment compare the recognition performance and robustness of the improved weighted sparse representation face recognition method of the present invention with the classical weighted sparse representation method; the experimental results are shown in Table 1.

As can be seen from Table 1, the improved weighted sparse representation face recognition method of the present invention raises the recognition rate of the classical weighted sparse representation method by about 15%, and at the same time the recognition effect for large pose variations is very significant, so the method has good prospects for popularization and application.

The weighted sparse representation face recognition method of the present invention combines a weighted training dictionary with the DALM algorithm into the sparse representation classification algorithm WSRC_DALM. The weighted training dictionary describes the differences between all training face images and the test face image under large intra-class variations, thereby improving the recognition accuracy and robustness of the face recognition algorithm under large intra-class variations such as pose and expression. The DALM algorithm adopted effectively reduces the time complexity of the WSRC algorithm, achieves accurate reconstruction of the test samples, and yields a robust recognition result.

Claims (5)

1. A weighted sparse representation face recognition method, characterized in that it is carried out according to the following steps:
Step 1, input the training face images and obtain the dictionary matrix A;
Step 2, apply principal component analysis to the dictionary matrix A and the test face image y for feature dimensionality reduction, and normalize the columns so that the dictionary matrix A and the test face image y have unit l2 norm;
Step 3, use a Gaussian kernel function to compute the distance between each training face image processed in step 2 and the test face image y, i.e. the training face image weight w_{i,j};
Step 4, introduce the training face image weights w_{i,j} from step 3 and construct the weighted training dictionary matrix A':
A' = [w_{1,1}v_{1,1}, w_{1,2}v_{1,2}, …, w_{k,n_k}v_{k,n_k}]  (4)
In formula (4), w_{k,n_k}v_{k,n_k} denotes the n_k-th image of the k-th class of samples after weighting;
Step 5, solve for the sparse coefficients and obtain the reconstructed test face image;
Step 6, from the reconstructed test face image y* obtained in step 5, compute the residual of the test face image with respect to each class of persons:
r_i(y) = ||y - y*||_2,  i = 1, 2, …, k  (14);
Step 7, output: determine the class of the test face image y and compare it with the classes of the training face images; when the two agree, face recognition is achieved;
The class of the test face image y is given by:
identity(y) = argmin_i r_i(y)  (15).

2. The weighted sparse representation face recognition method according to claim 1, characterized in that in step 1 the dictionary matrix A is obtained according to the following steps:
Assume there are k classes of training face images and that class i consists of n_i training face images, so that there are N = n_1 + n_2 + … + n_k training face images in total;
Let each face image have w×h pixels and stack it into a column vector v of dimension m = w*h; then v_{i,j} ∈ R^m denotes the j-th training face image of the i-th class, where m is the dimension of the feature vector;
All column vectors of the i-th class of training face images are combined into the sample set A_i = [v_{i,1}, v_{i,2}, …, v_{i,n_i}] ∈ R^{m×n_i}, and the sample sets A_i of the k classes are combined to obtain the dictionary matrix A:
A = [A_1, A_2, …, A_k] ∈ R^{m×N}  (1).

3. The weighted sparse representation face recognition method according to claim 1, characterized in that in step 2 the dimensionality reduction processes all original training face images and the test face image into 384-dimensional feature vectors.

4. The weighted sparse representation face recognition method according to claim 1, characterized in that in step 3 the training face image weight w_{i,j} is computed by formula (2):
w_{i,j} = exp(-||y - v_{i,j}||_2^2 / (2σ^2))  (2)
In formula (2), v_{i,j} denotes the j-th training face image of the i-th class, y denotes the test face image, and σ is the width parameter of the Gaussian kernel function, taken as the average Euclidean distance between all training face images, i.e.:
σ = (1/M) Σ ||v_p - v_q||_2  (3)
In formula (3), M is the number of Euclidean distances between all samples.

5. The weighted sparse representation face recognition method according to claim 1, characterized in that in step 5 the reconstructed test face image is obtained according to the following steps:
Step 5.1, formulate the l0 minimization problem:
min ||x||_0  subject to  Ax = y  (5)
In formula (5), x is the vector of sparse coefficients;
Step 5.2, the dual augmented Lagrange multiplier method is used to solve the l1 relaxation of formula (5); the Lagrange multiplier function corresponding to formula (5) is:
L_μ(x, γ) = ||x||_1 + γ^T(y - Ax) + (μ/2)||y - Ax||_2^2  (6)
In formula (6), μ > 0 is a constant representing the compensation factor that converts the equality constraint into an unconstrained problem, and γ is the Lagrange multiplier vector to be determined;
If γ* is a Lagrange multiplier vector satisfying the second-order sufficient conditions of the optimization problem, then, for a sufficiently large compensation factor μ, the sparse coefficient x optimization problem can be solved through formula (7), i.e.:
x = argmin_x L_μ(x, γ*)  (7)
From formula (7), solving for the sparse coefficients x requires the values of the Lagrange multiplier vector γ* and the compensation factor μ, so the values of x and γ are computed simultaneously by an iterative method, i.e.:
x_{l+1} = argmin_x L_{μ_l}(x, γ_l),  γ_{l+1} = γ_l + μ_l(y - Ax_{l+1})  (8)
In formula (8), {μ_l} is a positive monotonically increasing sequence and l denotes the iteration index;
Step 5.3, in order to reconstruct the test face image y accurately, the ALM algorithm is applied to the dual problem, giving the DALM algorithm; formula (5) is then transformed into formula (9):
max_{y*}  y^T y*  subject to  A^T y* ∈ B_1^∞  (9)
In formula (9), the region in which the sparse coefficients x take their values is B_1^∞ = {x : ||x||_∞ ≤ 1}; the problem in Lagrangian form corresponding to formula (9) can then be expressed as:
min_{y*, z ∈ B_1^∞}  -y^T y* - x^T(z - A^T y*) + (β/2)||z - A^T y*||_2^2  (10)
In formula (10), β is a constant greater than zero, acting as the compensation factor that converts the constraint into an equality term, and z is the sparse coefficient obtained during the reconstruction process;
Step 5.4, a stepwise iterative update is used to solve for the initialization problem x and the dual problem variables y* and z. Let x = x_l and y* = y_l; from this, z_l is updated to z_{l+1}, i.e.:
z_{l+1} = P_{B_1^∞}(A^T y*_l + x_l / β)  (11)
In formula (11), P_{B_1^∞} is the operator projecting onto B_1^∞. Once x = x_l and y* = y_l are fixed, y* can be computed from the following equation, i.e.:
βAA^T y* = βAz_{l+1} - (Ax_l - y)  (12)
The DALM algorithm can then be expressed as:
z_{l+1} = P_{B_1^∞}(A^T y*_l + x_l / β),
βAA^T y*_{l+1} = βAz_{l+1} - (Ax_l - y),
x_{l+1} = x_l - β(z_{l+1} - A^T y*_{l+1})  (13)
The reconstructed test face image y* is obtained by solving formula (13).
CN201810549661.8A 2018-05-31 2018-05-31 A kind of weighting sparse representation face identification method Pending CN108681725A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810549661.8A CN108681725A (en) 2018-05-31 2018-05-31 A kind of weighting sparse representation face identification method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201810549661.8A CN108681725A (en) 2018-05-31 2018-05-31 A kind of weighting sparse representation face identification method

Publications (1)

Publication Number Publication Date
CN108681725A true CN108681725A (en) 2018-10-19

Family

ID=63809430

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810549661.8A Pending CN108681725A (en) 2018-05-31 2018-05-31 A kind of weighting sparse representation face identification method

Country Status (1)

Country Link
CN (1) CN108681725A (en)

Cited By (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109635860A (en) * 2018-12-04 2019-04-16 科大讯飞股份有限公司 Image classification method and system
CN109711283A (en) * 2018-12-10 2019-05-03 广东工业大学 An Algorithm for Occlusion Expression Recognition Combined with Double Dictionary and Error Matrix
CN109766810A (en) * 2018-12-31 2019-05-17 陕西师范大学 A face recognition classification method based on collaborative representation and pooling and fusion
CN110188718A (en) * 2019-06-04 2019-08-30 南京大学 An Unconstrained Face Recognition Method Based on Keyframes and Joint Sparse Representation
CN111325162A (en) * 2020-02-25 2020-06-23 湖南大学 Face recognition method based on weight sparse representation of virtual sample and residual fusion
CN111523404A (en) * 2020-04-08 2020-08-11 华东师范大学 Partial face recognition method based on convolutional neural network and sparse representation
CN111723759A (en) * 2020-06-28 2020-09-29 南京工程学院 Unconstrained face recognition method based on weighted tensor sparse graph mapping
CN111931665A (en) * 2020-08-13 2020-11-13 重庆邮电大学 Under-sampling face recognition method based on intra-class variation dictionary modeling
CN112966554A (en) * 2021-02-02 2021-06-15 重庆邮电大学 Robust face recognition method and system based on local continuity
CN113657259A (en) * 2021-08-16 2021-11-16 西安航空学院 Single-Sample Face Recognition Method Based on Robust Feature Extraction
CN114049668A (en) * 2021-11-15 2022-02-15 北京计算机技术及应用研究所 Face recognition method
CN114863530A (en) * 2022-05-09 2022-08-05 浙江工业大学 Face Recognition Method Based on Multi-feature Hybrid Dictionary and Probabilistic Prediction

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20120063689A1 (en) * 2010-09-15 2012-03-15 The Johns Hopkins University Object recognition in an image
CN103413119A (en) * 2013-07-24 2013-11-27 中山大学 Single sample face recognition method based on face sparse descriptors
CN104318261A (en) * 2014-11-03 2015-01-28 河南大学 Graph embedding low-rank sparse representation recovery sparse representation face recognition method

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20120063689A1 (en) * 2010-09-15 2012-03-15 The Johns Hopkins University Object recognition in an image
CN103413119A (en) * 2013-07-24 2013-11-27 中山大学 Single sample face recognition method based on face sparse descriptors
CN104318261A (en) * 2014-11-03 2015-01-28 河南大学 Graph embedding low-rank sparse representation recovery sparse representation face recognition method

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
WANG, LIN ET AL.: "Improved weighted sparse representation face recognition algorithm", HTTP://WWW.C-S-A.ORG.CN/1003-3254/6385.HTML *

Cited By (19)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109635860A (en) * 2018-12-04 2019-04-16 科大讯飞股份有限公司 Image classification method and system
CN109635860B (en) * 2018-12-04 2023-04-07 科大讯飞股份有限公司 Image classification method and system
CN109711283B (en) * 2018-12-10 2022-11-15 广东工业大学 Occlusion expression recognition method combining double dictionaries and error matrix
CN109711283A (en) * 2018-12-10 2019-05-03 广东工业大学 An Algorithm for Occlusion Expression Recognition Combined with Double Dictionary and Error Matrix
CN109766810A (en) * 2018-12-31 2019-05-17 陕西师范大学 A face recognition classification method based on collaborative representation and pooling and fusion
CN110188718A (en) * 2019-06-04 2019-08-30 南京大学 An Unconstrained Face Recognition Method Based on Keyframes and Joint Sparse Representation
CN111325162A (en) * 2020-02-25 2020-06-23 湖南大学 Face recognition method based on weight sparse representation of virtual sample and residual fusion
CN111523404A (en) * 2020-04-08 2020-08-11 华东师范大学 Partial face recognition method based on convolutional neural network and sparse representation
CN111723759A (en) * 2020-06-28 2020-09-29 南京工程学院 Unconstrained face recognition method based on weighted tensor sparse graph mapping
CN111723759B (en) * 2020-06-28 2023-05-02 南京工程学院 Unconstrained Face Recognition Method Based on Weighted Tensor Sparse Graph Mapping
CN111931665A (en) * 2020-08-13 2020-11-13 重庆邮电大学 Under-sampling face recognition method based on intra-class variation dictionary modeling
CN111931665B (en) * 2020-08-13 2023-02-21 重庆邮电大学 Under-sampling face recognition method based on intra-class variation dictionary modeling
CN112966554A (en) * 2021-02-02 2021-06-15 重庆邮电大学 Robust face recognition method and system based on local continuity
CN113657259A (en) * 2021-08-16 2021-11-16 西安航空学院 Single-Sample Face Recognition Method Based on Robust Feature Extraction
CN113657259B (en) * 2021-08-16 2023-07-21 西安航空学院 A Single Sample Face Recognition Method Based on Robust Feature Extraction
CN114049668A (en) * 2021-11-15 2022-02-15 北京计算机技术及应用研究所 Face recognition method
CN114049668B (en) * 2021-11-15 2024-04-09 北京计算机技术及应用研究所 Face recognition method
CN114863530A (en) * 2022-05-09 2022-08-05 浙江工业大学 Face Recognition Method Based on Multi-feature Hybrid Dictionary and Probabilistic Prediction
CN114863530B (en) * 2022-05-09 2025-01-17 浙江工业大学 Face recognition method based on multi-feature hybrid dictionary and probability prediction

Similar Documents

Publication Publication Date Title
CN108681725A (en) A kind of weighting sparse representation face identification method
Li et al. Overview of principal component analysis algorithm
Li et al. A comprehensive survey on 3D face recognition methods
Hu Enhanced gabor feature based classification using a regularized locally tensor discriminant model for multiview gait recognition
Zeng et al. Towards resolution invariant face recognition in uncontrolled scenarios
CN107977661B (en) Region-of-interest detection method based on FCN and low-rank sparse decomposition
CN112307995A (en) A semi-supervised person re-identification method based on feature decoupling learning
CN108875459A (en) One kind being based on the similar weighting sparse representation face identification method of sparse coefficient and system
CN108256486A (en) A kind of image-recognizing method and device based on non-negative low-rank and semi-supervised learning
CN111695456A (en) Low-resolution face recognition method based on active discriminability cross-domain alignment
CN106096517A (en) A kind of face identification method based on low-rank matrix Yu eigenface
CN111582223B (en) A three-dimensional face recognition method
Zhang et al. Pose-robust feature learning for facial expression recognition
CN104715266B (en) The image characteristic extracting method being combined based on SRC DP with LDA
Wang et al. Multiple manifolds metric learning with application to image set classification
Liang et al. Region-aware scattering convolution networks for facial beauty prediction
CN108664941B (en) Nuclear sparse description face recognition method based on geodesic mapping analysis
Zarbakhsh et al. Low-rank sparse coding and region of interest pooling for dynamic 3D facial expression recognition
Akbar et al. Face recognition using hybrid feature space in conjunction with support vector machine
CN110458064B (en) Combining data-driven and knowledge-driven low-altitude target detection and recognition methods
CN105740838A (en) Recognition method in allusion to facial images with different dimensions
CN111950429A (en) Face recognition method based on weighted collaborative representation
CN103942545A (en) Method and device for identifying faces based on bidirectional compressed data space dimension reduction
De la Torre et al. Filtered component analysis to increase robustness to local minima in appearance models
CN103942572A (en) Method and device for extracting facial expression features based on bidirectional compressed data space dimension reduction

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication
RJ01 Rejection of invention patent application after publication

Application publication date: 20181019