CN108681725A - A weighted sparse representation face recognition method - Google Patents
A weighted sparse representation face recognition method
- Publication number
- CN108681725A (application number CN201810549661.8A)
- Authority
- CN
- China
- Prior art keywords
- facial image
- formula
- measured
- training
- weighting
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
- G06V40/172—Human faces: Classification, e.g. identification (G—Physics; G06—Computing; G06V—Image or video recognition or understanding; G06V40/16—Human faces, e.g. facial parts, sketches or expressions)
- G06F18/214—Pattern recognition: Generating training patterns; Bootstrap methods, e.g. bagging or boosting (G06F—Electric digital data processing; G06F18/21—Design or setup of recognition systems or techniques)
- G06F18/2193—Pattern recognition: Validation; Performance evaluation; Active pattern learning techniques based on specific statistical tests
- G06V10/40—Image or video recognition or understanding: Extraction of image or video features
- G06V10/513—Image or video features: Sparse representations
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Physics & Mathematics (AREA)
- Data Mining & Analysis (AREA)
- General Physics & Mathematics (AREA)
- Evolutionary Computation (AREA)
- Life Sciences & Earth Sciences (AREA)
- Bioinformatics & Computational Biology (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Evolutionary Biology (AREA)
- Artificial Intelligence (AREA)
- General Engineering & Computer Science (AREA)
- Bioinformatics & Cheminformatics (AREA)
- Multimedia (AREA)
- Probability & Statistics with Applications (AREA)
- Health & Medical Sciences (AREA)
- General Health & Medical Sciences (AREA)
- Oral & Maxillofacial Surgery (AREA)
- Human Computer Interaction (AREA)
- Image Processing (AREA)
- Image Analysis (AREA)
Abstract
The invention discloses a weighted sparse representation face recognition method, carried out according to the following steps. Step 1: input the training face images and obtain the dictionary matrix A. Step 2: apply principal component analysis to the dictionary matrix A and the test face image y for feature dimensionality reduction, and normalize the columns so that A and y have unit l2 norm. Step 3: compute the training face image weights w_{i,j} with a Gaussian kernel function. Step 4: introduce the training face image weights w_{i,j} from Step 3 to construct the weighted training dictionary matrix A′. Step 5: solve the sparse coefficients x and obtain the reconstructed test face image y*. Step 6: from the reconstruction y* of Step 5, calculate the residual of the test face image with respect to each class of subjects. Step 7: output the class of the test face image y as determined by formula (15), thereby realizing face recognition. The method of the present invention improves the recognition accuracy and robustness of face recognition algorithms in environments with large intra-class variations such as pose and expression.
Description
Technical field
The invention belongs to the technical field of image processing and pattern recognition, and in particular relates to a weighted sparse representation face recognition method.
Background technology
Face recognition is one of the hot topics in computer vision and pattern recognition. Owing to its strong adaptability, high security and intelligent interaction, it is widely used in identity-card information systems, bank monitoring, customs entry and exit control, pursuit of criminal suspects, campus security, access control systems and other fields, and has broad application prospects.
A face recognition system generally includes three parts: face detection, feature extraction and the recognition algorithm. Traditional face recognition research focuses on feature extraction and recognition algorithms and has produced a number of classical methods, such as principal component analysis, linear discriminant analysis, elastic matching and neural networks. Many real-world phenomena exhibit sparsity. In face recognition, if there are sufficiently many samples of each class of face images, these samples span a face subspace, and every image of that class can be linearly represented, or approximated, by this subspace. Based on this idea, Wright et al. proposed in 2009 the sparse representation based classification (SRC) face recognition method, arguing that the sparsity of an image exists not only within the image itself but also between image patterns. Exploiting the linear correlation of face images of the same class, SRC assembles all training samples into an over-complete dictionary, sparsely reconstructs each test sample over this dictionary one by one, and finally performs classification according to the sparse reconstruction error; this method effectively alleviates the poor robustness of face recognition. Subsequently, a series of studies based on SRC made great progress; representative work includes sparse reconstruction algorithms, the construction of over-complete dictionaries, optimization strategies for SRC, and combinations of SRC with other algorithms. Although SRC has achieved excellent results in face recognition and attracted extensive research, some technical difficulties remain. First, face images are acquired in uncontrolled natural environments and usually contain intra-class variations such as pose, illumination and expression; in addition, the SRC algorithm is time-consuming and cannot meet the real-time requirements of practical applications. Therefore, how to obtain good recognition results efficiently for face recognition problems with large intra-class variations has become a concern of face recognition research.
Invention content
The object of the present invention is to provide a weighted sparse representation face recognition method, which solves the problem that existing face recognition methods perform poorly when the face images contain large intra-class variations such as pose and expression.
The technical solution adopted by the present invention is a weighted sparse representation face recognition method, characterized in that it is specifically carried out according to the following steps:
Step 1, input the training face images and obtain the dictionary matrix A;
Step 2, apply principal component analysis to the dictionary matrix A and the test face image y for feature dimensionality reduction, and normalize the columns so that the dictionary matrix A and the test face image y have unit l2 norm;
Step 3, use a Gaussian kernel function to calculate the distance, i.e. the similarity, between each training face image and the test face image y, namely the training face image weight w_{i,j};
Step 4, introduce the training face image weights w_{i,j} from Step 3 to construct the weighted training dictionary matrix A′:
In formula (4), the last entry denotes the n_k-th image of the k-th class of samples after weighting;
Step 5, solve the sparse coefficients and obtain the reconstructed test face image;
Step 6, according to the reconstructed test face image y* obtained in Step 5, calculate the residual of the test face image with respect to each class of subjects:
r_i(y) = ||y - y*||_2, i = 1, 2, ..., k (14);
Step 7, output: solve the class of the test face image y, compare it with the classes of the training face images, and when the two agree, face recognition is realized;
The class of the test face image y is given by:
identity(y) = argmin_i r_i(y) (15).
The present invention is further characterized in that:
In Step 1, the dictionary matrix A is obtained according to the following steps:
Suppose there are k classes of training face images, and each class consists of n_i training face images, so that there are N = n_1 + n_2 + ... + n_k training face images in total;
Let the resolution of each face image be w × h pixels; each face image is stacked into a column vector v of dimension m = w*h, so that v_{i,j} ∈ R^m denotes the j-th training face image of the i-th class, where m is the dimension of the feature vector;
All column vectors of the training face images of the i-th class can be merged into the sample set A_i = [v_{i,1}, v_{i,2}, ..., v_{i,n_i}];
The sample sets A_i of the k classes are then combined to obtain the dictionary matrix A:
A = [A_1, A_2, ..., A_k] ∈ R^(m×N) (1).
In Step 2, the dimensionality reduction method used is to process all of the original training face images and the test face image into 384-dimensional feature vectors.
In Step 3, the training face image weight w_{i,j} is calculated by formula (2):
In formula (2), v_{i,j} denotes the j-th training face image of the i-th class, y denotes the test face image, and σ is the width parameter of the Gaussian kernel function; the width parameter is the average Euclidean distance between all training face images, i.e.:
In formula (3), M is the number of Euclidean distances between all samples.
In Step 5, the reconstructed test face image is obtained specifically according to the following steps:
Step 5.1, solve the l0 minimization problem:
In formula (5), x is the sparse coefficient vector;
Step 5.2, formula (5) is solved with the dual augmented Lagrange multiplier method; the Lagrange multiplier function corresponding to formula (5) is then:
In formula (6), μ > 0 is a constant denoting the penalty factor that converts the equality constraint into an unconstrained problem, and γ is the Lagrange multiplier vector to be found;
If γ* is a Lagrange multiplier vector that satisfies the second-order sufficient condition of the optimization problem, then, provided the penalty factor μ is sufficiently large, the sparse coefficient optimization problem can be solved by formula (7), i.e.:
From formula (7), solving for the sparse coefficients x requires determining the Lagrange multiplier vector γ* and the value of the penalty factor μ; the values of x and γ are then computed simultaneously by an iterative method, i.e.:
In formula (8), {μ_l} is a positive monotonically increasing sequence and l denotes the number of the iteration.
Step 5.3, to reconstruct the test face image y accurately, the ALM algorithm is applied to the dual problem, giving the DALM algorithm; formula (5) is then transformed into formula (9):
In formula (9), the feasible region of the sparse coefficients x is specified; the Lagrangian form of formula (9) can then be expressed as:
In formula (10), β is a constant greater than zero and is the penalty factor that turns the constraint into an equation, and z is the sparse coefficient obtained during the reconstruction process;
Step 5.4, the variable x, the dual problem variable y* and z are solved by a step-by-step iterative update method; let x = x^l and y* = y^l, and on this basis update z^l to z^(l+1), i.e.:
In formula (11), the projection operator maps onto the feasible set of formula (9); once x = x^l and y* = y^l are determined, y* can be calculated by the following formula, i.e.:
β A A^T y* = β A z^(l+1) - (A x^l - y) (12)
Then, the DALM algorithm can be expressed as:
The reconstructed test face image y* is obtained from formula (13).
The invention has the following advantages:
(1) The weighted sparse representation face recognition method of the present invention combines a weighted training dictionary with the DALM algorithm into the sparse representation classification algorithm WSRC_DALM; the weighted training dictionary describes the differences between all training face images and the test face image under large intra-class variations, which improves the recognition accuracy and robustness of face recognition algorithms in environments with large intra-class variations such as pose and expression;
(2) The DALM algorithm used in the weighted sparse representation face recognition method of the present invention can effectively reduce the time complexity of the WSRC algorithm, realizes accurate reconstruction of the test sample, and obtains a robust recognition result.
Description of the drawings
Fig. 1 is the flow chart of the weighted sparse representation face recognition method of the present invention;
Fig. 2 shows a group of face images from the FEI face database used in an embodiment of the weighted sparse representation face recognition method of the present invention.
Specific implementation mode
The present invention is described in detail below with reference to the accompanying drawings and specific embodiments.
The weighted sparse representation face recognition method of the present invention is specifically carried out according to the following steps:
Step 1, input the training face images and obtain the dictionary matrix A:
Suppose there are k classes of training face images, each class consisting of n_i training face images, so that there are N = n_1 + n_2 + ... + n_k training face images in total;
Let the resolution of each face image be w × h pixels (w is the width and h is the height); each face image is stacked into a column vector v of dimension m = w*h, so that v_{i,j} ∈ R^m denotes the j-th training face image of the i-th class, where m is the dimension of the feature vector;
All column vectors of the training face images of the i-th class can be merged into the sample set A_i = [v_{i,1}, v_{i,2}, ..., v_{i,n_i}];
The sample sets A_i of the k classes are then combined to obtain the dictionary matrix A:
A = [A_1, A_2, ..., A_k] ∈ R^(m×N) (1);
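The dictionary construction of Step 1 amounts to flattening each image into a column vector and concatenating the columns class by class. Below is a minimal sketch of this step, assuming the training images arrive as NumPy arrays with per-image class labels; the function and variable names are illustrative and not taken from the patent.

```python
import numpy as np

def build_dictionary(train_images, train_labels):
    """Stack each w x h training image into an m = w*h column vector v_{i,j}
    and concatenate the columns class by class, giving A in R^(m x N), cf. formula (1)."""
    columns, col_labels = [], []
    for c in sorted(set(train_labels)):              # keep class blocks contiguous: A = [A_1, ..., A_k]
        for img, lab in zip(train_images, train_labels):
            if lab == c:
                columns.append(np.asarray(img, dtype=float).reshape(-1))
                col_labels.append(c)
    A = np.stack(columns, axis=1)                    # m x N dictionary matrix
    return A, np.array(col_labels)
```

Keeping the columns grouped by class makes it straightforward to read off the per-class coefficient blocks later when the class residuals of Step 6 are computed.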
Step 2, apply principal component analysis to the dictionary matrix A and the test face image y for feature dimensionality reduction, and normalize the columns so that the dictionary matrix A and the test face image y have unit l2 norm. The dimensionality reduction method used here processes all of the original training face images and the test face image into 384-dimensional feature vectors, whereas the traditional WSRC algorithm mainly scales the images.
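A minimal sketch of this dimensionality reduction and normalization step, assuming scikit-learn's PCA is an acceptable stand-in for the principal component analysis used here; the 384-dimensional target follows the text, everything else is illustrative.

```python
import numpy as np
from sklearn.decomposition import PCA

def reduce_and_normalize(A, y, dim=384):
    """Project the dictionary columns and the test vector onto `dim` principal
    components, then give every column of A (and the vector y) unit l2 norm."""
    pca = PCA(n_components=dim)
    A_red = pca.fit_transform(A.T).T                     # fit on the N training columns
    y_red = pca.transform(y.reshape(1, -1)).ravel()      # project the test image with the same basis
    A_red /= np.linalg.norm(A_red, axis=0, keepdims=True) + 1e-12
    y_red /= np.linalg.norm(y_red) + 1e-12
    return A_red, y_red
```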
Step 3, use a Gaussian kernel function to calculate the distance between each training face image and the test face image y, i.e. the training face image weight w_{i,j}:
In formula (2), v_{i,j} denotes the j-th training face image of the i-th class, y denotes the test face image, and σ is the width parameter of the Gaussian kernel function; the width parameter is the average Euclidean distance between all training face images, i.e.:
In formula (3), M is the number of Euclidean distances between all samples;
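Formulas (2) and (3) are not reproduced in the text; the sketch below therefore assumes the standard Gaussian kernel w_{i,j} = exp(-||y - v_{i,j}||^2 / (2*sigma^2)), with sigma set to the average Euclidean distance over all pairs of training columns as described for formula (3). This reading is an assumption, not a quotation of the patent formulas.

```python
import numpy as np
from itertools import combinations

def gaussian_weights(A, y):
    """Weight w_{i,j} of every dictionary column v_{i,j} with respect to the test
    vector y, using an assumed Gaussian kernel exp(-||y - v||^2 / (2*sigma^2));
    sigma is the mean of the M pairwise distances between training columns."""
    cols = A.T                                            # one row per training image
    pair_dists = [np.linalg.norm(a - b) for a, b in combinations(cols, 2)]
    sigma = np.mean(pair_dists)
    return np.exp(-np.linalg.norm(cols - y, axis=1) ** 2 / (2.0 * sigma ** 2))
```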
Step 4, introduce the training face image weights w_{i,j} from Step 3 to construct the weighted training dictionary matrix A′:
In formula (4), the last entry denotes the n_k-th image of the k-th class of samples after weighting;
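Formula (4) itself is not reproduced in the text; the sketch below assumes the natural reading that each column of the dictionary is scaled by its weight w_{i,j} to form A′.

```python
import numpy as np

def weight_dictionary(A, w):
    """Weighted training dictionary A': each column v_{i,j} of A is scaled by its
    Gaussian weight w_{i,j} (this column-scaling reading of formula (4) is an assumption)."""
    return A * w[np.newaxis, :]
```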
Step 5, solve the sparse coefficients and obtain the reconstructed test face image;
Step 5.1, solve the l0 minimization problem:
In formula (5), x is the sparse coefficient vector;
Step 5.2, since the l0-norm problem is NP-hard, it is usually converted into the convex l1-norm problem, and formula (5) is solved with the dual augmented Lagrange multiplier method; the Lagrange multiplier function corresponding to formula (5) is then:
In formula (6), μ > 0 is a constant denoting the penalty factor that converts the equality constraint into an unconstrained problem, and γ is the Lagrange multiplier vector to be found;
If γ* is a Lagrange multiplier vector that satisfies the second-order sufficient condition of the optimization problem, then, provided the penalty factor μ is sufficiently large, the sparse coefficient optimization problem can be solved by formula (7), i.e.:
From formula (7), solving for the sparse coefficients x requires determining the Lagrange multiplier vector γ* and the value of the penalty factor μ; the values of x and γ are then computed simultaneously by an iterative method, i.e.:
In formula (8), {μ_l} is a positive monotonically increasing sequence and l denotes the number of the iteration;
Step 5.3, to reconstruct the test face image y accurately, the ALM algorithm is applied to the dual problem, giving the DALM algorithm; formula (5) is then transformed into formula (9):
In formula (9), the feasible region of the sparse coefficients x is specified; the Lagrangian form of formula (9) can then be expressed as:
In formula (10), β is a constant greater than zero and is the penalty factor that turns the constraint into an equation, and z is the sparse coefficient obtained during the reconstruction process;
Step 5.4, the variable x, the dual problem variable y* and z are solved by a step-by-step iterative update method; let x = x^l and y* = y^l, and on this basis update z^l to z^(l+1), i.e.:
In formula (11), the projection operator maps onto the feasible set of formula (9); once x = x^l and y* = y^l are determined, y* can be calculated by the following formula, i.e.:
β A A^T y* = β A z^(l+1) - (A x^l - y) (12)
Then, the DALM algorithm can be expressed as:
The reconstructed test face image y* can be accurately obtained from formula (13), and the convergence of the conjugate search algorithm is guaranteed.
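Formulas (9)-(13) are not reproduced in the text, but the y*-update of formula (12) matches the standard dual augmented Lagrange multiplier (DALM) scheme for the basis pursuit problem min ||x||_1 subject to A′x = y. The sketch below implements that standard scheme under this assumption; the parameter values, stopping rule and the small ridge term are illustrative choices, not taken from the patent. The test image is called b here to avoid clashing with the dual variable, which the patent text denotes y*.

```python
import numpy as np

def dalm_l1(A, b, beta=1.0, iters=500, tol=1e-6):
    """Assumed DALM iteration for min ||x||_1 s.t. A x = b.
    z-update: projection onto the unit l_inf ball;
    y-update: beta*A*A^T*y = beta*A*z - (A*x - b), cf. formula (12);
    x-update: multiplier step, with x converging to the sparse coefficients."""
    m, n = A.shape
    x = np.zeros(n)                       # primal variable (sparse coefficients)
    y_dual = np.zeros(m)                  # dual variable (y* in the patent text)
    AAt = A @ A.T + 1e-8 * np.eye(m)      # small ridge as a numerical safeguard
    for _ in range(iters):
        z = np.clip(A.T @ y_dual + x / beta, -1.0, 1.0)
        rhs = beta * (A @ z) - (A @ x - b)
        y_dual = np.linalg.solve(beta * AAt, rhs)
        x_new = x - beta * (z - A.T @ y_dual)
        if np.linalg.norm(x_new - x) <= tol * max(1.0, np.linalg.norm(x)):
            x = x_new
            break
        x = x_new
    return x
```

With the sparse coefficients x in hand, the reconstruction of the test image is simply A′x, which plays the role of y* in Step 6.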
Step 6, according to the reconstructed test face image y* obtained in Step 5, calculate the residual of the test face image with respect to each class of subjects:
r_i(y) = ||y - y*||_2, i = 1, 2, ..., k (14);
Step 7, output: solve the class of the test face image y, compare it with the classes of the training face images, and when the two agree, judge which of the k classes of training face images the test face image y belongs to, thereby realizing face recognition;
The class of the test face image y is given by:
identity(y) = argmin_i r_i(y) (15).
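In the usual SRC-style decision rule, the class-i residual is computed from a class-specific reconstruction that keeps only the coefficients belonging to class i; the sketch below assumes that reading of formulas (14)-(15), since the text writes the residual with a single reconstruction y*. The helper names are illustrative.

```python
import numpy as np

def classify_by_residual(A_w, labels, x, y):
    """For each class i, reconstruct y from the class-i coefficients only and
    return the class with the smallest residual r_i(y) = ||y - A'_i x_i||_2,
    i.e. identity(y) = argmin_i r_i(y), cf. formulas (14)-(15)."""
    residuals = {}
    for c in np.unique(labels):
        x_c = np.where(labels == c, x, 0.0)        # zero out coefficients of other classes
        residuals[c] = np.linalg.norm(y - A_w @ x_c)
    return min(residuals, key=residuals.get)
```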
Intra-class variation of a face image refers to the differences that the same person presents in images taken from different viewing angles; the training dictionary (dictionary matrix) is the set of face images of all training samples under the pose disturbance factor, and each column of the matrix describes an image of the same class of subjects.
The training face images of the same person are of identical size and differ only in pose deflection, without the influence of factors such as occlusion or illumination.
The simulation setting and results of the embodiment are as follows:
The face images used in the experiments of this embodiment come from the FEI face database, which contains 2800 colour images of 200 subjects, where the images of each subject cover variations in pose and illumination. The embodiment of the present invention randomly selects 100 subjects from the FEI face database and, for each subject, selects 11 images with different poses; the size of each image is 480 × 640.
As shown in Fig. 2, the simulation experiment first converts all images to greyscale and then randomly selects 7 images of each subject to construct the training dictionary matrix A, with the remaining images used as test face images; PCA is then used to reduce the training dictionary matrix and the test samples to 384 dimensions, after which the weight of each training sample with respect to the test sample is computed and the weighted training dictionary matrix is constructed. Then, following Step 5 of the specific implementation, the sparse coefficients of the test image are solved with the l1-norm minimization method (l1_ls method) and with the DALM method, and classification is finally decided according to Step 6 by the residual r_i(y) between the original test image y and the reconstructed test image y′. The software platform of this simulation experiment is MATLAB 7.0.
The simulation experiments compare the recognition accuracy and robustness of the improved weighted sparse representation face recognition method of the present invention with those of the classical weighted sparse representation method; the experimental results are shown in Table 1.
As can be seen from Table 1, the improved weighted sparse representation face recognition method of the present invention raises the recognition rate of the classical weighted sparse representation method by about 15%; at the same time, its recognition performance under large pose variations is very significant, and it has good prospects for popularization and application.
The weighted sparse representation face recognition method of the present invention combines a weighted training dictionary with the DALM algorithm into the sparse representation classification algorithm WSRC_DALM; the weighted training dictionary describes the differences between all training face images and the test face image under large intra-class variations, thereby improving the recognition accuracy and robustness of face recognition algorithms in environments with large intra-class variations such as pose and expression. The DALM algorithm used can effectively reduce the time complexity of the WSRC algorithm, realizes accurate reconstruction of the test sample, and obtains a robust recognition result.
Claims (5)
1. A weighted sparse representation face recognition method, characterized in that it is specifically carried out according to the following steps:
Step 1, input the training face images and obtain the dictionary matrix A;
Step 2, apply principal component analysis to the dictionary matrix A and the test face image y for feature dimensionality reduction, and normalize the columns so that the dictionary matrix A and the test face image y have unit l2 norm;
Step 3, use a Gaussian kernel function to calculate the distance between each training face image processed in Step 2 and the test face image y, i.e. the training face image weight w_{i,j};
Step 4, introduce the training face image weights w_{i,j} from Step 3 to construct the weighted training dictionary matrix A′:
In formula (4), the last entry denotes the n_k-th image of the k-th class of samples after weighting;
Step 5, solve the sparse coefficients and obtain the reconstructed test face image;
Step 6, according to the reconstructed test face image y* obtained in Step 5, calculate the residual of the test face image with respect to each class of subjects:
r_i(y) = ||y - y*||_2, i = 1, 2, ..., k (14);
Step 7, output: solve the class of the test face image y, compare it with the classes of the training face images, and when the two agree, face recognition is realized;
The class of the test face image y is given by:
identity(y) = argmin_i r_i(y) (15).
2. The weighted sparse representation face recognition method according to claim 1, characterized in that, in Step 1, the dictionary matrix A is obtained specifically according to the following steps:
Suppose there are k classes of training face images, each class consisting of n_i training face images, so that there are N = n_1 + n_2 + ... + n_k training face images in total;
Let the resolution of each face image be w × h pixels; each face image is stacked into a column vector v of dimension m = w*h, so that v_{i,j} ∈ R^m denotes the j-th training face image of the i-th class, where m is the dimension of the feature vector;
All column vectors of the training face images of the i-th class are merged into the sample set A_i = [v_{i,1}, v_{i,2}, ..., v_{i,n_i}]; the sample sets A_i of the k classes are then combined to obtain the dictionary matrix A:
A = [A_1, A_2, ..., A_k] ∈ R^(m×N) (1).
3. The weighted sparse representation face recognition method according to claim 1, characterized in that, in Step 2, the dimensionality reduction method used is to process all of the original training face images and the test face image into 384-dimensional feature vectors.
4. The weighted sparse representation face recognition method according to claim 1, characterized in that, in Step 3, the training face image weight w_{i,j} is calculated by formula (2):
In formula (2), v_{i,j} denotes the j-th training face image of the i-th class, y denotes the test face image, and σ is the width parameter of the Gaussian kernel function; the width parameter is the average Euclidean distance between all training face images, i.e.:
In formula (3), M is the number of Euclidean distances between all samples.
5. The weighted sparse representation face recognition method according to claim 1, characterized in that, in Step 5, the reconstructed test face image is obtained specifically according to the following steps:
Step 5.1, solve the l0 minimization problem:
In formula (5), x is the sparse coefficient vector;
Step 5.2, formula (5) is solved with the dual augmented Lagrange multiplier method; the Lagrange multiplier function corresponding to formula (5) is then:
In formula (6), μ > 0 is a constant denoting the penalty factor that converts the equality constraint into an unconstrained problem, and γ is the Lagrange multiplier vector to be found;
If γ* is a Lagrange multiplier vector that satisfies the second-order sufficient condition of the optimization problem, then, provided the penalty factor μ is sufficiently large, the optimization problem for the sparse coefficients x can be solved by formula (7), i.e.:
From formula (7), solving for the sparse coefficients x requires determining the Lagrange multiplier vector γ* and the value of the penalty factor μ; the values of x and γ are then computed simultaneously by an iterative method, i.e.:
In formula (8), {μ_l} is a positive monotonically increasing sequence and l denotes the number of the iteration;
Step 5.3, to reconstruct the test face image y accurately, the ALM algorithm is applied to the dual problem, giving the DALM algorithm; formula (5) is then transformed into formula (9):
In formula (9), the feasible region of the sparse coefficients x is specified; the Lagrangian form of formula (9) can then be expressed as:
In formula (10), β is a constant greater than zero and is the penalty factor that turns the constraint into an equation, and z is the sparse coefficient obtained during the reconstruction process;
Step 5.4, the variable x, the dual problem variable y* and z are solved by a step-by-step iterative update method; let x = x^l and y* = y^l, and on this basis update z^l to z^(l+1), i.e.:
In formula (11), the projection operator maps onto the feasible set of formula (9); once x = x^l and y* = y^l are determined, y* can be calculated by the following formula, i.e.:
β A A^T y* = β A z^(l+1) - (A x^l - y) (12)
Then, the DALM algorithm can be expressed as:
The reconstructed test face image y* is solved from formula (13).
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201810549661.8A CN108681725A (en) | 2018-05-31 | 2018-05-31 | A kind of weighting sparse representation face identification method |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201810549661.8A CN108681725A (en) | 2018-05-31 | 2018-05-31 | A kind of weighting sparse representation face identification method |
Publications (1)
Publication Number | Publication Date |
---|---|
CN108681725A true CN108681725A (en) | 2018-10-19 |
Family
ID=63809430
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201810549661.8A Pending CN108681725A (en) | 2018-05-31 | 2018-05-31 | A kind of weighting sparse representation face identification method |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN108681725A (en) |
Cited By (11)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109635860A (en) * | 2018-12-04 | 2019-04-16 | 科大讯飞股份有限公司 | Image classification method and system |
CN109711283A (en) * | 2018-12-10 | 2019-05-03 | 广东工业大学 | A kind of joint doubledictionary and error matrix block Expression Recognition algorithm |
CN109766810A (en) * | 2018-12-31 | 2019-05-17 | 陕西师范大学 | Recognition of face classification method based on collaboration expression and pond and fusion |
CN110188718A (en) * | 2019-06-04 | 2019-08-30 | 南京大学 | It is a kind of based on key frame and joint sparse indicate without constraint face identification method |
CN111325162A (en) * | 2020-02-25 | 2020-06-23 | 湖南大学 | Face recognition method based on weight sparse representation of virtual sample and residual fusion |
CN111523404A (en) * | 2020-04-08 | 2020-08-11 | 华东师范大学 | Partial face recognition method based on convolutional neural network and sparse representation |
CN111723759A (en) * | 2020-06-28 | 2020-09-29 | 南京工程学院 | Non-constrained face recognition method based on weighted tensor sparse graph mapping |
CN111931665A (en) * | 2020-08-13 | 2020-11-13 | 重庆邮电大学 | Under-sampling face recognition method based on intra-class variation dictionary modeling |
CN112966554A (en) * | 2021-02-02 | 2021-06-15 | 重庆邮电大学 | Robust face recognition method and system based on local continuity |
CN113657259A (en) * | 2021-08-16 | 2021-11-16 | 西安航空学院 | Single-sample face recognition method based on robust feature extraction |
CN114049668A (en) * | 2021-11-15 | 2022-02-15 | 北京计算机技术及应用研究所 | Face recognition method |
Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20120063689A1 (en) * | 2010-09-15 | 2012-03-15 | The Johns Hopkins University | Object recognition in an image |
CN103413119A (en) * | 2013-07-24 | 2013-11-27 | 中山大学 | Single sample face recognition method based on face sparse descriptors |
CN104318261A (en) * | 2014-11-03 | 2015-01-28 | 河南大学 | Graph embedding low-rank sparse representation recovery sparse representation face recognition method |
- 2018
  - 2018-05-31: CN application CN201810549661.8A, publication CN108681725A (en), status Pending
Patent Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20120063689A1 (en) * | 2010-09-15 | 2012-03-15 | The Johns Hopkins University | Object recognition in an image |
CN103413119A (en) * | 2013-07-24 | 2013-11-27 | 中山大学 | Single sample face recognition method based on face sparse descriptors |
CN104318261A (en) * | 2014-11-03 | 2015-01-28 | 河南大学 | Graph embedding low-rank sparse representation recovery sparse representation face recognition method |
Non-Patent Citations (1)
Title |
---|
Wang Lin et al.: "Improved Weighted Sparse Representation Face Recognition Algorithm" (in Chinese), 《HTTP://WWW.C-S-A.ORG.CN/1003-3254/6385.HTML》 *
Cited By (17)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109635860B (en) * | 2018-12-04 | 2023-04-07 | 科大讯飞股份有限公司 | Image classification method and system |
CN109635860A (en) * | 2018-12-04 | 2019-04-16 | 科大讯飞股份有限公司 | Image classification method and system |
CN109711283B (en) * | 2018-12-10 | 2022-11-15 | 广东工业大学 | Occlusion expression recognition method combining double dictionaries and error matrix |
CN109711283A (en) * | 2018-12-10 | 2019-05-03 | 广东工业大学 | A kind of joint doubledictionary and error matrix block Expression Recognition algorithm |
CN109766810A (en) * | 2018-12-31 | 2019-05-17 | 陕西师范大学 | Recognition of face classification method based on collaboration expression and pond and fusion |
CN110188718A (en) * | 2019-06-04 | 2019-08-30 | 南京大学 | It is a kind of based on key frame and joint sparse indicate without constraint face identification method |
CN111325162A (en) * | 2020-02-25 | 2020-06-23 | 湖南大学 | Face recognition method based on weight sparse representation of virtual sample and residual fusion |
CN111523404A (en) * | 2020-04-08 | 2020-08-11 | 华东师范大学 | Partial face recognition method based on convolutional neural network and sparse representation |
CN111723759A (en) * | 2020-06-28 | 2020-09-29 | 南京工程学院 | Non-constrained face recognition method based on weighted tensor sparse graph mapping |
CN111723759B (en) * | 2020-06-28 | 2023-05-02 | 南京工程学院 | Unconstrained face recognition method based on weighted tensor sparse graph mapping |
CN111931665B (en) * | 2020-08-13 | 2023-02-21 | 重庆邮电大学 | Under-sampling face recognition method based on intra-class variation dictionary modeling |
CN111931665A (en) * | 2020-08-13 | 2020-11-13 | 重庆邮电大学 | Under-sampling face recognition method based on intra-class variation dictionary modeling |
CN112966554A (en) * | 2021-02-02 | 2021-06-15 | 重庆邮电大学 | Robust face recognition method and system based on local continuity |
CN113657259A (en) * | 2021-08-16 | 2021-11-16 | 西安航空学院 | Single-sample face recognition method based on robust feature extraction |
CN113657259B (en) * | 2021-08-16 | 2023-07-21 | 西安航空学院 | Single-sample face recognition method based on robust feature extraction |
CN114049668A (en) * | 2021-11-15 | 2022-02-15 | 北京计算机技术及应用研究所 | Face recognition method |
CN114049668B (en) * | 2021-11-15 | 2024-04-09 | 北京计算机技术及应用研究所 | Face recognition method |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN108681725A (en) | A kind of weighting sparse representation face identification method | |
CN106682598B (en) | Multi-pose face feature point detection method based on cascade regression | |
Al Bashish et al. | A framework for detection and classification of plant leaf and stem diseases | |
Ramanathan et al. | Face verification across age progression | |
Chai et al. | Locally linear regression for pose-invariant face recognition | |
Zhao et al. | Learning from normalized local and global discriminative information for semi-supervised regression and dimensionality reduction | |
Liu et al. | Feature disentangling machine-a novel approach of feature selection and disentangling in facial expression analysis | |
CN104700076B (en) | Facial image virtual sample generation method | |
CN110751098A (en) | Face recognition method for generating confrontation network based on illumination and posture | |
Zeng et al. | Towards resolution invariant face recognition in uncontrolled scenarios | |
CN109389045A (en) | Micro- expression recognition method and device based on mixing space-time convolution model | |
CN106169073A (en) | A kind of expression recognition method and system | |
Pratama et al. | Face recognition for presence system by using residual networks-50 architecture | |
CN108154133A (en) | Human face portrait based on asymmetric combination learning-photo array method | |
Beksi et al. | Object classification using dictionary learning and rgb-d covariance descriptors | |
CN106778708A (en) | A kind of expression shape change recognition methods of the active appearance models based on tensor | |
Seyyedsalehi et al. | Simultaneous learning of nonlinear manifolds based on the bottleneck neural network | |
CN108564061A (en) | A kind of image-recognizing method and system based on two-dimensional principal component analysis | |
Zhang et al. | Adaptive gabor convolutional neural networks for finger-vein recognition | |
Kekre et al. | Performance Comparison for Face Recognition using PCA, DCT &WalshTransform of Row Mean and Column Mean | |
CN106971176A (en) | Tracking infrared human body target method based on rarefaction representation | |
De la Torre et al. | Filtered component analysis to increase robustness to local minima in appearance models | |
Fernández-Martínez et al. | Exploring the uncertainty space of ensemble classifiers in face recognition | |
CN106570459A (en) | Face image processing method | |
CN110334677A (en) | A kind of recognition methods again of the pedestrian based on skeleton critical point detection and unequal subregion |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
| PB01 | Publication | |
| SE01 | Entry into force of request for substantive examination | |
| RJ01 | Rejection of invention patent application after publication | Application publication date: 20181019 |