CN107657201A - NEXT-series product image feature recognition system and recognition method - Google Patents
NEXT-series product image feature recognition system and recognition method
- Publication number
- CN107657201A CN107657201A CN201610582842.1A CN201610582842A CN107657201A CN 107657201 A CN107657201 A CN 107657201A CN 201610582842 A CN201610582842 A CN 201610582842A CN 107657201 A CN107657201 A CN 107657201A
- Authority
- CN
- China
- Prior art keywords
- face
- image
- module
- unit
- sample
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/172—Classification, e.g. identification
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/24—Classification techniques
- G06F18/241—Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
- G06F18/2413—Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on distances to training or reference patterns
- G06F18/24133—Distances to prototypes
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/168—Feature extraction; Face representation
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- General Physics & Mathematics (AREA)
- Oral & Maxillofacial Surgery (AREA)
- Health & Medical Sciences (AREA)
- Physics & Mathematics (AREA)
- Data Mining & Analysis (AREA)
- Multimedia (AREA)
- Human Computer Interaction (AREA)
- General Health & Medical Sciences (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Bioinformatics & Computational Biology (AREA)
- General Engineering & Computer Science (AREA)
- Evolutionary Computation (AREA)
- Evolutionary Biology (AREA)
- Bioinformatics & Cheminformatics (AREA)
- Artificial Intelligence (AREA)
- Life Sciences & Earth Sciences (AREA)
- Image Analysis (AREA)
- Image Processing (AREA)
Abstract
The invention discloses a NEXT-series product image feature recognition system and its recognition method, belonging to the field of image recognition. The recognition system includes a controller, a face database, an image standardization module, a feature extraction module, a classifier, an output module, a camera module, a face detection module and an image pre-processing module. The recognition method includes a feature extraction algorithm, a class feature classification algorithm and a combined classifier algorithm. The proposed recognition method not only improves the recognition rate for frontal faces with a rotation angle, but is also highly robust to illumination of varying intensity. The proposed recognition system and recognition method offer real-time performance and reliability, and can be widely used in fields such as identity authentication and image recognition.
Description
Technical field
The invention belongs to the field of image recognition, and more particularly to a NEXT-series product image feature recognition system and its recognition method.
Background technology
With the rapid development of society and the economy, more and more applications in public safety, access control, human-computer interaction and information security need to verify a client's identity quickly and effectively, and this work currently requires a large investment of manpower and material resources. Traditional identity authentication methods therefore no longer meet the needs of some fields, so the field of identity recognition urgently needs to replace the old with the new: an effective, fast and safe identity authentication method is required.
At present, most face recognition systems impose fairly strict constraints on the imaging environment, for example a frontal face, no direct sunlight, roughly constant illumination intensity, a natural and expressionless face, and no glasses or accessories. Only when these conditions are largely met do such systems perform relatively well.
(1) Face recognition against complex backgrounds. The prerequisite of face recognition is detecting and calibrating the face; if the image used for recognition contains too much background information, and the background is complex and variable, recognition accuracy inevitably suffers.
(2) Imaging of the face image. The illumination at capture time, the pose and expression of the face, the presence or absence of occluding objects, and the difference between the age of the face being recognized and the age of the face stored in the database all vary. These variations of the face in the image are a huge challenge for a computer.
(3) Face recognition is a multidisciplinary research problem. Most current work relies on statistical theory, yet other disciplines are also very important to face recognition research, and how to draw on them in depth remains an open problem.
Summary of the invention
In view of the above problems, the object of the present invention is to provide a NEXT-series product image feature recognition system and its recognition method. The recognition system and recognition method proposed by the invention offer real-time performance and reliability, and can be widely used in fields such as identity authentication and image recognition.
To realize the above system, the present invention adopts the following technical scheme:
A NEXT-series product image feature recognition system, characterized in that the system includes a controller, a face database, an image standardization module, a feature extraction module, a classifier, an output module, a camera module, a face detection module and an image pre-processing module; the controller is connected to the face database, the image standardization module, the feature extraction module, the classifier, the output module, the camera module, the face detection module and the image pre-processing module respectively; the face database is connected to the output module through the image standardization module, the feature extraction module and the classifier in sequence; the camera module is connected to the feature extraction module through the face detection module and the image pre-processing module in sequence; the face detection module is connected to the classifier.
The face detection module includes a face detection unit, an eye localization unit and a face cropping unit; the face detection unit is arranged between the camera module and the eye localization unit, and the face cropping unit is arranged between the eye localization unit and the image pre-processing module.
The image pre-processing module includes a grayscale unit, a denoising unit, a sharpening unit, a histogram equalization unit and an image size normalization unit; the grayscale unit is arranged between the face cropping unit and the denoising unit, and the denoising unit is connected to the feature extraction module through the sharpening unit, the histogram equalization unit and the image size normalization unit in sequence.
Further, the workflow of the camera module includes the following steps:
the camera module acquires video of the monitored area in real time;
the camera module judges, by means of motion detection between frames, whether a moving object is present in the video; when it determines that a moving object is present, it starts the face detection module and captures each frame of the video (a sketch of this step is given below).
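The patent gives no code for this step; the following is a minimal sketch of frame-differencing motion detection using the modern OpenCV Python API (cv2), which stands in for the legacy C functions named elsewhere in the text. The threshold values and the frame-difference approach are illustrative assumptions, not part of the disclosure.

```python
import cv2

def watch_for_motion(camera_index=0, pixel_threshold=25, min_changed_pixels=5000):
    """Yield camera frames only while motion is detected in the monitored area."""
    cap = cv2.VideoCapture(camera_index)
    ok, prev = cap.read()
    if not ok:
        return
    prev_gray = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        # Absolute difference against the previous frame highlights moving objects.
        diff = cv2.absdiff(gray, prev_gray)
        _, mask = cv2.threshold(diff, pixel_threshold, 255, cv2.THRESH_BINARY)
        if cv2.countNonZero(mask) > min_changed_pixels:
            yield frame  # hand the frame to the face detection module
        prev_gray = gray
    cap.release()
```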
Further, the workflow of the face detection module includes the following steps:
after the face detection module is started, it receives the video images captured by the camera module;
the classifier module is loaded: the file haarcascade_frontalface_alt_tree.xml is converted into the OpenCV internal format CvHaarClassifierCascade, and face detection begins;
it is determined whether a face is detected; if no face is detected, the module continues to load video images;
if a face is detected, the eyes are located using a block-based eye localization method, and it is determined whether the face in the video image has visible eyes;
if the face in the video image has no visible eyes, the video image is discarded and the module continues to load video images from the camera module;
if the face in the video image has visible eyes, the video image is cropped to the face region according to the position of the eyes; the cropped image is the face image, which is saved and passed to the image pre-processing module (see the sketch after these steps).
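As a hedged illustration of this flow, the sketch below uses the modern cv2 cascade API instead of the legacy CvHaarClassifierCascade structure named above. The frontal-face cascade file is the one named in the text; the eye cascade file and the detection parameters are assumed stand-ins for the block-based eye localization described later.

```python
import cv2

face_cascade = cv2.CascadeClassifier("haarcascade_frontalface_alt_tree.xml")
eye_cascade = cv2.CascadeClassifier("haarcascade_eye.xml")  # assumed stand-in

def detect_face_with_eyes(frame):
    """Return the cropped face region if a face with visible eyes is found, else None."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = face_cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    for (x, y, w, h) in faces:
        roi = gray[y:y + h, x:x + w]
        eyes = eye_cascade.detectMultiScale(roi)
        if len(eyes) >= 2:              # both eyes visible: keep this face
            return frame[y:y + h, x:x + w]
    return None                          # no usable face: caller loads the next frame
```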
Further, the workflow of the image pre-processing module includes the following steps:
the grayscale unit converts the face image to grayscale using the cvCvtColor() function of the OpenCV library;
the denoising unit smooths the face image, which may contain Gaussian noise and salt-and-pepper noise, with mean filtering and median filtering using the cvSmooth() function of the OpenCV library;
the sharpening unit sharpens the face image using the Laplacian operator;
the histogram equalization unit applies histogram equalization to the face image using the cvEqualizeHist() function of the OpenCV library;
the image size normalization unit resizes the face image to a uniform size of 92×112 pixels (a sketch of the whole pipeline follows below).
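A compact sketch of this pipeline, assuming the modern cv2 equivalents of the legacy functions named above (cvCvtColor, cvSmooth, cvEqualizeHist); the kernel sizes and the Laplacian sharpening step are illustrative choices, not values given in the disclosure.

```python
import cv2
import numpy as np

def preprocess_face(face_bgr, size=(92, 112)):
    """Grayscale -> denoise -> sharpen -> equalize -> resize, as described in the text."""
    gray = cv2.cvtColor(face_bgr, cv2.COLOR_BGR2GRAY)
    # Mean filtering handles Gaussian noise; median filtering suppresses salt-and-pepper noise.
    denoised = cv2.medianBlur(cv2.blur(gray, (3, 3)), 3)
    # Laplacian sharpening: subtract the Laplacian response from the image.
    lap = cv2.Laplacian(denoised, cv2.CV_16S, ksize=3)
    sharpened = cv2.convertScaleAbs(denoised.astype(np.int16) - lap)
    equalized = cv2.equalizeHist(sharpened)
    return cv2.resize(equalized, size)   # 92x112 pixels, as required by the text
```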
The beneficial effects of the NEXT-series product image feature recognition system of the present invention are as follows:
the NEXT-series product image feature recognition system includes a controller, a face database, an image standardization module, a feature extraction module, a classifier, an output module, a camera module, a face detection module and an image pre-processing module. The recognition system proposed by the invention not only improves the recognition rate for frontal faces with a rotation angle, but is also highly robust to illumination of varying intensity.
A NEXT-series product image feature recognition method, characterized in that it includes a feature extraction algorithm which combines principal component analysis (PCA) and Fisher linear discriminant analysis (LDA) to extract face features, and which includes the following steps:
illumination compensation is applied to the test image f(x, y), yielding f'(x, y);
all samples in the face database are trained to obtain W_PCA and W_LDA for the database, and all samples are projected through W_COM to obtain the feature vector F_ij of each sample, where W_PCA is the eigenface space, W_LDA is the optimal projection matrix, and W_COM is the combined two-stage projection matrix;
f'(x, y) is projected through W_COM to obtain the feature vector F;
the Euclidean distance between the feature vector F and the feature vector F_ij of each sample in the database is computed, and the class of the nearest sample is taken as the class of the test image (see the sketch after these steps).
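A minimal sketch of this PCA + LDA pipeline, assuming scikit-learn's PCA and LinearDiscriminantAnalysis as stand-ins for the W_PCA and W_LDA projections described above; the combined projection W_COM is realized here simply as PCA followed by LDA, and the illumination-compensation step is omitted. The component count is an illustrative parameter.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

def train_pca_lda(train_images, labels, n_pca=100):
    """train_images: (n_samples, 92*112) flattened face images; labels: class ids."""
    pca = PCA(n_components=n_pca).fit(train_images)                              # W_PCA
    lda = LinearDiscriminantAnalysis().fit(pca.transform(train_images), labels)  # W_LDA
    features = lda.transform(pca.transform(train_images))                        # F_ij for every sample
    return pca, lda, features

def classify_nearest(test_image, pca, lda, features, labels):
    """Project the test image through W_COM = W_LDA . W_PCA and return the nearest class."""
    f = lda.transform(pca.transform(test_image.reshape(1, -1)))  # feature vector F
    distances = np.linalg.norm(features - f, axis=1)             # Euclidean distances to every F_ij
    return labels[int(np.argmin(distances))]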
Further, the method also includes a class feature classification algorithm, which includes the following steps:
using the formula, the class feature of every class is calculated, where x is the sample under test, the face database contains n classes with m samples per class, and class[i] is the variance computed between the sample under test and the training samples of class i; this variance is the class feature;
using the formula, the minimum class feature is found, and the class with the minimum class feature is the class of the sample under test (a sketch of this computation follows below).
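The formula itself is not reproduced in this text. The sketch below implements one plausible reading of the description, assuming the class feature class[i] is the variance-like quantity given by the mean squared Euclidean distance between the test feature vector x and the training feature vectors of class i; this is an interpretation, not the exact formula of the disclosure.

```python
import numpy as np

def class_feature_classify(test_vec, class_vectors):
    """class_vectors: dict mapping class id -> (m, d) array of training feature vectors.
    Returns the class whose class feature (distance variance about the test vector) is smallest."""
    best_class, best_feature = None, np.inf
    for cls, vecs in class_vectors.items():
        # "Variance" of class i about the test sample: mean squared distance to its samples.
        feature = np.mean(np.sum((vecs - test_vec) ** 2, axis=1))
        if feature < best_feature:
            best_class, best_feature = cls, feature
    return best_class
```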
Further, the method also includes a combined classifier algorithm, which includes the following steps (a sketch follows below):
for each of the l classes in the database, the Euclidean distances between its samples are computed, and the maximum intra-class Euclidean distance is d_i, i = 1, 2, ..., l; the Euclidean distance between the sample under test X and each training sample A[i][j] in the database is d_ij; if d_ij > d_i (for every class), X is judged not to be a person in the database, otherwise X is classified with the k-nearest-neighbor method and judged to belong to class G;
X is also classified with the class feature classification algorithm described above, which assigns X to class H;
if H = G, the sample under test belongs to class H and the algorithm terminates;
if H ≠ G, the sample X is input to the classifier (a support vector machine), which determines the class of X again, and the output of the support vector machine is final.
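A sketch of this decision cascade, assuming scikit-learn's KNeighborsClassifier and SVC in place of the k-nearest-neighbor and support-vector-machine components, and reusing the class_feature_classify sketch given above; the rejection test follows the steps described, but all hyperparameters are illustrative.

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC

def combined_classify(x, train_vectors, train_labels, class_vectors, max_intra_dist):
    """train_vectors/train_labels: all training features and their class ids.
    class_vectors: dict class id -> array of that class's training features.
    max_intra_dist: dict class id -> maximum intra-class Euclidean distance d_i."""
    # Rejection test: if x is farther from every sample of every class than that
    # class's own maximum intra-class distance d_i, treat x as an unknown person.
    rejected = all(
        np.linalg.norm(vecs - x, axis=1).min() > max_intra_dist[cls]
        for cls, vecs in class_vectors.items()
    )
    if rejected:
        return None                                     # not a person in the database
    knn = KNeighborsClassifier(n_neighbors=3).fit(train_vectors, train_labels)
    g = knn.predict(x.reshape(1, -1))[0]                # class G from k-NN
    h = class_feature_classify(x, class_vectors)        # class H from the class feature method
    if g == h:
        return g
    svm = SVC(kernel="linear").fit(train_vectors, train_labels)
    return svm.predict(x.reshape(1, -1))[0]             # SVM arbitrates the disagreement
```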
The beneficial effects of the NEXT-series product image feature recognition method of the present invention are as follows:
the NEXT-series product image feature recognition method includes a feature extraction algorithm, a class feature classification algorithm and a combined classifier algorithm. The recognition method proposed by the invention not only improves the recognition rate for frontal faces with a rotation angle, but is also highly robust to illumination of varying intensity.
Brief description of the drawings
The present invention is further explained below in conjunction with the drawings and the specific embodiments.
Fig. 1 is the overall framework diagram of the NEXT-series product image feature recognition system;
Fig. 2 is the face detection flowchart of the NEXT-series product image feature recognition method;
Fig. 3 is the face recognition flowchart of the NEXT-series product image feature recognition method;
Fig. 4 is the training flowchart of the NEXT-series product image feature recognition method.
Detailed description of the embodiments
A specific embodiment of the present invention is as follows. The NEXT-series product image feature recognition system includes a controller, a face database, an image standardization module, a feature extraction module, a classifier, an output module, a camera module, a face detection module and an image pre-processing module. The controller is connected to the face database, the image standardization module, the feature extraction module, the classifier, the output module, the camera module, the face detection module and the image pre-processing module respectively. The face database is connected to the output module through the image standardization module, the feature extraction module and the classifier in sequence. The camera module is connected to the feature extraction module through the face detection module and the image pre-processing module in sequence. The face detection module is connected to the classifier. The face detection module includes a face detection unit, an eye localization unit and a face cropping unit; the face detection unit is arranged between the camera module and the eye localization unit, and the face cropping unit is arranged between the eye localization unit and the image pre-processing module. The image pre-processing module includes a grayscale unit, a denoising unit, a sharpening unit, a histogram equalization unit and an image size normalization unit; the grayscale unit is arranged between the face cropping unit and the denoising unit, and the denoising unit is connected to the feature extraction module through the sharpening unit, the histogram equalization unit and the image size normalization unit in sequence.
Fig. 1 is the overall framework diagram of the NEXT-series product image feature recognition system. The workflow of the camera module includes the following steps: the camera module acquires video of the monitored area in real time; the camera module judges, by means of motion detection, whether a moving object is present in the video; when it determines that a moving object is present, it starts the face detection module and captures each frame of the video.
The workflow of the face detection module includes the following steps: after the face detection module is started, it receives the video images captured by the camera module; the classifier module is loaded, the file haarcascade_frontalface_alt_tree.xml is converted into the OpenCV internal format CvHaarClassifierCascade, and face detection begins; it is determined whether a face is detected, and if no face is detected the module continues to load video images; if a face is detected, the eyes are located using a block-based eye localization method and it is determined whether the face in the video image has visible eyes; if the face has no visible eyes, the video image is discarded and the module continues to load video images from the camera module; if the face has visible eyes, the video image is cropped to the face region according to the position of the eyes, the cropped image is the face image, and the face image is saved and passed to the image pre-processing module.
The workflow of the image pre-processing module includes the following steps: the grayscale unit converts the face image to grayscale using the cvCvtColor() function of the OpenCV library; the denoising unit smooths the face image, which may contain Gaussian noise and salt-and-pepper noise, with mean filtering and median filtering using the cvSmooth() function of the OpenCV library; the sharpening unit sharpens the face image using the Laplacian operator; the histogram equalization unit applies histogram equalization to the face image using the cvEqualizeHist() function of the OpenCV library; the image size normalization unit resizes the face image to a uniform size of 92×112 pixels.
Specifically, the image standardization module standardizes the images in the face database, including grayscale processing, size normalization and so on.
Fig. 2 is the face detection flowchart of the NEXT-series product image feature recognition method. The system does not start the face detection module while there is no object movement in the monitored area; when a moving object appears in the video, the system detects it by motion detection, acquires video frames, and performs face detection and eye detection on each frame. The flow of the face detection module is as follows: a video frame is input into the detection module, the classifier is loaded, the file haarcascade_frontalface_alt_tree.xml is converted into the OpenCV internal format CvHaarClassifierCascade, and face detection begins. If no face is detected, the next video frame is loaded. If a face is detected, the eyes are located with the block-based eye localization method and the system judges whether the eyes are visible; a face without visible eyes generally has a large rotation angle and such a picture is relatively difficult to recognize, so the system discards that video frame. If the eyes are visible, the face region is cropped according to the eye positions, the face image is saved and passed to the recognition module, and once the transfer is complete the memory holding the face image is released.
Fig. 3 is the face recognition flowchart of the NEXT-series product image feature recognition method. The pre-processed photo is loaded into the feature extraction program, which repeats the training procedure described above to obtain the eigenvalues eigenValMat, the feature vectors eigenVectArr, the average image pAvgTrainImg of the training face set and the projected faces projectedTrainFaceMat, and saves them to file. The recognition stage then classifies the newly extracted image features against every sample in the previously trained database, using the combined classifier.
First, the distance variance between the feature vector eigenVectArr of the sample under test and each feature vector eigenVectArr in the database is computed to obtain all class features, and a class J is determined according to the minimum class feature principle; this is the class feature method. The k-nearest-neighbor method also uses the feature vector and determines a class L. If L equals J, the result is output; if L does not equal J, the sample is input to the support vector machine for the final judgement, and the SVM output is taken as the result.
Fig. 4 is the training flowchart of the NEXT-series product image feature recognition method, i.e. the training process of the system on the face database. Training image information is first loaded with LoadFaceImgArray(); before this, the face sample information of the database must be entered into the file "train.txt", which contains the name, path and class information of the face database samples. Every sample of the face database is stored in the global variable faceImgArr. Then the principal component analysis (PCA) and linear discriminant analysis (LDA) feature extraction function PCALDA() is called to obtain the average face of the face database samples and the eigenvalues and eigenvectors of every sample, which are saved in pAvgTrainImg, eigenValMat and eigenVectArr respectively. Next, the function cvEigenDecomposite() is called to project all samples in the face database into the PCA space through the K-L transform matrix, realizing the dimensionality reduction of the face samples. Finally, cvOpenFileStorage() is used to create a new XML file "facedata.xml", and the storeTrainingData() function is called to save all data of the training samples into "facedata.xml", including eigenValMat, eigenVectArr, pAvgTrainImg and the training face matrix projectedTrainFaceMat. This completes the training (a sketch of this stage follows below).
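A hedged sketch of this training stage, assuming a NumPy eigenface computation in place of the legacy cvEigenDecomposite() call and cv2.FileStorage in place of cvOpenFileStorage(); the helper functions LoadFaceImgArray() and storeTrainingData() named in the text are not reproduced, and the layout of the data inside facedata.xml is an assumption.

```python
import cv2
import numpy as np

def train_and_save(face_images, xml_path="facedata.xml", n_components=50):
    """face_images: (n_samples, 92*112) array of flattened, pre-processed face images."""
    avg = face_images.mean(axis=0)                       # pAvgTrainImg: the average face
    centered = face_images - avg
    # Eigen-decomposition of the sample covariance via SVD (the K-L transform).
    _, s, vt = np.linalg.svd(centered, full_matrices=False)
    eigen_vals = (s ** 2)[:n_components]                 # eigenValMat (unnormalized)
    eigen_vecs = vt[:n_components]                       # eigenVectArr
    projected = centered @ eigen_vecs.T                  # projectedTrainFaceMat
    fs = cv2.FileStorage(xml_path, cv2.FILE_STORAGE_WRITE)
    fs.write("avgTrainImg", avg.reshape(1, -1))
    fs.write("eigenValMat", eigen_vals)
    fs.write("eigenVectArr", eigen_vecs)
    fs.write("projectedTrainFaceMat", projected)
    fs.release()
```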
In practical operation, the present invention locates the eyes with a block-based eye localization method. In the grayscale image of a face, the gray value of the eyes is lower than in most other regions of the face; regions with gray values similar to the eyes include the hair, the eyebrows and the nostrils. By setting a suitable threshold, the binarized face image can be divided into many blocks. If the photographed eyes contain a bright central region, it can be handled by morphological erosion. For a detected frontal face image, rotation of the picture will not push the eyes to the edge of the image, and the eyes have unique shape attributes, so according to the features described above the non-eye blocks can largely be removed. The face is then cropped on the basis of the eye localization, with the following concrete steps: first a suitable threshold is selected to binarize the image, which yields a series of white blocks; the attributes of the eye pair are used to exclude interfering blocks; the correctness of the eye pair is verified; finally the coordinates of the eye centers are determined, the angle between the line of the eyes and the horizontal axis is computed from these coordinates, and after rotation correction the face region is cropped according to facial morphology. The cropped face image is a frontal face picture without rotation angle, and such a picture necessarily gives a higher recognition rate than one with a rotation angle, because the face pictures in the training database all have no rotation angle. This effectively alleviates the difficulty of pose variation in face recognition and improves the recognition rate under different poses (a sketch of the eye-based alignment follows below).
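A simplified sketch of the eye-based alignment described above, assuming the two eye centers have already been found by the block-based localization; the rotation is computed from the eye line as described, while the crop geometry (ratios relative to the eye distance, output size) is an illustrative assumption.

```python
import cv2
import numpy as np

def align_face(gray_face, left_eye, right_eye, out_size=(92, 112)):
    """Rotate the face so the eyes are horizontal, then crop around the eye midpoint."""
    (lx, ly), (rx, ry) = left_eye, right_eye
    angle = np.degrees(np.arctan2(ry - ly, rx - lx))   # angle between the eye line and the horizontal axis
    center = ((lx + rx) / 2.0, (ly + ry) / 2.0)
    rot = cv2.getRotationMatrix2D(center, angle, 1.0)
    rotated = cv2.warpAffine(gray_face, rot, (gray_face.shape[1], gray_face.shape[0]))
    # Crop a region around the eye midpoint sized in proportion to the eye distance (illustrative ratios).
    eye_dist = np.hypot(rx - lx, ry - ly)
    w, h = int(2.2 * eye_dist), int(2.7 * eye_dist)
    x0 = int(center[0] - w / 2)
    y0 = int(center[1] - 0.8 * eye_dist)
    crop = rotated[max(y0, 0):y0 + h, max(x0, 0):x0 + w]
    return cv2.resize(crop, out_size)
```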
It should be noted that the present invention proposes a class feature method (Method of Class Feature, abbreviated MCF) on the basis of the nearest-neighbor classification idea. If a sample under test can be classified correctly by a distance-based classification method, then the distance variance between the sample under test and the training samples of its own class in the face database should be the smallest: another class may contain a training sample that happens to be very similar to the sample under test, but it is impossible for all the samples of that class to be similar, whereas the training samples that belong to the same class as the sample under test are essentially the same even though illumination or other factors introduce differences. Specifically, the class feature classification algorithm includes the following steps:
using the formula, the class feature of every class is calculated, where x is the sample under test, the face database contains n classes with m samples per class, and class[i] is the variance computed between the sample under test and the training samples of class i; this variance is the class feature;
using the formula, the minimum class feature is found, and the class with the minimum class feature is the class of the sample under test.
The class feature method is not only simple to compute and easy to understand, it also exploits class information and therefore improves the recognition rate relative to the k-nearest-neighbor classification method.
Meanwhile KNN method majority of cases are useful, when face is very common, recognition effect is unsatisfactory.
Category feature method, make use of classification information, but big for itself image difference in class, very big feelings also occur in variance
Condition, also result in classification failure.SVMs can construct optimal classification surface to classify, and unfortunate amount of calculation is huge, take too
It is long, and cannot distinguish between the sample outside database.The advantages of how that utilizes these graders, keeps away its shortcoming, this is to connect
Get off the assembled classifier method to be studied.The main thought of assembled classifier is first treated by KNN graders and category feature method
Test sample originally judges, and judges that inconsistent SVMs of giving makes a decision.Specifically, assembled classifier realizes that algorithm is included such as
Lower step:
for each of the l classes in the database, the Euclidean distances between its samples are computed, and the maximum intra-class Euclidean distance is d_i, i = 1, 2, ..., l; the Euclidean distance between the sample under test X and each training sample A[i][j] in the database is d_ij; if d_ij > d_i (for every class), X is judged not to be a person in the database, otherwise X is classified with the k-nearest-neighbor method and judged to belong to class G;
X is also classified with the class feature classification algorithm described above, which assigns X to class H;
if H = G, the sample under test belongs to class H and the algorithm terminates;
if H ≠ G, the sample X is input to the classifier (a support vector machine), which determines the class of X again, and the output of the support vector machine is final.
The combined classifier uses the SVM to resolve the samples that are hard to classify, but not every sample under test needs SVM classification, so on average the time spent on SVM classification is greatly reduced.
The NEXT-series product image feature recognition system and its recognition method: the recognition system includes a controller, a face database, an image standardization module, a feature extraction module, a classifier, an output module, a camera module, a face detection module and an image pre-processing module; the recognition method includes a feature extraction algorithm, a class feature classification algorithm and a combined classifier algorithm. The recognition method proposed by the invention not only improves the recognition rate for frontal faces with a rotation angle, but is also highly robust to illumination of varying intensity. The recognition system and recognition method proposed by the invention offer real-time performance and reliability, and can be widely used in fields such as identity authentication and image recognition.
Those of ordinary skill in the technical field of the invention will readily appreciate that, in addition to the foregoing, the specific embodiments described and illustrated herein can be further varied and combined. Although the present invention has been illustrated and described with respect to its preferred embodiments, those skilled in the art will recognize that various changes and variations can be made to the invention within the spirit and scope of the invention as defined in the appended claims.
Claims (7)
1. A NEXT-series product image feature recognition system, characterized in that the system includes a controller, a face database, an image standardization module, a feature extraction module, a classifier, an output module, a camera module, a face detection module and an image pre-processing module; the controller is connected to the face database, the image standardization module, the feature extraction module, the classifier, the output module, the camera module, the face detection module and the image pre-processing module respectively; the face database is connected to the output module through the image standardization module, the feature extraction module and the classifier in sequence; the camera module is connected to the feature extraction module through the face detection module and the image pre-processing module in sequence; the face detection module is connected to the classifier;
the face detection module includes a face detection unit, an eye localization unit and a face cropping unit; the face detection unit is arranged between the camera module and the eye localization unit, and the face cropping unit is arranged between the eye localization unit and the image pre-processing module;
the image pre-processing module includes a grayscale unit, a denoising unit, a sharpening unit, a histogram equalization unit and an image size normalization unit; the grayscale unit is arranged between the face cropping unit and the denoising unit, and the denoising unit is connected to the feature extraction module through the sharpening unit, the histogram equalization unit and the image size normalization unit in sequence.
2. The image feature recognition system of claim 1, characterized in that the workflow of the camera module includes the following steps:
the camera module acquires video of the monitored area in real time;
the camera module judges, by means of motion detection, whether a moving object is present in the video; when it determines that a moving object is present, it starts the face detection module and captures each frame of the video.
3. The image feature recognition system of claim 2, characterized in that the workflow of the face detection module includes the following steps:
after the face detection module is started, the video images captured by the camera module are received;
the classifier module is loaded, the file haarcascade_frontalface_alt_tree.xml is converted into the OpenCV internal format CvHaarClassifierCascade, and face detection begins;
it is determined whether a face is detected; if no face is detected, video images continue to be loaded;
if a face is detected, the eyes are located using a block-based eye localization method, and it is determined whether the face in the video image has visible eyes;
if the face in the video image has no visible eyes, the video image is discarded and video images continue to be loaded from the camera module;
if the face in the video image has visible eyes, the video image is cropped to the face region according to the position of the eyes; the cropped image is the face image, which is saved and passed to the image pre-processing module.
4. The image feature recognition system of claim 3, characterized in that the workflow of the image pre-processing module includes the following steps:
the grayscale unit converts the face image to grayscale using the cvCvtColor() function of the OpenCV library;
the denoising unit smooths the face image, which may contain Gaussian noise and salt-and-pepper noise, with mean filtering and median filtering using the cvSmooth() function of the OpenCV library;
the sharpening unit sharpens the face image using the Laplacian operator;
the histogram equalization unit applies histogram equalization to the face image using the cvEqualizeHist() function of the OpenCV library;
the image size normalization unit resizes the face image to a uniform size of 92×112 pixels.
5. A NEXT-series product image feature recognition method, characterized in that it includes a feature extraction algorithm which combines principal component analysis (PCA) and Fisher linear discriminant analysis (LDA) to extract face features, and which includes the following steps:
illumination compensation is applied to the test image f(x, y), yielding f'(x, y);
all samples in the face database are trained to obtain W_PCA and W_LDA for the database, and all samples are projected through W_COM to obtain the feature vector F_ij of each sample, where W_PCA is the eigenface space, W_LDA is the optimal projection matrix, and W_COM is the combined two-stage projection matrix;
f'(x, y) is projected through W_COM to obtain the feature vector F;
the Euclidean distance between the feature vector F and the feature vector F_ij of each sample in the database is computed, and the class of the nearest sample is the class of the test image.
6. A NEXT-series product image feature recognition method, characterized in that it further includes a class feature classification algorithm, which includes the following steps:
using the formula, the class feature of every class is calculated, where x is the sample under test, the face database contains n classes with m samples per class, and class[i] is the variance computed between the sample under test and the training samples of class i; this variance is the class feature;
using the formula, the minimum class feature is found, and the class with the minimum class feature is the class of the sample under test.
7. A NEXT-series product image feature recognition method, characterized in that it further includes a combined classifier algorithm, which includes the following steps:
for each of the l classes in the database, the Euclidean distances between its samples are computed, and the maximum intra-class Euclidean distance is d_i, i = 1, 2, ..., l; the Euclidean distance between the sample under test X and each training sample A[i][j] in the database is d_ij; if d_ij > d_i, X is judged not to be a person in the database, otherwise X is classified with the k-nearest-neighbor method and judged to belong to class G;
X is also classified with the class feature classification algorithm as claimed in claim 6, which assigns X to class H;
if H = G, the sample under test belongs to class H and the algorithm terminates;
if H ≠ G, the sample X is input to the classifier, which determines the class of X again, and the output of the support vector machine is final.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201610582842.1A CN107657201A (en) | 2016-07-23 | 2016-07-23 | NEXT series of products characteristics of image identifying systems and its recognition methods |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201610582842.1A CN107657201A (en) | 2016-07-23 | 2016-07-23 | NEXT series of products characteristics of image identifying systems and its recognition methods |
Publications (1)
Publication Number | Publication Date |
---|---|
CN107657201A true CN107657201A (en) | 2018-02-02 |
Family
ID=61126223
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201610582842.1A Pending CN107657201A (en) | 2016-07-23 | 2016-07-23 | NEXT series of products characteristics of image identifying systems and its recognition methods |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN107657201A (en) |
Cited By (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110188684A (en) * | 2019-05-30 | 2019-08-30 | 湖南城市学院 | A kind of face identification device and method |
CN110188684B (en) * | 2019-05-30 | 2021-04-06 | 湖南城市学院 | Face recognition device and method |
CN113052546A (en) * | 2020-12-28 | 2021-06-29 | 浙江易桥软件开发有限公司 | Government affair system design method and government affair system |
CN113705487A (en) * | 2021-08-31 | 2021-11-26 | 西南交通大学 | Precise workpiece identification and process parameter correlation system and identification method |
CN113705487B (en) * | 2021-08-31 | 2023-08-08 | 西南交通大学 | Precision workpiece identification and technological parameter association system and identification method |
Legal Events
Date | Code | Title | Description
---|---|---|---
| PB01 | Publication | |
| WD01 | Invention patent application deemed withdrawn after publication | Application publication date: 20180202 |