CN102024145B - Layered recognition method and system for disguised face
Layered recognition method and system for disguised face
- Publication number
- CN102024145B CN102024145B CN2010105673226A CN201010567322A CN102024145B CN 102024145 B CN102024145 B CN 102024145B CN 2010105673226 A CN2010105673226 A CN 2010105673226A CN 201010567322 A CN201010567322 A CN 201010567322A CN 102024145 B CN102024145 B CN 102024145B
- Authority
- CN
- China
- Prior art keywords
- face
- camouflage
- recognition
- classifier
- people
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Expired - Fee Related
Landscapes
- Collating Specific Patterns (AREA)
- Image Analysis (AREA)
Abstract
The face recognition technique proposed by the invention comprises two layers of classifiers. By learning from a large number of disguised-face samples, a first-layer classifier is built that determines whether a face is disguised; two classifiers in the second layer then recognize undisguised faces and disguised faces, respectively. The system combines a visible-light sensor and an infrared sensor to acquire a visible-light image and an infrared image of the subject's face, transmits them to a computer for layered recognition, and concludes whether the subject is a particular person in the database or is not in the database. The technique effectively solves the problem that conventional face recognition cannot recognize disguised faces, or recognizes them at a low rate, and it has broad application prospects.
Description
Technical Field
The present invention relates to a biometric recognition technique, and in particular to a layered recognition method and system for disguised faces.
Background
Face recognition judges, from the features of a subject's face image, whether the subject is the same person as someone recorded in a database. It is an emerging biometric recognition technique with broad application prospects, because image acquisition is convenient and no cooperation from the subject is required; a typical application is identifying known terrorists or criminals at an airport. However, subjects often disguise their faces to mask the original features of certain facial regions, which places high demands on a face recognition system. A disguised face usually refers to a face altered intentionally or unintentionally, for example by wearing glasses, growing a beard, wearing makeup, long hair covering part of the face, or a scarf covering part of the face. Existing face recognition systems recognize such disguised faces very poorly and cannot meet the needs of practical use.
Summary of the Invention
In view of the above, the present invention provides a method that can effectively recognize disguised faces, together with a system that applies the method.
The technical solution adopted by the present invention can be described as follows:
A layered recognition method for disguised faces, characterized in that two layers of classifiers perform layered recognition of the subject's face, wherein the first-layer classifier is a generic disguise-feature recognition classifier for distinguishing whether a face is disguised, and the second-layer classifier is an individual facial-feature recognition classifier for identifying the subject, the second-layer classifier further comprising a conventional face recognition classifier for undisguised faces and a disguised-face recognition classifier for disguised faces. The method comprises the following steps:
1. Acquire facial image feature data of the subject with a visible-light sensor;
2. Perform generic disguise-feature recognition in the first-layer classifier: compare the facial image feature data acquired by the visible-light sensor with the generic feature data of disguised faces stored in the database, judge whether the subject's face is disguised, and classify the disguise type;
3. Perform individual feature recognition in the second-layer classifier: for an undisguised face, directly call the conventional face recognition classifier to recognize the facial feature data acquired by the visible-light sensor, determine the subject's identity, output the result, and end the recognition; for a disguised face, go to step 4;
4. According to the disguise-type classification of step 2, acquire further image features of the disguised part of the subject's face with an infrared sensor, combine them with the image feature data of the undisguised part of the face acquired by the visible-light sensor, and call the disguised-face recognition classifier to determine the subject's identity and output the result, ending the recognition.
As an improvement of the above technical solution, the first-layer classifier uses a feed-forward neural network to judge whether a face is disguised and to classify the disguise type.
As an improvement of the above technical solution, the facial image features used in the first-layer classifier include: regional horizontal-projection features for distinguishing the presence of glasses or a beard, regional average gray-level features for distinguishing the presence of sunglasses or a beard, a symmetry correlation coefficient and invariant-moment features for judging partial occlusion, and local eye and mouth features for judging the presence of makeup.
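The patent does not give formulas for these features. The sketch below shows one plausible way to compute region horizontal-projection, region average gray-level, symmetry-correlation, and moment-style features from a normalized grayscale face image; the region crops, moment orders, scalar summaries, and function names are assumptions for illustration only.

```python
import numpy as np

def horizontal_projection(region):
    """Row-wise sum of gray levels; dark horizontal bands such as sunglasses
    or a beard change the shape of this profile."""
    return region.sum(axis=1)

def average_gray(region):
    """Mean intensity of a region; sunglasses or a beard lower it sharply."""
    return float(region.mean())

def symmetry_correlation(face):
    """Correlation between the left half and the mirrored right half of the
    face; one-sided partial occlusion reduces it."""
    h, w = face.shape
    left = face[:, : w // 2]
    right = np.fliplr(face[:, w - w // 2 :])
    left = (left - left.mean()) / (left.std() + 1e-8)
    right = (right - right.mean()) / (right.std() + 1e-8)
    return float((left * right).mean())

def invariant_moments(region, orders=((0, 2), (2, 0), (1, 1))):
    """A few normalized central moments, a simplified stand-in for the
    invariant-moment features named in the text."""
    ys, xs = np.mgrid[: region.shape[0], : region.shape[1]]
    m00 = region.sum() + 1e-8
    cx, cy = (xs * region).sum() / m00, (ys * region).sum() / m00
    feats = []
    for p, q in orders:
        mu = (((xs - cx) ** p) * ((ys - cy) ** q) * region).sum()
        feats.append(mu / m00 ** (1 + (p + q) / 2))
    return np.array(feats)

def first_layer_features(face):
    """Assemble a 9-element feature vector of the kind the first-layer
    classifier consumes; eye/mouth regions are fixed crops here, and each
    projection profile is summarized by its standard deviation."""
    eyes, mouth = face[20:40, :], face[55:75, :]   # assumed crop positions
    return np.concatenate([
        [horizontal_projection(eyes).std(), horizontal_projection(mouth).std()],
        [average_gray(eyes), average_gray(mouth)],
        [symmetry_correlation(face)],
        invariant_moments(face)[:1],
        invariant_moments(eyes)[:1],
        invariant_moments(mouth)[:1],
        [face.mean()],                              # crude "principal feature" stand-in
    ])

face = np.random.default_rng(0).random((80, 64))    # stand-in normalized face image
print(first_layer_features(face).shape)             # -> (9,)
```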
As an improvement of the above technical solution, the recognition methods that the conventional face recognition classifier in the second layer may adopt include principal-component (eigenface) analysis, neural networks, wavelet analysis, and support vector machine algorithms.
As an improvement of the above technical solution, the disguised-face recognition classifier in the second layer uses a neural network ensemble algorithm.
The present invention also proposes a layered recognition system for disguised faces, comprising an image acquisition unit for acquiring the facial image features of the subject, and a host computer that stores and runs the layered recognition classifiers and the database to analyze and recognize the acquired feature data. The image acquisition unit comprises a visible-light sensor and an infrared sensor operating in parallel. The classifiers running on the host comprise two layers. The first-layer classifier is a disguise-recognition classifier for distinguishing whether a face is disguised; it compares and matches the facial feature data acquired by the visible-light sensor with the generic feature data of disguised faces in the database, judges whether the subject's face is disguised, and classifies the disguise type. The second-layer classifier is an individual facial-feature recognition classifier for identifying the subject; it further comprises a conventional face recognition classifier for undisguised faces and a disguised-face recognition classifier for disguised faces. For an undisguised face, the conventional face recognition classifier is called directly to recognize the facial image feature data acquired by the visible-light sensor; for a disguised face, the disguised-face recognition classifier is called to recognize the facial image feature data acquired by the infrared sensor combined with that from the visible-light sensor, thereby determining the subject's identity.
As an improvement of the above technical solution, the visible-light sensor (11) is a CCD or CMOS image sensor.
The beneficial effects of the present invention are as follows:
The technique proposed by the present invention comprises two layers of classifiers. By learning from a large number of disguised-face samples, a first-layer classifier is built that determines whether a face is disguised; two classifiers in the second layer then recognize undisguised faces and disguised faces, respectively. The system combines a visible-light sensor and an infrared sensor to acquire a visible-light image and an infrared image of the subject's face, transmits them to a computer for layered recognition, and concludes whether the subject is a particular person in the database or is not in the database. The technique effectively solves the problem that conventional face recognition cannot recognize disguised faces, or recognizes them at a low rate, and it has broad application prospects.
Description of the Drawings
Figure 1 is a logic block diagram of the layered recognition algorithm for disguised faces;
Figure 2 is a logic block diagram of the first-layer classifier;
Figure 3 is a schematic diagram of the structure of a single-layer perceptron neural network;
Figure 4 is a schematic diagram of the structure of the neural network ensemble;
Figure 5 is a structural block diagram of the disguised-face recognition system.
Detailed Description
A layered recognition method for disguised faces, as shown in the logic block diagram of Figure 1, uses two layers of classifiers to perform layered recognition of the subject's face, wherein the first-layer classifier is a generic disguise-feature recognition classifier for distinguishing whether a face is disguised, and the second-layer classifier is an individual facial-feature recognition classifier for identifying the subject, the second-layer classifier further comprising a conventional face recognition classifier for undisguised faces and a disguised-face recognition classifier for disguised faces. The method comprises the following steps:
1. Acquire facial image feature data of the subject with a visible-light sensor;
2. Perform generic disguise-feature recognition in the first-layer classifier: compare the facial image feature data acquired by the visible-light sensor with the generic feature data of disguised faces stored in the database, judge whether the subject's face is disguised, and classify the disguise type;
3. Perform individual feature recognition in the second-layer classifier: for an undisguised face, directly call the conventional face recognition classifier to recognize the facial feature data acquired by the visible-light sensor, determine the subject's identity, output the result, and end the recognition; for a disguised face, go to step 4;
4. According to the disguise-type classification of step 2, acquire further image features of the disguised part of the subject's face with an infrared sensor, combine them with the image feature data of the undisguised part of the face acquired by the visible-light sensor, and call the disguised-face recognition classifier to determine the subject's identity and output the result, ending the recognition. The overall control flow of steps 1 to 4 is sketched below.
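The following is a minimal sketch of that control flow; the classifier objects, their method names, and the returned labels are assumptions for illustration, not part of the patent.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class LayeredRecognizer:
    """Two-layer pipeline: disguise detection first, then identity."""
    disguise_classifier: Callable   # visible features -> (is_disguised, disguise_type)
    plain_recognizer: Callable      # visible features -> identity or None
    disguised_recognizer: Callable  # (visible, infrared) features -> identity or None

    def identify(self, visible_features, acquire_infrared: Callable):
        # Step 2: the first layer decides whether the face is disguised and how.
        is_disguised, disguise_type = self.disguise_classifier(visible_features)
        if not is_disguised:
            # Step 3: undisguised face, conventional recognizer only.
            return self.plain_recognizer(visible_features)
        # Step 4: disguised face, capture infrared features of the disguised
        # region and fuse them with the undisguised visible features.
        infrared_features = acquire_infrared(disguise_type)
        return self.disguised_recognizer(visible_features, infrared_features)

# Toy usage with stub classifiers standing in for the trained models.
recognizer = LayeredRecognizer(
    disguise_classifier=lambda f: (sum(f) > 1.0, "sunglasses"),
    plain_recognizer=lambda f: "person_042",
    disguised_recognizer=lambda f, ir: "person_007",
)
print(recognizer.identify([0.2, 0.3], acquire_infrared=lambda t: [0.9, 0.1]))
```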
The first-layer classifier uses a feed-forward neural network to classify whether a face is disguised and, if so, the disguise type; its structure is shown in Figure 2. It is built by machine learning on a large number of face images and disguised-face images. Disguises include wearing glasses, having a beard, wearing makeup, and partially occluding the face. The result is a classifier, independent of any specific person, whose goal is to distinguish whether a face is disguised. The facial image features used include: regional horizontal-projection features to distinguish glasses and beards, regional average gray-level features to distinguish sunglasses and beards, a symmetry correlation coefficient and invariant-moment features to judge partial occlusion, and local eye and mouth features to judge makeup. The first-layer classifier is implemented as a three-layer feed-forward neural network. In Figure 2, x is the input facial image feature vector; nine features (x1-x9) are selected: the horizontal-projection feature of the eye region, the horizontal-projection feature of the mouth region, the average gray level of the eye region, the average gray level of the mouth region, the symmetry correlation coefficient, the invariant-moment feature, the principal feature, the local invariant-moment feature of the eyes, and the local invariant-moment feature of the mouth. The connection weights wij and biases bij of the network are obtained by machine learning. The activation function Φ1i of the first-layer nodes is the sigmoid function, and the activation function Φ2j of the second-layer nodes is a threshold function.
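A minimal sketch of such a three-layer feed-forward network is given below. It assumes nine input features, sigmoid hidden units (Φ1), and thresholded output units (Φ2), with one output for "disguised or not" and one per disguise type; the hidden-layer size and the random weights stand in for the wij and bij that the patent obtains by machine learning.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

class FirstLayerClassifier:
    """Three-layer feed-forward net: 9 facial features -> sigmoid hidden
    units -> thresholded outputs (disguised? plus one flag per disguise type)."""

    DISGUISE_TYPES = ("glasses", "beard", "makeup", "occlusion")

    def __init__(self, n_hidden=12, seed=0):
        n_in, n_out = 9, 1 + len(self.DISGUISE_TYPES)
        rng = np.random.default_rng(seed)
        # In the patent these weights and biases are learned from labelled
        # disguised/undisguised samples; random values are used here.
        self.W1 = rng.normal(scale=0.5, size=(n_in, n_hidden))
        self.b1 = np.zeros(n_hidden)
        self.W2 = rng.normal(scale=0.5, size=(n_hidden, n_out))
        self.b2 = np.zeros(n_out)

    def predict(self, x):
        hidden = sigmoid(x @ self.W1 + self.b1)        # Φ1: sigmoid activation
        outputs = (hidden @ self.W2 + self.b2) > 0.0   # Φ2: threshold activation
        is_disguised = bool(outputs[0])
        types = [t for t, on in zip(self.DISGUISE_TYPES, outputs[1:]) if on]
        return is_disguised, types

clf = FirstLayerClassifier()
x = np.random.default_rng(1).random(9)   # the nine features of Figure 2
print(clf.predict(x))
```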
For undisguised faces, the classifier in the second layer uses the eigenface method based on principal-component analysis; various other conventional recognition methods, such as neural networks, wavelet analysis, or support vector machines, may also be used.
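As one concrete reading of the eigenface method mentioned here, the following sketch projects vectorized face images onto the leading principal components of an enrolled gallery and identifies by nearest neighbour in that subspace; the toy gallery, image size, number of components, and rejection threshold are assumptions for illustration.

```python
import numpy as np

class Eigenfaces:
    """PCA-based matcher: project faces onto the principal components of the
    enrolled gallery, then identify by nearest neighbour in the subspace."""

    def fit(self, gallery, labels, n_components=8):
        X = gallery.reshape(len(gallery), -1).astype(float)
        self.mean = X.mean(axis=0)
        # Principal directions via SVD of the centred gallery.
        _, _, vt = np.linalg.svd(X - self.mean, full_matrices=False)
        self.components = vt[:n_components]              # the "eigenfaces"
        self.gallery_proj = (X - self.mean) @ self.components.T
        self.labels = list(labels)
        return self

    def identify(self, face, reject_threshold=None):
        proj = (face.reshape(-1).astype(float) - self.mean) @ self.components.T
        dists = np.linalg.norm(self.gallery_proj - proj, axis=1)
        best = int(np.argmin(dists))
        if reject_threshold is not None and dists[best] > reject_threshold:
            return None                                   # "not in the database"
        return self.labels[best]

rng = np.random.default_rng(0)
gallery = rng.random((10, 32, 32))                        # 10 enrolled faces (toy data)
model = Eigenfaces().fit(gallery, labels=[f"person_{i}" for i in range(10)])
print(model.identify(gallery[3] + 0.01 * rng.random((32, 32))))
```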
For disguised faces, the classifier in the second layer uses the facial features unaffected by the disguise, combined with interpolation to recover small facial regions affected by the disguise; for regions occluded over a larger area, recognition additionally uses infrared image features. The visible-light sensor has high resolution and captures a large amount of image information, but it is strongly affected by disguises. The infrared sensor is hardly affected by disguises, since it forms images of varying intensity from the temperature differences across the face, but its resolution is relatively low and its images carry less information. Combining the two allows disguised faces to be recognized more reliably.
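A minimal sketch of this fusion strategy, assuming a binary occlusion mask is available from the first layer: small occluded areas are filled by interpolation from neighbouring visible pixels, while larger occlusions cause infrared features to be appended. The area threshold and the row-wise interpolation are illustrative simplifications, not the patent's exact procedure.

```python
import numpy as np

SMALL_OCCLUSION_RATIO = 0.05   # assumed threshold for a "small-area" occlusion

def interpolate_masked(image, mask):
    """Fill masked pixels row by row by linear interpolation from unmasked
    pixels in the same row (crude recovery of small occluded regions)."""
    out = image.astype(float).copy()
    for r in range(out.shape[0]):
        bad = mask[r].astype(bool)
        if bad.any() and (~bad).any():
            cols = np.arange(out.shape[1])
            out[r, bad] = np.interp(cols[bad], cols[~bad], out[r, ~bad])
    return out

def fuse_features(visible, infrared, occlusion_mask):
    """Return a feature vector: recovered visible pixels for small occlusions,
    unaffected visible pixels plus infrared pixels for large occlusions."""
    if occlusion_mask.mean() <= SMALL_OCCLUSION_RATIO:
        return interpolate_masked(visible, occlusion_mask).ravel()
    visible_part = visible[~occlusion_mask.astype(bool)]   # unaffected pixels
    return np.concatenate([visible_part.ravel(), infrared.ravel()])

rng = np.random.default_rng(0)
visible = rng.random((64, 64))
infrared = rng.random((32, 32))                 # lower-resolution thermal image
mask = np.zeros((64, 64), dtype=bool)
mask[20:40, 10:50] = True                       # e.g. a sunglasses region
print(fuse_features(visible, infrared, mask).shape)
```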
The disguised-face classifier in the second layer is implemented with a neural network ensemble. A neural network ensemble combines several individual networks that solve the same problem into a new whole, in a designed manner. Each individual network is a single-layer perceptron, whose structure is shown in Figure 3. Different facial image features are selected according to the disguise type, including features from the image acquired by the visible-light sensor and features acquired by the infrared sensor.
The structure of the neural network ensemble is shown in Figure 4, where x is the input feature vector of each network, αi is the connection weight with which the networks are linearly combined into the ensemble, and hi(x) is the output of network i, taking values among the classes Ct. The final overall output of the ensemble is H, with decision function F given by H(x) = F(x) = argmax over Ct of S(Ct), where S(Ct) = Σ αi taken over the networks i for which hi(x) = Ct; that is, the argmax selects the value of Ct that maximizes S. This is in effect a weighted majority vote over the outputs of the individual networks.
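A minimal sketch of this weighted-majority-vote decision rule, assuming each member network exposes a prediction function that returns a class label; the stub members and their weights below are illustrative.

```python
import numpy as np
from collections import defaultdict

def ensemble_predict(x, members, alphas):
    """Weighted majority vote: H(x) = argmax_C of the sum of alpha_i over the
    members whose prediction h_i(x) equals class C."""
    scores = defaultdict(float)
    for predict, alpha in zip(members, alphas):
        scores[predict(x)] += alpha          # S(C_t) accumulates member weights
    return max(scores, key=scores.get)

# Toy usage: three stub "networks" with different ensemble weights.
members = [lambda x: "person_A", lambda x: "person_B", lambda x: "person_A"]
alphas = [0.4, 0.9, 0.3]
print(ensemble_predict(np.zeros(5), members, alphas))   # 0.7 vs 0.9 -> "person_B"
```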
The core of the neural network ensemble consists of two parts, the training of the networks and the classification weights:
(1) The networks participating in the ensemble are trained in turn, and each later network makes use of the training results of the networks trained before it: every training sample is given a weight, the weights of samples misclassified by the earlier networks are increased, and the weights of correctly classified samples are decreased. The training objective of each network is to minimize the weight sum of its misclassified samples, so that later networks pay more attention to the difficult, misclassified samples and the overall classification error is reduced;
(2) The ensemble weight coefficient αi is related to the classification ability of each network: the smaller a network's weight sum of misclassified training samples, the larger its ensemble weight coefficient αi.
Such an ensemble improves classification accuracy through the weighted combination of several structurally simple networks of modest individual accuracy. More importantly, the combination also counteracts the overfitting that can arise when many networks are combined.
In structural form a neural network ensemble resembles a complex neural network with one hidden layer and many hidden nodes, but the two differ fundamentally. A two-layer neural network is trained by machine learning as a single whole, so its structural design is more complex, it requires many training samples, it converges slowly, and it easily becomes trapped in local optima. The individual networks of an ensemble are designed and trained independently, and each has a relatively simple structure, so training is comparatively easy; a suitable combination of several simple networks then greatly improves overall performance. Moreover, it can be proved theoretically that the generalization error of the ensemble is bounded and depends only on the structural complexity of a single network, not on the number of networks participating in the ensemble.
The computation of the ensemble weight coefficients and the adjustment of the training-sample weights follow the algorithm summarized in Table 1:
Table 1: Training and optimization algorithm for the multi-classifier neural network ensemble
The basic idea is that the networks are trained one after another, and the training-sample weights for each later network are adjusted according to the results of the networks already trained: the weights of misclassified samples are increased and the weights of correctly classified samples are decreased, so that more attention is paid to the difficult, misclassified samples. The weight coefficient of each network in the final ensemble is related to its classification accuracy: the higher the accuracy, the larger the weight coefficient.
Let the training set be {(x1, y1), ..., (xm, ym)} with m training samples, where xi is the input feature of sample i and yi is its label; Dt(i) is the weight of sample i when the t-th network is trained; ht is the output of the t-th network and ht(xi) is its output for sample i; αt is the weight coefficient of the t-th network in the final ensemble; εt is the weight sum of the training samples misclassified by the t-th network during its training; wt and θt are the connection weights between the internal neurons of the t-th network and the neuron threshold parameters; and Zt is the sample-weight normalization factor, chosen so that the sample weights used for each network sum to 1.
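The contents of Table 1 are not reproduced in this text. Under the assumption that it follows the boosting-style procedure the notation above describes (sequential training on reweighted samples, αt derived from the weighted error εt, weights renormalized by Zt), a minimal sketch might look as follows; the weighted-perceptron inner loop and the specific update formulas are illustrative choices, not the patent's exact algorithm.

```python
import numpy as np

def train_weighted_perceptron(X, y, D, epochs=20, lr=0.1, seed=0):
    """Single-layer perceptron trained on weighted samples; updates are scaled
    by the sample weights D (a simplification of the inner training step)."""
    rng = np.random.default_rng(seed)
    w, b = rng.normal(scale=0.1, size=X.shape[1]), 0.0
    for _ in range(epochs):
        for i in range(len(X)):
            pred = 1 if X[i] @ w + b > 0 else -1
            if pred != y[i]:
                w += lr * D[i] * y[i] * X[i]
                b += lr * D[i] * y[i]
    return lambda x: 1 if x @ w + b > 0 else -1

def train_ensemble(X, y, T=5):
    """Boosting-style ensemble training: reweight samples after each network
    and weight each network by its (low) weighted error."""
    m = len(X)
    D = np.full(m, 1.0 / m)                  # D_t(i): sample weights
    members, alphas = [], []
    for t in range(T):
        h = train_weighted_perceptron(X, y, D, seed=t)
        preds = np.array([h(x) for x in X])
        eps = float(D[preds != y].sum())     # epsilon_t: weighted error
        eps = min(max(eps, 1e-10), 1 - 1e-10)
        alpha = 0.5 * np.log((1 - eps) / eps)    # alpha_t: ensemble weight
        # Increase weights of misclassified samples, decrease the rest,
        # then renormalize (the role of Z_t) so the weights sum to 1.
        D *= np.exp(-alpha * y * preds)
        D /= D.sum()
        members.append(h)
        alphas.append(alpha)
    return members, alphas

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 9))
y = np.where(X[:, 0] + X[:, 1] > 0, 1, -1)       # toy two-class problem
members, alphas = train_ensemble(X, y)
votes = sum(a * np.array([h(x) for x in X]) for h, a in zip(members, alphas))
print("ensemble training accuracy:", float((np.sign(votes) == y).mean()))
```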
On the basis of the above method, the present invention also proposes a disguised-face recognition system, composed of two parts, as shown in Figure 5: an image acquisition unit 1 for acquiring the facial image features of the subject, and a host computer 2 that carries and runs the layered recognition classifiers and the database to analyze and recognize the acquired feature data. The image acquisition unit comprises a visible-light sensor 11 and an infrared sensor 12 operating in parallel. The classifiers running on the host 2 comprise two layers. The first-layer classifier is a disguise-recognition classifier for distinguishing whether a face is disguised; in this layer, generic disguise-feature recognition is performed by comparing and matching the facial feature data acquired by the visible-light sensor 11 with the generic feature data of disguised faces in the database, judging whether the subject's face is disguised, and classifying the disguise type. The second-layer classifier is an individual facial-feature recognition classifier for identifying the subject, comprising a conventional face recognition classifier for undisguised faces and a disguised-face recognition classifier for disguised faces. For an undisguised face, the conventional face recognition classifier is called directly to recognize the facial feature data acquired by the visible-light sensor 11. For a face judged by the first-layer classifier to be disguised, the infrared sensor 12 acquires further image features of the disguised part of the subject's face; these are combined with the image feature data of the undisguised part acquired by the visible-light sensor, and the disguised-face recognition classifier is called to determine the subject's identity and output the result.
The visible-light sensor 11 may be a CCD or CMOS image sensor.
Claims (7)
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN2010105673226A CN102024145B (en) | 2010-12-01 | 2010-12-01 | Layered recognition method and system for disguised face |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN2010105673226A CN102024145B (en) | 2010-12-01 | 2010-12-01 | Layered recognition method and system for disguised face |
Publications (2)
Publication Number | Publication Date |
---|---|
CN102024145A CN102024145A (en) | 2011-04-20 |
CN102024145B true CN102024145B (en) | 2012-11-21 |
Family
ID=43865426
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN2010105673226A Expired - Fee Related CN102024145B (en) | 2010-12-01 | 2010-12-01 | Layered recognition method and system for disguised face |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN102024145B (en) |
Families Citing this family (21)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN102789572B (en) * | 2012-06-26 | 2015-07-01 | 五邑大学 | Living body face safety certification device and living body face safety certification method |
CN103093423A (en) * | 2012-11-30 | 2013-05-08 | 中国人民解放军61517部队 | Method of improving spatial feature similarity of screen surface and background space |
CN105095829B (en) * | 2014-04-29 | 2019-02-19 | 华为技术有限公司 | A kind of face identification method and system |
CN106062774B (en) * | 2014-11-15 | 2020-01-03 | 北京旷视科技有限公司 | Face detection using machine learning |
CN104616006B (en) * | 2015-03-11 | 2017-08-25 | 湖南智慧平安科技有限公司 | A kind of beard method for detecting human face towards monitor video |
CN104732220B (en) * | 2015-04-03 | 2017-12-22 | 中国人民解放军国防科学技术大学 | A kind of particular color human body detecting method towards monitor video |
CN105335712A (en) * | 2015-10-26 | 2016-02-17 | 小米科技有限责任公司 | Image recognition method, device and terminal |
CN107111750B (en) * | 2015-10-30 | 2020-06-05 | 微软技术许可有限责任公司 | Detection of deceptive faces |
CN106202211B (en) * | 2016-06-27 | 2019-12-13 | 四川大学 | An Integrated Microblog Rumor Identification Method Based on Microblog Type |
CN106156780B (en) * | 2016-06-29 | 2019-08-13 | 南京雅信科技集团有限公司 | The method of wrong report is excluded on track in foreign body intrusion identification |
CN106372601B (en) * | 2016-08-31 | 2020-12-22 | 上海依图信息技术有限公司 | Living body detection method and device based on infrared visible binocular images |
CN107798279B (en) * | 2016-09-07 | 2022-01-25 | 北京眼神科技有限公司 | Face living body detection method and device |
CN106599829A (en) * | 2016-12-09 | 2017-04-26 | 杭州宇泛智能科技有限公司 | Face anti-counterfeiting algorithm based on active near-infrared light |
CN106778673A (en) * | 2016-12-30 | 2017-05-31 | 易瓦特科技股份公司 | It is applied to the recognizable method and system of unmanned plane |
CN108629305B (en) * | 2018-04-27 | 2021-10-22 | 广州市中启正浩信息科技有限公司 | Face recognition method |
CN109117862B (en) * | 2018-06-29 | 2019-06-21 | 北京达佳互联信息技术有限公司 | Image tag recognition methods, device and server |
CN109165546B (en) * | 2018-07-06 | 2021-04-02 | 深圳市科脉技术股份有限公司 | Face recognition method and device |
CN109214361A (en) * | 2018-10-18 | 2019-01-15 | 康明飞(北京)科技有限公司 | A kind of face identification method and device and ticket verification method and device |
CN111723700B (en) * | 2020-06-08 | 2022-11-11 | 国网河北省电力有限公司信息通信分公司 | Face recognition method and device and electronic equipment |
CN111860428B (en) * | 2020-07-30 | 2024-06-21 | 上海华虹计通智能系统股份有限公司 | Face recognition system and method |
CN113298166A (en) * | 2021-06-01 | 2021-08-24 | 中科晶源微电子技术(北京)有限公司 | Defect classifier, defect classification method, device, equipment and storage medium |
Family Cites Families (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
TWI394085B (en) * | 2008-10-28 | 2013-04-21 | Asustek Comp Inc | Method of identifying the dimension of a shot subject |
CN101404060B (en) * | 2008-11-10 | 2010-06-30 | 北京航空航天大学 | A face recognition method based on the fusion of visible light and near-infrared Gabor information |
KR20100073191A (en) * | 2008-12-22 | 2010-07-01 | 한국전자통신연구원 | Method and apparatus for face liveness using range data |
- 2010-12-01 CN CN2010105673226A patent/CN102024145B/en not_active Expired - Fee Related
Also Published As
Publication number | Publication date |
---|---|
CN102024145A (en) | 2011-04-20 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
C06 | Publication | ||
PB01 | Publication | ||
C10 | Entry into substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
C14 | Grant of patent or utility model | ||
GR01 | Patent grant | ||
CF01 | Termination of patent right due to non-payment of annual fee |
Granted publication date: 20121121 Termination date: 20151201 |
EXPY | Termination of patent right or utility model |