
CN109002790A - Face recognition method, device, equipment and storage medium - Google Patents

Face recognition method, device, equipment and storage medium

Info

Publication number
CN109002790A
Authority
CN
China
Prior art keywords
face
neural network
recognition
image
network model
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201810759811.8A
Other languages
Chinese (zh)
Inventor
张玉兵
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guangzhou Shiyuan Electronics Technology Co Ltd
Original Assignee
Guangzhou Shiyuan Electronics Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guangzhou Shiyuan Electronics Technology Co Ltd
Priority to CN201810759811.8A
Publication of CN109002790A


Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00: Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10: Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16: Human faces, e.g. facial parts, sketches or expressions
    • G06V40/161: Detection; Localisation; Normalisation
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00: Pattern recognition
    • G06F18/20: Analysing
    • G06F18/21: Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214: Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00: Pattern recognition
    • G06F18/20: Analysing
    • G06F18/24: Classification techniques
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00: Arrangements for image or video recognition or understanding
    • G06V10/40: Extraction of image or video features
    • G06V10/46: Descriptors for shape, contour or point-related descriptors, e.g. scale invariant feature transform [SIFT] or bags of words [BoW]; Salient regional features
    • G06V10/462: Salient features, e.g. scale invariant feature transforms [SIFT]
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00: Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10: Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16: Human faces, e.g. facial parts, sketches or expressions
    • G06V40/168: Feature extraction; Face representation
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00: Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10: Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16: Human faces, e.g. facial parts, sketches or expressions
    • G06V40/172: Classification, e.g. identification

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Multimedia (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Health & Medical Sciences (AREA)
  • Human Computer Interaction (AREA)
  • General Health & Medical Sciences (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • General Engineering & Computer Science (AREA)
  • Evolutionary Computation (AREA)
  • Evolutionary Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Artificial Intelligence (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Image Analysis (AREA)
  • Image Processing (AREA)

Abstract

The invention discloses a face recognition method, apparatus, device and storage medium. The method comprises the following steps: constructing an image training sample set, wherein the image training sample set comprises a plurality of images containing human faces; training a face recognition neural network model using the image training sample set, wherein the face recognition neural network model is converged through a combined boundary loss function and the combined boundary loss function contains at least two boundary values; and taking the face recognition neural network model that reaches a preset training end condition as the finally obtained face recognition model. The method enables the face recognition neural network model to achieve a good balance between training effect and convergence.

Description

Face recognition method, apparatus, device and storage medium
Technical field
Embodiments of the present invention relate to neural network technology, and in particular to a face recognition method, apparatus, device and storage medium.
Background technique
Face recognition technology identifies people by their facial features in input face images or video streams. It first determines whether a face is present; if there is a face, it further determines the position and size of each face and the locations of the major facial organs. Based on this information, it then extracts the identity features contained in each face and compares them with known faces, so as to identify each person. The technology is increasingly used in a large number of scenarios, such as identity verification at bank VTMs (Video Teller Machines) and VIP member recognition in jewelry shops.
At present, deep learning models for the face recognition task are essentially trained by adding a loss function layer on top of the network (after the feature layer), which constrains and updates the parameters of the deep learning network model.
In implementing the present invention, the inventor found that the prior art has the following defect: when the model is constrained with an existing loss function, it cannot achieve a good balance between training effect and convergence. Either training fails to converge because the boundary constraint of the loss function is too strong, or the boundary constraint is relaxed in order to make the training process converge, so that the accuracy of the trained model is low. In both cases the parameters of the intermediate layers of the deep learning network cannot be updated well, and the trained model therefore performs poorly in practical face recognition tasks.
Summary of the invention
The present invention provides a face recognition method, apparatus, device and storage medium, so that the face recognition neural network model achieves a good balance between training effect and convergence.
In a first aspect, an embodiment of the present invention provides a face recognition method, comprising:
constructing an image training sample set, wherein the image training sample set comprises a plurality of images containing faces;
training a face recognition neural network model using the image training sample set, wherein the face recognition neural network model is converged through a combined boundary loss function, and the combined boundary loss function contains at least two boundary values; and
taking the face recognition neural network model that reaches a preset training end condition as the finally obtained face recognition model.
In a second aspect, an embodiment of the present invention further provides a face recognition apparatus, comprising:
a sample set construction module, configured to construct an image training sample set, wherein the image training sample set comprises a plurality of images containing faces;
a neural network model training module, configured to train a face recognition neural network model using the image training sample set, wherein the face recognition neural network model is converged through a combined boundary loss function, and the combined boundary loss function contains at least two boundary values; and
a face recognition model output module, configured to take the face recognition neural network model that reaches a preset training end condition as the finally obtained face recognition model.
In a third aspect, an embodiment of the present invention further provides a device, comprising:
one or more processors; and
a memory, configured to store one or more programs;
wherein, when the one or more programs are executed by the one or more processors, the one or more processors implement the face recognition method described in the embodiments of the present invention.
In a fourth aspect, an embodiment of the present invention further provides a computer-readable storage medium on which a computer program is stored, wherein the program, when executed by a processor, implements the face recognition method described in the embodiments of the present invention.
In the face recognition method, apparatus, device and storage medium provided above, an image training sample set is constructed, a face recognition neural network model is trained using the image training sample set, and a combined boundary loss function containing at least two boundary values is set to constrain the face recognition neural network model; when the face recognition neural network model reaches the preset training end condition, the finally obtained face recognition model is determined. This technical solution solves the problem in the prior art that, when a neural network model is constrained with a single-boundary loss function, the balance between training effect and convergence of the neural network model is poor. By using a loss function containing multiple boundary values, the mutual adjustment between the boundary values guarantees a soft convergence of the boundary, so that a higher face recognition accuracy is obtained while the convergence of the neural network model is guaranteed.
Detailed description of the invention
Fig. 1 is a flowchart of a face recognition method provided in Embodiment 1 of the present invention;
Fig. 2 is a flowchart of a face recognition method provided in Embodiment 2 of the present invention;
Fig. 3 is a flowchart of rotating a face image in the face recognition method provided in Embodiment 2 of the present invention;
Fig. 4 is a schematic diagram of rotating an image containing face key points provided in Embodiment 2 of the present invention;
Fig. 5 is a schematic diagram of constraining a residual neural network model with the combined boundary loss function provided in Embodiment 2 of the present invention;
Fig. 6 is a structural diagram of a face recognition apparatus provided in Embodiment 3 of the present invention;
Fig. 7 is a structural schematic diagram of a device provided in Embodiment 4 of the present invention.
Specific embodiment
The present invention is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described here are only used to explain the present invention and are not intended to limit it. It should also be noted that, for ease of description, the drawings show only the parts related to the present invention rather than the entire structure.
Embodiment one
Fig. 1 is a flowchart of a face recognition method provided in Embodiment 1 of the present invention. The method is applicable to confirming the identity of a person through face recognition. The method is executed by a face recognition apparatus, which is mainly implemented by software and/or hardware and can be integrated in a device capable of performing model training, such as a server.
The face recognition method in this embodiment is applied to face recognition technology. Face recognition technology combines computer image processing with the principles of biostatistics: computer image processing is used to extract facial feature points from a video (image), and the principles of biostatistics are used to analyze them and build a mathematical model, i.e. a facial feature template. The facial feature template is then compared with the face image of the person being measured, and a similarity value is given according to the result of the analysis. This similarity value determines which facial feature template in the database the measured person is closest to.
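A minimal sketch of this template-comparison step (the function name, cosine-similarity measure and acceptance threshold below are illustrative assumptions, not specified in the patent):

```python
import numpy as np

def identify(probe_feature, templates, threshold=0.5):
    """Compare one probe face feature against stored facial feature templates.

    templates maps person_id -> feature vector. Returns the closest identity,
    or None if no template reaches the acceptance threshold.
    """
    probe = probe_feature / np.linalg.norm(probe_feature)
    best_id, best_score = None, -1.0
    for person_id, template in templates.items():
        t = template / np.linalg.norm(template)
        score = float(np.dot(probe, t))          # similarity value for this template
        if score > best_score:
            best_id, best_score = person_id, score
    return (best_id, best_score) if best_score >= threshold else (None, best_score)
```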
In conjunction with Fig. 1, the present embodiment specifically comprises the following steps:
S110. Construct an image training sample set, wherein the image training sample set comprises a plurality of images containing faces.
The face images are obtained by shooting with a capture device. When a user appears within the shooting range of the capture device, the device can automatically search for the user and capture images containing the user's face. Optionally, the capture device may be any device with an image acquisition function, such as a video camera or a camera. Taking a video camera as an example, the picture obtained through the lens can confirm whether a user appears within the shooting range; if so, images containing the user's face are captured. The lens can capture different kinds of face images, such as still images, dynamic images, images from different acquisition positions and images with different expressions. It should be noted that the "user" mentioned above does not refer to a particular user, but to any user appearing within the shooting range of the capture device.
Specifically, a plurality of images belonging to different faces are collected. For example, suppose there are M (M > 2) users to be collected and N (N > 2) images are acquired for each user. These N images may be taken at different angles, such as head shots, half-body or full-body shots, frontal views, side views, top views or bottom views, as long as they contain a face. The image training sample set constructed from these images then contains M × N face images. It should be noted that the above is only an example; in practical applications, the number of images per user is not limited to N, and different users may have different numbers of images. The more images (N) acquired per user, the more material is available for subsequently constructing the image training sample set, the face identity training sets and the verification set. Acquiring multiple face images for each user ensures that the face recognition apparatus obtains accurate, multi-angle features of the user's face, which in turn guarantees the accuracy of the subsequently trained face recognition model.
S120. Train the face recognition neural network model using the image training sample set.
The face recognition neural network model is converged through a combined boundary loss function, and the combined boundary loss function contains at least two boundary values.
A neural network (NN) is a complex network system formed by a large number of simple processing units (called neurons) that are widely interconnected. It reflects many essential characteristics of the human brain and is a highly complex nonlinear dynamic learning system. Neural networks offer large-scale parallelism, distributed storage and processing, self-organization, self-adaptation and self-learning, and are particularly suitable for problems that must consider many factors and conditions simultaneously and process imprecise and fuzzy information. In this embodiment, the neural network model being trained is used for face recognition and is therefore called the face recognition neural network model. The loss function measures how well the face recognition neural network model performs; loss function = loss term + regularization term, and, simply put, the smaller the loss, the better the model. A boundary value is a parameter that fine-tunes the loss function. A combined boundary loss function is a loss function with two or more boundary values. Compared with a loss function with only one boundary value, fine-tuning a combined boundary loss function takes two or more boundaries into account, so the combined boundary loss function has a wider scope of application.
Specifically, since this embodiment does not limit which neural network model is used, any neural network model can be trained. A face recognition neural network model is built, the image training sample set containing multiple images of multiple faces is used to train it, and the model is constrained by a combined boundary loss function containing two or more boundary values. Chain-rule differentiation is applied to the combined boundary loss function to obtain the gradient of each boundary value parameter and of the other parameters of the face recognition neural network model, and the trained face recognition neural network model is then updated according to the stochastic gradient descent rule and these gradients.
S130. Take the face recognition neural network model that reaches the preset training end condition as the finally obtained face recognition model.
Training the face recognition neural network model is a continuously repeated process; each training pass lets the model adjust its own parameters and weights so that it better meets the training objective. The preset end condition means that certain conditions are set so that, once they are met, the face recognition neural network model fixes its own parameters and weights for actual use. The face recognition model is the model obtained after the parameters of the dynamically changing face recognition neural network model are fixed and no longer change; in essence, the face recognition model is still a face recognition neural network model.
Specifically, the face recognition neural network model is trained until it reaches the preset training end condition, and the parameters of the model that reaches the preset training end condition are fixed; the resulting model is used as the face recognition model in actual applications. The specific content of the preset end condition can be set according to actual needs; generally, once the face recognition neural network model reaches the preset training end condition, its recognition accuracy and convergence are considered to have reached the effect desired by the user.
In this embodiment, an image training sample set is constructed, the face recognition neural network model is trained with it, and a combined boundary loss function containing at least two boundary values is set to constrain the face recognition neural network model; when the face recognition neural network model reaches the preset training end condition, the finally obtained face recognition model is determined. This solves the problem in the prior art that, when a neural network model is constrained with a single-boundary loss function, the balance between training effect and convergence is poor. By using a loss function with multiple boundary values, the mutual adjustment between the boundary values guarantees a soft convergence of the boundary, so that a higher face recognition accuracy is obtained while the convergence of the neural network model is guaranteed.
Embodiment two
Fig. 2 is a flowchart of a face recognition method provided in Embodiment 2 of the present invention. This embodiment is a refinement of Embodiment 1 and mainly describes how to construct the image training sample set, how to train the face recognition neural network model, and how to confirm that training has reached the desired effect. Specifically:
Constructing the image training sample set comprises:
collecting a plurality of images, wherein the images contain faces; and
marking the faces in the images, and forming the marked images into the image training sample set.
Marking the faces in the images comprises:
identifying the faces in the images;
collecting the images belonging to the same face into the same category; and
numbering the images and the faces respectively, wherein the numbers are determined according to the category to which each image belongs.
After the faces in the images are marked, the method further comprises:
identifying the key point positions of the faces in the marked images; and
rotating the marked images so that the faces in the marked images present the same angle, wherein the rotation angle is determined according to the key point positions.
Forming the marked images into the image training sample set comprises: forming the rotated images into the image training sample set.
The key point positions of a face in an image include at least one of eye positions, nose position and mouth corner positions.
The combined boundary loss function has the following expression:

$$L = -\frac{1}{m}\sum_{i=1}^{m}\log\frac{e^{s\left(\cos\left(m_1\theta_{y_i}+m_2\right)-m_3\right)}}{e^{s\left(\cos\left(m_1\theta_{y_i}+m_2\right)-m_3\right)}+\sum_{j=1,\,j\neq y_i}^{n}e^{s\cos\theta_j}}$$

where m denotes the total number of input images in one training pass, n denotes the number of different faces in the input images, m1 is the first boundary value, m2 is the second boundary value, m3 is the third boundary value, yi denotes the number corresponding to the i-th face, s = ||xi||, xi denotes the output of the i-th image after the feature extraction layers of the face recognition neural network model, s·cosθj = ||Wj|| ||xi|| cosθj, ||Wj|| ||xi|| cosθj = WjTxi, W denotes the weight parameter vector of the final fully connected layer of the face recognition neural network model, the final fully connected layer being the last layer of the face recognition neural network model, ||Wj|| = 1, WjTxi denotes the dot product of the j-th row vector of W and xi and represents the output of the final fully connected layer, θj denotes the angle between the j-th row vector of W and xi, and θ_{yi} denotes θj with j = yi.
The face recognition neural network model is a residual neural network model.
Training the face recognition neural network model using the image training sample set comprises:
building a residual neural network model;
training the residual neural network model with the image training sample set, wherein the residual neural network model is converged through the combined boundary loss function during training;
applying chain-rule differentiation to the combined boundary loss function to obtain the gradient of each boundary value parameter of the residual neural network model, wherein the boundary value parameters include the first boundary value, the second boundary value and the third boundary value; and
updating the trained residual neural network model according to the stochastic gradient descent rule and the gradients.
The preset training end condition includes at least one of the following:
the test accuracy of the face recognition neural network model on the verification set converges to a preset accuracy;
the learning rate of the face recognition neural network model falls below a preset learning rate; and
the number of training iterations of the face recognition neural network model exceeds a preset number.
The verification set includes a first preset number of positive sample pairs and a second preset number of negative sample pairs. The positive sample pairs and the negative sample pairs are randomly drawn from a set image collection, and the set image collection is the set of images not used in the image training sample set.
The test accuracy formula is:

$$P=\frac{\sum_{a=1}^{y} y_a+\sum_{b=1}^{z} z_b}{y+z}$$

where y denotes the first preset number, z denotes the second preset number, ya denotes the judgment result of the face recognition neural network model on the a-th positive sample pair, and zb denotes the judgment result of the face recognition neural network model on the b-th negative sample pair; the judgment result is a first value for a correct result and a second value for a wrong result.
In conjunction with Fig. 2, the method for constructing the image training sample set provided in this embodiment includes:
S210. Collect a plurality of images, wherein the images contain faces.
Collecting images containing faces means that the acquired images are preliminarily screened and irrelevant images are discarded; in this preliminary screening, only images containing faces need to be selected.
Specifically, face images can be collected in a specific application scenario, i.e. the usage scenario corresponding to the engineering project, such as bank VTM verification or jewelry shop VIP recognition. Video images are captured with a camera and stored, via network transmission and data lines, on a hard disk or another location from which the face recognition apparatus can obtain the information. Face detection is then performed on the video images stored on the hard disk, and image crops containing faces are extracted and stored.
S220. Mark the faces in the images and construct the image training sample set.
Marking means numbering the images and mapping them. There are two kinds of numbers: the number of the image itself and the number of the face. Mapping means identifying the face in each image and associating the face number with the image number; that is, the images belonging to the same face are collected into the same category, and the images and the face categories are numbered. Identifying the faces in the images during mapping means classifying the faces manually. Numbering the images and the faces respectively establishes the correspondence between each image and the face it belongs to.
Specifically, the face images that have been detected and extracted are identified manually, and the images belonging to the same person are grouped together and labeled. Suppose the total number of persons is M and N face images have been acquired for each person. Each of the M persons is then given a class label (each person's category number), usually starting from 0, so the M persons correspond to the M category numbers 0, 1, 2, ..., M-1. Taking the face with category number 0 as an example, its N face images can be numbered as {(PER0), (PHO1)}, {(PER0), (PHO2)}, ..., {(PER0), (PHON)}.
Optionally, the marked images are formed into the image training sample set.
Specifically, the marked images can directly constitute the image training sample set. In this case, each image in the training set not only has its own number but also has a correspondence with the face it belongs to, which facilitates training on the images corresponding to different faces during the subsequent training.
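A minimal sketch of this labeling step (the directory layout, file naming and helper name are illustrative assumptions): images are grouped per person, each person receives a category number starting from 0, and each record keeps both its photo number and its person number:

```python
import os

def build_labeled_sample_set(root_dir):
    """Return (image_path, photo_number, person_number) records.

    Assumes one sub-directory per person, e.g. root_dir/person_000/img_01.jpg.
    """
    samples = []
    person_dirs = sorted(d for d in os.listdir(root_dir)
                         if os.path.isdir(os.path.join(root_dir, d)))
    for person_number, person_dir in enumerate(person_dirs):      # category numbers 0 .. M-1
        image_files = sorted(os.listdir(os.path.join(root_dir, person_dir)))
        for photo_number, filename in enumerate(image_files, start=1):
            samples.append((os.path.join(root_dir, person_dir, filename),
                            photo_number, person_number))
    return samples
```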
Specifically, considering that in practical applications the actual acquisition angles of the multiple face images of the same user may differ, and that different acquisition angles may introduce errors into the subsequent model training, in an optional setting of this embodiment the image training sample set is not constructed directly after the faces in the images are marked; instead, a rotation operation is performed on the images to guarantee that the faces in the multiple images of the same user present the same angle. Specifically, Fig. 3 is a flowchart of rotating a face image in the face recognition method provided in Embodiment 2 of the present invention. With reference to Fig. 3, the rotation process refers to S221–S223:
S221. Identify the key point positions of the faces in the marked images.
The key points of a face image include at least one of eye positions, nose position and mouth corner positions. Typically, the shape of the facial organs and the distances between them yield feature data that facilitate face classification; the feature components usually include the Euclidean distances, curvatures and angles between feature points. A face is composed of parts such as the eyes, nose, mouth and chin; geometric descriptions of these parts and of the structural relations between them can be used as important features for recognizing a face, and these features are called geometric features.
Specifically, key point identification is performed on the face images in the image training sample set that have been classified by category number. Key point identification can be solved by ensemble learning, which is currently a very popular machine learning approach: it is not a single machine learning algorithm, but completes the learning task by constructing and combining multiple learners. Ensemble learning can be used for classification ensembles, regression ensembles, feature selection ensembles, outlier detection ensembles, and so on. The key points can of course also be marked manually.
S222. Rotate the marked images so that the faces in the marked images present the same angle, wherein the rotation angle is determined according to the key point positions.
Presenting the same angle means that the line between the marked key points of the face lies within a certain error range, for example the line between the two eyes is adjusted to be horizontal. The rotation makes the faces in the images present the same angle, reducing the influence of face angle on face recognition. A face image that has been rotated in this way can be called a "standard image".
Specifically, the rotation angle is determined according to the key point positions, for example by determining the angle between the line connecting the two eyes and the horizontal line and deriving the rotation angle from that angle.
Optionally, after the images containing faces are marked, the key point positions in each image are identified in turn, and the key point positions are used to judge whether the image is a standard image; if not, the rotation angle is determined according to the key point positions and the image is rotated by that angle.
Specifically, Fig. 4 is a schematic diagram of rotating an image containing face key points provided in Embodiment 2 of the present invention. The key point positions are marked in the directly captured tilted image 21, and the tilted image 21 is adjusted so that the line between the two eyes is horizontal, yielding the rotated standard image 22.
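A minimal sketch of this eye-based alignment under the rule stated above (the line between the two eyes is brought to horizontal); the use of OpenCV and the function name are assumptions for illustration:

```python
import cv2
import numpy as np

def rotate_to_standard(image, left_eye, right_eye):
    """Rotate the face image so that the line between the two eyes is horizontal."""
    dx = right_eye[0] - left_eye[0]
    dy = right_eye[1] - left_eye[1]
    angle = np.degrees(np.arctan2(dy, dx))          # tilt of the eye line w.r.t. horizontal
    center = ((left_eye[0] + right_eye[0]) / 2.0,
              (left_eye[1] + right_eye[1]) / 2.0)   # rotate about the mid-point of the eyes
    rotation = cv2.getRotationMatrix2D(center, angle, 1.0)
    h, w = image.shape[:2]
    return cv2.warpAffine(image, rotation, (w, h))  # rotated "standard image"
```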
S223. Form the rotated images into the image training sample set.
At this point, the rotated images can be formed into the image training sample set; each image in the image training sample set then not only has its corresponding number but is also a standard image.
S230. Build a residual neural network model.
The depth of a neural network model has a great influence on the final classification and recognition effect. With a conventional stack of layers (a plain network), the effect becomes worse and worse as the network gets very deep; one reason is that the deeper the network, the more obvious the vanishing-gradient phenomenon becomes, so the network does not train well. A shallower network, however, cannot significantly improve the recognition effect either. A residual neural network can deepen the network while solving the vanishing-gradient problem. Since the face recognition neural network model is a deep neural network, the face recognition neural network in this embodiment is set to be a residual neural network.
Specifically, a residual neural network model is built whose final fully connected layer takes as input the output of the i-th image after the feature extraction layers of the residual neural network model. The final fully connected layer of the residual neural network model can be understood as the last fully connected layer; the number of neurons of this final fully connected layer equals the number of faces in the image training sample set. For example, if the image training sample set contains the faces of M users, the number of neurons of the final fully connected layer is M.
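An illustrative sketch of such a residual backbone with an M-way final fully connected layer, using torchvision's ResNet as a stand-in (the specific backbone and feature dimension are assumptions, not fixed by the patent):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F
from torchvision.models import resnet50

class FaceRecognitionNet(nn.Module):
    """Residual feature extractor followed by a final fully connected layer with M neurons."""

    def __init__(self, num_faces, feature_dim=512):
        super().__init__()
        backbone = resnet50(weights=None)
        backbone.fc = nn.Linear(backbone.fc.in_features, feature_dim)  # feature-extraction output x_i
        self.feature_extractor = backbone
        # weight matrix W of the final fully connected layer; row W_j is compared with x_i
        self.final_fc_weight = nn.Parameter(torch.randn(num_faces, feature_dim) * 0.01)

    def forward(self, images):
        features = self.feature_extractor(images)            # x_i
        w = F.normalize(self.final_fc_weight, dim=1)          # ||W_j|| = 1
        cos_theta = F.normalize(features, dim=1) @ w.t()      # cos θ_j for each of the M classes
        return features, cos_theta
```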
S240. Train the residual neural network model with the image training sample set; during training, the residual neural network is converged through the combined boundary loss function.
The combined boundary loss function has the expression

$$L = -\frac{1}{m}\sum_{i=1}^{m}\log\frac{e^{s\left(\cos\left(m_1\theta_{y_i}+m_2\right)-m_3\right)}}{e^{s\left(\cos\left(m_1\theta_{y_i}+m_2\right)-m_3\right)}+\sum_{j=1,\,j\neq y_i}^{n}e^{s\cos\theta_j}}$$

where m, n, the boundary values m1, m2 and m3, yi, s, xi, W, Wj, θj and θ_{yi} are as defined above.
Specifically, Fig. 5 is a schematic diagram of constraining the residual neural network model with the combined boundary loss function provided in Embodiment 2 of the present invention. With reference to Fig. 5, xi denotes the output of the i-th image after the feature extraction layers of the face recognition neural network model; the maximum value of i is m, where m denotes the total number of input images in one training pass (batch).
The output xi of the feature extraction layers is the input of the final fully connected layer. The final fully connected layer contains the weight parameters W carried on the edges of its neurons, where the weight Wj of each edge denotes the j-th row vector of W. The final fully connected layer also contains the computation performed by the neurons: the dot product of vector Wj and vector xi (the product is denoted Zj; the maximum value of j is n, where n denotes the number of actual faces in the input images). Zj can be written as Zj = WjTxi = ||Wj|| ||xi|| cosθj, where θj denotes the angle between vector Wj and vector xi. In actual model training, to simplify the computation, ||Wj|| is usually set to 1 and ||xi|| = s, so WjTxi = ||Wj|| ||xi|| cosθj = s·cosθj, where cosθj denotes the cosine distance between Wj and xi, i.e. the distance between the output of the i-th face image after the feature extraction layers and the actual face information of the i-th face image.
For ease of computation, the equation Zj = WjTxi = ||Wj|| ||xi|| cosθj = s·cosθj is simplified, and the output data of the final fully connected layer is substituted into the combined loss function to compute the loss value. The loss value indicates the gap between the i-th image and its corresponding actual face information.
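A minimal PyTorch sketch of a combined-margin loss of this kind, assuming the three boundary values act on the target-class angle as cos(m1·θ + m2) − m3; the exact functional form and the default values below are assumptions for illustration:

```python
import torch
import torch.nn.functional as F

def combined_margin_loss(cos_theta, labels, s=64.0, m1=1.0, m2=0.3, m3=0.2):
    """Softmax loss whose target-class logit is penalized by three boundary values.

    cos_theta: (batch, n) cosine similarities cos θ_j from the final fully connected layer.
    labels:    (batch,) target face numbers y_i.
    """
    theta = torch.acos(cos_theta.clamp(-1 + 1e-7, 1 - 1e-7))
    target_theta = theta.gather(1, labels.view(-1, 1)).squeeze(1)   # θ_{y_i}
    target_logit = torch.cos(m1 * target_theta + m2) - m3           # combined boundary on the target
    logits = cos_theta.clone()
    logits.scatter_(1, labels.view(-1, 1), target_logit.view(-1, 1))
    return F.cross_entropy(s * logits, labels)                      # -log softmax of the target class
```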
S250. Apply chain-rule differentiation to the combined boundary loss function to obtain the gradient of each boundary value parameter in the residual neural network model.
The boundary value parameters include the first boundary value, the second boundary value and the third boundary value. In general, three boundary values are sufficient to balance accuracy and convergence.
Differentiation is the basis of calculus and an important pillar of its computation. Some key concepts in physics, geometry, economics and other disciplines can be expressed with derivatives; for example, a derivative can express the instantaneous velocity and acceleration of a moving object, the slope of a curve at a point, or limits and elasticities in economics. Chain-rule differentiation is used to differentiate a composite function and is a common method in calculus: the derivative of a composite function is the product of the derivatives of its constituent functions at the corresponding points, linked together like a chain, hence the name chain rule. The gradient is a vector indicating the direction along which the directional derivative of a function at a point attains its maximum, i.e. the direction in which the function changes fastest at that point, the magnitude of the gradient being the maximum rate of change; the gradient value is the value of this vector.
Specifically, the gradient of each parameter in the residual neural network model can be obtained from the formula of the combined loss function and the chain rule of differentiation. Similarly, applying chain-rule differentiation to the combined boundary loss function yields the gradient of each boundary value parameter in the residual neural network model.
S260. Update the trained residual neural network model according to the stochastic gradient descent rule and the gradients.
Gradient descent is a common method for solving unconstrained optimization problems and has the advantage of being simple to implement. It is an iterative algorithm in which each step requires the gradient vector of the objective function. Stochastic gradient descent updates the model by examining training examples at each iteration, and the step length differs at each step.
Specifically, each time the face recognition neural network model is trained with a face identity training set (batch), the gradient of every parameter of the face recognition neural network model and of every boundary value parameter is obtained through the combined boundary loss function and the chain rule of differentiation, and the face recognition neural network model is then immediately updated by gradient descent. Another face identity training set (batch) is then fed into the updated face recognition neural network model, and the above training, differentiation and update operations are repeated.
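A schematic training loop under these rules, reusing the combined_margin_loss and FaceRecognitionNet sketches above (autograd stands in for the chain-rule differentiation; the optimizer settings and the choice to treat the boundary values as trainable parameters are illustrative assumptions):

```python
import torch

def train_one_epoch(model, loader, s=64.0, lr=0.01, device="cpu"):
    """One pass over the face identity training sets (batches) with stochastic gradient descent."""
    # the three boundary values are updated alongside the network weights
    m1 = torch.tensor(1.0, requires_grad=True, device=device)
    m2 = torch.tensor(0.3, requires_grad=True, device=device)
    m3 = torch.tensor(0.2, requires_grad=True, device=device)
    optimizer = torch.optim.SGD(list(model.parameters()) + [m1, m2, m3], lr=lr)
    model.train()
    for images, labels in loader:                   # one face identity training set per step
        images, labels = images.to(device), labels.to(device)
        _, cos_theta = model(images)
        loss = combined_margin_loss(cos_theta, labels, s, m1, m2, m3)
        optimizer.zero_grad()
        loss.backward()                             # chain-rule gradients for weights and boundary values
        optimizer.step()                            # stochastic gradient descent update
```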
S270. Take the face recognition neural network model that reaches the preset training end condition as the finally obtained face recognition model.
The face recognition neural network model continuously adjusts its model parameters and boundary value parameters during training. When the face recognition neural network model reaches the preset training end condition, the model parameters and boundary value parameters at that moment are fixed, so that the model can be used in real life, and the face recognition neural network model at that moment is finally named the face recognition model. The face recognition model has determined model parameters and boundary parameters.
The preset training end condition includes at least one of the following: first, the test accuracy of the face recognition neural network model on the verification set converges to a preset accuracy; second, the learning rate of the face recognition neural network model falls below a preset learning rate; third, the number of training iterations of the face recognition neural network model exceeds a preset number.
For the first training end condition, the verification set includes a first preset number of positive sample pairs and a second preset number of negative sample pairs; the positive sample pairs and the negative sample pairs are randomly drawn from a set image collection, which is the set of images not used in the image training sample set. The test accuracy formula is $P=\frac{\sum_{a=1}^{y} y_a+\sum_{b=1}^{z} z_b}{y+z}$, where y denotes the first preset number, z denotes the second preset number, ya denotes the judgment result of the face recognition neural network model on the a-th positive sample pair, and zb denotes the judgment result on the b-th negative sample pair; the judgment result is a first value for a correct result and a second value for a wrong result.
Specifically, suppose the images of N people take part in constructing the image training sample set and the images of K (K < N) people take part in constructing the face identity training sets (batches); the images of the remaining N − K people are then used to make the verification set. A face identity training set is one batch; one image training sample set yields many face identity training sets. The verification set includes a first preset number of positive sample pairs and a second preset number of negative sample pairs, where the first preset number and the second preset number may be equal or different; in this embodiment both are set to 3000. A positive sample pair consists of two images of the same face, and a negative sample pair consists of two images of different faces. The test rule is: if the two images of a positive sample pair are judged to be the same person, the judgment is correct; if the two images of a negative sample pair are judged not to be the same person, the judgment is correct. A correct judgment is denoted 1 and a wrong judgment is denoted 0. In this case, the test accuracy can be written as $P=\frac{\sum_{a=1}^{y} y_a+\sum_{b=1}^{z} z_b}{y+z}$: when the face recognition neural network model judges a sample pair correctly, it is denoted 1; when it judges wrongly, it is denoted 0. After each training pass with a face identity training set (batch), the test accuracy is computed on the verification set; training can end when the test accuracy of the face recognition neural network model on the verification set no longer improves steadily.
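A sketch of this verification-set check (the 0.5 cosine threshold used to decide "same person" is an assumption; the patent only specifies that correct judgments count as 1 and wrong ones as 0):

```python
import numpy as np

def verification_accuracy(model_embed, positive_pairs, negative_pairs, threshold=0.5):
    """Test accuracy P = (correct positive + correct negative judgments) / (y + z)."""
    def same_person(img_a, img_b):
        fa, fb = model_embed(img_a), model_embed(img_b)
        cos = np.dot(fa, fb) / (np.linalg.norm(fa) * np.linalg.norm(fb))
        return cos >= threshold
    y_correct = sum(1 for a, b in positive_pairs if same_person(a, b))       # the y_a values
    z_correct = sum(1 for a, b in negative_pairs if not same_person(a, b))   # the z_b values
    return (y_correct + z_correct) / (len(positive_pairs) + len(negative_pairs))
```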
For the second training end condition, the learning rate refers to how much the new learning efficiency improves over the old learning efficiency after each time the face recognition neural network model adjusts its parameters through training with a face identity training set (batch).
Specifically, the learning rate of the face recognition neural network model falling below the preset learning rate generally means falling below 0.000001.
For the third training end condition, the number of training iterations refers to the number of times the face recognition neural network model has been trained with a face identity training set (batch).
Specifically, the number of training iterations of the face recognition neural network model exceeding the preset number generally means exceeding 50000.
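The three end conditions can be combined into a simple check such as the following sketch (the threshold values are the examples given above; the accuracy-plateau test is an illustrative assumption):

```python
def should_stop(accuracy_history, learning_rate, iterations,
                min_lr=1e-6, max_iterations=50000, patience=10, eps=1e-4):
    """Return True when any of the three preset training end conditions is met."""
    # 1. test accuracy on the verification set has converged (no steady improvement)
    converged = (len(accuracy_history) > patience and
                 max(accuracy_history[-patience:]) - accuracy_history[-patience - 1] < eps)
    # 2. learning rate has dropped below the preset learning rate
    lr_too_small = learning_rate < min_lr
    # 3. number of training iterations exceeds the preset number
    too_many_iterations = iterations > max_iterations
    return converged or lr_too_small or too_many_iterations
```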
In this embodiment, the face recognition neural network model is trained with the image training sample set, and a combined boundary loss function containing at least two boundary values is set to constrain the face recognition neural network model. This solves the problem in the prior art that, when a neural network model is constrained with a single-boundary loss function, the balance between training effect and convergence is poor. By using a loss function containing multiple boundary values, this embodiment realizes a soft convergence of the boundary through the mutual adjustment between the boundary values, and achieves a higher face recognition accuracy while guaranteeing the convergence of the neural network model.
Embodiment three
Fig. 6 is a structural diagram of a face recognition apparatus provided in Embodiment 3 of the present invention. The apparatus includes a sample set construction module 41, a neural network model training module 42 and a face recognition model output module 43, wherein:
the sample set construction module 41 is configured to construct an image training sample set, the image training sample set comprising a plurality of images containing faces;
the neural network model training module 42 is configured to train a face recognition neural network model using the image training sample set, the face recognition neural network model being converged through a combined boundary loss function that contains at least two boundary values; and
the face recognition model output module 43 is configured to take the face recognition neural network model that reaches the preset training end condition as the finally obtained face recognition model.
In this embodiment, the face recognition neural network model is trained with the image training sample set and constrained by a combined boundary loss function containing at least two boundary values, which solves the prior-art problem that a single-boundary loss function gives a poor balance between training effect and convergence; the mutual adjustment between the multiple boundary values realizes a soft convergence of the boundary, so that a higher face recognition accuracy is achieved while the convergence of the neural network model is guaranteed.
On the basis of the above embodiments, the sample set construction module is specifically configured to:
collect a plurality of images, wherein the images contain faces; and
mark the faces in the images and form the marked images into the image training sample set.
On the basis of the above embodiments, the sample set construction module further includes a face marking sub-module, configured to:
identify the faces in the images;
collect the images belonging to the same face into the same category; and
number the images and the faces respectively, wherein the numbers are determined according to the category to which each image belongs.
On the basis of the above embodiments, the sample set construction module further includes a face rotation sub-module, configured to:
identify the key point positions of the faces in the marked images, wherein the key point positions of a face include at least one of eye positions, nose position and mouth corner positions;
rotate the marked images so that the faces in the marked images present the same angle, wherein the rotation angle is determined according to the key point positions; and
form the rotated images into the image training sample set, i.e. forming the marked images into the image training sample set comprises forming the rotated images into the image training sample set.
On the basis of the above embodiments, in the neural network model training module the combined boundary loss function has the expression

$$L = -\frac{1}{m}\sum_{i=1}^{m}\log\frac{e^{s\left(\cos\left(m_1\theta_{y_i}+m_2\right)-m_3\right)}}{e^{s\left(\cos\left(m_1\theta_{y_i}+m_2\right)-m_3\right)}+\sum_{j=1,\,j\neq y_i}^{n}e^{s\cos\theta_j}}$$

where m, n, the boundary values m1, m2 and m3, yi, s, xi, W, Wj, θj and θ_{yi} are as defined in the method embodiments.
On the basis of the above embodiments, in the neural network model training module the face recognition neural network model is a residual neural network model, and training the face recognition neural network model using the image training sample set comprises:
building a residual neural network model;
training the residual neural network model with the image training sample set, wherein the residual neural network model is converged through the combined boundary loss function during training;
applying chain-rule differentiation to the combined boundary loss function to obtain the gradient of each boundary value parameter of the residual neural network model, wherein the boundary value parameters include the first boundary value, the second boundary value and the third boundary value; and
updating the trained residual neural network model according to the stochastic gradient descent rule and the gradients.
On the basis of the above embodiments, the face recognition model output module further includes a training end sub-module, configured to end model training when the preset training end condition is reached, wherein the preset training end condition includes at least one of the following:
the test accuracy of the face recognition neural network model on the verification set converges to a preset accuracy;
the learning rate of the face recognition neural network model falls below a preset learning rate; and
the number of training iterations of the face recognition neural network model exceeds a preset number.
On the basis of the above embodiments, in the training end sub-module the verification set includes a first preset number of positive sample pairs and a second preset number of negative sample pairs; the positive sample pairs and the negative sample pairs are randomly drawn from a set image collection, which is the set of images not used in the image training sample set.
On the basis of the above embodiments, in the training end sub-module the test accuracy formula is

$$P=\frac{\sum_{a=1}^{y} y_a+\sum_{b=1}^{z} z_b}{y+z}$$

where y denotes the first preset number, z denotes the second preset number, ya denotes the judgment result of the face recognition neural network model on the a-th positive sample pair, and zb denotes the judgment result of the face recognition neural network model on the b-th negative sample pair; the judgment result is a first value for a correct result and a second value for a wrong result.
The face recognition apparatus provided in this embodiment can be used to execute the face recognition method provided in any of the above embodiments, and has the corresponding functions and beneficial effects.
Example IV
Fig. 7 is a structural schematic diagram of a device provided in Embodiment 4 of the present invention. As shown in Fig. 7, the device includes a processor 50, a memory 51, a communication module 52, an input apparatus 53 and an output apparatus 54. The number of processors 50 in the device may be one or more; one processor 50 is taken as an example in Fig. 7. The processor 50, memory 51, communication module 52, input apparatus 53 and output apparatus 54 in the device may be connected by a bus or in other ways; connection by a bus is taken as an example in Fig. 7.
As a computer-readable storage medium, the memory 51 can be used to store software programs, computer-executable programs and modules, such as the modules corresponding to the face recognition method of this embodiment (for example, the sample set construction module 41, the neural network model training module 42 and the face recognition model output module 43 in the face recognition apparatus). The processor 50 runs the software programs, instructions and modules stored in the memory 51, thereby executing the various functional applications and data processing of the device, i.e. implementing the face recognition method described above.
The memory 51 may mainly include a program storage area and a data storage area, wherein the program storage area may store the operating system and the application programs required by at least one function, and the data storage area may store data created according to the use of the device. In addition, the memory 51 may include a high-speed random access memory and may also include a non-volatile memory, such as at least one magnetic disk storage device, flash memory device or other non-volatile solid-state storage device. In some examples, the memory 51 may further include memories remotely located relative to the processor 50, and these remote memories may be connected to the device through a network. Examples of such networks include, but are not limited to, the Internet, intranets, local area networks, mobile communication networks and combinations thereof.
The communication module 52 is used to establish a connection with a display screen and realize data interaction with it. The input apparatus 53 can be used to receive input numeric or character information and to generate key signal inputs related to user settings and function control of the device.
The device provided in this embodiment can execute the face recognition method provided in any embodiment of the present invention and has the corresponding functions and beneficial effects.
Embodiment Five
Embodiment Five of the present invention further provides a storage medium containing computer-executable instructions, where the computer-executable instructions, when executed by a computer processor, are used to perform a face recognition method, the method comprising:
constructing an image training sample set, the image training sample set including a plurality of images containing faces;
training a face recognition neural network model using the image training sample set, the face recognition neural network model being converged by a combined boundary loss function, the combined boundary loss function containing at least two boundary values;
taking the face recognition neural network model that reaches a preset training termination condition as the finally obtained face recognition model.
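To make the training step above more concrete, the following is a minimal PyTorch-style sketch of a classification head driven by a combined boundary (margin) loss with three boundary values m1, m2 and m3. It is an illustration only, not the patented implementation: the class and parameter names are invented, a fixed scale s is used for simplicity (whereas the description ties s to the feature norm), and the margin combination cos(m1*theta + m2) - m3 follows the commonly used ArcFace-style formulation cited by this application.

import torch
import torch.nn as nn
import torch.nn.functional as F

class CombinedMarginHead(nn.Module):
    # Classification head whose loss applies three boundary values to the
    # target-class angle: cos(m1 * theta_yi + m2) - m3, then scales by s.
    def __init__(self, feat_dim, num_classes, s=64.0, m1=1.0, m2=0.5, m3=0.0):
        super().__init__()
        self.weight = nn.Parameter(torch.empty(num_classes, feat_dim))
        nn.init.xavier_uniform_(self.weight)
        self.s, self.m1, self.m2, self.m3 = s, m1, m2, m3

    def forward(self, features, labels):
        # cos(theta_j) from normalized features and normalized class weights,
        # so that W_j^T x_i equals cos(theta_j) when both norms are 1.
        cosine = F.linear(F.normalize(features), F.normalize(self.weight))
        theta = torch.acos(cosine.clamp(-1.0 + 1e-7, 1.0 - 1e-7))
        # Apply the combined margin only to the ground-truth class y_i.
        target_theta = theta.gather(1, labels.view(-1, 1))
        target_logit = torch.cos(self.m1 * target_theta + self.m2) - self.m3
        logits = cosine.clone()
        logits.scatter_(1, labels.view(-1, 1), target_logit)
        # Softmax cross-entropy over the scaled logits converges the model.
        return F.cross_entropy(self.s * logits, labels)

In training, such a head would be attached to the features produced by the backbone network (for example a residual network), and the backbone weights, and optionally the boundary values themselves, would be updated by stochastic gradient descent until a preset termination condition is reached.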
Of course, in the storage medium containing computer-executable instructions provided by this embodiment of the present invention, the computer-executable instructions are not limited to the method operations described above, and may also perform relevant operations in the face recognition method provided by any embodiment of the present invention.
From the above description of the embodiments, those skilled in the art can clearly understand that the present invention may be implemented by software plus necessary general-purpose hardware, and certainly may also be implemented by hardware, although in many cases the former is the better implementation. Based on such an understanding, the technical solution of the present invention, in essence or in the part contributing to the prior art, may be embodied in the form of a software product. The computer software product may be stored in a computer readable storage medium, such as a floppy disk, a read-only memory (ROM), a random access memory (RAM), a flash memory (FLASH), a hard disk or an optical disc of a computer, and includes several instructions for causing a computer device (which may be a personal computer, a server, a network device or the like) to perform the method described in each embodiment of the present invention.
It is worth noting that, in the above embodiment of the face recognition device, the included units and modules are only divided according to functional logic, but the division is not limited thereto, as long as the corresponding functions can be realized; in addition, the specific names of the functional units are only for the convenience of distinguishing them from each other and are not intended to limit the protection scope of the present invention.
Note that the above are only preferred embodiments of the present invention and the technical principles applied. Those skilled in the art will understand that the present invention is not limited to the specific embodiments described herein, and that various obvious changes, readjustments and substitutions can be made without departing from the protection scope of the present invention. Therefore, although the present invention has been described in further detail through the above embodiments, the present invention is not limited to the above embodiments, and may also include more other equivalent embodiments without departing from the concept of the present invention; the scope of the present invention is determined by the scope of the appended claims.

Claims (13)

1. A face recognition method, characterized by comprising:
constructing an image training sample set, the image training sample set including a plurality of images containing faces;
training a face recognition neural network model using the image training sample set, the face recognition neural network model being converged by a combined boundary loss function, the combined boundary loss function containing at least two boundary values;
and taking the face recognition neural network model that reaches a preset training termination condition as the finally obtained face recognition model.
2. The method according to claim 1, characterized in that the combined boundary loss function is expressed as:
L = -\frac{1}{m}\sum_{i=1}^{m}\log\frac{e^{s\left(\cos\left(m_{1}\theta_{y_{i}}+m_{2}\right)-m_{3}\right)}}{e^{s\left(\cos\left(m_{1}\theta_{y_{i}}+m_{2}\right)-m_{3}\right)}+\sum_{j=1,\,j\neq y_{i}}^{n}e^{s\cos\theta_{j}}}
wherein m denotes the total number of input images in one training pass, n denotes the number of different faces in the input images, m_{1} is the first boundary value, m_{2} is the second boundary value, m_{3} is the third boundary value, y_{i} denotes the number corresponding to the i-th face, s = \|x_{i}\|, x_{i} denotes the output of the i-th image after the feature extraction layers of the face recognition neural network model, s\cos\theta_{j} = \|W_{j}\|\|x_{i}\|\cos\theta_{j}, \|W_{j}\|\|x_{i}\|\cos\theta_{j} = W_{j}^{T}x_{i}, W denotes the weight parameter matrix of the final fully connected layer in the face recognition neural network model, the final fully connected layer being the last layer of the face recognition neural network model, \|W_{j}\| = 1, W_{j}^{T}x_{i} denotes the dot product of the j-th row vector of W and x_{i} and represents the output of the final fully connected layer, \theta_{j} denotes the angle between the j-th row vector of W and x_{i}, and \theta_{y_{i}} denotes \theta_{j} with j = y_{i}.
3. The method according to claim 2, characterized in that the face recognition neural network model is a residual neural network model;
and the training a face recognition neural network model using the image training sample set comprises:
building the residual neural network model;
training the residual neural network model using the image training sample set, the residual neural network being converged by the combined boundary loss function during training;
performing chain-rule differentiation on the combined boundary loss function to obtain a gradient value of each boundary value parameter in the residual neural network model, the boundary value parameters including the first boundary value, the second boundary value and the third boundary value;
and updating the trained residual neural network model according to a stochastic gradient descent rule and the gradient values.
4. The method according to claim 1, characterized in that the constructing an image training sample set comprises:
collecting a plurality of images, the images containing faces;
and annotating the faces in the images, and forming the annotated images into an image training sample set.
5. The method according to claim 4, characterized in that the annotating the faces in the images comprises:
identifying the faces in the images;
grouping images belonging to the same face into the same category;
and numbering the images and the faces respectively, the numbers being determined according to the category to which each image belongs.
6. The method according to claim 4, characterized by, after the annotating the faces in the images, further comprising:
identifying key point positions of the faces in the annotated images;
rotating the annotated images so that the faces in the annotated images present the same angle, wherein the rotation angle is determined according to the key point positions;
and the forming the annotated images into an image training sample set comprises:
forming the rotated images into the image training sample set.
7. The method according to claim 6, characterized in that the key point positions of the faces in the images include at least one of an eye position, a nose position and a mouth corner position.
8. The method according to claim 1, characterized in that the preset training termination condition includes at least one of the following:
a test accuracy of the face recognition neural network model on a verification set converges to a preset accuracy;
a learning rate of the face recognition neural network model is reduced below a preset learning rate;
and the number of training iterations of the face recognition neural network model exceeds a preset number.
9. The method according to claim 8, characterized in that the verification set includes a first preset number of positive sample pairs and a second preset number of negative sample pairs, the positive sample pairs and the negative sample pairs being obtained by random selection from a set image set, the set image set being the set of images not used in the image training sample set.
10. The method according to claim 9, characterized in that the test accuracy is calculated as:
\text{accuracy} = \frac{\sum_{a=1}^{y} y_{a} + \sum_{b=1}^{z} z_{b}}{y+z}
wherein y denotes the first preset number, z denotes the second preset number, y_{a} denotes the judgment result of the face recognition neural network model for the a-th positive sample pair, z_{b} denotes the judgment result of the face recognition neural network model for the b-th negative sample pair, and the judgment result takes a first numerical value for a correct result and a second numerical value for an incorrect result.
11. A face recognition device, characterized by comprising:
a sample set construction module, configured to construct an image training sample set, the image training sample set including a plurality of images containing faces;
a neural network model training module, configured to train a face recognition neural network model using the image training sample set, the face recognition neural network model being converged by a combined boundary loss function, the combined boundary loss function containing at least two boundary values;
and a face recognition model output module, configured to take the face recognition neural network model that reaches a preset training termination condition as the finally obtained face recognition model.
12. A device, characterized by comprising:
one or more processors;
and a memory, configured to store one or more programs;
wherein, when the one or more programs are executed by the one or more processors, the one or more processors implement the face recognition method according to any one of claims 1-10.
13. A computer readable storage medium on which a computer program is stored, characterized in that the program, when executed by a processor, implements the face recognition method according to any one of claims 1-10.
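The following two sketches informally illustrate steps described in the claims above; all function names, threshold values and helper assumptions are illustrative and do not come from the application. The first sketch shows one common way to rotate an annotated image so that the faces present the same angle, using the eye key points (claims 6 and 7); it assumes OpenCV-style images and previously detected landmark coordinates.

import cv2
import numpy as np

def align_face(image, left_eye, right_eye):
    # Rotate the image so that the line through the two eye key points
    # becomes horizontal; the rotation angle is determined by the key points.
    dx = right_eye[0] - left_eye[0]
    dy = right_eye[1] - left_eye[1]
    angle = np.degrees(np.arctan2(dy, dx))
    center = ((left_eye[0] + right_eye[0]) / 2.0,
              (left_eye[1] + right_eye[1]) / 2.0)
    rotation = cv2.getRotationMatrix2D(center, angle, 1.0)
    h, w = image.shape[:2]
    return cv2.warpAffine(image, rotation, (w, h))

The second sketch estimates the test accuracy on a verification set of positive and negative sample pairs (claims 8 to 10) by thresholding the cosine similarity of extracted features; a pair contributes 1 to the numerator when it is judged correctly and 0 otherwise, matching the accuracy expression in claim 10. The similarity threshold and the assumption that the model maps an image tensor directly to a feature vector are assumptions for the sake of the example.

import torch
import torch.nn.functional as F

@torch.no_grad()
def verification_accuracy(model, positive_pairs, negative_pairs, threshold=0.5):
    # Judge a pair as "same face" when the cosine similarity of the two
    # normalized feature vectors exceeds the threshold.
    def same_face(img_a, img_b):
        feat_a = F.normalize(model(img_a.unsqueeze(0)), dim=1)
        feat_b = F.normalize(model(img_b.unsqueeze(0)), dim=1)
        return (feat_a * feat_b).sum().item() > threshold

    correct = sum(1 for a, b in positive_pairs if same_face(a, b))       # y_a
    correct += sum(1 for a, b in negative_pairs if not same_face(a, b))  # z_b
    return correct / (len(positive_pairs) + len(negative_pairs))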
CN201810759811.8A 2018-07-11 2018-07-11 Face recognition method, device, equipment and storage medium Pending CN109002790A (en)

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201810759811.8A CN109002790A (en) 2018-07-11 2018-07-11 Face recognition method, device, equipment and storage medium

Publications (1)

Publication Number Publication Date
CN109002790A true CN109002790A (en) 2018-12-14

Family

ID=64599411

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810759811.8A Pending CN109002790A (en) 2018-07-11 2018-07-11 Face recognition method, device, equipment and storage medium

Country Status (1)

Country Link
CN (1) CN109002790A (en)

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105760859A (en) * 2016-03-22 2016-07-13 中国科学院自动化研究所 Method and device for identifying reticulate pattern face image based on multi-task convolutional neural network
WO2018102700A1 (en) * 2016-12-01 2018-06-07 Pinscreen, Inc. Photorealistic facial texture inference using deep neural networks
CN107679451A (en) * 2017-08-25 2018-02-09 百度在线网络技术(北京)有限公司 Establish the method, apparatus, equipment and computer-readable storage medium of human face recognition model

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
JIANKANG DENG et al.: "ArcFace: Additive Angular Margin Loss for Deep Face Recognition", arXiv:1801.07698v1 *
WEIYANG LIU et al.: "SphereFace: Deep Hypersphere Embedding for Face Recognition", The IEEE Conference on Computer Vision and Pattern Recognition (CVPR) *
TAO JIANG (陶将): "ArcFace for Faces (人脸之ArcFace)", online publication: HTTPS://BLOG.CSDN.NET/WEIXIN_42111770/ARTICLE/DETAILS/80692843 *

Cited By (19)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109657615A (en) * 2018-12-19 2019-04-19 腾讯科技(深圳)有限公司 A kind of training method of target detection, device and terminal device
CN109657615B (en) * 2018-12-19 2021-11-02 腾讯科技(深圳)有限公司 Training method and device for target detection and terminal equipment
CN109858362A (en) * 2018-12-28 2019-06-07 浙江工业大学 A kind of mobile terminal method for detecting human face based on inversion residual error structure and angle associated losses function
CN111415301A (en) * 2019-01-07 2020-07-14 珠海金山办公软件有限公司 Image processing method and device and computer readable storage medium
CN111415301B (en) * 2019-01-07 2024-03-12 珠海金山办公软件有限公司 Image processing method, device and computer readable storage medium
CN110009052A (en) * 2019-04-11 2019-07-12 腾讯科技(深圳)有限公司 A kind of method of image recognition, the method and device of image recognition model training
CN110009052B (en) * 2019-04-11 2022-11-18 腾讯科技(深圳)有限公司 Image recognition method, image recognition model training method and device
CN112241664A (en) * 2019-07-18 2021-01-19 顺丰科技有限公司 Face recognition method, face recognition device, server and storage medium
CN112784953A (en) * 2019-11-07 2021-05-11 佳能株式会社 Training method and device of object recognition model
WO2021184553A1 (en) * 2020-03-16 2021-09-23 平安科技(深圳)有限公司 Face recognition model training method and apparatus, computer device, and storage medium
CN113496227A (en) * 2020-04-08 2021-10-12 顺丰科技有限公司 Training method and device of character recognition model, server and storage medium
CN111860316A (en) * 2020-07-20 2020-10-30 上海汽车集团股份有限公司 Driving behavior recognition method and device and storage medium
CN111860316B (en) * 2020-07-20 2024-03-19 上海汽车集团股份有限公司 Driving behavior recognition method, device and storage medium
WO2021139316A1 (en) * 2020-07-31 2021-07-15 平安科技(深圳)有限公司 Method and apparatus for establishing expression recognition model, and computer device and storage medium
CN111898550B (en) * 2020-07-31 2023-12-29 平安科技(深圳)有限公司 Expression recognition model building method and device, computer equipment and storage medium
CN111898550A (en) * 2020-07-31 2020-11-06 平安科技(深圳)有限公司 Method and device for establishing expression recognition model, computer equipment and storage medium
CN112308055A (en) * 2020-12-30 2021-02-02 北京沃东天骏信息技术有限公司 Evaluation method and device of face retrieval system, electronic equipment and storage medium
CN112613480A (en) * 2021-01-04 2021-04-06 上海明略人工智能(集团)有限公司 Face recognition method, face recognition system, electronic equipment and storage medium
WO2023123926A1 (en) * 2021-12-28 2023-07-06 苏州浪潮智能科技有限公司 Artificial intelligence task processing method and apparatus, electronic device, and readable storage medium

Similar Documents

Publication Publication Date Title
CN109002790A (en) Face recognition method, device, equipment and storage medium
CN107451607B (en) A kind of personal identification method of the typical character based on deep learning
CN104239858B (en) A kind of method and apparatus of face characteristic checking
CN108921058A (en) Fish identification method, medium, terminal device and device based on deep learning
CN106469302A (en) A kind of face skin quality detection method based on artificial neural network
CN109583445A (en) Character image correction processing method, device, equipment and storage medium
CN100440246C (en) Positioning method for human face characteristic point
CN108960086A (en) Based on the multi-pose human body target tracking method for generating confrontation network positive sample enhancing
Hu et al. Research on abnormal behavior detection of online examination based on image information
CN109711281A (en) A kind of pedestrian based on deep learning identifies again identifies fusion method with feature
CN108304820A (en) A kind of method for detecting human face, device and terminal device
CN107832802A (en) Quality of human face image evaluation method and device based on face alignment
CN107133616A (en) A kind of non-division character locating and recognition methods based on deep learning
CN106407958B (en) Face feature detection method based on double-layer cascade
CN110119672A (en) A kind of embedded fatigue state detection system and method
CN108304876A (en) Disaggregated model training method, device and sorting technique and device
CN108986064A (en) A kind of people flow rate statistical method, equipment and system
CN107563995A (en) A kind of confrontation network method of more arbiter error-duration models
CN109615387A (en) A kind of consumption and payment system and method based on recognition of face
CN108416265A (en) A kind of method for detecting human face, device, equipment and storage medium
Indi et al. Detection of malpractice in e-exams by head pose and gaze estimation
CN109993021A (en) The positive face detecting method of face, device and electronic equipment
CN106339006A (en) Object tracking method of aircraft and apparatus thereof
WO2021135639A1 (en) Living body detection method and apparatus
Aung et al. Who Are They Looking At? Automatic Eye Gaze Following for Classroom Observation Video Analysis.

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication (Application publication date: 20181214)