CN107066983A - Identity verification method and device - Google Patents
- Publication number
- CN107066983A CN107066983A CN201710261931.0A CN201710261931A CN107066983A CN 107066983 A CN107066983 A CN 107066983A CN 201710261931 A CN201710261931 A CN 201710261931A CN 107066983 A CN107066983 A CN 107066983A
- Authority
- CN
- China
- Prior art keywords
- facial image
- verified
- target
- user
- information
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/161—Detection; Localisation; Normalisation
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/40—Spoof detection, e.g. liveness detection
- G06V40/45—Detection of the body part being alive
Landscapes
- Engineering & Computer Science (AREA)
- Human Computer Interaction (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Multimedia (AREA)
- Theoretical Computer Science (AREA)
- Health & Medical Sciences (AREA)
- General Health & Medical Sciences (AREA)
- Oral & Maxillofacial Surgery (AREA)
- Collating Specific Patterns (AREA)
- Image Analysis (AREA)
Abstract
The invention discloses an identity verification method and device. The identity verification method includes: providing action prompt information to an object to be verified; obtaining video stream data of the object to be verified, the video stream data being successive frames of facial images collected while the object to be verified performs a corresponding action according to the action prompt information; determining a target facial image according to the video stream data; determining, according to the target facial image, a confidence level that the object to be verified is a living body; and performing identity verification on the object to be verified according to the confidence level and the target facial image. The identity verification method can effectively block various types of attacks during face recognition, such as photos, videos, and head models, and is simple and secure.
Description
Technical field
The present invention relates to the field of computer technology, and in particular to an identity verification method and device.
Background technology
With the continuous development and popularization of terminal technology, applications on terminals are also continuously increasing. Because some applications involve a large amount of user privacy, to ensure information security a login operation needs to be performed during use, so as to confirm the user's identity.
At present, the common user login method is mainly account-and-password login. This login mode requires the user to manually enter an account and password at every login, which is cumbersome. To address this problem, the industry has proposed face-login methods based on face recognition technology. Face recognition is a biometric identification technology that identifies a person based on facial feature information. A face-login method mainly uses the terminal's video camera or camera to collect a facial image and then matches the facial image against a template image; a successful match indicates that the user's identity is verified and the login page can be entered, sparing the user the trouble of manually entering an account and password. The method is simple. However, existing identity verification methods carry a certain potential safety hazard: a malicious user can obtain a user's facial image through various illegal means (for example, obtaining static photos through social networks, intercepting video, or taking candid shots) and use it to pass identity verification. Security is low, and the user's personal privacy and property safety are seriously threatened.
Summary of the invention
An object of the present invention is to provide an identity verification method and device, to solve the technical problem that existing identity verification methods have low security.
In order to solve the above technical problem, an embodiment of the present invention provides the following technical solution:
An identity verification method, including:

providing action prompt information to an object to be verified;

obtaining video stream data of the object to be verified, the video stream data being successive frames of facial images collected while the object to be verified performs a corresponding action according to the action prompt information;

determining a target facial image according to the video stream data;

determining, according to the target facial image, a confidence level that the object to be verified is a living body;

performing identity verification on the object to be verified according to the confidence level and the target facial image.
In order to solve the above technical problem, an embodiment of the present invention further provides the following technical solution:
An identity verification device, including:

a providing module, configured to provide action prompt information to an object to be verified;

an obtaining module, configured to obtain video stream data of the object to be verified, the video stream data being successive frames of facial images collected while the object to be verified performs a corresponding action according to the action prompt information;

a first determining module, configured to determine a target facial image according to the video stream data;

a second determining module, configured to determine, according to the target facial image, a confidence level that the object to be verified is a living body;

a verification module, configured to perform identity verification on the object to be verified according to the confidence level and the target facial image.
In the identity verification method and device of the present invention, action prompt information is provided to an object to be verified, and video stream data of the object to be verified is obtained, the video stream data being successive frames of facial images collected while the object to be verified performs a corresponding action according to the action prompt information. Afterwards, a target facial image is determined according to the video stream data, and a confidence level that the object to be verified is a living body is determined according to the target facial image. Afterwards, identity verification is performed on the object to be verified according to the confidence level and the target facial image. The method can effectively block various types of attacks during face recognition, such as photos, videos, and head models, and is simple and secure.
Brief description of the drawings
The technical solution of the present invention and its other beneficial effects will become apparent from the following detailed description of the embodiments of the present invention in conjunction with the accompanying drawings.
Fig. 1 is a schematic flowchart of an identity verification method provided by an embodiment of the present invention;

Fig. 2a is a schematic flowchart of an identity verification method provided by an embodiment of the present invention;

Fig. 2b is a schematic flowchart of user identity verification in a meeting sign-in system provided by an embodiment of the present invention;

Fig. 3a is a schematic structural diagram of an identity verification device provided by an embodiment of the present invention;

Fig. 3b is a schematic structural diagram of another identity verification device provided by an embodiment of the present invention;

Fig. 3c is a schematic structural diagram of a verification submodule provided by an embodiment of the present invention;

Fig. 4 is a schematic structural diagram of a network device provided by an embodiment of the present invention.
Embodiment
The technical solutions in the embodiments of the present invention will be described clearly and completely below in conjunction with the accompanying drawings. Obviously, the described embodiments are only some, rather than all, of the embodiments of the present invention. All other embodiments obtained by those skilled in the art based on the embodiments of the present invention without creative work fall within the protection scope of the present invention.
The embodiments of the present invention provide an identity verification method, device, and system, each of which is described in detail below. It should be noted that the numbering of the following embodiments does not limit the preferred order of the embodiments.
First embodiment
This embodiment is described from the perspective of an identity verification device. The identity verification device may be implemented as an independent entity, or integrated in a network device, for example in a terminal or a server.
An identity verification method includes: providing action prompt information to an object to be verified; obtaining video stream data of the object to be verified, the video stream data being successive frames of facial images collected while the object to be verified performs a corresponding action according to the action prompt information; afterwards, determining a target facial image according to the video stream data, and determining, according to the target facial image, a confidence level that the object to be verified is a living body; and afterwards, performing identity verification on the object to be verified according to the confidence level and the target facial image.
As shown in Fig. 1, the specific flow of the identity verification method may be as follows:

S101. Providing action prompt information to an object to be verified.

In this embodiment, the action prompt information is mainly used to prompt the user to perform a specified action, such as shaking the head or blinking, and may be displayed in the form of a prompt box or a prompt interface. When the user clicks a button on the interactive interface, such as "face login", the provision of the action prompt information can be triggered.
S102. Obtaining video stream data of the object to be verified, the video stream data being successive frames of facial images collected while the object to be verified performs a corresponding action according to the action prompt information.

In this embodiment, the video stream data may be a segment of video collected within a specified time (for example, one minute). It is mainly image data of the user's face and may specifically be collected by a video collection device such as a camera.
S103. Determining a target facial image according to the video stream data.
For example, the above step S103 may specifically include:

1-1. Obtaining the key point set of each frame of facial image in the video stream data and the position information of each key point in the key point set.

In this embodiment, the key points in the key point set mainly refer to feature points in the facial image, that is, points where the image gray value changes sharply, or points of large curvature on image edges (intersections of two edges), such as the eyes, eyebrows, nose, mouth, and face contour. The key point extraction may specifically be performed by models such as ASM (Active Shape Model) or AAM (Active Appearance Model). The position information mainly refers to two-dimensional coordinates relative to a certain reference frame, such as the face collection interface displayed by the terminal.
1-2. Determining the movement trajectory of the object to be verified according to the key point set and position information of each frame of facial image.

In this embodiment, the movement trajectory mainly refers to the route formed by the whole face or a local region from the start to the end of the action while the object to be verified performs a corresponding action according to the action prompt information, such as a blinking trajectory or a head-shaking trajectory. Specifically, a three-dimensional face model of the object to be verified may first be determined according to the position-change information of certain important key points in each frame of facial image (such as the eyes, the corners of the mouth, the cheek edge, and the nose) and the angles and relative distances between these important key points; the three-dimensional coordinates of each key point are obtained, and the movement trajectory is then determined according to the three-dimensional coordinates of any key point.
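Steps 1-1 and 1-2 can be sketched as follows. This is a minimal illustration, not the patent's implementation: it skips the three-dimensional face model and instead assumes a per-frame deviation (yaw) angle can be approximated from the 2D x-coordinates of three key points; the function names and the eye/nose heuristic are hypothetical.

```python
def estimate_yaw(left_eye_x, right_eye_x, nose_x):
    # Crude hypothetical heuristic: when the nose tip sits midway between the
    # eye corners the face is frontal (0 deg); offsets map linearly to an angle.
    ratio = (nose_x - left_eye_x) / (right_eye_x - left_eye_x)
    return (ratio - 0.5) * 90.0

def movement_trajectory(frames):
    # frames: per-frame key-point x-coordinates (the "position information")
    return [estimate_yaw(f["left_eye_x"], f["right_eye_x"], f["nose_x"])
            for f in frames]

frames = [
    {"left_eye_x": 100, "right_eye_x": 200, "nose_x": 150},  # roughly frontal
    {"left_eye_x": 100, "right_eye_x": 200, "nose_x": 160},  # turning
    {"left_eye_x": 100, "right_eye_x": 200, "nose_x": 175},  # turned further
]
print(movement_trajectory(frames))
```

The resulting list of per-frame angles is the "movement trajectory" that the next step tests against the preset condition.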
1-3. Determining the target facial image from the video stream data according to the movement trajectory.

For example, the above step 1-3 may specifically include:

judging whether the movement trajectory meets a preset condition;

if so, choosing the facial image corresponding to a desired trajectory point from the video stream data as the target facial image;

if not, generating a verification result indicating that the object to be verified is an illegal user.
In this embodiment, the preset condition mainly depends on the characteristics of human actions. Considering that human actions are continuous, the preset condition may be set so that the movement trajectory contains multiple preset trajectory points, such as a 5° deviation-angle point, a 15° deviation-angle point, and a 30° deviation-angle point; alternatively, the preset condition may be set so that the number of trajectory points in the movement trajectory reaches a certain value, such as 10. The desired trajectory point may be determined according to actual requirements. For example, considering that the more key points there are on a facial image, the more accurate the conclusion, the 0° deviation-angle point may be chosen as the desired trajectory point; that is, the frontal facial image is chosen as the target facial image. Of course, considering that the user's collection may not start from the 0° deviation-angle point, the desired trajectory point may be any point within a small interval containing the 0° deviation-angle point, rather than a single point.
There are mainly two kinds of illegal users: unknown live users and virtual users. An unknown live user mainly refers to a live user who has not registered or been authenticated on the system platform; a virtual user mainly refers to a fake live user forged by a criminal from a legitimate user's single photo, video, or head model (that is, formed by screen recapture). Specifically, when the movement trajectory meets the specified condition, the target facial image was not formed by recapturing a single photo or multiple photos; at this time, it is still necessary to further confirm, according to picture texture features, whether the target facial image was forged by video recapture or a head model. When the movement trajectory does not meet the specified condition, for example when there are only two or three trajectory points, the object to be verified is very likely a fake live user forged from a single photo or multiple photos of the user by recapture; at this time, it can be directly determined to be an illegal user, and the user is prompted to perform detection again.
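A sketch of the step 1-3 logic under stated assumptions: the trajectory is the list of per-frame deviation angles, the preset condition requires the trajectory to pass near each preset trajectory point (the 5°/15°/30° example above), and the target facial image is the frame whose angle falls in a small interval around 0°. The tolerances and function names are illustrative, not from the patent.

```python
def meets_precondition(trajectory, waypoints=(5.0, 15.0, 30.0), tol=2.5):
    # The movement must pass near every preset trajectory point.
    return all(any(abs(a - w) <= tol for a in trajectory) for w in waypoints)

def pick_target_frame(trajectory, tol=3.0):
    # Choose the frame nearest 0 deg (frontal face), accepting a small
    # interval around 0 deg rather than the exact point.
    best = min(range(len(trajectory)), key=lambda i: abs(trajectory[i]))
    return best if abs(trajectory[best]) <= tol else None

traj = [1.0, 4.8, 9.0, 14.2, 21.0, 29.5]
print(meets_precondition(traj))   # True: passes near 5, 15 and 30 degrees
print(pick_target_frame(traj))    # 0: the first frame is nearly frontal
```

A trajectory with only two or three points fails `meets_precondition`, which corresponds to the illegal-user branch described above.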
S104. Determining, according to the target facial image, a confidence level that the object to be verified is a living body.

In this embodiment, the confidence level mainly refers to the credibility that the object to be verified is a living body, and may take the form of a probability value or a score. Because the texture of a picture recaptured from a screen differs from the texture of a normal picture, the confidence level of the object to be verified can be determined by performing feature analysis on the target facial image; that is, the above step S104 may specifically include:
2-1. Determining at least one target key point from the key point set of the target facial image.

In this embodiment, the target key points mainly include feature points whose relative positions are stable and which have obvious distinguishing characteristics, such as the left and right pupils, the left and right corners of the mouth, and the tip of the nose; they may be determined according to actual requirements.
2-2. Determining a normalized image according to the position information of the target key points and the target facial image.

For example, the above step 2-2 may specifically include:

obtaining the preset position of each target key point;

calculating the Euclidean distance between each preset position and the corresponding position information;

performing a similarity transformation on the target facial image according to the Euclidean distances to obtain the normalized image.

In this embodiment, the preset positions may be obtained from a standard face model, and the Euclidean distance refers to the distance between the preset position and the position information corresponding to each target key point. The similarity transformation may include operations such as rotation, translation, and scaling; generally, the images before and after the similarity transformation have the same shape, that is, the geometric figures they contain are unchanged. Specifically, by continuously adjusting the size, rotation angle, and coordinate position of the target facial image, the distances between the preset positions and the corresponding position information of the target key points can be minimized; that is, the target facial image is normalized to the standard face model, and the normalized image is obtained.
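The similarity transformation of step 2-2 can be sketched with complex arithmetic: a 2D rotation-plus-scale-plus-translation is z ↦ a·z + b. The closed form below aligns exactly two target key points (the pupils) to their preset positions; with more key points, the minimization the patent describes would instead be a least-squares fit of the same transform. The point coordinates are made up for illustration.

```python
def similarity_from_two_points(p1, p2, q1, q2):
    # Complex-number form of a 2D similarity transform mapping p1->q1, p2->q2.
    a = (q2 - q1) / (p2 - p1)   # combined rotation and scale
    b = q1 - a * p1             # translation
    return lambda z: a * z + b

# Detected pupil positions in the target facial image ...
left, right = complex(120, 80), complex(180, 90)
# ... and their preset positions in the standard face model.
preset_left, preset_right = complex(30, 40), complex(70, 40)

T = similarity_from_two_points(left, right, preset_left, preset_right)
print(abs(T(left) - preset_left) < 1e-9, abs(T(right) - preset_right) < 1e-9)
```

Because the transform is linear in z, every other key point is carried along rigidly, which is exactly the shape-preserving property noted above.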
2-3. Calculating the normalized image with a preset classification model to obtain the confidence level that the object to be verified is a living body.

In this embodiment, the preset classification model mainly refers to a trained deep neural network, which may be obtained by training a deep model such as a CNN (Convolutional Neural Network). A CNN is a multilayer neural network composed of an input layer, convolutional layers, pooling layers, fully connected layers, and an output layer. It supports directly inputting images into the network as multidimensional input vectors, avoiding data reconstruction during feature extraction and classification and greatly reducing the complexity of image processing. When the normalized image is input into the CNN, information is transformed stage by stage from the input layer to the output layer; the calculation performed by the CNN is essentially the process of taking dot products of the input (the normalized image) with the weight matrix of each layer to obtain the final output (that is, the confidence level of the object to be verified).
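The layer-by-layer dot-product computation described above can be illustrated with a toy fully connected network in pure Python. There is no convolution or pooling here, so this is only a sketch of the forward pass, not the patent's CNN; all layer sizes and weights are invented.

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def dense(v, W):
    # One layer: dot product of the input vector with each weight column,
    # followed by the activation function F.
    return [sigmoid(sum(vi * wij for vi, wij in zip(v, col)))
            for col in zip(*W)]

def forward(x, weights):
    # O = F_n(...(F_2(F_1(X W(1)) W(2))...) W(n))
    for W in weights:
        x = dense(x, W)
    return x

x = [0.5, -1.0, 0.25]                        # stand-in for a normalized image
W1 = [[0.2, -0.4], [0.7, 0.1], [-0.5, 0.3]]  # layer 1: 3 -> 2
W2 = [[1.0], [-1.0]]                         # layer 2: 2 -> 1 (confidence)
confidence = forward(x, [W1, W2])[0]
print(0.0 < confidence < 1.0)                # True: a valid confidence value
```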
It is easy to understand that the preset classification model needs to be trained in advance from samples and classification information. That is, before the normalized image is calculated with the preset classification model, the identity verification method may further include:

obtaining a preset facial image set and the classification information of each preset facial image in the preset facial image set;

training a convolutional neural network according to the preset image set and the classification information to obtain the preset classification model.

In this embodiment, because the preset classification model is mainly used to distinguish whether the object to be verified is a virtual user forged by screen recapture, the preset facial image set may include screen-recapture photo samples (negative samples) and normal photo samples (positive samples); the specific sample quantities may be determined according to actual requirements. The classification information is generally produced by manual labeling and may include two classes: recaptured photos and normal photos.
The training process mainly includes two stages: a forward-propagation stage and a back-propagation stage. In the forward-propagation stage, each sample X_i (that is, each preset facial image) is input into the n-layer convolutional neural network to obtain the actual output O_i, where O_i = F_n(…(F_2(F_1(X_i·W^(1))·W^(2))…)·W^(n)), i is a positive integer, W^(n) is the weight matrix of the n-th layer, and F is the activation function (such as the sigmoid function or the hyperbolic tangent function). The weight matrices can be obtained by inputting the preset facial image set into the convolutional neural network. Afterwards, in the back-propagation stage, the difference between each actual output O_i and the ideal output Y_i is calculated, and the weight matrices are adjusted by back-propagating with the method of minimizing the error, where Y_i is obtained from the classification information of sample X_i. For example, if sample X_i is a normal photo, Y_i may be set to 1; if sample X_i is a recaptured photo, Y_i may be set to 0. Finally, the adjusted weight matrices determine the trained convolutional neural network, that is, the preset classification model.
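The two training stages can be illustrated with the smallest possible stand-in: a single logistic unit trained by gradient descent on a one-dimensional "texture" feature, with Y_i = 1 for normal photos and Y_i = 0 for recaptured photos as in the text. This only sketches the forward/backward loop; the feature, the samples, and the learning rate are invented, and a real CNN would have many layers of weight matrices.

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

# (texture feature, ideal output Y): 1.0 = normal photo, 0.0 = screen recapture
samples = [(0.9, 1.0), (0.8, 1.0), (0.1, 0.0), (0.2, 0.0)]

w, b, lr = 0.0, 0.0, 1.0
for _ in range(500):
    for x, y in samples:
        o = sigmoid(w * x + b)   # forward-propagation stage: actual output O
        err = o - y              # difference between actual and ideal output
        w -= lr * err * x        # back-propagation stage: adjust the weights
        b -= lr * err
print(sigmoid(w * 0.85 + b) > 0.5)  # True: classified as a normal photo
print(sigmoid(w * 0.15 + b) < 0.5)  # True: classified as a recapture
```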
S105. Performing identity verification on the object to be verified according to the confidence level and the target facial image.

For example, the above step S105 may specifically include:

judging whether the confidence level is greater than a first preset threshold;

if so, performing identity verification on the object to be verified according to the target facial image;

if not, generating a verification result indicating that the object to be verified is an illegal user.
In this embodiment, the first preset threshold may be determined according to the application field. For example, when the identity verification method is mainly applied to the financial field, which has high security requirements, the first preset threshold may be set relatively high, for example 0.9; when the identity verification method is mainly applied to fields with relatively low security requirements, such as a meeting sign-in system, the first preset threshold may be set relatively low, for example 0.5.
Specifically, when the calculated confidence level is less than or equal to the first preset threshold, the object to be verified is very likely a virtual user produced by screen recapture; at this time, to reduce the misjudgment rate, the user may be prompted to collect facial images again. When the calculated confidence level is greater than the first preset threshold, the object to be verified is very likely a live user; at this time, it is further necessary to analyze whether the live user is an unknown live user or a registered or authenticated live user. That is, the above step of "performing identity verification on the object to be verified according to the target facial image" may specifically include:
3-1. Dividing the target facial image into multiple face regions according to the key point set of the target facial image.

In this embodiment, a face region mainly refers to a facial-feature region, such as the eyes, mouth, nose, eyebrows, or cheeks; the division of the target facial image is mainly based on the relative position relationships between the key points.
3-2. Determining target feature information according to the multiple face regions.

For example, the above step 3-2 may specifically include:

performing a feature extraction operation on the face regions to obtain multiple pieces of feature information, each face region corresponding to one piece of feature information;

recombining the multiple pieces of feature information to obtain the target feature information.

In this embodiment, feature extraction may be performed on the face regions by deep learning networks, and the extracted features recombined to obtain a feature string (that is, the target feature information). Because different face regions correspond to different geometric models, different deep learning networks may be used to extract different face regions, so as to improve extraction efficiency and accuracy.
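Steps 3-1 and 3-2 amount to grouping the key points into face regions, extracting one feature vector per region, and concatenating the results into a feature string. In the sketch below the per-region "deep network" is replaced by trivial coordinate statistics, so only the recombination step is faithful; the region names and values are placeholders.

```python
def region_features(points):
    # Placeholder for a per-region deep network: mean position and width.
    xs = [p[0] for p in points]
    ys = [p[1] for p in points]
    return [sum(xs) / len(xs), sum(ys) / len(ys), max(xs) - min(xs)]

def target_feature_information(regions):
    # Recombine the per-region feature vectors into one feature string.
    feature = []
    for points in regions.values():
        feature.extend(region_features(points))
    return feature

regions = {
    "eyes":  [(100, 80), (140, 80)],
    "mouth": [(110, 150), (130, 152)],
}
feature = target_feature_information(regions)
print(len(feature))  # 6: three features for each of the two regions
```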
3-3. Performing identity verification on the object to be verified according to the target feature information.

For example, the above step 3-3 may specifically include:

3-3-1. Obtaining a stored feature information set and the user mark corresponding to each piece of stored feature information in the set.

In this embodiment, a user mark is the unique identification mark of a user and may include a registered account. The stored feature information set includes at least one piece of stored feature information, and different pieces of stored feature information are obtained from the facial images of different registered users. In practical applications, the user mark of each registered user needs to be associated with the stored feature information in advance; that is, before the above step 3-3-1, the identity verification method may further include:
obtaining a user registration request, the user registration request carrying a user mark to be registered and a facial image to be registered;

determining feature information to be registered according to the facial image to be registered;

associating the feature information to be registered with the user mark to be registered, and inserting the feature information to be registered into the stored feature information set.
In this embodiment, the facial image to be registered may be processed by the methods involved in steps 3-1 and 3-2 to obtain the feature information to be registered. The user registration request may be generated by automatic triggering, for example automatically after the user's facial image has been collected, or by user triggering, for example when the user clicks a "Done" button; this may be determined according to actual requirements. The facial image to be registered may be collected on site, or uploaded after being taken by the user in advance.
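The registration flow above reduces to associating each user mark with its feature information and inserting it into the stored feature information set. A minimal in-memory sketch, with dummy feature vectors and invented names:

```python
stored_features = {}  # user mark -> stored feature information

def register(user_mark, feature_information):
    # Associate the to-be-registered feature information with the user mark
    # and insert it into the stored feature information set.
    stored_features[user_mark] = feature_information

register("user_a", [0.1, 0.9, 0.3])
register("user_b", [0.8, 0.2, 0.5])
print(sorted(stored_features))  # ['user_a', 'user_b']
```

In practice the stored set would be persisted server-side; the dictionary stands in for that storage.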
3-3-2. Calculating the similarity between each piece of stored feature information and the target feature information with a preset algorithm.

In this embodiment, the preset algorithm may include the joint Bayesian algorithm, a statistical classification method whose main idea is to regard a face as composed of two parts: one part is the difference between persons, and the other is the difference of the individual itself (such as a change in expression). The overall similarity is calculated according to these two parts of difference.
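The patent names the joint Bayesian algorithm but gives none of its parameters, so the sketch below substitutes plain cosine similarity between feature strings as an illustrative stand-in. It shows only the shape of step 3-3-2 (one similarity per stored user), not the actual preset algorithm.

```python
import math

def cosine_similarity(a, b):
    # Stand-in for the preset algorithm: compare two feature strings.
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

target = [0.2, 0.9, 0.4]
stored = {"user_a": [0.2, 0.9, 0.4], "user_b": [0.9, 0.1, 0.1]}
sims = {mark: cosine_similarity(target, feat) for mark, feat in stored.items()}
print(round(sims["user_a"], 3))  # 1.0: identical feature strings
```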
3-3-3. Performing identity verification on the object to be verified according to the similarities and the corresponding user marks.

For example, the above step 3-3-3 may specifically include:

judging whether there is a similarity not less than a second preset threshold among all the calculated similarities;

if there is, taking the user mark corresponding to the similarity not less than the second preset threshold as the target user mark, and generating a verification result indicating that the object to be verified is the target user mark;

if there is not, generating a verification result indicating that the object to be verified is an illegal user.
In this embodiment, the second preset threshold may be determined according to actual requirements. For example, facial images of a large number of users may be collected in advance, two facial images per user; afterwards, the similarity corresponding to the two facial images collected for each user is calculated, and the average of these values is taken as the second preset threshold. Generally, because of the difference of the individual itself, the similarity of two facial images taken by the same user at different times is usually slightly less than 1, so the second preset threshold may also be set slightly less than 1, for example 0.8.

It should be pointed out that when a verification result indicating that the object to be verified is the target user mark is generated, the live user is a registered or authenticated live user; at this time, login can be performed directly according to the target user mark, without the user manually entering a password and account. The method is simple, convenient, and fast.
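Step 3-3-3 can be sketched as a threshold decision over the calculated similarities. Taking the highest similarity first is a design choice the patent leaves open (several similarities could exceed the threshold); the 0.8 default is the example threshold given above.

```python
def verify(similarities, threshold=0.8):
    # similarities: user mark -> similarity with the target feature information
    if not similarities:
        return None
    best_mark = max(similarities, key=similarities.get)
    if similarities[best_mark] >= threshold:
        return best_mark   # registered/authenticated live user: log in directly
    return None            # illegal user: prompt the user to detect again

print(verify({"user_a": 0.91, "user_b": 0.42}))  # user_a
print(verify({"user_b": 0.42}))                  # None
```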
As can be seen from the above, in the identity verification method provided by this embodiment, action prompt information is provided to an object to be verified, and video stream data of the object to be verified is obtained, the video stream data being successive frames of facial images collected while the object to be verified performs a corresponding action according to the action prompt information. Afterwards, a target facial image is determined according to the video stream data, and a confidence level that the object to be verified is a living body is determined according to the target facial image. Afterwards, identity verification is performed on the object to be verified according to the confidence level and the target facial image. The method can effectively block various types of attacks during face recognition, such as photos, videos, and head models, and is simple and secure.
Second embodiment
The method described in the first embodiment is described in further detail below by way of example.

In this embodiment, the identity verification device is described as being integrated in a network device.
As shown in Fig. 2a, the specific flow of an identity verification method may be as follows:
S201. The network device obtains a user registration request, the user registration request carrying a user mark to be registered and a facial image to be registered.

For example, when a user registers with an application system (such as a meeting sign-in system) for the first time, the user may be required to provide an account to be registered and a facial image to be registered; the facial image to be registered may be collected on site, or uploaded after being taken by the user in advance. Afterwards, when the user clicks a "Done" button, the user registration request can be generated.
S202. The network device determines feature information to be registered according to the facial image to be registered; afterwards, it associates the feature information to be registered with the user mark to be registered and inserts the feature information to be registered into the stored feature information set.

For example, key point extraction may be performed on the facial image to be registered, and the facial image to be registered divided into multiple regions according to the extracted key points; afterwards, feature extraction is performed on the divided regions by multiple deep learning networks, and the features recombined to obtain the feature information to be registered. By associating and storing the user mark and feature information of each registered user, the network device can verify user identity according to this stored information during subsequent logins.
S203. The network device obtains a login request and provides action prompt information to an object to be verified according to the login request.

For example, referring to Fig. 2b, when the object to be verified clicks a "face login" button on the interactive interface, the login request can be generated; at this time, an action prompt box may be displayed on the interactive interface to prompt the object to be verified to perform a specified action, such as shaking the head.
S204: The network device obtains video stream data of the object to be verified, the video stream data being consecutive frames of face images collected while the object to be verified performs the corresponding action according to the action prompt information.

For example, the video stream data may be face data collected within a prescribed time (such as one minute). During actual collection, a detection box may be displayed on the interactive interface, and the user may be prompted to place the face inside the detection box, guiding the user into position for the collection of the video stream data.
S205: The network device obtains the key point set of each frame of face image in the video stream data and the position information of each key point in the key point set.

For example, the key point set of each frame may be extracted by an ASM algorithm and may include 88 key points covering the eyes, eyebrows, nose, mouth and facial contour. The position information may be the display coordinates of each key point within the detection box; when the user's face is inside the detection box, the display coordinates of each key point can be located automatically.
S206: The network device determines the movement trajectory of the object to be verified according to the key point set and position information of each frame of face image.

For example, a three-dimensional face model of the object to be verified may first be determined from the position-change information of important key points such as the eyes, mouth corners and nose, together with the angles and relative distances between these key points, yielding the three-dimensional coordinates of each key point; the movement trajectory is then determined from the three-dimensional coordinates of any key point.
S207: The network device judges whether the movement trajectory meets a preset condition; if so, the following step S208 is performed; if not, a verification result indicating that the object to be verified is an illegal user may be generated, and the process returns to step S203.

For example, suppose the preset condition is that the trajectory includes the 5°, 10°, 15° and 30° deviation-angle points. When the movement trajectory is formed by the user turning the head 40° from the front (that is, from 0°), so that the trajectory includes deviation-angle points from 0° to 40°, the preset condition is judged to be met. When the trajectory is formed by the user turning the head only 15° from the front, so that it includes deviation-angle points only from 0° to 15°, the preset condition is judged not to be met; in that case the user may further be notified that verification failed, together with the reason, for example that the current capture is unsatisfactory, so that the user can try again.
S208: The network device selects, from the video stream data, the face image corresponding to a preset trajectory point as the target face image.

For example, the preset trajectory point may be the 0° deviation-angle point; the target face image is then the face image in the video stream data corresponding to the 0° deviation-angle point.
S209: The network device determines at least one target key point from the key point set of the target face image, and determines a normalized image according to the position information of the target key points and the target face image.

For example, step S209 may specifically include:

obtaining the preset position of each target key point;

calculating the Euclidean distance between each preset position and the corresponding position information;

performing a similarity transformation on the target face image according to the Euclidean distances to obtain the normalized image.

For example, the target key points may be five points: the left and right pupils, the left and right mouth corners, and the nose tip. The preset positions may be the two-dimensional coordinates of these five points in a standard face model with respect to a common reference frame. By placing the target face image and the standard face model in the same reference frame and adjusting the target face image through similarity transformations such as rotation, translation and scaling, so that the five points in the target face image come as close as possible to the corresponding points in the standard face model, the target face image is normalized, yielding the normalized image.
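The normalization step above, adjusting the target face image by rotation, translation and scaling so that its key points approach the standard face model's preset positions, amounts to estimating a least-squares similarity transform. The following Python sketch uses Umeyama's closed-form solution; the point coordinates in the usage below are invented for illustration, and the patent does not specify this particular estimation method.

```python
import numpy as np

def similarity_align(src, dst):
    """Estimate the similarity transform (uniform scale s, rotation R,
    translation t) that maps the src key points onto the dst preset
    positions in the least-squares sense (Umeyama's method)."""
    src_mean, dst_mean = src.mean(0), dst.mean(0)
    src_c, dst_c = src - src_mean, dst - dst_mean
    cov = dst_c.T @ src_c / len(src)          # cross-covariance
    U, S, Vt = np.linalg.svd(cov)
    d = np.sign(np.linalg.det(U @ Vt))        # guard against reflection
    D = np.diag([1.0, d])
    R = U @ D @ Vt
    scale = np.trace(np.diag(S) @ D) / src_c.var(0).sum()
    t = dst_mean - scale * R @ src_mean
    return scale, R, t
```

In practice the recovered scale, rotation and translation would be applied to every pixel of the target face image, not only to the key points themselves.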
S210: The network device processes the normalized image with a preset classification model to obtain the confidence that the object to be verified is a live body, and judges whether the confidence is greater than a first preset threshold; if so, the following step S211 is performed; if not, a verification result indicating that the object to be verified is an illegal user may be generated, and the process returns to step S203.

For example, the preset classification model may be a CNN trained in advance on a large number of screen-recapture photo samples (negative samples) and normal photo samples (positive samples). When the normalized image is fed into the trained CNN, the image information is transformed layer by layer from the input layer to the output layer, and the final output of the output layer is a probability value, namely the confidence. The first preset threshold may be 0.5; in that case a confidence of 0.7 is judged as yes, and a confidence of 0.3 as no.
S211: The network device divides the target face image into multiple face regions according to the key point set of the target face image, and determines target feature information according to the face regions.

For example, the target face image may be segmented, based on the relative positions of the key points, into multiple face regions such as the eyes, mouth, nose, eyebrows and cheeks; different deep learning networks then extract features from the different face regions, and the extracted features are recombined to obtain the target feature information.
S212: The network device calculates, with a preset algorithm, the similarity between each stored feature information in the stored-feature-information set and the target feature information, and judges whether any of the calculated similarities is not less than a second preset threshold; if so, the following step S213 is performed; if not, a verification result indicating that the object to be verified is an illegal user may be generated, and the process returns to step S203.

For example, the similarity between each stored feature information and the target feature information may be calculated by a joint Bayesian algorithm, yielding similarities {A1, A2, ..., An}. If there exists an Ai in {A1, A2, ..., An} that is greater than or equal to the second preset threshold, where i ∈ {1, 2, ..., n}, the judgment is yes; otherwise it is no. When the judgment is no, the user may further be notified that verification failed, together with the reason, for example that no matching user was found.
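The matching step can be sketched as a nearest-neighbour search over the stored-feature-information set. In this Python illustration, plain cosine similarity stands in for the joint Bayesian score used in the text, and the 0.8 threshold echoes the second preset threshold discussed below; both substitutions are assumptions for the sake of a runnable example.

```python
import numpy as np

def match_user(target, feature_store, threshold=0.8):
    """Compare the target feature vector against every stored vector
    and return the best-matching user identifier, or None if no
    similarity reaches the threshold. Cosine similarity is a simple
    stand-in for the joint Bayesian score."""
    best_id, best_sim = None, -1.0
    for user_id, stored in feature_store.items():
        sim = float(stored @ target /
                    (np.linalg.norm(stored) * np.linalg.norm(target)))
        if sim > best_sim:
            best_id, best_sim = user_id, sim
    return best_id if best_sim >= threshold else None
```

A `None` result corresponds to generating the verification result that indicates an illegal user.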
S213: The network device takes the user identifier corresponding to the similarity not less than the second preset threshold as the target user identifier, and generates a verification result indicating that the object to be verified is the target user identifier.

For example, the user identifier corresponding to the similarity Ai (namely the target user identifier) may be taken as the identity verification result of the object to be verified, and the result may be shown to the user in the form of a prompt message to indicate that login succeeded.
From the foregoing, in the identity verification method provided by this embodiment, the network device obtains a user registration request carrying a user identifier to be registered and a face image to be registered, determines feature information to be registered from that face image, associates it with the user identifier to be registered, and inserts it into the stored-feature-information set. The network device then obtains a login request and, according to it, provides action prompt information to the object to be verified; it obtains the video stream data of the object to be verified, namely the consecutive frames of face images collected while the object performs the corresponding action according to the action prompt information, together with the key point set of each frame and the position information of each key point. From the key point sets and position information it determines the movement trajectory of the object to be verified and judges whether the trajectory meets the preset condition: if not, it may generate a verification result indicating that the object to be verified is an illegal user; if so, it selects from the video stream data the face image corresponding to the preset trajectory point as the target face image. It then determines at least one target key point from the key point set of the target face image, determines a normalized image from the target key point positions and the target face image, and processes the normalized image with the preset classification model to obtain the confidence that the object to be verified is a live body. If the confidence is greater than the first preset threshold, it divides the target face image into multiple face regions according to its key point set, determines target feature information from those regions, and calculates with the preset algorithm the similarity between each stored feature information in the stored-feature-information set and the target feature information. If any similarity is not less than the second preset threshold, the corresponding user identifier is taken as the target user identifier and a verification result indicating that the object to be verified is that target user identifier is generated. In this way, attacks of various types, such as photos, videos and head models, can be effectively blocked during face recognition; the method is simple and highly secure, and identity can be verified without the user manually entering a password and account, making it convenient and fast.
Third Embodiment

Following the methods described in Embodiments One and Two, this embodiment provides a further description from the perspective of the identity verification apparatus, which may be integrated in a network device.

Referring to Figure 3a, the identity verification apparatus provided by the third embodiment of the present invention is described in detail in Figure 3a. It may include: a providing module 10, an acquisition module 20, a first determining module 30, a second determining module 40 and an authentication module 50, wherein:
(1) Providing module 10

The providing module 10 is configured to provide action prompt information to the object to be verified.

In this embodiment, the action prompt information is mainly used to prompt the user to perform some specified action, such as shaking the head or blinking, and may be displayed in a form such as a prompt box or a prompt interface. When the user clicks a button on the interactive interface, such as "face login", the providing module 10 may be triggered to provide the action prompt information.
(2) Acquisition module 20

The acquisition module 20 is configured to obtain the video stream data of the object to be verified, the video stream data being the consecutive frames of face images collected while the object to be verified performs the corresponding action according to the action prompt information.

In this embodiment, the video stream data may be a segment of video collected within a prescribed time (such as one minute), consisting mainly of image data of the user's face, and may be collected by the acquisition module 20 through a video capture device such as a camera.
(3) First determining module 30

The first determining module 30 is configured to determine the target face image according to the video stream data.

For example, the first determining module 30 may specifically be configured to:

1-1. Obtain the key point set of each frame of face image in the video stream data and the position information of each key point in the key point set.

In this embodiment, the key points in the key point set mainly refer to feature points in the face image, namely points where the image gray value changes sharply, or points of larger curvature on image edges (the intersections of two edges), such as the eyes, eyebrows, nose, mouth and facial contour. The first determining module 30 may perform the key point extraction through models such as ASM (Active Shape Model) or AAM (Active Appearance Model). The position information mainly refers to two-dimensional coordinates with respect to a certain reference frame (such as the face acquisition interface displayed by the terminal).
1-2. Determine the movement trajectory of the object to be verified according to the key point set and position information of each frame of face image.

In this embodiment, the movement trajectory mainly refers to the route traced by the whole face or a local region, from the start of the action to its end, while the object to be verified performs the corresponding action according to the action prompt information, such as a blink trajectory or a head-shake trajectory. Specifically, the first determining module 30 may first determine the three-dimensional face model of the object to be verified from the position-change information of important key points in each frame (such as the eyes, mouth corners, cheek edges and nose) and the angles and relative distances between these key points, obtain the three-dimensional coordinates of each key point, and then determine the movement trajectory from the three-dimensional coordinates of any key point.
1-3. Determine the target face image from the video stream data according to the movement trajectory.

For example, step 1-3 may specifically include:

judging whether the movement trajectory meets a preset condition;

if so, selecting from the video stream data the face image corresponding to the preset trajectory point as the target face image;

if not, generating a verification result indicating that the object to be verified is an illegal user.

In this embodiment, the preset condition depends mainly on the characteristics of human movement. Considering that human movement is continuous, the preset condition may be set so that the movement trajectory includes multiple intended trajectory points, such as the 5°, 15° and 30° deviation-angle points, or so that the number of trajectory points in the movement trajectory reaches a certain value, such as 10. The preset trajectory point may be chosen according to actual requirements; for example, since a frontal face image carries more key points and thus leads to more accurate conclusions, the 0° deviation-angle point may be chosen as the preset trajectory point, that is, the frontal face image is chosen as the target face image. Of course, considering that the capture may not start exactly from 0°, the preset trajectory point may be a small interval of deviation-angle points around 0° rather than a single point.
There are mainly two kinds of illegal user: unknown live users and false users. An unknown live user mainly refers to a live user who has not registered or been authenticated on the system platform; a false user mainly refers to a fake live user forged by a criminal from a single photo, a video or a head model of a legitimate user (for example, formed by screen recapture). Specifically, when the movement trajectory meets the specified condition, the target face image is shown not to have been formed by recapturing a single photo or several photos; it then remains to confirm, from the image texture features, whether the target face image was forged by video recapture or a head model. When the movement trajectory does not meet the specified condition, for example when there are only two or three trajectory points, the object to be verified is very likely a fake live user forged from a recaptured single photo or several photos; in that case it can be directly judged an illegal user, and the user is prompted to repeat the detection.
(4) Second determining module 40

The second determining module 40 is configured to determine the confidence that the object to be verified is a live body according to the target face image.

In this embodiment, the confidence mainly refers to the credibility that the object to be verified is a live body, and may take the form of a probability value or a score. Because the texture of a picture recaptured from a screen differs from that of a normal picture, the second determining module 40 can determine the confidence of the object to be verified by analyzing the features of the target face image. That is, referring to Figure 3b, the second determining module 40 may specifically include a first determining submodule 41, a second determining submodule 42 and a calculating submodule 43, wherein:
The first determining submodule 41 is configured to determine at least one target key point from the key point set of the target face image.

In this embodiment, the target key points mainly include feature points whose relative positions are comparatively stable and which have obvious distinguishing characteristics, such as the left and right pupils, the left and right mouth corners and the nose tip; they may be chosen according to actual requirements.

The second determining submodule 42 is configured to determine the normalized image according to the position information of the target key points and the target face image.
For example, the second determining submodule 42 may specifically be configured to:

obtain the preset position of each target key point;

calculate the Euclidean distance between each preset position and the corresponding position information;

perform a similarity transformation on the target face image according to the Euclidean distances to obtain the normalized image.

In this embodiment, the preset positions may be obtained from a standard face model, and the Euclidean distance refers to the distance between each target key point's preset position and its position information. The similarity transformation may include operations such as rotation, translation and scaling; in general, the images before and after a similarity transformation have the same figure, that is, the shapes they contain are unchanged. Specifically, by continually adjusting the size, rotation angle and coordinate position of the target face image, the second determining submodule 42 can minimize the distances between the preset positions of the target key points and the corresponding position information, thereby normalizing the target face image to the standard face model and obtaining the normalized image.
The calculating submodule 43 is configured to process the normalized image with the preset classification model to obtain the confidence that the object to be verified is a live body.

In this embodiment, the preset classification model mainly refers to a trained deep neural network, which may be obtained by training a deep model such as a CNN (Convolutional Neural Network). A CNN is a multi-layer neural network composed of an input layer, convolutional layers, pooling layers, fully connected layers and an output layer; it supports feeding images as multi-dimensional input vectors directly into the network, avoiding data reconstruction during feature extraction and classification and greatly reducing the complexity of image processing. When the normalized image is fed into the CNN, the information is transformed layer by layer from the input layer to the output layer; the computation the CNN performs is essentially the process of taking dot products of the input (the normalized image) with each layer's weight matrix to obtain the final output (namely the confidence of the object to be verified).
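The layer-by-layer transfer described above can be sketched as repeated weighted sums followed by an activation. The Python toy below uses only fully connected layers to illustrate the data flow; the convolution and pooling layers of a real CNN are omitted, and the weight shapes are invented for the example.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def forward(x, weights):
    """Transfer the signal from the input layer to the output layer:
    each layer applies the activation F to the dot product of the
    current signal with that layer's weight matrix."""
    for w in weights:
        x = sigmoid(x @ w)
    return x

def liveness_confidence(normalized_image, weights):
    """Flatten the normalized image and run it through the stack;
    the scalar output plays the role of the live-body confidence."""
    return float(forward(normalized_image.reshape(-1), weights))
```

The returned value lies in (0, 1) and would be compared against the first preset threshold as in step S210.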
It is easy to understand that the preset classification model needs to be trained in advance from samples and category information. That is, the identity verification apparatus may further include a training module 60, configured to:

before the calculating submodule 43 processes the normalized image with the preset classification model, obtain a preset face image set and the category information of each preset face image in the set;

train a convolutional neural network according to the preset image set and the category information to obtain the preset classification model.

In this embodiment, because the preset classification model is mainly used to distinguish whether the object to be verified is a false user forged by screen recapture, the preset face image set may include screen-recapture photo samples (negative samples) and normal photo samples (positive samples); the specific sample sizes may be set according to actual requirements. The category information is generally produced by manual labeling and may include two categories: recaptured photos and normal photos.
The training process mainly includes two stages: a forward propagation stage and a back-propagation stage. In the forward propagation stage, the training module 60 may feed each sample X_i (namely a preset face image) into an n-layer convolutional neural network to obtain the actual output O_i, where O_i = F_n(...(F_2(F_1(X_i W^(1)) W^(2))...) W^(n)), i is a positive integer, W^(n) is the weight matrix of the n-th layer, and F is an activation function (such as the sigmoid function or the hyperbolic tangent function); by feeding the preset face image set into the convolutional neural network, the weight matrices are obtained. Then, in the back-propagation stage, the training module 60 may calculate the difference between each actual output O_i and the ideal output Y_i and adjust the weight matrices by back-propagating the error with the error-minimization method, where Y_i is obtained from the category information of sample X_i; for example, if sample X_i is a normal photo, Y_i may be set to 1, and if sample X_i is a recaptured photo, Y_i may be set to 0. Finally, the trained convolutional neural network, namely the preset classification model, is determined according to the adjusted weight matrices.
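The two-stage loop above (forward propagation to obtain the actual output O_i, then weight adjustment against the ideal output Y_i by error minimization) can be sketched with a single logistic layer standing in for the multi-layer CNN. The learning rate, epoch count and toy data here are assumptions made purely for illustration.

```python
import numpy as np

def train_classifier(samples, labels, epochs=500, lr=0.5, seed=0):
    """Simplified stand-in for the CNN training stage: the forward
    pass computes the actual output O_i, and the weight vector is
    adjusted by back-propagating the error against the ideal output
    Y_i (1 for a normal photo, 0 for a screen recapture)."""
    rng = np.random.default_rng(seed)
    w = rng.standard_normal(samples.shape[1]) * 0.01
    b = 0.0
    for _ in range(epochs):
        o = 1.0 / (1.0 + np.exp(-(samples @ w + b)))  # forward pass: O_i
        err = o - labels                               # O_i - Y_i
        w -= lr * samples.T @ err / len(labels)        # back-propagation step
        b -= lr * err.mean()
    return w, b
```

On linearly separable toy data the trained weights separate the two label groups; a real recapture detector would of course require the convolutional architecture and labeled photo samples described in the text.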
(5) Authentication module 50

The authentication module 50 is configured to perform identity verification on the object to be verified according to the confidence and the target face image.

For example, the authentication module 50 may specifically include a judging submodule 51, a verifying submodule 52 and a generating submodule 53, wherein:

The judging submodule 51 is configured to judge whether the confidence is greater than the first preset threshold.

In this embodiment, the first preset threshold may be set according to the application field. For example, when the identity verification method is mainly used in the financial field, where security requirements are high, the first preset threshold may be set larger, for example 0.9; when the method is mainly used in fields with lower security requirements, such as a meeting sign-in system, the first preset threshold may be set smaller, for example 0.5.

The verifying submodule 52 is configured to, if the confidence is greater than the first preset threshold, perform identity verification on the object to be verified according to the target face image.

In this embodiment, when the calculated confidence is greater than the first preset threshold, the object to be verified is very likely a live user, and the verifying submodule 52 needs to further analyze whether that live user is an unknown live user or a registered or authenticated live user. That is, referring to Figure 3c, the verifying submodule 52 may specifically include a dividing unit 521, a determining unit 522 and an authentication unit 523, wherein:
The dividing unit 521 is configured to divide the target face image into multiple face regions according to the key point set of the target face image.

In this embodiment, the face regions mainly refer to the regions of the facial features, such as the eyes, mouth, nose, eyebrows and cheeks; the target face image is segmented mainly based on the relative positions of the key points.

The determining unit 522 is configured to determine the target feature information according to the face regions.

For example, the determining unit 522 may specifically be configured to:

perform feature extraction on the face regions to obtain multiple pieces of feature information, one piece per face region;

recombine the pieces of feature information to obtain the target feature information.

In this embodiment, the determining unit 522 may extract features from the face regions through deep learning networks and recombine the extracted features into a feature string (namely the target feature information). Because different face regions correspond to different geometric models, different deep learning networks may be used for different face regions to improve extraction efficiency and accuracy.
The authentication unit 523 is configured to perform identity verification on the object to be verified according to the target feature information.

For example, the authentication unit 523 may specifically be configured to:

3-3-1. Obtain the stored-feature-information set and the user identifier corresponding to each stored feature information in the set.

In this embodiment, the user identifier is the unique identification of a user and may include a registered account. The stored-feature-information set includes at least one stored feature information, and different stored feature information is obtained from the face images of different registered users.

In actual application, the user identifier of each registered user needs to be associated in advance with its stored feature information. That is, the identity verification apparatus may further include an associating module, configured to:

before the authentication unit 523 obtains the stored-feature-information set and the user identifier corresponding to each stored feature information in the set, obtain a user registration request carrying a user identifier to be registered and a face image to be registered;

determine feature information to be registered according to the face image to be registered;

associate the feature information to be registered with the user identifier to be registered, and insert the feature information to be registered into the stored-feature-information set.

In this embodiment, the associating module may process the face image to be registered with the methods used by the dividing unit 521 and the determining unit 522 to obtain the feature information to be registered. The user registration request may be generated by automatic triggering, for example automatically after the user's face image has been collected, or by user triggering, for example when the user clicks a "Done" button; this may be set according to actual requirements. The face image to be registered may be captured on site or uploaded by the user after being taken in advance.
3-3-2. Calculate, with the preset algorithm, the similarity between each stored feature information and the target feature information.

In this embodiment, the preset algorithm may include the joint Bayesian algorithm, a statistical classification technique whose main idea is to regard a face as composed of two parts: the difference between individuals, and the variation of the individual itself (such as changes in expression); the overall similarity is calculated from these two kinds of difference.

3-3-3. Perform identity verification on the object to be verified according to the similarities and the corresponding user identifiers.
Further, the authentication unit 523 can be used for:
Judge to whether there is the similarity not less than the second predetermined threshold value in all similarities calculated;
If in the presence of the corresponding user's mark of similarity that this is not less than into the second predetermined threshold value is marked as targeted customer
Know, and generate the result for indicating that the object to be verified is targeted customer mark;
If being not present, generation indicates the result that the object to be verified is disabled user.
In the present embodiment, the second preset threshold may be set according to actual requirements. For example, facial images of a large number of users may be collected in advance, two facial images per user; the similarity between the two facial images of each user is then calculated, and the average of these similarities is taken as the second preset threshold. In general, owing to within-person variation, the similarity between two facial images of the same user taken at different times is slightly less than 1, so the second preset threshold may also be set slightly less than 1, for example 0.8.
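The threshold-setting procedure just described can be sketched as follows; the per-user similarity scores are illustrative placeholders standing in for values a real similarity function (such as the joint Bayesian score) would produce:

```python
def estimate_second_threshold(per_user_similarities):
    """Average the same-user similarities collected in advance;
    the result, typically slightly below 1, serves as the second
    preset threshold."""
    return sum(per_user_similarities) / len(per_user_similarities)

# One similarity score per enrolled user, computed from that user's
# two facial images (placeholder values for the sketch).
scores = [0.82, 0.79, 0.85, 0.77, 0.81]
threshold = estimate_second_threshold(scores)  # 0.808, slightly below 1
```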
It should be noted that when the authentication unit 523 generates a verification result indicating that the object to be verified is the user identified by the target user identifier, the live user is a registered or authenticated live user; at this point, login can be performed directly using the target user identifier, without the user manually entering an account and password. The method is simple, convenient and fast.
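A minimal sketch of this matching-and-login decision, assuming a hypothetical gallery dictionary and a toy similarity function (both placeholders, not the patent's implementation):

```python
def verify_identity(target_feature, stored_features, similarity, threshold):
    """Compare the target feature against each stored feature; return
    the matching user identifier, or None for an illegal user."""
    for user_id, feature in stored_features.items():
        if similarity(target_feature, feature) >= threshold:
            return user_id  # target user identifier: log in directly
    return None  # no similarity reached the threshold: illegal user

# Toy similarity over scalar "features", just for the sketch.
sim = lambda a, b: 1.0 - abs(a - b)
gallery = {"alice": 0.90, "bob": 0.40}

assert verify_identity(0.88, gallery, sim, 0.8) == "alice"
assert verify_identity(0.10, gallery, sim, 0.8) is None
```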
A generation submodule 53 is configured to generate, if the confidence is not greater than the first preset threshold, a verification result indicating that the object to be verified is an illegal user.
In the present embodiment, when the calculated confidence is less than or equal to the first preset threshold, the object to be verified is very likely a virtual user such as a screen-replayed image; in this case, to reduce the misjudgment rate, the user may be prompted to capture the facial image again.
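The gate on the first preset threshold reduces to a simple comparison; a sketch, with an illustrative threshold value:

```python
def liveness_gate(confidence, first_threshold=0.5):
    """Return True to proceed to face matching, or False to reject the
    object as an illegal user and re-prompt image capture.
    The default threshold value is illustrative only."""
    return confidence > first_threshold

assert liveness_gate(0.9) is True    # likely a living body
assert liveness_gate(0.3) is False   # likely a screen replay
```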
In specific implementation, the above units may be implemented as independent entities, or may be combined arbitrarily and implemented as the same entity or several entities. For the specific implementation of the above units, reference may be made to the foregoing method embodiments, and details are not repeated here.
As can be seen from the above, in the identity verification apparatus provided by this embodiment, the providing module 10 provides action prompt information to the object to be verified; the acquisition module 20 obtains video stream data of the object to be verified, the video stream data being consecutive frames of facial images collected while the object to be verified performs the corresponding action according to the action prompt information; then the first determining module 30 determines a target facial image from the video stream data, the second determining module 40 determines, from the target facial image, the confidence that the object to be verified is a living body, and the authentication module 50 performs identity verification on the object to be verified according to the confidence and the target facial image. This can effectively block attacks such as photos, videos and head models during face recognition; the method is simple and highly secure.
Fourth embodiment
Accordingly, an embodiment of the present invention further provides an identity verification system, which includes any identity verification apparatus provided by the embodiments of the present invention; for details of the apparatus, reference may be made to the third embodiment.
In the system, the network device may provide action prompt information to an object to be verified; obtain video stream data of the object to be verified, the video stream data being consecutive frames of facial images collected while the object to be verified performs the corresponding action according to the action prompt information; determine a target facial image from the video stream data; determine, from the target facial image, the confidence that the object to be verified is a living body; and perform identity verification on the object to be verified according to the confidence and the target facial image.
For the specific implementation of each of the above devices, reference may be made to the foregoing embodiments, and details are not repeated here.
Since the identity verification system may include any identity verification apparatus provided by the embodiments of the present invention, it can achieve the beneficial effects achievable by any such apparatus; for details, reference may be made to the foregoing embodiments, and they are not repeated here.
Fifth embodiment
Accordingly, an embodiment of the present invention further provides a network device. FIG. 4 shows a schematic structural diagram of the network device involved in this embodiment of the present invention. Specifically:
the network device may include a processor 701 with one or more processing cores, a memory 702 with one or more computer-readable storage media, a radio frequency (RF) circuit 703, a power supply 704, an input unit 705, a display unit 706, and other components. A person skilled in the art will understand that the network device structure shown in FIG. 4 does not constitute a limitation on the network device, which may include more or fewer components than shown, or combine some components, or use a different component arrangement. Here:
The processor 701 is the control center of the network device. It connects the various parts of the entire network device using various interfaces and lines, and performs the various functions of the network device and processes data by running or executing software programs and/or modules stored in the memory 702 and invoking data stored in the memory 702, thereby monitoring the network device as a whole. Optionally, the processor 701 may include one or more processing cores. Preferably, the processor 701 may integrate an application processor and a modem processor, where the application processor mainly handles the operating system, user interfaces, application programs and the like, and the modem processor mainly handles wireless communication. It will be understood that the modem processor may alternatively not be integrated into the processor 701.
The memory 702 may be used to store software programs and modules, and the processor 701 executes various functional applications and performs data processing by running the software programs and modules stored in the memory 702. The memory 702 may mainly include a program storage area and a data storage area, where the program storage area may store the operating system, applications required by at least one function (such as a sound-playing function or an image-playing function) and the like, and the data storage area may store data created according to use of the network device, and the like. In addition, the memory 702 may include high-speed random access memory and may also include non-volatile memory, such as at least one magnetic disk storage device, a flash memory device, or another volatile solid-state storage device. Correspondingly, the memory 702 may further include a memory controller to provide the processor 701 with access to the memory 702.
The RF circuit 703 may be used to receive and send signals in the course of receiving and sending information; in particular, after receiving downlink information from a base station, it hands the information over to the one or more processors 701 for processing, and it also sends uplink data to the base station. Generally, the RF circuit 703 includes, but is not limited to, an antenna, at least one amplifier, a tuner, one or more oscillators, a subscriber identity module (SIM) card, a transceiver, a coupler, a low noise amplifier (LNA), a duplexer, and so on. In addition, the RF circuit 703 may also communicate with networks and other devices by wireless communication. The wireless communication may use any communication standard or protocol, including but not limited to Global System for Mobile Communications (GSM), General Packet Radio Service (GPRS), Code Division Multiple Access (CDMA), Wideband Code Division Multiple Access (WCDMA), Long Term Evolution (LTE), e-mail, Short Messaging Service (SMS), and so on.
The network device further includes the power supply 704 (such as a battery) that supplies power to the components. Preferably, the power supply 704 may be logically connected to the processor 701 through a power management system, so that functions such as charging, discharging and power-consumption management are implemented through the power management system. The power supply 704 may further include one or more direct-current or alternating-current power supplies, a recharging system, a power-failure detection circuit, a power converter or inverter, a power status indicator, and any other such components.
The network device may further include the input unit 705, which may be used to receive input digital or character information and to generate keyboard, mouse, joystick, optical or trackball signal input related to user settings and function control. Specifically, in one embodiment, the input unit 705 may include a touch-sensitive surface and other input devices. The touch-sensitive surface, also referred to as a touch display screen or a touchpad, collects the user's touch operations on or near it (such as the user's operation on or near the touch-sensitive surface with a finger, a stylus or any other suitable object or accessory), and drives a corresponding connected apparatus according to a preset program. Optionally, the touch-sensitive surface may include two parts: a touch detection apparatus and a touch controller. The touch detection apparatus detects the user's touch position and the signal brought by the touch operation, and transmits the signal to the touch controller; the touch controller receives the touch information from the touch detection apparatus, converts it into contact coordinates, sends them to the processor 701, and receives and executes commands sent by the processor 701. In addition, the touch-sensitive surface may be implemented in multiple types such as resistive, capacitive, infrared and surface acoustic wave. Besides the touch-sensitive surface, the input unit 705 may further include other input devices, which may include, but are not limited to, one or more of a physical keyboard, function keys (such as volume control keys and a power switch key), a trackball, a mouse, a joystick and the like.
The network device may further include the display unit 706, which may be used to display information input by the user or information provided to the user, as well as various graphical user interfaces of the network device; these graphical user interfaces may be composed of graphics, text, icons, video and any combination thereof. The display unit 706 may include a display panel, which may optionally be configured in the form of a liquid crystal display (LCD), an organic light-emitting diode (OLED) display, or the like. Further, the touch-sensitive surface may cover the display panel; after the touch-sensitive surface detects a touch operation on or near it, the operation is transmitted to the processor 701 to determine the type of the touch event, and the processor 701 then provides a corresponding visual output on the display panel according to the type of the touch event. Although in FIG. 4 the touch-sensitive surface and the display panel implement the input and output functions as two separate components, in some embodiments the touch-sensitive surface and the display panel may be integrated to implement the input and output functions.
Although not shown, the network device may further include a camera, a Bluetooth module, and the like, which are not described here. Specifically, in this embodiment, the processor 701 in the network device loads, according to the following instructions, executable files corresponding to the processes of one or more application programs into the memory 702, and runs the application programs stored in the memory 702, thereby implementing various functions, as follows:
providing action prompt information to an object to be verified;
obtaining video stream data of the object to be verified, the video stream data being consecutive frames of facial images collected while the object to be verified performs the corresponding action according to the action prompt information;
determining a target facial image from the video stream data;
determining, from the target facial image, the confidence that the object to be verified is a living body;
performing identity verification on the object to be verified according to the confidence and the target facial image.
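The five operations above can be sketched as a single pipeline; every callable here is a hypothetical stand-in for the corresponding module of the apparatus, not the patent's implementation:

```python
def identity_verification_pipeline(prompt, capture, pick_target,
                                   liveness_confidence, verify):
    """Run the five operations executed by the network device in order.
    Each argument is a caller-supplied stand-in for one module."""
    action = prompt()                         # 1. provide action prompt info
    frames = capture(action)                  # 2. collect consecutive frames
    target = pick_target(frames)              # 3. determine target face image
    confidence = liveness_confidence(target)  # 4. living-body confidence
    return verify(confidence, target)         # 5. identity verification

# Toy stand-ins, just to exercise the control flow.
result = identity_verification_pipeline(
    prompt=lambda: "blink",
    capture=lambda action: ["frame1", "frame2"],
    pick_target=lambda frames: frames[0],
    liveness_confidence=lambda target: 0.9,
    verify=lambda conf, target: "pass" if conf > 0.5 else "fail",
)
```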
For the implementation of each of the above operations, reference may be made to the foregoing embodiments, and details are not repeated here.
The network device can achieve the beneficial effects achievable by any identity verification apparatus provided by the embodiments of the present invention; for details, reference may be made to the foregoing embodiments, and they are not repeated here.
A person of ordinary skill in the art will understand that all or part of the steps in the various methods of the above embodiments may be completed by a program instructing related hardware. The program may be stored in a computer-readable storage medium, and the storage medium may include a read-only memory (ROM), a random access memory (RAM), a magnetic disk, an optical disc, or the like.
The identity verification method, apparatus and system provided by the embodiments of the present invention have been described above in detail. Specific examples are used herein to explain the principles and implementations of the present invention; the description of the above embodiments is intended only to help understand the method of the present invention and its core idea. Meanwhile, a person skilled in the art may make changes to the specific implementations and application scope according to the idea of the present invention. In summary, the content of this specification should not be construed as limiting the present invention.
Claims (15)
1. An identity verification method, comprising:
providing action prompt information to an object to be verified;
obtaining video stream data of the object to be verified, the video stream data being consecutive frames of facial images collected while the object to be verified performs the corresponding action according to the action prompt information;
determining a target facial image from the video stream data;
determining, from the target facial image, the confidence that the object to be verified is a living body;
performing identity verification on the object to be verified according to the confidence and the target facial image.
2. The identity verification method according to claim 1, wherein determining the target facial image from the video stream data comprises:
obtaining a key point set of each frame of facial image in the video stream data and position information of each key point in the key point set;
determining a movement trajectory of the object to be verified according to the key point sets and the position information of the frames of facial images;
determining the target facial image from the video stream data according to the movement trajectory.
3. The identity verification method according to claim 2, wherein determining the target facial image from the video stream data according to the movement trajectory comprises:
judging whether the movement trajectory satisfies a preset condition;
if so, selecting, from the video stream data, the facial image corresponding to a preset trajectory point as the target facial image;
if not, generating a verification result indicating that the object to be verified is an illegal user.
4. The identity verification method according to claim 2, wherein determining, from the target facial image, the confidence that the object to be verified is a living body comprises:
determining at least one target key point from the key point set of the target facial image;
determining a normalized image according to the position information of the target key points and the target facial image;
computing on the normalized image using a preset classification model to obtain the confidence that the object to be verified is a living body.
5. The identity verification method according to claim 4, wherein determining the normalized image according to the position information of the target key points and the target facial image comprises:
obtaining a preset position of each target key point;
calculating the Euclidean distance between each preset position and the corresponding position information;
performing a similarity transformation on the target facial image according to the Euclidean distances to obtain the normalized image.
6. The identity verification method according to claim 4, further comprising, before computing on the normalized image using the preset classification model:
obtaining a preset facial image set and classification information of each preset facial image in the preset facial image set;
training a convolutional neural network according to the preset image set and the classification information to obtain the preset classification model.
7. The identity verification method according to any one of claims 2-6, wherein performing identity verification on the object to be verified according to the confidence and the target facial image comprises:
judging whether the confidence is greater than a first preset threshold;
if so, performing identity verification on the object to be verified according to the target facial image;
if not, generating a verification result indicating that the object to be verified is an illegal user.
8. The identity verification method according to claim 7, wherein performing identity verification on the object to be verified according to the target facial image comprises:
dividing the target facial image into a plurality of face regions according to the key point set of the target facial image;
determining target feature information according to the plurality of face regions;
performing identity verification on the object to be verified according to the target feature information.
9. The identity verification method according to claim 8, wherein determining the target feature information according to the plurality of face regions comprises:
performing a feature extraction operation on the face regions to obtain a plurality of pieces of feature information, each face region corresponding to one piece of feature information;
recombining the plurality of pieces of feature information to obtain the target feature information.
10. The identity verification method according to claim 8, wherein performing identity verification on the object to be verified according to the target feature information comprises:
obtaining a stored feature information set and the user identifier corresponding to each piece of stored feature information in the stored feature information set;
calculating the similarity between each piece of stored feature information and the target feature information using a preset algorithm;
performing identity verification on the object to be verified according to the similarities and the corresponding user identifiers.
11. The identity verification method according to claim 10, wherein performing identity verification on the object to be verified according to the similarities and the corresponding user identifiers comprises:
judging whether any of the calculated similarities is not less than a second preset threshold;
if such a similarity exists, taking the user identifier corresponding to the similarity that is not less than the second preset threshold as a target user identifier, and generating a verification result indicating that the object to be verified is the user identified by the target user identifier;
if none exists, generating a verification result indicating that the object to be verified is an illegal user.
12. The identity verification method according to claim 10, further comprising, before obtaining the stored feature information set and the user identifier corresponding to each piece of stored feature information in the stored feature information set:
obtaining a user registration request, the user registration request carrying a user identifier to be registered and a facial image to be registered;
determining feature information to be registered according to the facial image to be registered;
associating the feature information to be registered with the user identifier to be registered, and inserting the feature information to be registered into the stored feature information set.
13. An identity verification apparatus, comprising:
a providing module, configured to provide action prompt information to an object to be verified;
an acquisition module, configured to obtain video stream data of the object to be verified, the video stream data being consecutive frames of facial images collected while the object to be verified performs the corresponding action according to the action prompt information;
a first determining module, configured to determine a target facial image from the video stream data;
a second determining module, configured to determine, from the target facial image, the confidence that the object to be verified is a living body;
an authentication module, configured to perform identity verification on the object to be verified according to the confidence and the target facial image.
14. The identity verification apparatus according to claim 13, wherein the first determining module is specifically configured to:
obtain a key point set of each frame of facial image in the video stream data and position information of each key point in the key point set;
determine a movement trajectory of the object to be verified according to the key point sets and the position information of the frames of facial images;
determine the target facial image from the video stream data according to the movement trajectory.
15. The identity verification apparatus according to claim 14, wherein the second determining module specifically comprises:
a first determining submodule, configured to determine at least one target key point from the key point set of the target facial image;
a second determining submodule, configured to determine a normalized image according to the position information of the target key points and the target facial image;
a calculation submodule, configured to compute on the normalized image using a preset classification model to obtain the confidence that the object to be verified is a living body.
Priority Applications (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201710261931.0A CN107066983B (en) | 2017-04-20 | 2017-04-20 | Identity verification method and device |
PCT/CN2018/082803 WO2018192406A1 (en) | 2017-04-20 | 2018-04-12 | Identity authentication method and apparatus, and storage medium |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201710261931.0A CN107066983B (en) | 2017-04-20 | 2017-04-20 | Identity verification method and device |
Publications (2)
Publication Number | Publication Date |
---|---|
CN107066983A true CN107066983A (en) | 2017-08-18 |
CN107066983B CN107066983B (en) | 2022-08-09 |
Family
ID=59600617
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201710261931.0A Active CN107066983B (en) | 2017-04-20 | 2017-04-20 | Identity verification method and device |
Country Status (2)
Country | Link |
---|---|
CN (1) | CN107066983B (en) |
WO (1) | WO2018192406A1 (en) |
Cited By (64)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN104518877A (en) * | 2013-10-08 | 2015-04-15 | 鸿富锦精密工业(深圳)有限公司 | Identity authentication system and method |
CN107590485A (en) * | 2017-09-29 | 2018-01-16 | 广州市森锐科技股份有限公司 | It is a kind of for the auth method of express delivery cabinet, device and to take express system |
CN107729857A (en) * | 2017-10-26 | 2018-02-23 | 广东欧珀移动通信有限公司 | Face identification method, device, storage medium and electronic equipment |
CN107733911A (en) * | 2017-10-30 | 2018-02-23 | 郑州云海信息技术有限公司 | A kind of power and environmental monitoring system client login authentication system and method |
CN108171109A (en) * | 2017-11-28 | 2018-06-15 | 苏州市东皓计算机系统工程有限公司 | A kind of face identification system |
CN108335394A (en) * | 2018-03-16 | 2018-07-27 | 东莞市华睿电子科技有限公司 | A kind of long-range control method of intelligent door lock |
CN108494942A (en) * | 2018-03-16 | 2018-09-04 | 东莞市华睿电子科技有限公司 | A kind of solution lock control method based on high in the clouds address list |
CN108564673A (en) * | 2018-04-13 | 2018-09-21 | 北京师范大学 | A kind of check class attendance method and system based on Global Face identification |
CN108615007A (en) * | 2018-04-23 | 2018-10-02 | 深圳大学 | Three-dimensional face identification method, device and the storage medium of feature based tensor |
CN108647874A (en) * | 2018-05-04 | 2018-10-12 | 科大讯飞股份有限公司 | Threshold value determines method and device |
WO2018192406A1 (en) * | 2017-04-20 | 2018-10-25 | 腾讯科技(深圳)有限公司 | Identity authentication method and apparatus, and storage medium |
CN109146879A (en) * | 2018-09-30 | 2019-01-04 | 杭州依图医疗技术有限公司 | A kind of method and device detecting the stone age |
CN109190522A (en) * | 2018-08-17 | 2019-01-11 | 浙江捷尚视觉科技股份有限公司 | A kind of biopsy method based on infrared camera |
CN109583165A (en) * | 2018-10-12 | 2019-04-05 | 阿里巴巴集团控股有限公司 | A kind of biological information processing method, device, equipment and system |
CN109635625A (en) * | 2018-10-16 | 2019-04-16 | 平安科技(深圳)有限公司 | Smart identity checking method, equipment, storage medium and device |
CN109670440A (en) * | 2018-12-14 | 2019-04-23 | 央视国际网络无锡有限公司 | The recognition methods of giant panda face and device |
GB2567798A (en) * | 2017-08-22 | 2019-05-01 | Eyn Ltd | Verification method and system |
CN109815835A (en) * | 2018-12-29 | 2019-05-28 | 联动优势科技有限公司 | A kind of interactive mode biopsy method |
CN109934191A (en) * | 2019-03-20 | 2019-06-25 | 北京字节跳动网络技术有限公司 | Information processing method and device |
CN109993024A (en) * | 2017-12-29 | 2019-07-09 | 技嘉科技股份有限公司 | Authentication means, auth method and computer-readable storage medium |
CN110197108A (en) * | 2018-08-17 | 2019-09-03 | 平安科技(深圳)有限公司 | Auth method, device, computer equipment and storage medium |
CN110210276A (en) * | 2018-05-15 | 2019-09-06 | 腾讯科技(深圳)有限公司 | A kind of motion track acquisition methods and its equipment, storage medium, terminal |
CN110443621A (en) * | 2019-08-07 | 2019-11-12 | 深圳前海微众银行股份有限公司 | Video core body method, apparatus, equipment and computer storage medium |
CN110705351A (en) * | 2019-08-28 | 2020-01-17 | 视联动力信息技术股份有限公司 | Video conference sign-in method and system |
CN110826045A (en) * | 2018-08-13 | 2020-02-21 | 深圳市商汤科技有限公司 | Authentication method and device, electronic equipment and storage medium |
CN110968239A (en) * | 2019-11-28 | 2020-04-07 | 北京市商汤科技开发有限公司 | Control method, device and equipment for display object and storage medium |
CN111091388A (en) * | 2020-02-18 | 2020-05-01 | 支付宝实验室(新加坡)有限公司 | Living body detection method and device, face payment method and device, and electronic equipment |
CN111144169A (en) * | 2018-11-02 | 2020-05-12 | 深圳比亚迪微电子有限公司 | Face recognition method and device and electronic equipment |
CN111178259A (en) * | 2019-12-30 | 2020-05-19 | 八维通科技有限公司 | Recognition method and system supporting multi-algorithm fusion |
CN111209768A (en) * | 2018-11-06 | 2020-05-29 | 深圳市商汤科技有限公司 | Identity authentication system and method, electronic device, and storage medium |
CN111259757A (en) * | 2020-01-13 | 2020-06-09 | 支付宝实验室(新加坡)有限公司 | Image-based living body identification method, device and equipment |
CN111372023A (en) * | 2018-12-25 | 2020-07-03 | 杭州海康威视数字技术股份有限公司 | Code stream encryption and decryption method and device |
CN111382624A (en) * | 2018-12-28 | 2020-07-07 | 杭州海康威视数字技术股份有限公司 | Action recognition method, device, equipment and readable storage medium |
CN111523408A (en) * | 2020-04-09 | 2020-08-11 | 北京百度网讯科技有限公司 | Motion capture method and device |
CN111723655A (en) * | 2020-05-12 | 2020-09-29 | 五八有限公司 | Face image processing method, device, server, terminal, equipment and medium |
CN111866589A (en) * | 2019-05-20 | 2020-10-30 | 北京嘀嘀无限科技发展有限公司 | Video data verification method and device, electronic equipment and storage medium |
CN111881707A (en) * | 2019-12-04 | 2020-11-03 | 马上消费金融股份有限公司 | Image reproduction detection method, identity verification method, model training method and device |
WO2020220453A1 (en) * | 2019-04-29 | 2020-11-05 | 众安信息技术服务有限公司 | Method and device for verifying certificate and certificate holder |
CN111932755A (en) * | 2020-07-02 | 2020-11-13 | 北京市威富安防科技有限公司 | Personnel passage verification method and device, computer equipment and storage medium |
CN111985331A (en) * | 2020-07-20 | 2020-11-24 | 中电天奥有限公司 | Detection method and device for preventing secret of business from being stolen |
CN112101286A (en) * | 2020-09-25 | 2020-12-18 | 北京市商汤科技开发有限公司 | Service request method, device, computer equipment and storage medium |
CN112287909A (en) * | 2020-12-24 | 2021-01-29 | 四川新网银行股份有限公司 | Double-random in-vivo detection method for randomly generating detection points and interactive elements |
CN112364733A (en) * | 2020-10-30 | 2021-02-12 | 重庆电子工程职业学院 | Intelligent security face recognition system |
CN112434547A (en) * | 2019-08-26 | 2021-03-02 | 中国移动通信集团广东有限公司 | User identity auditing method and device |
CN112700344A (en) * | 2020-12-22 | 2021-04-23 | 成都睿畜电子科技有限公司 | Farm management method, farm management device, farm management medium and farm management equipment |
US10997722B2 (en) | 2018-04-25 | 2021-05-04 | Beijing Didi Infinity Technology And Development Co., Ltd. | Systems and methods for identifying a body motion |
CN112800885A (en) * | 2021-01-16 | 2021-05-14 | 南京众鑫云创软件科技有限公司 | Data processing system and method based on big data |
Families Citing this family (57)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109635021A (en) * | 2018-10-30 | 2019-04-16 | 平安科技(深圳)有限公司 | A kind of data information input method, device and equipment based on human testing |
CN109670285A (en) * | 2018-11-13 | 2019-04-23 | 平安科技(深圳)有限公司 | Face recognition login method, device, computer equipment and storage medium |
CN111241505A (en) * | 2018-11-28 | 2020-06-05 | 深圳市帝迈生物技术有限公司 | Terminal device, login verification method thereof and computer storage medium |
CN109815658A (en) * | 2018-12-14 | 2019-05-28 | 平安科技(深圳)有限公司 | A kind of verification method and device, computer equipment and computer storage medium |
CN109726648A (en) * | 2018-12-14 | 2019-05-07 | 深圳壹账通智能科技有限公司 | A kind of facial image recognition method and device based on machine learning |
CN109886697B (en) * | 2018-12-26 | 2023-09-08 | 巽腾(广东)科技有限公司 | Operation determination method and device based on expression group and electronic equipment |
TWI690856B (en) * | 2019-01-07 | 2020-04-11 | 國立交通大學 | Identity recognition system and identity recognition method |
CN111435424B (en) * | 2019-01-14 | 2024-10-22 | 北京京东乾石科技有限公司 | Image processing method and device |
JP7363455B2 (en) * | 2019-01-17 | 2023-10-18 | 株式会社デンソーウェーブ | Authentication system, authentication device and authentication method |
CN111461368B (en) * | 2019-01-21 | 2024-01-09 | 北京嘀嘀无限科技发展有限公司 | Abnormal order processing method, device, equipment and computer readable storage medium |
CN109934187B (en) * | 2019-03-19 | 2023-04-07 | 西安电子科技大学 | Random challenge response method based on face activity detection-eye sight |
CN110111129B (en) * | 2019-03-28 | 2024-01-19 | 中国科学院深圳先进技术研究院 | Data analysis method, advertisement playing device and storage medium |
CN110163094A (en) * | 2019-04-15 | 2019-08-23 | 深圳壹账通智能科技有限公司 | Biopsy method, device, equipment and storage medium based on gesture motion |
CN110288272B (en) * | 2019-04-19 | 2024-01-30 | 平安科技(深圳)有限公司 | Data processing method, device, electronic equipment and storage medium |
CN110287971B (en) * | 2019-05-22 | 2023-11-14 | 平安银行股份有限公司 | Data verification method, device, computer equipment and storage medium |
CN110363067A (en) * | 2019-05-24 | 2019-10-22 | 深圳壹账通智能科技有限公司 | Auth method and device, electronic equipment and storage medium |
TWI727337B (en) * | 2019-06-06 | 2021-05-11 | 大陸商鴻富錦精密工業(武漢)有限公司 | Electronic device and face recognition method |
CN112069863B (en) * | 2019-06-11 | 2022-08-19 | 荣耀终端有限公司 | Face feature validity determination method and electronic equipment |
CN110399794B (en) * | 2019-06-20 | 2024-06-28 | 平安科技(深圳)有限公司 | Human body-based gesture recognition method, device, equipment and storage medium |
CN110443137B (en) * | 2019-07-03 | 2023-07-25 | 平安科技(深圳)有限公司 | Multi-dimensional identity information identification method and device, computer equipment and storage medium |
CN112307817B (en) * | 2019-07-29 | 2024-03-19 | 中国移动通信集团浙江有限公司 | Face living body detection method, device, computing equipment and computer storage medium |
CN110705350B (en) * | 2019-08-27 | 2020-08-25 | 阿里巴巴集团控股有限公司 | Certificate identification method and device |
CN110688517B (en) * | 2019-09-02 | 2023-05-30 | 平安科技(深圳)有限公司 | Audio distribution method, device and storage medium |
CN112767436B (en) * | 2019-10-21 | 2024-10-01 | 深圳云天励飞技术有限公司 | Face detection tracking method and device |
CN111062323B (en) * | 2019-12-16 | 2023-06-02 | 腾讯科技(深圳)有限公司 | Face image transmission method, numerical value transfer method, device and electronic equipment |
CN111143703B (en) * | 2019-12-19 | 2023-05-23 | 上海寒武纪信息科技有限公司 | Intelligent line recommendation method and related products |
CN111191207A (en) * | 2019-12-23 | 2020-05-22 | 深圳壹账通智能科技有限公司 | Electronic file control method and device, computer equipment and storage medium |
CN111160243A (en) * | 2019-12-27 | 2020-05-15 | 深圳云天励飞技术有限公司 | Passenger flow volume statistical method and related product |
CN111178287A (en) * | 2019-12-31 | 2020-05-19 | 云知声智能科技股份有限公司 | Audio-video fusion end-to-end identity recognition method and device |
CN114008616B (en) | 2020-02-04 | 2023-04-28 | 格步计程车控股私人有限公司 | Method, server and communication system for authenticating a user for transportation purposes |
SG10202002506YA (en) * | 2020-03-18 | 2020-09-29 | Alipay Labs Singapore Pte Ltd | A user authentication method and system |
CN113554046A (en) * | 2020-04-24 | 2021-10-26 | 阿里巴巴集团控股有限公司 | Image processing method and system, storage medium and computing device |
CN113569594B (en) * | 2020-04-28 | 2024-10-22 | 魔门塔(苏州)科技有限公司 | Method and device for labeling key points of human face |
CN111652086B (en) * | 2020-05-15 | 2022-12-30 | 汉王科技股份有限公司 | Face living body detection method and device, electronic equipment and storage medium |
CN111753271A (en) * | 2020-06-28 | 2020-10-09 | 深圳壹账通智能科技有限公司 | Account opening identity verification method, account opening identity verification device, account opening identity verification equipment and account opening identity verification medium based on AI identification |
CN111985298B (en) * | 2020-06-28 | 2023-07-25 | 百度在线网络技术(北京)有限公司 | Face recognition sample collection method and device |
CN111950401B (en) * | 2020-07-28 | 2023-12-08 | 深圳数联天下智能科技有限公司 | Method, image processing system, device and medium for determining position of key point area |
CN112818733B (en) * | 2020-08-24 | 2024-01-05 | 腾讯科技(深圳)有限公司 | Information processing method, device, storage medium and terminal |
CN112132030B (en) * | 2020-09-23 | 2024-05-28 | 湖南快乐阳光互动娱乐传媒有限公司 | Video processing method and device, storage medium and electronic equipment |
CN112383737B (en) * | 2020-11-11 | 2023-05-30 | 从法信息科技有限公司 | Video processing verification method and device for multi-user online content on same screen and electronic equipment |
CN112491840B (en) * | 2020-11-17 | 2023-07-07 | 平安养老保险股份有限公司 | Information modification method, device, computer equipment and storage medium |
CN114626036B (en) * | 2020-12-08 | 2024-05-24 | 腾讯科技(深圳)有限公司 | Information processing method and device based on face recognition, storage medium and terminal |
CN112633129A (en) * | 2020-12-18 | 2021-04-09 | 深圳追一科技有限公司 | Video analysis method and device, electronic equipment and storage medium |
CN112560768A (en) * | 2020-12-25 | 2021-03-26 | 深圳市商汤科技有限公司 | Gate channel control method and device, computer equipment and storage medium |
CN113128452A (en) * | 2021-04-30 | 2021-07-16 | 重庆锐云科技有限公司 | Greening satisfaction acquisition method and system based on image recognition |
CN113361366A (en) * | 2021-05-27 | 2021-09-07 | 北京百度网讯科技有限公司 | Face labeling method and device, electronic equipment and storage medium |
CN113569676B (en) * | 2021-07-16 | 2024-06-11 | 北京市商汤科技开发有限公司 | Image processing method, device, electronic equipment and storage medium |
CN113673473A (en) * | 2021-08-31 | 2021-11-19 | 浙江大华技术股份有限公司 | Gate control method and device, electronic equipment and storage medium |
CN113742776B (en) * | 2021-09-08 | 2024-07-12 | 北京昱华荣泰生物科技有限公司 | Data verification method and device based on biological recognition technology and computer equipment |
CN113780212A (en) * | 2021-09-16 | 2021-12-10 | 平安科技(深圳)有限公司 | User identity verification method, device, equipment and storage medium |
CN114267066B (en) * | 2021-12-24 | 2022-11-01 | 合肥的卢深视科技有限公司 | Face recognition method, electronic device and storage medium |
CN114359798A (en) * | 2021-12-29 | 2022-04-15 | 天翼物联科技有限公司 | Data auditing method and device for real person authentication, computer equipment and storage medium |
CN114638684A (en) * | 2022-02-16 | 2022-06-17 | 中和农信项目管理有限公司 | Financial survey anti-cheating method and device, terminal equipment and storage medium |
CN114613018B (en) * | 2022-03-23 | 2024-08-23 | Oppo广东移动通信有限公司 | Living body detection method, living body detection device, storage medium and electronic equipment |
CN114760068A (en) * | 2022-04-08 | 2022-07-15 | 中国银行股份有限公司 | User identity authentication method, system, electronic device and storage medium |
CN116469196B (en) * | 2023-03-16 | 2024-03-15 | 南京誉泰瑞思科技有限公司 | Digital integrated management system and method |
CN116597545A (en) * | 2023-05-17 | 2023-08-15 | 广东保伦电子股份有限公司 | Door lock control method and device, electronic equipment and storage medium |
Citations (10)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2000113197A (en) * | 1998-10-02 | 2000-04-21 | Victor Co Of Japan Ltd | Individual identifying device |
CN101162500A (en) * | 2006-10-13 | 2008-04-16 | 上海银晨智能识别科技有限公司 | Sectorization type human face recognition method |
CN104036276A (en) * | 2014-05-29 | 2014-09-10 | 无锡天脉聚源传媒科技有限公司 | Face recognition method and device |
CN105426827A (en) * | 2015-11-09 | 2016-03-23 | 北京市商汤科技开发有限公司 | Living body verification method, device and system |
CN105426850A (en) * | 2015-11-23 | 2016-03-23 | 深圳市商汤科技有限公司 | Human face identification based related information pushing device and method |
CN105518708A (en) * | 2015-04-29 | 2016-04-20 | 北京旷视科技有限公司 | Method and equipment for verifying living human face, and computer program product |
CN105847735A (en) * | 2016-03-30 | 2016-08-10 | 宁波三博电子科技有限公司 | Face recognition-based instant pop-up screen video communication method and system |
CN106156578A (en) * | 2015-04-22 | 2016-11-23 | 深圳市腾讯计算机系统有限公司 | Auth method and device |
CN106295574A (en) * | 2016-08-12 | 2017-01-04 | 广州视源电子科技股份有限公司 | Face feature extraction modeling and face recognition method and device based on neural network |
WO2017016516A1 (en) * | 2015-07-24 | 2017-02-02 | 上海依图网络科技有限公司 | Method for face recognition-based video human image tracking under complex scenes |
Family Cites Families (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111898108B (en) * | 2014-09-03 | 2024-06-04 | 创新先进技术有限公司 | Identity authentication method, device, terminal and server |
CN105989264B (en) * | 2015-02-02 | 2020-04-07 | 北京中科奥森数据科技有限公司 | Biological characteristic living body detection method and system |
CN106302330B (en) * | 2015-05-21 | 2021-01-05 | 腾讯科技(深圳)有限公司 | Identity verification method, device and system |
CN105227316A (en) * | 2015-09-01 | 2016-01-06 | 深圳市创想一登科技有限公司 | Based on mobile Internet account login system and the method for facial image authentication |
CN111144293A (en) * | 2015-09-25 | 2020-05-12 | 北京市商汤科技开发有限公司 | Human face identity authentication system with interactive living body detection and method thereof |
CN105718874A (en) * | 2016-01-18 | 2016-06-29 | 北京天诚盛业科技有限公司 | Method and device of in-vivo detection and authentication |
CN107066983B (en) * | 2017-04-20 | 2022-08-09 | 腾讯科技(上海)有限公司 | Identity verification method and device |
- 2017-04-20 CN CN201710261931.0A patent/CN107066983B/en active Active
- 2018-04-12 WO PCT/CN2018/082803 patent/WO2018192406A1/en active Application Filing
Non-Patent Citations (3)
Title |
---|
K. Kollreider: "Non-intrusive liveness detection by face images", Image and Vision Computing *
吴炜 (Wu Wei): "Learning-Based Image Enhancement Techniques" (《基于学习的图像增强技术》), 28 February 2013 *
陈曦 (Chen Xi): "A Survey of Liveness Detection Technology in Biometric Recognition" (《生物识别中的活体检测技术综述》), Proceedings of the 34th Chinese Control Conference (《第三十四届中国控制会议》) *
Cited By (87)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN104518877A (en) * | 2013-10-08 | 2015-04-15 | 鸿富锦精密工业(深圳)有限公司 | Identity authentication system and method |
WO2018192406A1 (en) * | 2017-04-20 | 2018-10-25 | 腾讯科技(深圳)有限公司 | Identity authentication method and apparatus, and storage medium |
GB2567798A (en) * | 2017-08-22 | 2019-05-01 | Eyn Ltd | Verification method and system |
US11308340B2 (en) | 2017-08-22 | 2022-04-19 | Onfido Ltd. | Verification method and system |
CN107590485A (en) * | 2017-09-29 | 2018-01-16 | 广州市森锐科技股份有限公司 | It is a kind of for the auth method of express delivery cabinet, device and to take express system |
CN107729857A (en) * | 2017-10-26 | 2018-02-23 | 广东欧珀移动通信有限公司 | Face identification method, device, storage medium and electronic equipment |
CN107733911A (en) * | 2017-10-30 | 2018-02-23 | 郑州云海信息技术有限公司 | A kind of power and environmental monitoring system client login authentication system and method |
CN108171109A (en) * | 2017-11-28 | 2018-06-15 | 苏州市东皓计算机系统工程有限公司 | A kind of face identification system |
CN109993024A (en) * | 2017-12-29 | 2019-07-09 | 技嘉科技股份有限公司 | Authentication means, auth method and computer-readable storage medium |
CN108494942A (en) * | 2018-03-16 | 2018-09-04 | 东莞市华睿电子科技有限公司 | A kind of solution lock control method based on high in the clouds address list |
CN108335394A (en) * | 2018-03-16 | 2018-07-27 | 东莞市华睿电子科技有限公司 | A kind of long-range control method of intelligent door lock |
CN108494942B (en) * | 2018-03-16 | 2021-12-10 | 深圳八爪网络科技有限公司 | Unlocking control method based on cloud address book |
CN108564673A (en) * | 2018-04-13 | 2018-09-21 | 北京师范大学 | A kind of check class attendance method and system based on Global Face identification |
CN108615007A (en) * | 2018-04-23 | 2018-10-02 | 深圳大学 | Three-dimensional face identification method, device and the storage medium of feature based tensor |
US10997722B2 (en) | 2018-04-25 | 2021-05-04 | Beijing Didi Infinity Technology And Development Co., Ltd. | Systems and methods for identifying a body motion |
CN108647874B (en) * | 2018-05-04 | 2020-12-08 | 科大讯飞股份有限公司 | Threshold value determining method and device |
CN108647874A (en) * | 2018-05-04 | 2018-10-12 | 科大讯飞股份有限公司 | Threshold value determines method and device |
CN110210276A (en) * | 2018-05-15 | 2019-09-06 | 腾讯科技(深圳)有限公司 | A kind of motion track acquisition methods and its equipment, storage medium, terminal |
CN110826045A (en) * | 2018-08-13 | 2020-02-21 | 深圳市商汤科技有限公司 | Authentication method and device, electronic equipment and storage medium |
CN110197108A (en) * | 2018-08-17 | 2019-09-03 | 平安科技(深圳)有限公司 | Auth method, device, computer equipment and storage medium |
CN109190522B (en) * | 2018-08-17 | 2021-05-07 | 浙江捷尚视觉科技股份有限公司 | Living body detection method based on infrared camera |
CN109190522A (en) * | 2018-08-17 | 2019-01-11 | 浙江捷尚视觉科技股份有限公司 | A kind of biopsy method based on infrared camera |
CN109146879A (en) * | 2018-09-30 | 2019-01-04 | 杭州依图医疗技术有限公司 | A kind of method and device detecting the stone age |
CN109146879B (en) * | 2018-09-30 | 2021-05-18 | 杭州依图医疗技术有限公司 | Method and device for detecting bone age |
CN109583165A (en) * | 2018-10-12 | 2019-04-05 | 阿里巴巴集团控股有限公司 | A kind of biological information processing method, device, equipment and system |
CN109635625B (en) * | 2018-10-16 | 2023-08-18 | 平安科技(深圳)有限公司 | Intelligent identity verification method, equipment, storage medium and device |
CN109635625A (en) * | 2018-10-16 | 2019-04-16 | 平安科技(深圳)有限公司 | Smart identity checking method, equipment, storage medium and device |
CN111144169A (en) * | 2018-11-02 | 2020-05-12 | 深圳比亚迪微电子有限公司 | Face recognition method and device and electronic equipment |
CN111209768A (en) * | 2018-11-06 | 2020-05-29 | 深圳市商汤科技有限公司 | Identity authentication system and method, electronic device, and storage medium |
US11727663B2 (en) | 2018-11-13 | 2023-08-15 | Bigo Technology Pte. Ltd. | Method and apparatus for detecting face key point, computer device and storage medium |
CN109670440A (en) * | 2018-12-14 | 2019-04-23 | 央视国际网络无锡有限公司 | The recognition methods of giant panda face and device |
CN111372023A (en) * | 2018-12-25 | 2020-07-03 | 杭州海康威视数字技术股份有限公司 | Code stream encryption and decryption method and device |
CN111382624A (en) * | 2018-12-28 | 2020-07-07 | 杭州海康威视数字技术股份有限公司 | Action recognition method, device, equipment and readable storage medium |
CN111382624B (en) * | 2018-12-28 | 2023-08-11 | 杭州海康威视数字技术股份有限公司 | Action recognition method, device, equipment and readable storage medium |
CN109815835A (en) * | 2018-12-29 | 2019-05-28 | 联动优势科技有限公司 | A kind of interactive mode biopsy method |
CN109934191A (en) * | 2019-03-20 | 2019-06-25 | 北京字节跳动网络技术有限公司 | Information processing method and device |
WO2020220453A1 (en) * | 2019-04-29 | 2020-11-05 | 众安信息技术服务有限公司 | Method and device for verifying certificate and certificate holder |
CN111866589A (en) * | 2019-05-20 | 2020-10-30 | 北京嘀嘀无限科技发展有限公司 | Video data verification method and device, electronic equipment and storage medium |
CN112906741A (en) * | 2019-05-21 | 2021-06-04 | 北京嘀嘀无限科技发展有限公司 | Image processing method, image processing device, electronic equipment and storage medium |
CN110443621A (en) * | 2019-08-07 | 2019-11-12 | 深圳前海微众银行股份有限公司 | Video core body method, apparatus, equipment and computer storage medium |
CN112434547A (en) * | 2019-08-26 | 2021-03-02 | 中国移动通信集团广东有限公司 | User identity auditing method and device |
CN112434547B (en) * | 2019-08-26 | 2023-11-14 | 中国移动通信集团广东有限公司 | User identity auditing method and device |
CN110705351A (en) * | 2019-08-28 | 2020-01-17 | 视联动力信息技术股份有限公司 | Video conference sign-in method and system |
TWI758837B (en) * | 2019-11-28 | 2022-03-21 | 大陸商北京市商湯科技開發有限公司 | Method and apparatus for controlling a display object, electronic device and storage medium |
CN110968239A (en) * | 2019-11-28 | 2020-04-07 | 北京市商汤科技开发有限公司 | Control method, device and equipment for display object and storage medium |
WO2021103610A1 (en) * | 2019-11-28 | 2021-06-03 | 北京市商汤科技开发有限公司 | Display object control method and apparatus, electronic device and storage medium |
CN111881707A (en) * | 2019-12-04 | 2020-11-03 | 马上消费金融股份有限公司 | Image reproduction detection method, identity verification method, model training method and device |
CN113095110A (en) * | 2019-12-23 | 2021-07-09 | 浙江宇视科技有限公司 | Method, device, medium and electronic equipment for dynamically warehousing face data |
CN113095110B (en) * | 2019-12-23 | 2024-03-08 | 浙江宇视科技有限公司 | Method, device, medium and electronic equipment for dynamically warehousing face data |
CN113075212A (en) * | 2019-12-24 | 2021-07-06 | 北京嘀嘀无限科技发展有限公司 | Vehicle verification method and device |
CN111178259A (en) * | 2019-12-30 | 2020-05-19 | 八维通科技有限公司 | Recognition method and system supporting multi-algorithm fusion |
CN111259757A (en) * | 2020-01-13 | 2020-06-09 | 支付宝实验室(新加坡)有限公司 | Image-based living body identification method, device and equipment |
CN111259757B (en) * | 2020-01-13 | 2023-06-20 | 支付宝实验室(新加坡)有限公司 | Living body identification method, device and equipment based on image |
CN111091388B (en) * | 2020-02-18 | 2024-02-09 | 支付宝实验室(新加坡)有限公司 | Living body detection method and device, face payment method and device and electronic equipment |
CN111091388A (en) * | 2020-02-18 | 2020-05-01 | 支付宝实验室(新加坡)有限公司 | Living body detection method and device, face payment method and device, and electronic equipment |
CN111523408B (en) * | 2020-04-09 | 2023-09-15 | 北京百度网讯科技有限公司 | Motion capturing method and device |
CN111523408A (en) * | 2020-04-09 | 2020-08-11 | 北京百度网讯科技有限公司 | Motion capture method and device |
CN111723655B (en) * | 2020-05-12 | 2024-03-08 | 五八有限公司 | Face image processing method, device, server, terminal, equipment and medium |
CN111723655A (en) * | 2020-05-12 | 2020-09-29 | 五八有限公司 | Face image processing method, device, server, terminal, equipment and medium |
CN111932755A (en) * | 2020-07-02 | 2020-11-13 | 北京市威富安防科技有限公司 | Personnel passage verification method and device, computer equipment and storage medium |
CN111985331A (en) * | 2020-07-20 | 2020-11-24 | 中电天奥有限公司 | Detection method and device for preventing secret of business from being stolen |
CN111985331B (en) * | 2020-07-20 | 2024-05-10 | 中电天奥有限公司 | Detection method and device for preventing trade secret from being stolen |
WO2022028425A1 (en) * | 2020-08-05 | 2022-02-10 | 广州虎牙科技有限公司 | Object recognition method and apparatus, electronic device and storage medium |
CN112101286A (en) * | 2020-09-25 | 2020-12-18 | 北京市商汤科技开发有限公司 | Service request method, device, computer equipment and storage medium |
CN112364733B (en) * | 2020-10-30 | 2022-07-26 | 重庆电子工程职业学院 | Intelligent security face recognition system |
CN112364733A (en) * | 2020-10-30 | 2021-02-12 | 重庆电子工程职业学院 | Intelligent security face recognition system |
CN112700344A (en) * | 2020-12-22 | 2021-04-23 | 成都睿畜电子科技有限公司 | Farm management method, farm management device, farm management medium and farm management equipment |
CN112287909B (en) * | 2020-12-24 | 2021-09-07 | 四川新网银行股份有限公司 | Double-random in-vivo detection method for randomly generating detection points and interactive elements |
CN112287909A (en) * | 2020-12-24 | 2021-01-29 | 四川新网银行股份有限公司 | Double-random in-vivo detection method for randomly generating detection points and interactive elements |
CN112800885A (en) * | 2021-01-16 | 2021-05-14 | 南京众鑫云创软件科技有限公司 | Data processing system and method based on big data |
CN112800885B (en) * | 2021-01-16 | 2023-09-26 | 南京众鑫云创软件科技有限公司 | Data processing system and method based on big data |
CN113255512A (en) * | 2021-05-21 | 2021-08-13 | 北京百度网讯科技有限公司 | Method, apparatus, device and storage medium for living body identification |
CN113255512B (en) * | 2021-05-21 | 2023-07-28 | 北京百度网讯科技有限公司 | Method, apparatus, device and storage medium for living body identification |
CN113255529A (en) * | 2021-05-28 | 2021-08-13 | 支付宝(杭州)信息技术有限公司 | Biological feature identification method, device and equipment |
CN113536270A (en) * | 2021-07-26 | 2021-10-22 | 网易(杭州)网络有限公司 | Information verification method and device, computer equipment and storage medium |
CN113536270B (en) * | 2021-07-26 | 2023-08-08 | 网易(杭州)网络有限公司 | Information verification method, device, computer equipment and storage medium |
CN113469135A (en) * | 2021-07-28 | 2021-10-01 | 浙江大华技术股份有限公司 | Method and device for determining object identity information, storage medium and electronic device |
CN113505756A (en) * | 2021-08-23 | 2021-10-15 | 支付宝(杭州)信息技术有限公司 | Face living body detection method and device |
CN115514893A (en) * | 2022-09-20 | 2022-12-23 | 北京有竹居网络技术有限公司 | Image uploading method, image uploading device, readable storage medium and electronic equipment |
CN115514893B (en) * | 2022-09-20 | 2023-10-27 | 北京有竹居网络技术有限公司 | Image uploading method, image uploading device, readable storage medium and electronic equipment |
CN115512426B (en) * | 2022-11-04 | 2023-03-24 | 安徽五域安全技术有限公司 | Intelligent face recognition method and system |
CN115512426A (en) * | 2022-11-04 | 2022-12-23 | 安徽五域安全技术有限公司 | Intelligent face recognition method and system |
CN116152936A (en) * | 2023-02-17 | 2023-05-23 | 深圳市永腾翼科技有限公司 | Face identity authentication system with interactive living body detection and method thereof |
CN115937961B (en) * | 2023-03-02 | 2023-07-11 | 济南丽阳神州智能科技有限公司 | Online learning identification method and equipment |
CN115937961A (en) * | 2023-03-02 | 2023-04-07 | 济南丽阳神州智能科技有限公司 | Online learning identification method and equipment |
CN117789272A (en) * | 2023-12-26 | 2024-03-29 | 中邮消费金融有限公司 | Identity verification method, device, equipment and storage medium |
CN118656814A (en) * | 2024-08-19 | 2024-09-17 | 支付宝(杭州)信息技术有限公司 | Digital driving security verification method and device, storage medium and electronic equipment |
Also Published As
Publication number | Publication date |
---|---|
WO2018192406A1 (en) | 2018-10-25 |
CN107066983B (en) | 2022-08-09 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN107066983A (en) | A kind of auth method and device | |
CN112651348B (en) | Identity authentication method and device and storage medium | |
De Marsico et al. | Firme: Face and iris recognition for mobile engagement | |
CN107239725A (en) | A kind of information displaying method, apparatus and system | |
CN105809415A (en) | Human face recognition based check-in system, method and device | |
CN106803289A (en) | A kind of false proof method and system of registering of intelligent mobile | |
CN105518708A (en) | Method and equipment for verifying living human face, and computer program product | |
CN104143097B (en) | Classification function obtaining method and device, face age recognition method and device and equipment | |
WO2018166291A1 (en) | User sign-in identification method based on multifactor cross-verification | |
CN104580143A (en) | Security authentication method based on gesture recognition, terminal, server and system | |
GB2560340A (en) | Verification method and system | |
CN110135262A (en) | The anti-peeping processing method of sensitive data, device, equipment and storage medium | |
WO2018072028A1 (en) | Face authentication to mitigate spoofing | |
CN109003346A (en) | A kind of campus Work attendance method and its system based on face recognition technology | |
US10805255B2 (en) | Network information identification method and apparatus | |
CN107992728A (en) | Face verification method and device | |
US20220262163A1 (en) | Method of face anti-spoofing, device, and storage medium | |
CN109993212A (en) | Location privacy protection method, social network-i i-platform in the sharing of social networks picture | |
CN106951866A (en) | A kind of face authentication method and device | |
CN104572654A (en) | User searching method and device | |
CN108154103A (en) | Detect method, apparatus, equipment and the computer storage media of promotion message conspicuousness | |
CN108875582A (en) | Auth method, device, equipment, storage medium and program | |
US20230306792A1 (en) | Spoof Detection Based on Challenge Response Analysis | |
CN110503409A (en) | The method and relevant apparatus of information processing | |
CN113642519A (en) | Face recognition system and face recognition method |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||