
CN110598571A - Living body detection method, living body detection device and computer-readable storage medium - Google Patents

Living body detection method, living body detection device and computer-readable storage medium

Info

Publication number
CN110598571A
CN110598571A
Authority
CN
China
Prior art keywords
dimensional
face
face image
image
information
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201910768142.5A
Other languages
Chinese (zh)
Inventor
欧阳高询
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Ping An Life Insurance Company of China Ltd
Original Assignee
Ping An Life Insurance Company of China Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Ping An Life Insurance Company of China Ltd filed Critical Ping An Life Insurance Company of China Ltd
Priority to CN201910768142.5A priority Critical patent/CN110598571A/en
Publication of CN110598571A publication Critical patent/CN110598571A/en
Pending legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16Human faces, e.g. facial parts, sketches or expressions
    • G06V40/168Feature extraction; Face representation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/40Spoof detection, e.g. liveness detection
    • G06V40/45Detection of the body part being alive

Landscapes

  • Engineering & Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Health & Medical Sciences (AREA)
  • Length Measuring Devices By Optical Means (AREA)

Abstract

The invention relates to biometric recognition technology and discloses a living body detection method, which comprises the following steps: emitting structured light to an object to be detected; acquiring two digital face images of the object to be detected at different angles and an image containing depth data of the face of the object to be detected; acquiring a three-dimensional face image of the object to be detected under the structured light irradiation based on the digital images and the image containing the face depth data; extracting three-dimensional coordinate information of face feature points and posture feature information of the three-dimensional face image from the three-dimensional face image; calculating face feature information representing face features by using the three-dimensional coordinate information of the face feature points; and judging whether the three-dimensional face image comes from a living body by using the face feature information representing the face features and the posture feature information of the three-dimensional face image.

Description

Living body detection method, living body detection device and computer-readable storage medium
Technical Field
The present invention relates to the field of face recognition, and in particular, to a method and an apparatus for detecting a living body, and a computer-readable storage medium.
Background
Currently, face recognition systems are increasingly applied in security and financial fields that require identity authentication, such as remote bank account opening, access control systems, and remote transaction authentication. In these high-security applications, in addition to verifying that the face of the person being authenticated matches the reference images stored in the database, it is first necessary to verify that the person being authenticated is a real, live person. That is, the face recognition system needs to be able to prevent attacks that use a photo, a video, a mask, or a three-dimensional face model (made of materials such as paper, plaster, or rubber).
Existing living body detection devices are complex, costly, and offer low detection accuracy; some detection algorithms require temperature data and are easily disturbed by the environment; in addition, existing living body detection methods are sensitive to illumination, and recognition performance degrades severely when the ambient light is weak.
Disclosure of Invention
The invention provides a living body detection method, a living body detection device and a computer-readable storage medium, and mainly aims to improve the speed and the precision of living body detection.
In order to achieve the above object, the present invention provides a living body detection method including:
emitting structured light to an object to be detected;
acquiring two digital face images of the object to be detected at different angles and an image containing depth data of the face of the object to be detected;
acquiring a three-dimensional face image of the object to be detected under the structured light irradiation based on the digital image and the image containing the face depth data;
extracting three-dimensional coordinate information of a face characteristic point and posture characteristic information of the three-dimensional face image from the three-dimensional face image; extracting three-dimensional coordinate information of the human face characteristic points by acquiring distance information of the characteristic points on the three-dimensional human face image relative to a coordinate origin in a three-dimensional coordinate space, wherein the posture characteristic information of the three-dimensional human face image is posture characteristic information of one frame of three-dimensional human face image and/or posture characteristic change information of two frames of three-dimensional human face images;
calculating to obtain face feature information representing face features by using the three-dimensional coordinate information of the face feature points;
and judging whether the three-dimensional face image is from a living body or not by using the face feature information representing the face features and the posture feature information of the three-dimensional face image.
Optionally, the structured light is light with light and dark stripes having a certain structure, the light and dark stripes are vertical linear stripes, and a stripe interval is configured to match a size of the object to be detected.
Optionally, the structured light is emitted by a light emitting device, which sequentially emits at least two structured light patterns with different spatial frequencies to illuminate the object to be detected.
Optionally, the step of determining whether the three-dimensional face image is from a living body by using the face feature information representing the face feature and the pose feature information of the three-dimensional face image includes:
combining the human face feature information representing the human face features and the posture feature information of the three-dimensional human face image into joint feature information;
and judging whether the three-dimensional face image is from a living body or not by using the joint feature information.
Optionally, the face feature points include a plurality of feature points from one or more of the eye, nose and mouth regions of the face, and the posture feature information of the three-dimensional face image is the yaw angle, pitch angle and roll angle of the three-dimensional face image.
The present invention also provides an electronic device comprising a memory and a processor, the memory having stored thereon a liveness detection program executable on the processor, the liveness detection program when executed by the processor implementing the steps of:
emitting structured light to an object to be detected;
acquiring two digital face images of the object to be detected at different angles and an image containing depth data of the face of the object to be detected;
acquiring a three-dimensional face image of the object to be detected under the structured light irradiation based on the digital image and the image containing the face depth data;
extracting three-dimensional coordinate information of a face characteristic point and posture characteristic information of the three-dimensional face image from the three-dimensional face image; extracting three-dimensional coordinate information of the human face characteristic points by acquiring distance information of the characteristic points on the three-dimensional human face image relative to a coordinate origin in a three-dimensional coordinate space, wherein the posture characteristic information of the three-dimensional human face image is posture characteristic information of one frame of three-dimensional human face image and/or posture characteristic change information of two frames of three-dimensional human face images;
calculating to obtain face feature information representing face features by using the three-dimensional coordinate information of the face feature points;
and judging whether the three-dimensional face image is from a living body or not by using the face feature information representing the face features and the posture feature information of the three-dimensional face image.
Optionally, the structured light is light with light and dark stripes having a certain structure, the light and dark stripes are vertical linear stripes, and a stripe interval is configured to match a size of the object to be detected.
Optionally, the structured light is emitted by a light emitting device, which sequentially emits at least two structured light patterns with different spatial frequencies to illuminate the object to be detected.
Optionally, the step of determining whether the three-dimensional face image is from a living body by using the face feature information representing the face feature and the pose feature information of the three-dimensional face image includes:
combining the human face feature information representing the human face features and the posture feature information of the three-dimensional human face image into joint feature information;
and judging whether the three-dimensional face image is from a living body or not by using the joint feature information.
Further, to achieve the above object, the present invention also provides a computer-readable storage medium having a living body detection program stored thereon, the living body detection program being executable by one or more processors to implement the steps of the living body detection method described above.
The living body detection method, living body detection device and computer-readable storage medium of the present invention do not require a large amount of collected data for model training and do not require an additional temperature acquisition device; living body detection is completed in a purely image-based manner. The method is not easily disturbed by color information, is not affected by illumination, and can still be used in dark conditions. The device is simple, easy to implement and convenient to use in a variety of application scenarios, with low cost and simple later maintenance.
Drawings
FIG. 1 is a schematic flow chart of a living body detection method according to an embodiment of the present invention;
fig. 2 is a schematic diagram illustrating an internal structure of an electronic device according to an embodiment of the invention;
fig. 3 is a schematic block diagram of the modules of the living body detection program in an electronic device according to an embodiment of the present invention.
The implementation, functional features and advantages of the objects of the present invention will be further explained with reference to the accompanying drawings.
Detailed Description
It should be understood that the specific embodiments described herein are merely illustrative of the invention and are not intended to limit the invention.
The invention provides a living body detection method. Fig. 1 is a schematic flow chart of a living body detection method according to an embodiment of the present invention. The method may be performed by a device, which may be implemented by software and/or hardware, and in this embodiment, the device is an intelligent terminal.
In the present embodiment, the living body detecting method includes:
s101, emitting structured light to an object to be detected;
s102, acquiring digital images of two different-angle faces of the object to be detected and an image containing depth data of the face of the object to be detected;
s103, acquiring a three-dimensional face image of the object to be detected under the structured light irradiation based on the digital image and the image containing the face depth data;
s104, extracting three-dimensional coordinate information of the human face characteristic points and posture characteristic information of the three-dimensional human face image from the three-dimensional human face image; extracting three-dimensional coordinate information of the human face characteristic points by acquiring distance information of the characteristic points on the three-dimensional human face image relative to a coordinate origin in a three-dimensional coordinate space, wherein the posture characteristic information of the three-dimensional human face image is posture characteristic information of one frame of three-dimensional human face image and/or posture characteristic change information of two frames of three-dimensional human face images;
s105, calculating face feature information representing face features by using the three-dimensional coordinate information of the face feature points;
and S106, judging whether the three-dimensional face image is from a living body by using the face feature information representing the face feature and the posture feature information of the three-dimensional face image.
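For readability, the flow of steps S101 to S106 can be summarized in the following minimal sketch (Python is used only for illustration; helper names such as emit_structured_light, reconstruct_3d_face and classify_liveness are hypothetical placeholders standing in for the hardware and algorithms described below, not a concrete implementation of this disclosure):

```python
# Minimal pipeline sketch of steps S101-S106. All helper functions are
# hypothetical placeholders, not part of this disclosure.
def liveness_detection(device):
    device.emit_structured_light()                          # S101: project structured light
    img_a, img_b, depth_img = device.capture_frames()       # S102: two face images + depth image
    face_3d = reconstruct_3d_face(img_a, img_b, depth_img)  # S103: 3-D face image
    landmarks_3d = extract_landmark_coordinates(face_3d)    # S104: 3-D feature point coordinates
    pose = extract_pose_features(face_3d)                   # S104: posture features (yaw/pitch/roll, ...)
    face_features = compute_face_features(landmarks_3d)     # S105: face feature information
    return classify_liveness(face_features, pose)           # S106: True if the image comes from a living body
```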
The structured light is light with light and dark stripes of a certain structure, the light and dark stripes are vertical linear stripes, and the stripe intervals are configured to match the size of the object to be detected.
Wherein the structured light is light having structured light and dark stripes at a wavelength such as the infrared band (e.g., 850 nm). For example, when the object to be detected is a human face, the stripe pitch of the light and dark stripes is about 2 mm. In one embodiment of the present invention, the structured light may be generated by placing a specific grating (e.g., 161 lines/inch) in front of a line laser.
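As a rough numerical illustration only (the resolution and pitch values below are assumptions for demonstration, not parameters of the patented projector), a vertical bright/dark fringe pattern of a given spatial frequency can be synthesized as follows:

```python
import numpy as np

def vertical_fringe_pattern(width=1280, height=720, stripe_pitch_px=20):
    """Synthesize a vertical bright/dark fringe pattern.

    stripe_pitch_px is the period of one bright+dark pair in pixels; a real
    projector would choose the pitch so that one stripe is roughly 2 mm wide
    on the target face, as described above.
    """
    x = np.arange(width)
    # Sinusoidal intensity profile in [0, 1]; threshold it for hard-edged stripes.
    profile = 0.5 * (1.0 + np.cos(2.0 * np.pi * x / stripe_pitch_px))
    return np.tile(profile, (height, 1))  # same profile on every row -> vertical stripes

# Two patterns with different spatial frequencies, matching the embodiment in
# which at least two structured-light patterns are projected one after another.
pattern_coarse = vertical_fringe_pattern(stripe_pitch_px=40)
pattern_fine = vertical_fringe_pattern(stripe_pitch_px=20)
```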
The structured light is emitted by a light emitting device, which sequentially emits at least two structured light patterns with different spatial frequencies to illuminate the object to be detected.
The step of judging whether the three-dimensional face image comes from a living body by using the face feature information representing the face features and the posture feature information of the three-dimensional face image comprises the following steps:
combining the human face feature information representing the human face features and the posture feature information of the three-dimensional human face image into joint feature information;
and judging whether the three-dimensional face image is from a living body or not by using the joint feature information.
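A minimal sketch of this joint-feature decision is given below. The disclosure does not fix a concrete decision rule, so the rule and thresholds here are illustrative assumptions (a simple threshold rule is chosen to stay consistent with the statement elsewhere in this description that no large training set is required); the sketch assumes the face feature vector holds scale-normalized depth-relief values and the pose vector holds yaw, pitch and roll in degrees:

```python
import numpy as np

def judge_liveness(face_features, pose_features,
                   min_depth_relief=0.08, max_abs_angle_deg=45.0):
    """Combine face features and pose features into a joint feature vector and
    apply an illustrative rule. Thresholds are assumptions, not disclosed values."""
    face_features = np.asarray(face_features, dtype=float)
    pose_features = np.asarray(pose_features, dtype=float)
    joint = np.concatenate([face_features, pose_features])  # joint feature information
    face_part = joint[:face_features.size]                  # depth-relief style features
    pose_part = joint[face_features.size:]                  # yaw, pitch, roll in degrees
    has_3d_relief = bool(face_part.min() >= min_depth_relief)   # flat photos give ~0 relief
    plausible_pose = bool(np.all(np.abs(pose_part) <= max_abs_angle_deg))
    return has_3d_relief and plausible_pose
```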
The face image acquired by an ordinary two-dimensional face acquisition device is a planar face image, from which only the two-dimensional coordinates of the feature points, that is, the (x, y) coordinate values, can be obtained. A three-dimensional face image, by contrast, is acquired by a three-dimensional face image acquisition device, and the three-dimensional coordinates of the feature points can be obtained, including horizontal, vertical and depth coordinates, that is, the (x, y, z) coordinate values; compared with the two-dimensional image, the depth coordinate of the face image, that is, the z value, is added. For example, the three-dimensional face image acquisition device may consist of two infrared cameras and an infrared laser emitter, or of an infrared camera, a color camera and an infrared laser emitter; imitating the parallax principle of human eyes, it acquires the face images simultaneously, tracks the paths of the infrared rays, and calculates the depth coordinates of the three-dimensional face image by the principle of triangulation.
The three-dimensional coordinate information of the face feature points is distance information in a three-dimensional coordinate space. The camera of the three-dimensional face image acquisition device is taken as the origin of the three-dimensional coordinate space, the direction in which the device faces the user is taken as the positive direction of the z axis, and the positive directions of the x axis and the y axis can be determined according to a right-handed coordinate system, thereby establishing the three-dimensional coordinate space; the distance information of the feature points on the face image relative to the coordinate origin in this space is then obtained by conversion, which gives the three-dimensional coordinate information of the face feature points.
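A minimal sketch of this depth triangulation and of the conversion to camera-centred three-dimensional coordinates, assuming a rectified camera pair with focal length f (in pixels), baseline B (in metres) and principal point (cx, cy); all numeric values below are illustrative assumptions, not parameters from this disclosure:

```python
def disparity_to_depth(disparity_px, focal_px, baseline_m):
    """Stereo triangulation for a rectified pair: depth Z = f * B / d."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive for a valid depth")
    return focal_px * baseline_m / disparity_px

def back_project(u, v, depth_m, focal_px, cx, cy):
    """Map pixel (u, v) with depth Z to camera-space (x, y, z), taking the
    camera optical centre as the coordinate origin and the direction towards
    the user as the positive z axis."""
    x = (u - cx) * depth_m / focal_px
    y = (v - cy) * depth_m / focal_px
    return (x, y, depth_m)

# Example: a feature point at pixel (700, 400) with 52 px disparity, f = 1000 px,
# B = 0.06 m and principal point (640, 360) lies roughly 1.15 m from the camera.
z = disparity_to_depth(52, 1000, 0.06)
point_3d = back_project(700, 400, z, 1000, 640, 360)
```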
The posture feature information of the three-dimensional face image refers to the posture feature information of one frame of the three-dimensional face image and/or the posture feature change information between two frames of the three-dimensional face image, such as an offset position, an inclination angle and a rotation angle.
Wherein the step of determining whether the three-dimensional face image is from a living body by using the face feature information representing the face features and the pose feature information of the three-dimensional face image may further include:
judging whether the three-dimensional face image is from a living body by using the posture feature information of the three-dimensional face image, and if so, proceeding to the next step;
and judging whether the three-dimensional face image comes from a living body or not by using face feature information representing the face features.
The face feature points comprise a plurality of feature points from one or more of the eye, nose and mouth regions of the face, and the posture feature information of the three-dimensional face image is the yaw angle, pitch angle and roll angle of the three-dimensional face image.
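For illustration, the yaw, pitch and roll angles can be read off a head rotation matrix (obtained, for example, by aligning the detected three-dimensional feature points with a neutral reference face); the decomposition below assumes the convention R = Rz(roll) · Ry(yaw) · Rx(pitch), which is one common choice and not something mandated by this disclosure:

```python
import numpy as np

def rotation_to_yaw_pitch_roll(R):
    """Decompose a 3x3 rotation matrix R = Rz(roll) @ Ry(yaw) @ Rx(pitch)
    into head-pose angles in degrees: yaw about the vertical axis, pitch about
    the horizontal axis, roll about the viewing axis. The gimbal-lock case
    (|yaw| close to 90 degrees) is ignored for brevity."""
    yaw = np.arcsin(-R[2, 0])
    pitch = np.arctan2(R[2, 1], R[2, 2])
    roll = np.arctan2(R[1, 0], R[0, 0])
    return np.degrees([yaw, pitch, roll])

# A pure 20-degree yaw rotation should come back as approximately (20, 0, 0).
a = np.radians(20.0)
R_yaw = np.array([[ np.cos(a), 0.0, np.sin(a)],
                  [ 0.0,       1.0, 0.0      ],
                  [-np.sin(a), 0.0, np.cos(a)]])
print(rotation_to_yaw_pitch_roll(R_yaw))  # -> [20.  0.  0.] (approximately)
```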
The face feature information refers to the face feature information of one frame of the three-dimensional face image and/or the face feature change information between two frames of the three-dimensional face image. Because a face image carries a large amount of feature information (it contains main regions such as the nose, eyes, mouth, cheeks and eyebrows, and each region consists of many feature points), a large amount of three-dimensional coordinate information of face feature points is extracted. Computing face feature information that represents the face features from the three-dimensional coordinate information of the face feature points can effectively remove interference and noise, so that the face feature information characterizes the face better, false detection is avoided, and detection accuracy is improved.
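A minimal sketch of one such reduction, assuming the extracted feature points are available as a dictionary of (x, y, z) camera-space coordinates. The two quantities computed here (nose protrusion relative to the inter-ocular distance, and overall depth relief) are illustrative choices rather than features prescribed by this disclosure; they are natural liveness cues because a flat photograph yields almost no depth relief:

```python
import numpy as np

def face_relief_features(landmarks):
    """landmarks: dict mapping names such as 'left_eye', 'right_eye',
    'nose_tip', 'mouth_left', 'mouth_right' to (x, y, z) coordinates in metres."""
    pts = {name: np.asarray(p, dtype=float) for name, p in landmarks.items()}
    inter_ocular = np.linalg.norm(pts["left_eye"] - pts["right_eye"])
    # Protrusion of the nose tip in front of the eye plane (smaller z = closer to camera).
    eye_mid_z = 0.5 * (pts["left_eye"][2] + pts["right_eye"][2])
    nose_protrusion = (eye_mid_z - pts["nose_tip"][2]) / inter_ocular
    # Overall depth relief across all feature points, normalized for scale.
    zs = np.array([p[2] for p in pts.values()])
    depth_relief = (zs.max() - zs.min()) / inter_ocular
    return np.array([nose_protrusion, depth_relief])

# For a printed photo or a screen replay the face is (nearly) planar, so both
# values collapse towards zero, whereas a genuine face gives clearly positive values.
```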
The three-dimensional face image of the object to be detected under the structured light irradiation is obtained by reconstructing a face three-dimensional model, wherein the step of reconstructing the face three-dimensional model comprises the following steps:
performing stereo rectification on the digital face images captured at the two different angles to eliminate vertical parallax;
carrying out geometric transformation and super-resolution transformation on the image containing the face depth data;
extracting seed pixels, wherein the extracted seed pixel points are projection coordinates of each pixel point of the image containing the face depth data on the digital image acquisition device;
obtaining a seed-pixel-expanded disparity map with the same resolution as the digital image acquisition device, according to the disparity map and the binary mask image obtained by the geometric transformation and super-resolution transformation of the image and according to the extracted seed pixels;
and establishing a human face three-dimensional model according to the disparity map which has the same resolution as the digital image acquisition device and is based on seed pixel expansion.
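As a grossly simplified sketch of the seed-pixel expansion step (it only propagates each seed disparity to neighbouring mask pixels by breadth-first search, omitting the photometric refinement against the rectified image pair that a real implementation would add; the array shapes and names are assumptions for illustration):

```python
from collections import deque
import numpy as np

def expand_from_seeds(seed_uv, seed_disp, mask):
    """seed_uv:   (N, 2) integer array of seed pixel coordinates (row, col)
    seed_disp: (N,) disparities at the seed pixels
    mask:      (H, W) boolean array, True where the face is visible
    Returns an (H, W) disparity map at the full resolution of the mask."""
    disp = np.full(mask.shape, np.nan)
    queue = deque()
    for (r, c), d in zip(seed_uv, seed_disp):
        if mask[r, c]:
            disp[r, c] = d
            queue.append((r, c))
    while queue:
        r, c = queue.popleft()
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if (0 <= nr < mask.shape[0] and 0 <= nc < mask.shape[1]
                    and mask[nr, nc] and np.isnan(disp[nr, nc])):
                disp[nr, nc] = disp[r, c]  # inherit the parent's disparity
                queue.append((nr, nc))
    return disp
```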
The living body detection method provided by this embodiment does not require a large amount of collected data for model training and does not require an additional temperature acquisition device; living body detection is completed in a purely image-based manner. The method is not easily disturbed by color information, is not affected by illumination, and can still be used in dark conditions. The device is simple, easy to implement and convenient to use in a variety of application scenarios, with low cost and simple later maintenance.
The invention also provides an electronic device 1. Fig. 2 is a schematic view of an internal structure of an electronic device according to an embodiment of the invention.
In this embodiment, the electronic device 1 may be a computer, an intelligent terminal or a server. The electronic device 1 comprises at least a memory 11, a processor 13, a communication bus 15, and a network interface 17. In this embodiment, the electronic device 1 is an intelligent terminal.
The memory 11 includes at least one type of readable storage medium, which includes a flash memory, a hard disk, a multimedia card, a card type memory (e.g., SD or DX memory, etc.), a magnetic memory, a magnetic disk, an optical disk, and the like. The memory 11 may in some embodiments be an internal storage unit of the electronic device, for example a hard disk of the electronic device. The memory 11 may be an external storage device of the electronic apparatus in other embodiments, such as a plug-in hard disk, a Smart Media Card (SMC), a Secure Digital (SD) Card, a flash Card (FlashCard), and the like, which are provided on the electronic apparatus. Further, the memory 11 may also include both an internal storage unit and an external storage device of the electronic apparatus. The memory 11 may be used not only to store application software installed in the electronic apparatus 1 and various types of data, such as a code of the living body detection program 111, but also to temporarily store data that has been output or is to be output.
The processor 13 may be a Central Processing Unit (CPU), controller, microcontroller, microprocessor or other data Processing chip in some embodiments, and is used for executing program codes stored in the memory 11 or Processing data.
The communication bus 15 is used to realize connection communication between these components.
The network interface 17 may optionally include a standard wired interface, a wireless interface (e.g., WI-FI interface), and is typically used to establish a communication link between the electronic apparatus 1 and other electronic devices.
Optionally, the electronic device 1 may further comprise a user interface, which may comprise a Display (Display), an input unit such as a Keyboard (Keyboard), and the optional user interface may also comprise a standard wired interface, a wireless interface. Alternatively, in some embodiments, the display may be an LED display, a liquid crystal display, a touch-sensitive liquid crystal display, an OLED (Organic Light-Emitting Diode) touch device, or the like. The display, which may also be referred to as a display screen or display unit, is suitable for displaying information processed in the electronic device and for displaying a visualized user interface.
While FIG. 2 shows only the electronic device 1 with the components 11-17, those skilled in the art will appreciate that the configuration shown in FIG. 2 does not constitute a limitation of the electronic device, and may include fewer or more components than shown, or some components in combination, or a different arrangement of components.
In the embodiment of the electronic device 1 shown in fig. 2, a living body detection program 111 is stored in the memory 11; the processor 13 implements the following steps when executing the liveness detection program 111 stored in the memory 11:
emitting structured light to an object to be detected;
acquiring two digital face images of the object to be detected at different angles and an image containing depth data of the face of the object to be detected;
acquiring a three-dimensional face image of the object to be detected under the structured light irradiation based on the digital image and the image containing the face depth data;
extracting three-dimensional coordinate information of a face characteristic point and posture characteristic information of the three-dimensional face image from the three-dimensional face image; extracting three-dimensional coordinate information of the human face characteristic points by acquiring distance information of the characteristic points on the three-dimensional human face image relative to a coordinate origin in a three-dimensional coordinate space, wherein the posture characteristic information of the three-dimensional human face image is posture characteristic information of one frame of three-dimensional human face image and/or posture characteristic change information of two frames of three-dimensional human face images;
calculating to obtain face feature information representing face features by using the three-dimensional coordinate information of the face feature points;
and judging whether the three-dimensional face image is from a living body or not by using the face feature information representing the face features and the posture feature information of the three-dimensional face image.
The structured light is light with light and dark stripes of a certain structure, the light and dark stripes are vertical linear stripes, and the stripe intervals are configured to match the size of the object to be detected.
Wherein the structured light is light having structured light and dark stripes at a wavelength such as the infrared band (e.g., 850 nm). For example, when the object to be detected is a human face, the stripe pitch of the light and dark stripes is about 2 mm. In one embodiment of the present invention, the structured light may be generated by placing a specific grating (e.g., 161 lines/inch) in front of a line laser.
The structured light is emitted by a light emitting device, which sequentially emits at least two structured light patterns with different spatial frequencies to illuminate the object to be detected.
The step of judging whether the three-dimensional face image comes from a living body by using the face feature information representing the face features and the posture feature information of the three-dimensional face image comprises the following steps:
combining the human face feature information representing the human face features and the posture feature information of the three-dimensional human face image into joint feature information;
and judging whether the three-dimensional face image is from a living body or not by using the joint feature information.
The face image acquired by an ordinary two-dimensional face acquisition device is a planar face image, from which only the two-dimensional coordinates of the feature points, that is, the (x, y) coordinate values, can be obtained. A three-dimensional face image, by contrast, is acquired by a three-dimensional face image acquisition device, and the three-dimensional coordinates of the feature points can be obtained, including horizontal, vertical and depth coordinates, that is, the (x, y, z) coordinate values; compared with the two-dimensional image, the depth coordinate of the face image, that is, the z value, is added. For example, the three-dimensional face image acquisition device may consist of two infrared cameras and an infrared laser emitter, or of an infrared camera, a color camera and an infrared laser emitter; imitating the parallax principle of human eyes, it acquires the face images simultaneously, tracks the paths of the infrared rays, and calculates the depth coordinates of the three-dimensional face image by the principle of triangulation.
The three-dimensional coordinate information of the face feature points is distance information in a three-dimensional coordinate space. The camera of the three-dimensional face image acquisition device is taken as the origin of the three-dimensional coordinate space, the direction in which the device faces the user is taken as the positive direction of the z axis, and the positive directions of the x axis and the y axis can be determined according to a right-handed coordinate system, thereby establishing the three-dimensional coordinate space; the distance information of the feature points on the face image relative to the coordinate origin in this space is then obtained by conversion, which gives the three-dimensional coordinate information of the face feature points.
The posture feature information of the three-dimensional face image refers to the posture feature information of one frame of the three-dimensional face image and/or the posture feature change information between two frames of the three-dimensional face image, such as an offset position, an inclination angle and a rotation angle.
Wherein the step of determining whether the three-dimensional face image is from a living body by using the face feature information representing the face features and the pose feature information of the three-dimensional face image may further include:
judging whether the three-dimensional face image is from a living body by using the posture feature information of the three-dimensional face image, and if so, proceeding to the next step;
and judging whether the three-dimensional face image comes from a living body or not by using face feature information representing the face features.
The face feature points comprise a plurality of feature points from one or more of the eye, nose and mouth regions of the face, and the posture feature information of the three-dimensional face image is the yaw angle, pitch angle and roll angle of the three-dimensional face image.
The face feature information refers to the face feature information of one frame of the three-dimensional face image and/or the face feature change information between two frames of the three-dimensional face image. Because a face image carries a large amount of feature information (it contains main regions such as the nose, eyes, mouth, cheeks and eyebrows, and each region consists of many feature points), a large amount of three-dimensional coordinate information of face feature points is extracted. Computing face feature information that represents the face features from the three-dimensional coordinate information of the face feature points can effectively remove interference and noise, so that the face feature information characterizes the face better, false detection is avoided, and detection accuracy is improved.
The three-dimensional face image of the object to be detected under the structured light irradiation is obtained by reconstructing a face three-dimensional model, wherein the step of reconstructing the face three-dimensional model comprises the following steps:
performing stereo rectification on the digital face images captured at the two different angles to eliminate vertical parallax;
carrying out geometric transformation and super-resolution transformation on the image containing the face depth data;
extracting seed pixels, wherein the extracted seed pixel points are projection coordinates of each pixel point of the image containing the face depth data on the digital image acquisition device;
obtaining a seed-pixel-expanded disparity map with the same resolution as the digital image acquisition device, according to the disparity map and the binary mask image obtained by the geometric transformation and super-resolution transformation of the image and according to the extracted seed pixels;
and establishing a human face three-dimensional model according to the disparity map which has the same resolution as the digital image acquisition device and is based on seed pixel expansion.
The electronic device provided by this embodiment does not require a large amount of collected data for model training and does not require an additional temperature acquisition device; living body detection is completed in a purely image-based manner. The method is not easily disturbed by color information, is not affected by illumination, and can still be used in dark conditions. The device is simple, easy to implement and convenient to use in a variety of application scenarios, with low cost and simple later maintenance.
Furthermore, an embodiment of the present invention also provides a computer-readable storage medium, where a living body detection program 111 is stored, where the living body detection program 111 is executable by one or more processors to implement the following operations:
emitting structured light to an object to be detected;
acquiring two digital face images of the object to be detected at different angles and an image containing depth data of the face of the object to be detected;
acquiring a three-dimensional face image of the object to be detected under the structured light irradiation based on the digital image and the image containing the face depth data;
extracting three-dimensional coordinate information of a face characteristic point and posture characteristic information of the three-dimensional face image from the three-dimensional face image; extracting three-dimensional coordinate information of the human face characteristic points by acquiring distance information of the characteristic points on the three-dimensional human face image relative to a coordinate origin in a three-dimensional coordinate space, wherein the posture characteristic information of the three-dimensional human face image is posture characteristic information of one frame of three-dimensional human face image and/or posture characteristic change information of two frames of three-dimensional human face images;
calculating to obtain face feature information representing face features by using the three-dimensional coordinate information of the face feature points;
and judging whether the three-dimensional face image is from a living body or not by using the face feature information representing the face features and the posture feature information of the three-dimensional face image.
The embodiment of the computer-readable storage medium of the present invention is substantially the same as the embodiments of the electronic device and the method described above, and will not be repeated here.
Alternatively, in other embodiments, the living body detection program 111 may be divided into one or more modules, and the one or more modules are stored in the memory 11 and executed by one or more processors (in this embodiment, the processor 13) to implement the present invention. The module referred to in the present invention is a series of computer program instruction segments capable of performing a specific function, and is used to describe the execution process of the living body detection program in the electronic device.
For example, referring to fig. 3, which is a schematic diagram of program modules of the living body detection program 111 in an embodiment of the electronic device of the present invention, in this embodiment, the living body detection program 111 can be divided into the transmitting module 10, the acquiring module 20, the extracting module 30, the calculating module 40, and the determining module 50, and exemplarily:
the transmitting module 10 is used for transmitting the structured light to an object to be detected;
the acquiring module 20 is configured to acquire two digital face images of the object to be detected at different angles and an image containing depth data of the face of the object to be detected, and to acquire a three-dimensional face image of the object to be detected under the structured light irradiation based on the digital images and the image containing the face depth data;
the extraction module 30 is configured to extract three-dimensional coordinate information of a human face feature point and posture feature information of a three-dimensional human face image from the three-dimensional human face image; extracting three-dimensional coordinate information of the human face characteristic points by acquiring distance information of the characteristic points on the three-dimensional human face image relative to a coordinate origin in a three-dimensional coordinate space, wherein the posture characteristic information of the three-dimensional human face image is posture characteristic information of one frame of three-dimensional human face image and/or posture characteristic change information of two frames of three-dimensional human face images;
the calculating module 40 is configured to calculate, by using the three-dimensional coordinate information of the face feature point, face feature information representing a face feature;
the judging module 50 is configured to judge whether the three-dimensional face image is from a living body by using the face feature information representing the face feature and the pose feature information of the three-dimensional face image.
The functions or operation steps implemented when the program modules such as the transmitting module 10, the obtaining module 20, the extracting module 30, the calculating module 40, and the determining module 50 are executed are substantially the same as those of the above embodiments, and are not described herein again.
It should be noted that the above-mentioned numbers of the embodiments of the present invention are merely for description, and do not represent the merits of the embodiments. And the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, apparatus, article, or method that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, apparatus, article, or method. Without further limitation, an element defined by the phrase "comprising an … …" does not exclude the presence of other like elements in a process, apparatus, article, or method that includes the element.
Through the above description of the embodiments, those skilled in the art will clearly understand that the method of the above embodiments can be implemented by software plus a necessary general hardware platform, and certainly can also be implemented by hardware, but in many cases, the former is a better implementation manner. Based on such understanding, the technical solution of the present invention may be embodied in the form of a software product, which is stored in a storage medium (e.g., ROM/RAM, magnetic disk, optical disk) as described above and includes instructions for enabling a terminal device (e.g., a mobile phone, a computer, a server, or a network device) to execute the method according to the embodiments of the present invention.
The above description is only a preferred embodiment of the present invention, and not intended to limit the scope of the present invention, and all modifications of equivalent structures and equivalent processes, which are made by using the contents of the present specification and the accompanying drawings, or directly or indirectly applied to other related technical fields, are included in the scope of the present invention.

Claims (10)

1. A living body detection method, the living body detection method comprising:
emitting structured light to an object to be detected;
acquiring two digital face images of the object to be detected at different angles and an image containing depth data of the face of the object to be detected;
acquiring a three-dimensional face image of the object to be detected under the structured light irradiation based on the digital image and the image containing the face depth data;
extracting three-dimensional coordinate information of a face characteristic point and posture characteristic information of the three-dimensional face image from the three-dimensional face image; extracting three-dimensional coordinate information of the human face characteristic points by acquiring distance information of the characteristic points on the three-dimensional human face image relative to a coordinate origin in a three-dimensional coordinate space, wherein the posture characteristic information of the three-dimensional human face image is posture characteristic information of one frame of three-dimensional human face image and/or posture characteristic change information of two frames of three-dimensional human face images;
calculating to obtain face feature information representing face features by using the three-dimensional coordinate information of the face feature points;
and judging whether the three-dimensional face image is from a living body or not by using the face feature information representing the face features and the posture feature information of the three-dimensional face image.
2. The living body detection method according to claim 1, wherein the structured light is light having light and dark stripes of a certain structure, the light and dark stripes are vertical linear stripes, and a stripe interval is configured to match a size of the object to be detected.
3. The living body detection method according to claim 2, wherein the structured light is emitted by a light emitting device which sequentially emits at least two structured light patterns with different spatial frequencies to illuminate the object to be detected.
4. The living body detection method according to claim 1, wherein the step of determining whether the three-dimensional face image is from a living body using the face feature information representing the face feature and the pose feature information of the three-dimensional face image comprises:
combining the human face feature information representing the human face features and the posture feature information of the three-dimensional human face image into joint feature information;
and judging whether the three-dimensional face image is from a living body or not by using the joint feature information.
5. The living body detection method according to claim 1, wherein the face feature points include a plurality of feature points from one or more of the eye, nose and mouth regions of the face, and the posture feature information of the three-dimensional face image is a yaw angle, a pitch angle and a roll angle of the three-dimensional face image.
6. An electronic device, comprising a memory and a processor, the memory having stored thereon a liveness detection program executable on the processor, the liveness detection program when executed by the processor implementing the steps of:
emitting structured light to an object to be detected;
acquiring two digital face images of the object to be detected at different angles and an image containing depth data of the face of the object to be detected;
acquiring a three-dimensional face image of the object to be detected under the structured light irradiation based on the digital image and the image containing the face depth data;
extracting three-dimensional coordinate information of a face characteristic point and posture characteristic information of the three-dimensional face image from the three-dimensional face image; extracting three-dimensional coordinate information of the human face characteristic points by acquiring distance information of the characteristic points on the three-dimensional human face image relative to a coordinate origin in a three-dimensional coordinate space, wherein the posture characteristic information of the three-dimensional human face image is posture characteristic information of one frame of three-dimensional human face image and/or posture characteristic change information of two frames of three-dimensional human face images;
calculating to obtain face feature information representing face features by using the three-dimensional coordinate information of the face feature points;
and judging whether the three-dimensional face image is from a living body or not by using the face feature information representing the face features and the posture feature information of the three-dimensional face image.
7. The electronic device according to claim 6, wherein the structured light is a light having a structure of light and dark stripes, the light and dark stripes are vertical line-shaped stripes, and a stripe interval is configured to match a size of the object to be detected.
8. The electronic device according to claim 7, wherein the structured light is emitted by a light emitting device which sequentially emits at least two structured light patterns with different spatial frequencies to illuminate the object to be detected.
9. The electronic device according to claim 1, wherein the step of determining whether the three-dimensional face image is from a living body using the face feature information representing the face feature and the pose feature information of the three-dimensional face image comprises:
combining the human face feature information representing the human face features and the posture feature information of the three-dimensional human face image into joint feature information;
and judging whether the three-dimensional face image is from a living body or not by using the joint feature information.
10. A computer-readable storage medium, having a liveness detection program stored thereon, the liveness detection program being executable by one or more processors to implement the steps of the liveness detection method as recited in any one of claims 1 to 5.
CN201910768142.5A 2019-08-15 2019-08-15 Living body detection method, living body detection device and computer-readable storage medium Pending CN110598571A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910768142.5A CN110598571A (en) 2019-08-15 2019-08-15 Living body detection method, living body detection device and computer-readable storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910768142.5A CN110598571A (en) 2019-08-15 2019-08-15 Living body detection method, living body detection device and computer-readable storage medium

Publications (1)

Publication Number Publication Date
CN110598571A (en) 2019-12-20

Family

ID=68854686

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910768142.5A Pending CN110598571A (en) 2019-08-15 2019-08-15 Living body detection method, living body detection device and computer-readable storage medium

Country Status (1)

Country Link
CN (1) CN110598571A (en)



Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103971408A (en) * 2014-05-21 2014-08-06 中国科学院苏州纳米技术与纳米仿生研究所 Three-dimensional facial model generating system and method
CN105447483A (en) * 2015-12-31 2016-03-30 北京旷视科技有限公司 Living body detection method and device
CN105574518A (en) * 2016-01-25 2016-05-11 北京天诚盛业科技有限公司 Method and device for human face living detection
CN105740778A (en) * 2016-01-25 2016-07-06 北京天诚盛业科技有限公司 Improved three-dimensional human face in-vivo detection method and device thereof
CN105912986A (en) * 2016-04-01 2016-08-31 北京旷视科技有限公司 In vivo detection method, in vivo detection system and computer program product
CN108734057A (en) * 2017-04-18 2018-11-02 北京旷视科技有限公司 The method, apparatus and computer storage media of In vivo detection

Cited By (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111898553A (en) * 2020-07-31 2020-11-06 成都新潮传媒集团有限公司 Method and device for distinguishing virtual image personnel and computer equipment
CN111898553B (en) * 2020-07-31 2022-08-09 成都新潮传媒集团有限公司 Method and device for distinguishing virtual image personnel and computer equipment
CN112016505A (en) * 2020-09-03 2020-12-01 平安科技(深圳)有限公司 Living body detection method, living body detection equipment, storage medium and living body detection device based on face image
CN112016505B (en) * 2020-09-03 2024-05-28 平安科技(深圳)有限公司 Living body detection method, equipment, storage medium and device based on face image
CN112115925A (en) * 2020-11-18 2020-12-22 鹏城实验室 Face recognition method and device and computer readable storage medium
CN113435342A (en) * 2021-06-29 2021-09-24 平安科技(深圳)有限公司 Living body detection method, living body detection device, living body detection equipment and storage medium
CN113435342B (en) * 2021-06-29 2022-08-12 平安科技(深圳)有限公司 Living body detection method, living body detection device, living body detection equipment and storage medium
CN114200364A (en) * 2021-12-08 2022-03-18 深圳市联影高端医疗装备创新研究院 Pose detection method, pose detection device and pose detection system

Similar Documents

Publication Publication Date Title
CN110598571A (en) Living body detection method, living body detection device and computer-readable storage medium
US10719954B2 (en) Method and electronic device for extracting a center position of an infrared spot
CN106446873B (en) Face detection method and device
US10223834B2 (en) System and method for immersive and interactive multimedia generation
US9519968B2 (en) Calibrating visual sensors using homography operators
CN110458895B (en) Image coordinate system conversion method, device, equipment and storage medium
US9829309B2 (en) Depth sensing method, device and system based on symbols array plane structured light
TWI419081B (en) Method and system for providing augmented reality based on marker tracing, and computer program product thereof
EP2531980B1 (en) Depth camera compatibility
US10872227B2 (en) Automatic object recognition method and system thereof, shopping device and storage medium
CN110619662A (en) Monocular vision-based multi-pedestrian target space continuous positioning method and system
US8687044B2 (en) Depth camera compatibility
CN108022264B (en) Method and equipment for determining camera pose
US20140092132A1 (en) Systems and methods for 3d pose estimation
CN109583304A (en) A kind of quick 3D face point cloud generation method and device based on structure optical mode group
CN105335722A (en) Detection system and detection method based on depth image information
CN104246793A (en) Three-dimensional face recognition for mobile devices
CN104156998A (en) Implementation method and system based on fusion of virtual image contents and real scene
CN111079470A (en) Method and device for detecting living human face
WO2014188446A2 (en) Method and apparatus for image matching
CN109523570B (en) Motion parameter calculation method and device
CN111091031A (en) Target object selection method and face unlocking method
CN110059537B (en) Three-dimensional face data acquisition method and device based on Kinect sensor
CN110832851B (en) Image processing apparatus, image conversion method, and program
CN113191189A (en) Face living body detection method, terminal device and computer readable storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20191220