WO2019144956A1 - Image sensor, lens module, mobile terminal, face recognition method and device - Google Patents
- Publication number
- WO2019144956A1 (PCT/CN2019/073403; CN2019073403W)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- image
- visible light
- region
- light
- image feature
- Prior art date
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/161—Detection; Localisation; Normalisation
- G06V40/166—Detection; Localisation; Normalisation using acquisition arrangements
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/172—Classification, e.g. identification
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/10—Image acquisition
- G06V10/17—Image acquisition using hand-held instruments
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/168—Feature extraction; Face representation
Definitions
- the present disclosure relates to the field of image processing, and in particular, to an image sensor, a lens module, a mobile terminal, a face recognition method, and a device.
- Face recognition technology has received more and more attention and has become a very popular research field in computer vision and pattern recognition.
- Face recognition technology refers to recognizing people by their facial features and contours. Because different people have different facial feature and contour distributions, and because it does not interfere with people's normal behavior, face recognition is less intrusive than other biometric technologies while still achieving good recognition results.
- However, face recognition technology applied to mobile terminals cannot accurately determine whether the image captured by the camera is a real face; criminals can use a face photo to stand in for a real face and complete the recognition, resulting in poor security of face recognition.
- The present disclosure provides an image sensor, a lens module, a mobile terminal, and a face recognition method and device, so as to solve the problem that a camera-captured image cannot be accurately determined to be a real face, which results in low security of face recognition.
- Some embodiments of the present disclosure provide an image sensor, including:
- a microlens layer, a photosensitive element layer, and a filter layer disposed between the microlens layer and the photosensitive element layer;
- incident light is transmitted through the microlens layer and the filter layer in sequence, and then reaches the photosensitive element layer.
- the photosensitive element layer includes a first photosensitive region corresponding to visible light and a second photosensitive region corresponding to infrared light.
- some embodiments of the present disclosure provide a lens module including the image sensor described above.
- some embodiments of the present disclosure provide a mobile terminal, including the lens module described above.
- some embodiments of the present disclosure provide a method for recognizing a face, including:
- acquiring a visible light image and an infrared light image after image acquisition of a face to be verified;
- determining whether an image feature of the visible light image matches an image feature of a pre-stored visible light facial image, and whether an image feature of the infrared light image matches an image feature of a pre-stored infrared light facial image;
- if both match, determining that verification of the face to be verified is successful.
- some embodiments of the present disclosure further provide a face recognition device, including:
- an acquisition module configured to acquire a visible light image and an infrared light image after image acquisition of a face to be verified;
- a first determining module configured to determine whether an image feature of the visible light image matches an image feature of a pre-stored visible light facial image, and whether an image feature of the infrared light image matches an image feature of the pre-stored infrared light facial image;
- a second determining module configured to determine that verification of the face to be verified is successful if the image feature of the visible light image matches the image feature of the pre-stored visible light facial image and the image feature of the infrared light image matches the image feature of the pre-stored infrared light facial image.
- Some embodiments of the present disclosure further provide a mobile terminal, including: a processor, a memory, and a computer program stored on the memory and executable on the processor, the computer program, when executed by the processor, implementing the steps of the face recognition method described above.
- In embodiments of the present disclosure, by matching the image features of the infrared light image acquired from the face to be verified, it can be determined whether the face in front of the camera is a real face or a face photo, which improves the security and accuracy of face recognition.
- Further, since verification of the face to be verified is determined to be successful only after both the image features of the visible light image and the image features of the infrared light image are matched successfully, the accuracy of face recognition is further improved.
- FIG. 1 is a first structural diagram of an example of an image sensor of the present disclosure;
- FIG. 2 is a second structural diagram of an example of an image sensor of the present disclosure
- FIG. 3 is a third structural diagram of an example of an image sensor of the present disclosure.
- FIG. 4 is a flow chart showing an example of a face recognition method of the present disclosure
- FIG. 5 is a schematic structural diagram showing an example of a face recognition device of the present disclosure
- FIG. 6 is a second structural diagram of an example of a face recognition device of the present disclosure.
- FIG. 7 shows a block diagram of an example of a mobile terminal of the present disclosure.
- Some embodiments of the present disclosure provide an image sensor, which is applied in the above-described face recognition method and includes:
- a microlens layer 11, a photosensitive element layer, and a filter layer disposed between the microlens layer 11 and the photosensitive element layer;
- incident light is transmitted through the microlens layer 11 and the filter layer in sequence, and then reaches the photosensitive element layer, which includes a first photosensitive region 131 corresponding to visible light and a second photosensitive region 132 corresponding to infrared light.
- A visible light image can be obtained from the photosensitive signal in the first photosensitive region 131, and an infrared light image can be obtained from the photosensitive signal in the second photosensitive region 132.
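The readout of the two photosensitive regions can be sketched in code. A minimal illustration, assuming a hypothetical 2×2 pixel unit tiling of red, green, blue, and infrared sub-pixels; the disclosure does not fix this particular mosaic layout:

```python
# Sketch: split a raw RGBI mosaic into a visible-light image (first
# photosensitive region) and an infrared-light image (second region).
# The 2x2 unit layout [R G / B I] is an assumption for illustration.

def split_rgbi_mosaic(raw):
    """raw: 2D list of sensor readings; even rows hold R,G pairs and
    odd rows hold B,I pairs. Returns (visible, infrared) per pixel unit."""
    visible, infrared = [], []
    for y in range(0, len(raw), 2):
        vis_row, ir_row = [], []
        for x in range(0, len(raw[y]), 2):
            r = raw[y][x]
            g = raw[y][x + 1]
            b = raw[y + 1][x]
            i = raw[y + 1][x + 1]
            vis_row.append((r, g, b))   # visible-light pixel
            ir_row.append(i)            # infrared pixel
        visible.append(vis_row)
        infrared.append(ir_row)
    return visible, infrared

raw = [
    [10, 20, 11, 21],
    [30, 40, 31, 41],
]
vis, ir = split_rgbi_mosaic(raw)
# vis -> [[(10, 20, 30), (11, 21, 31)]], ir -> [[40, 41]]
```

The same raw readout thus yields both images, which is what lets a single sensor serve visible-light and infrared imaging.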
- the incident light described above refers to natural light.
- The image sensor in some embodiments of the present disclosure can realize both visible light imaging and infrared light imaging.
- By performing image feature matching on the image features of the infrared light image acquired from the face to be verified, it can be determined whether the face in front of the camera is a real face or a face photo, which improves the security and accuracy of face recognition; and, since verification of the face to be verified succeeds only after the image features of the visible light image and of the infrared light image are both matched successfully, the accuracy of face recognition is improved.
- The image sensor in some embodiments of the present disclosure has three implementations, which differ in the design of the filter layer and the photosensitive element layer; the microlens layer 11 is the same in all three, and therefore only the respective filter layers and photosensitive element layers are described below.
- In the first implementation, the second photosensitive region 132 is provided with an infrared photodiode having a photosensitive wavelength range between 780 nm and 1 mm.
- The filter layer includes: a first filter layer, including an invisible light filter region 1211 for passing visible light and a first pass region 1212a for passing natural light; and a second filter layer, including a color filter region 1221 for passing visible light and a second pass region 1222a for passing natural light; wherein the invisible light filter region 1211, the color filter region 1221, and the first photosensitive region 131 correspond to each other, and the first pass region 1212a, the second pass region 1222a, and the second photosensitive region 132 correspond to each other.
- The invisible light filter region 1211 refers to a filter region that blocks invisible light and allows visible light to pass. The first pass region 1212a of the first filter layer allows natural light to pass: no filter structure or filter material that blocks natural light is disposed at its location. That is, the light reaching the color filter region 1221 is visible light, and the light reaching the second pass region 1222a of the second filter layer is natural light.
- The color filter region 1221 includes monochromatic light passage regions for red light, green light, and blue light, and correspondingly the first photosensitive region 131 includes a red photoreceptor sub-region, a green photoreceptor sub-region, and a blue photoreceptor sub-region. The red light passage region on the color filter region 1221 corresponds to the red photoreceptor sub-region on the first photosensitive region 131, the green light passage region corresponds to the green photoreceptor sub-region, and the blue light passage region corresponds to the blue photoreceptor sub-region.
- After visible light reaches any one of the monochromatic light passage regions on the color filter region 1221, only the light of the corresponding color passes through that region to reach the corresponding location on the first photosensitive region 131.
- Natural light is allowed to pass through the second pass region 1222a of the second filter layer, and no filter structure or filter material that blocks the passage of natural light is disposed at the second pass region 1222a of the second filter layer. That is, the light reaching the position of the second photosensitive region 132 is natural light.
- The first photosensitive region 131 is provided with visible light photodiodes, including a red light photodiode, a green light photodiode, and a blue light photodiode; the red light photodiode is located in the red photoreceptor sub-region, the green light photodiode in the green photoreceptor sub-region, and the blue light photodiode in the blue photoreceptor sub-region. The photosensitive wavelength range of the red light photodiode is between 640 nm and 780 nm, that of the green light photodiode between 505 nm and 525 nm, and that of the blue light photodiode between 475 nm and 505 nm.
- The monochromatic light reaching each of the three photoreceptor sub-regions of the first photosensitive region 131 is converted from an optical signal into an electrical signal by the corresponding monochromatic photodiode, and a visible light image is then generated based on the electrical signals.
- The photosensitive wavelength range of the infrared photodiode in the second photosensitive region 132 is between 780 nm and 1 mm; that is, it converts only the infrared component of natural light. Therefore, the natural light reaching the second photosensitive region 132 yields signal conversion of the infrared light only, thereby generating an infrared light image.
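The per-diode wavelength ranges stated above can be summarized as a simple lookup. A sketch using the ranges given in this disclosure (red 640-780 nm, green 505-525 nm, blue 475-505 nm, infrared 780 nm to 1 mm); the handling of the exact boundary values is an assumption:

```python
# Sketch: which photodiode responds to a given wavelength, using the
# ranges stated in the disclosure. 1 mm = 1,000,000 nm. Half-open
# intervals at the shared boundaries are an assumption.

def responding_diode(wavelength_nm):
    if 475 <= wavelength_nm < 505:
        return "blue"
    if 505 <= wavelength_nm < 525:
        return "green"
    if 640 <= wavelength_nm < 780:
        return "red"
    if 780 <= wavelength_nm <= 1_000_000:
        return "infrared"
    return None  # outside every stated photosensitive range

# e.g. 850 nm light (typical IR illumination) reaches only the
# infrared photodiode, which is what isolates the infrared image.
```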
- In the second implementation, the filter layer includes: a first filter layer, including an invisible light filter region 1211 for passing visible light and a third pass region 1212b for passing natural light; and a second filter layer, including a color filter region 1221 for passing visible light and a first filter region 1222b for passing infrared light; wherein the invisible light filter region 1211, the color filter region 1221, and the first photosensitive region 131 correspond to each other, and the third pass region 1212b, the first filter region 1222b, and the second photosensitive region 132 correspond to each other.
- The invisible light filter region 1211 refers to a filter region that blocks invisible light and allows visible light to pass. The third pass region 1212b of the first filter layer allows natural light to pass: no filter structure or filter material that blocks natural light is disposed at its location. That is, the light reaching the color filter region 1221 is visible light, and the light reaching the first filter region 1222b is natural light.
- The color filter region 1221 includes monochromatic light passage regions for red light, green light, and blue light, and correspondingly the first photosensitive region 131 includes a red photoreceptor sub-region, a green photoreceptor sub-region, and a blue photoreceptor sub-region. The red light passage region on the color filter region 1221 corresponds to the red photoreceptor sub-region on the first photosensitive region 131, the green light passage region corresponds to the green photoreceptor sub-region, and the blue light passage region corresponds to the blue photoreceptor sub-region.
- After visible light reaches any one of the monochromatic light passage regions on the color filter region 1221, only the light of the corresponding color passes through that region to reach the corresponding location on the first photosensitive region 131.
- The first filter region 1222b refers to a filter region that blocks other light and allows only infrared light to pass. Natural light passes through the third pass region 1212b of the first filter layer to reach the first filter region 1222b, which filters out the visible light in the natural light so that only infrared light enters the second photosensitive region 132.
- the first photosensitive region 131 is provided with a visible light photosensitive diode
- the second photosensitive region 132 is provided with an infrared photosensitive diode.
- the visible light photodiodes and the infrared photodiode have different photosensitive wavelength ranges.
- The visible light photodiodes disposed in the first photosensitive region 131 include a red light photodiode, a green light photodiode, and a blue light photodiode; the red light photodiode is located in the red photoreceptor sub-region, the green light photodiode in the green photoreceptor sub-region, and the blue light photodiode in the blue photoreceptor sub-region.
- The photosensitive wavelength range of the red light photodiode is between 640 nm and 780 nm, that of the green light photodiode between 505 nm and 525 nm, and that of the blue light photodiode between 475 nm and 505 nm. The monochromatic light reaching the three photoreceptor sub-regions of the first photosensitive region 131 is converted into electrical signals by the corresponding monochromatic photodiodes, and a visible light image is then generated based on the electrical signals.
- The infrared photodiode disposed in the second photosensitive region 132 has a photosensitive wavelength range in the infrared, that is, between 780 nm and 1 mm; the infrared light reaching the second photosensitive region 132 is converted from an optical signal into an electrical signal by the infrared photodiode, and an infrared light image is generated based on the electrical signal.
- Alternatively, the first photosensitive region 131 and the second photosensitive region 132 may be provided with photodiodes having the same photosensitive wavelength range.
- In this implementation, no region for filtering visible light is disposed on the first filter layer, which makes the first and second filter layers easier to manufacture; and since the light reaching the second photosensitive region 132 is only infrared light, better image quality of the generated infrared light image can be ensured.
- In the third implementation, the filter layer of the image sensor includes: a first filter layer, including an invisible light filter region 1211 and a second filter region 1212c for passing infrared light; and a second filter layer, including a color filter region 1221 for passing visible light and a fourth pass region 1222c for passing infrared light; wherein the invisible light filter region 1211, the color filter region 1221, and the first photosensitive region 131 correspond to each other, and the second filter region 1212c, the fourth pass region 1222c, and the second photosensitive region 132 correspond to each other.
- the invisible light filter region 1211 refers to a filter region that blocks the passage of invisible light and allows visible light to pass through;
- The second filter region 1212c refers to a filter region that blocks other light and allows only infrared light to pass. That is, the light reaching the color filter region 1221 is visible light, and the light reaching the fourth pass region 1222c of the second filter layer is infrared light.
- The fourth pass region 1222c of the second filter layer allows infrared light to pass through it; no filter structure or filter material that blocks light is disposed at its location.
- the color filter region 1221 includes a monochromatic light passage region through which three rays of red light, green light, and blue light pass, and correspondingly, a red light photoreceptor region, a green photoreceptor subregion, is included at the first photosensitive region 131. And a blue light photoreceptor sub-region, wherein the red light passage region on the color filter region 1221 corresponds to the red photoreceptor sub-region on the first photosensitive region 131, and the green light passage region on the color filter region 1221 is first The green photoreceptor sub-region on the photosensitive region 131 corresponds to the blue light-sensitive sub-region on the first photosensitive region 131 corresponding to the blue light-passing region on the color filter region 1221.
- the visible light ray After the visible light ray reaches any one of the monochromatic light passage regions on the color filter region 1221, the light of the visible light corresponding to the color corresponding to the monochromatic light passage region passes through the region to reach the first photosensitive region 131 and Monochromatic light passes through the corresponding location on the area.
- the first photosensitive region 131 is provided with a visible light photodiode
- the second photosensitive region 132 is provided with an infrared photodiode.
- the visible light photodiodes and the infrared photodiode have different photosensitive wavelength ranges.
- The visible light photodiodes disposed in the first photosensitive region 131 include a red light photodiode, a green light photodiode, and a blue light photodiode; the red light photodiode is located in the red photoreceptor sub-region, the green light photodiode in the green photoreceptor sub-region, and the blue light photodiode in the blue photoreceptor sub-region.
- The photosensitive wavelength range of the red light photodiode is between 640 nm and 780 nm, that of the green light photodiode between 505 nm and 525 nm, and that of the blue light photodiode between 475 nm and 505 nm. The monochromatic light reaching the three photoreceptor sub-regions of the first photosensitive region 131 is converted into electrical signals by the corresponding monochromatic photodiodes, and a visible light image is then generated based on the electrical signals.
- The infrared photodiode disposed in the second photosensitive region 132 has a photosensitive wavelength range in the infrared, that is, between 780 nm and 1 mm; the infrared light reaching the second photosensitive region 132 is converted from an optical signal into an electrical signal by the infrared photodiode, and an infrared light image is generated based on the electrical signal.
- Alternatively, the first photosensitive region 131 and the second photosensitive region 132 may be provided with photodiodes having the same photosensitive wavelength range.
- The image sensor of the third implementation differs from that of the second implementation only in the position at which the visible light filtering is disposed; it can likewise image both visible light and infrared light.
- the image sensor in the above three implementations can satisfy the following conditions.
- the first condition is that the invisible light filter region 1211 is an infrared light filter coating applied on the color filter region 1221 or a separately disposed infrared light filter.
- The first pass region 1212a and the second pass region 1222a in the first implementation are each a vacant region or a lens not provided with a filter structure or filter material.
- The third pass region 1212b in the second implementation is a vacant region or a lens not provided with a filter structure or filter material.
- The fourth pass region 1222c in the third implementation is a vacant region or a lens not provided with a filter structure or filter material.
- The image sensor may further satisfy one of a second condition and a third condition. The second condition is that the photosensitive element layer includes a plurality of pixel units, each of which includes the first photosensitive region 131 and the second photosensitive region 132;
- the first photosensitive region 131 consists of three monochromatic light pixel sub-units, namely a red light pixel sub-unit, a green light pixel sub-unit, and a blue light pixel sub-unit, and the second photosensitive region 132 is an infrared light pixel sub-unit.
- The third condition is that the photosensitive element layer includes a plurality of pixel units, some of which include both the first photosensitive region 131 and the second photosensitive region 132, while the remaining pixel units include only the first photosensitive region 131.
- In each pixel unit that includes both regions, the first photosensitive region 131 includes a red light pixel sub-unit, a green light pixel sub-unit, and a blue light pixel sub-unit, and the second photosensitive region 132 is an infrared light pixel sub-unit.
- In each pixel unit that includes only the first photosensitive region 131, the first photosensitive region 131 includes two red light pixel sub-units, one green light pixel sub-unit, and one blue light pixel sub-unit.
- The number of pixel units that include both the first photosensitive region 131 and the second photosensitive region 132 is smaller than the number of pixel units that include only the first photosensitive region 131.
- A photosensitive element layer satisfying the third condition yields a better visible light imaging effect than one satisfying the second condition.
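The trade-off between the two conditions can be illustrated by counting visible-light sub-units. A sketch using the sub-unit layouts stated above; the assumption that one in four pixel units carries the infrared sub-unit under the third condition is hypothetical, chosen only to satisfy the stated requirement that IR-bearing units are the minority:

```python
# Sketch: compare sub-unit makeup under the second and third conditions.
# Second condition: every pixel unit is R,G,B,I.
# Third condition: IR-bearing units are R,G,B,I; visible-only units are
# R,R,G,B (two red, one green, one blue, per the disclosure). The 1-in-4
# ratio of IR-bearing units is an assumption for illustration.

def build_units(n_units, condition):
    if condition == 2:
        return [("R", "G", "B", "I")] * n_units
    # condition 3: fewer IR-bearing units than visible-only units
    return [("R", "G", "B", "I") if k % 4 == 0 else ("R", "R", "G", "B")
            for k in range(n_units)]

def visible_subunit_count(units):
    """Count sub-units devoted to visible light across all pixel units."""
    return sum(1 for unit in units for sub in unit if sub != "I")

units2 = build_units(8, condition=2)
units3 = build_units(8, condition=3)
# Third condition devotes more sub-units to visible light (30 vs 24
# here), which is why its visible-light imaging effect is better.
print(visible_subunit_count(units2), visible_subunit_count(units3))
```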
- With the image sensor provided by some embodiments of the present disclosure, by performing image feature matching on the image features of the visible light image and of the infrared light image acquired from the face to be verified, it can be determined whether the face in front of the camera is a real face or a face photo, which improves the security and accuracy of face recognition; and, since verification of the face to be verified succeeds only when both the visible light and infrared light image features are matched, the accuracy of face recognition is further improved.
- Some embodiments of the present disclosure also provide a lens module including the image sensor described above.
- The lens group of the lens module is disposed adjacent to the image sensor; that is, the separate infrared filter is omitted.
- Some embodiments of the present disclosure also provide a mobile terminal, including the lens module described above.
- some embodiments of the present disclosure provide a face recognition method including steps 201-203.
- Step 201 Acquire a visible light image and an infrared light image after image acquisition of the face to be verified.
- The mobile terminal controls the camera to turn on and performs face detection in the camera's photographing area; the camera then takes a photo of the photographing area.
- Depending on the camera configuration, the manner of controlling the camera to take a picture differs; this is described in detail later.
- Step 202 Determine whether the image feature of the visible light image matches the image feature of the pre-stored visible light facial image, and whether the image feature of the infrared light image matches the image feature of the pre-stored infrared light facial image.
- In step 202, determining whether the image feature of the visible light image matches the image feature of the pre-stored visible light facial image includes:
- Step 2021 extract a first image feature from the visible light image, and extract a second image feature from the pre-stored visible light facial image;
- Step 2022 comparing the first image feature and the second image feature
- Step 2023 if the comparison result of the first image feature and the second image feature is greater than or equal to the first predetermined threshold, determining that the image feature of the visible light image matches the image feature of the pre-stored visible light facial image;
- Step 2024 If the comparison result of the first image feature and the second image feature is less than the first predetermined threshold, determining that the image feature of the visible light image does not match the image feature of the pre-stored visible light facial image.
- In step 202, determining whether the image feature of the infrared light image matches the image feature of the pre-stored infrared light facial image includes:
- Step 2025 extract a third image feature from the infrared light image, and extract a fourth image feature from the pre-stored infrared light facial image;
- Step 2026 comparing the third image feature and the fourth image feature
- Step 2027 if the comparison result of the third image feature and the fourth image feature is greater than or equal to a second predetermined threshold, determining that the image feature of the infrared light image matches the image feature of the pre-stored infrared light facial image;
- Step 2028 If the comparison result of the third image feature and the fourth image feature is less than a second predetermined threshold, determining that the image feature of the infrared light image does not match the image feature of the pre-stored infrared light facial image.
- the first image feature and the second image feature refer to a facial feature of the face to be verified in the visible light image.
- the third image feature and the fourth image feature refer to the facial features of the face to be verified in the infrared light image.
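Steps 2021 through 2028 amount to extracting features, comparing them, and testing the comparison result against a predetermined threshold. A minimal sketch, assuming features are fixed-length numeric vectors and using cosine similarity as the comparison score; the disclosure does not specify a feature extractor, a comparison metric, or threshold values:

```python
import math

def cosine_similarity(a, b):
    """Compare two feature vectors; returns a score in [-1, 1]."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def features_match(extracted, stored, threshold):
    """Steps 2021-2024 / 2025-2028: the features match if the
    comparison result is greater than or equal to the predetermined
    threshold, and do not match otherwise."""
    return cosine_similarity(extracted, stored) >= threshold

FIRST_THRESHOLD = 0.9   # visible-light threshold (assumed value)
SECOND_THRESHOLD = 0.9  # infrared-light threshold (assumed value)

# identical vectors give similarity 1.0, so this comparison matches
visible_ok = features_match([0.1, 0.9, 0.4], [0.1, 0.9, 0.4],
                            FIRST_THRESHOLD)
```

The same `features_match` predicate is applied twice: once to the visible-light pair (first/second image features) and once to the infrared pair (third/fourth image features), each with its own threshold.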
- Step 203 If the image feature of the visible light image matches the image feature of the pre-stored visible light facial image, and the image feature of the infrared light image matches the image feature of the pre-stored infrared light facial image, then the verification of the face to be verified is determined to be successful.
- Otherwise, verification of the face to be verified is determined to have failed.
- A plurality of pre-stored visible light facial images and a plurality of pre-stored infrared light facial images are stored in the mobile terminal. When the image features of the visible light image match the image features of any one of the pre-stored visible light facial images, the visible light image is considered to match; similarly, when the image features of the infrared light image match the image features of any one of the pre-stored infrared light facial images, the infrared light image is considered to match.
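Combining step 203 with the plurality of pre-stored images: verification succeeds only if the visible light features match at least one pre-stored visible light image and the infrared features match at least one pre-stored infrared light image. A sketch with the `match` predicate left abstract; the equality predicate used below is for illustration only:

```python
def matches_any(feature, stored_features, match):
    """True if the extracted feature matches any pre-stored template."""
    return any(match(feature, stored) for stored in stored_features)

def verify_face(vis_feat, ir_feat, stored_vis, stored_ir, match):
    """Step 203: both the visible-light match and the infrared-light
    match must succeed. An infrared mismatch (e.g. a printed photo,
    which lacks a live face's infrared signature) fails verification."""
    return (matches_any(vis_feat, stored_vis, match) and
            matches_any(ir_feat, stored_ir, match))

# toy match predicate: exact equality, for illustration only
eq = lambda a, b: a == b
assert verify_face("v1", "i1", ["v0", "v1"], ["i1"], eq) is True
assert verify_face("v1", "i_photo", ["v1"], ["i1"], eq) is False
```

The conjunction is the point: a face photo may pass the visible-light match yet fail the infrared match, which is what gives the method its anti-spoofing property.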
- With this face recognition method, by performing image feature matching on the image features of the infrared light image and of the visible light image acquired from the face to be verified, it can be determined whether the face in front of the camera is a real face or a face photo, which improves the security and accuracy of face recognition; and, since verification of the face to be verified succeeds only when both the visible light and infrared light image features are matched, the accuracy of face recognition is further improved.
- In one implementation, the step of acquiring the visible light image and the infrared light image after image acquisition of the face to be verified includes:
- photographing through a visible light camera to obtain the visible light image, and photographing through an infrared light camera to obtain the infrared light image.
- The photosensitive element layer in the image sensor of the visible light camera has only a first photosensitive region sensitive to visible light, and the photosensitive element layer in the image sensor of the infrared light camera has only a second photosensitive region sensitive to infrared light. That is, all pixel units of the photosensitive element layer in the image sensor of the visible light camera contain only visible light photodiodes, and all pixel units of the photosensitive element layer in the image sensor of the infrared light camera contain only infrared photodiodes.
- The two cameras do not interfere with each other, so the finally imaged visible light image and infrared light image have better imaging quality.
- Alternatively, the step of acquiring the visible light image and the infrared light image captured of the face to be verified includes:
- when the mobile terminal is in a visible light photographing mode, taking a photograph with the camera to obtain the visible light image; and
- when the mobile terminal is in an infrared light photographing mode, taking a photograph with the camera to obtain the infrared light image.
- the photosensitive element layer of the image sensor of the camera module includes a first photosensitive area corresponding to visible light and a second photosensitive area corresponding to infrared light.
- the first photosensitive area is provided with a visible light photosensitive diode
- the second photosensitive area is provided with an infrared photosensitive diode.
- Taking one pixel unit as an example, visible light photodiodes are disposed in the first photosensitive area of the pixel unit, specifically one red photodiode, one green photodiode, and one blue photodiode; an infrared photodiode is disposed in the second photosensitive area of the pixel unit, and these four photodiodes form one pixel unit.
- Each of the four photodiodes is connected to a control circuit. The control circuit specifically includes: a first switch transistor, whose base is connected to a controller and whose collector is connected to the photodiode; a second switch transistor, wherein the emitter of the first switch transistor is connected to the collector of the second switch transistor, and the base of the second switch transistor is connected to the controller; a third switch transistor, whose base is connected to the emitter of the first switch transistor and the collector of the second switch transistor, and whose emitter is connected to the emitter of the second switch transistor; and a fourth switch transistor, whose base is connected to the controller, whose emitter is connected to the collector of the third switch transistor, and whose collector is connected to the output terminal.
- The second switch transistor, the third switch transistor, and the fourth switch transistor are in an on state.
- When in the visible light photographing mode, the controller sends a signal to the first switch transistor connected to each visible light photodiode, so that the first switch transistor connects the visible light photodiode to the third switch transistor. The optical signal incident on the visible light photodiode is converted into an electrical signal by the photodiode and is then output through the first, third, and fourth switch transistors in sequence. During the visible light photographing mode, the controller does not turn on the first switch transistor connected to the infrared photodiode, so the infrared photodiode cannot conduct to its corresponding third switch transistor; thus only visible light is imaged.
- When in the infrared light photographing mode, the controller sends a signal to the first switch transistor connected to the infrared photodiode, so that the first switch transistor connects the infrared photodiode to the third switch transistor. The optical signal incident on the infrared photodiode is converted into an electrical signal by the photodiode and is then output through the first, third, and fourth switch transistors in sequence. During the infrared light photographing mode, the controller does not turn on the first switch transistor connected to the visible light photodiodes, so the visible light photodiodes cannot conduct to their corresponding third switch transistors; thus only infrared light is imaged.
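The mode-dependent readout described above can be modeled behaviorally: in each mode, only the photodiodes whose first switch transistor the controller drives reach the output. This is a hypothetical software model of the gating behavior, not the transistor circuit itself; the names `PIXEL_UNIT` and `read_pixel_unit` are invented for illustration.

```python
# One pixel unit: red, green, and blue photodiodes in the first
# photosensitive region, one infrared photodiode in the second.
PIXEL_UNIT = ("red", "green", "blue", "infrared")

def read_pixel_unit(mode: str) -> list:
    """Return the photodiode signals that reach the output in the given
    mode; the others never conduct to their third switch transistor."""
    active = {"visible": {"red", "green", "blue"},
              "infrared": {"infrared"}}[mode]
    return [diode for diode in PIXEL_UNIT if diode in active]

print(read_pixel_unit("visible"))   # ['red', 'green', 'blue']
print(read_pixel_unit("infrared"))  # ['infrared']
```

The same physical pixel array thus produces a visible light image in one mode and an infrared light image in the other.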
- Realizing both visible light imaging and infrared light imaging with one camera increases the space available for arranging other components inside the mobile terminal and, at the same time, reduces the manufacturing cost of the mobile terminal.
- The face recognition method in some embodiments of the present disclosure may be used for screen unlocking of the mobile terminal: when verification of the face to be verified succeeds, the screen is unlocked.
- The face recognition method in some embodiments of the present disclosure may also be applied to payment on the mobile terminal: the mobile terminal needs to enter the corresponding payment interface to perform face recognition, and payment is made when face recognition succeeds.
- By matching the image features of the infrared light image and the image features of the visible light image captured of the face to be verified, it can be determined whether the face in front of the camera is a real face or a photograph of a face, which improves the security and accuracy of face recognition. Moreover, because verification of the face to be verified succeeds only after both the image features of the visible light image and the image features of the infrared light image are successfully matched, the accuracy of face recognition is improved.
- some embodiments of the present disclosure further provide a face recognition apparatus 300, including:
- the obtaining module 301 is configured to obtain a visible light image and an infrared light image after image acquisition of the face to be verified;
- a first determining module 302 configured to determine whether an image feature of the visible light image matches an image feature of the pre-stored visible light facial image, and whether the image feature of the infrared light image matches an image feature of the pre-stored infrared light facial image;
- the second determining module 303 is configured to determine that verification of the face to be verified succeeds if the image features of the visible light image match the image features of the pre-stored visible light facial image and the image features of the infrared light image match the image features of the pre-stored infrared light facial image.
- the obtaining module 301 includes:
- the first obtaining unit 3011 is configured to: when the mobile terminal is in the visible light photographing mode, take a photo by the camera, and obtain a visible light image after the camera performs the photographing;
- the second obtaining unit 3012 is configured to: when the mobile terminal is in the infrared light photographing mode, take a photo by the camera, and obtain an infrared light image after the camera performs the photographing.
- the first determining module 302 includes:
- a first extracting unit 3021 configured to extract a first image feature from the visible light image, and extract a second image feature from the pre-stored visible light facial image
- a first comparison unit 3022 configured to compare the first image feature and the second image feature
- the first determining unit 3023 is configured to determine that an image feature of the visible light image matches an image feature of the pre-stored visible light facial image if the comparison result of the first image feature and the second image feature is greater than or equal to a first predetermined threshold;
- the second determining unit 3024 is configured to determine that the image feature of the visible light image does not match the image feature of the pre-stored visible light facial image if the comparison result of the first image feature and the second image feature is less than a first predetermined threshold.
- the first determining module 302 further includes:
- a second extracting unit 3025 configured to extract a third image feature from the infrared light image, and extract a fourth image feature from the pre-stored infrared light facial image;
- a second comparison unit 3026 configured to compare the third image feature and the fourth image feature
- the third determining unit 3027 is configured to determine that an image feature of the infrared light image matches an image feature of the pre-stored infrared light facial image if the comparison result of the third image feature and the fourth image feature is greater than or equal to a second predetermined threshold;
- the fourth determining unit 3028 is configured to determine that the image feature of the infrared light image does not match the image feature of the pre-stored infrared light facial image if the comparison result of the third image feature and the fourth image feature is less than a second predetermined threshold.
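The extract-compare-threshold flow of units 3021-3028 can be sketched as below. The use of cosine similarity as the comparison function is an assumption for illustration; the patent leaves the comparison method unspecified, and the feature vectors here are toy stand-ins for the output of a real face-feature extractor.

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def features_match(feature, stored_feature, threshold):
    """Mirrors units 3021-3028: the comparison result is tested against
    a predetermined threshold; at or above it the features match."""
    return cosine_similarity(feature, stored_feature) >= threshold

# Toy feature vectors; a close pair clears the threshold, an
# orthogonal pair does not:
print(features_match([1.0, 0.0], [1.0, 0.0], 0.9))  # True
print(features_match([1.0, 0.0], [0.0, 1.0], 0.5))  # False
```

The same comparison is run twice with independent thresholds: once for the visible-light pair (first predetermined threshold) and once for the infrared pair (second predetermined threshold).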
- The face recognition apparatus provided by some embodiments of the present disclosure can implement the processes of the method embodiment in FIG. 4; details are not repeated here to avoid repetition.
- By performing image feature matching on the image features of the infrared light image captured of the face to be verified, it can be determined whether the face in front of the camera is a real face, which improves the security and accuracy of face recognition. Moreover, because verification of the face to be verified is determined to succeed only after the image features of the visible light image and the image features of the infrared light image are both successfully matched, the accuracy of face recognition is improved.
- FIG. 7 is a schematic structural diagram of hardware of a mobile terminal that implements some embodiments of the present disclosure.
- the mobile terminal 400 includes, but is not limited to, a radio frequency unit 401, a network module 402, an audio output unit 403, an input unit 404, a sensor 405, a display unit 406, a user input unit 407, an interface unit 408, a memory 409, a processor 410, and Power supply 411 and other components.
- Those skilled in the art can understand that the mobile terminal structure shown in FIG. 7 does not constitute a limitation of the mobile terminal; the mobile terminal may include more or fewer components than illustrated, combine some components, or arrange the components differently.
- The mobile terminal includes, but is not limited to, a mobile phone, a tablet computer, a notebook computer, a palmtop computer, an in-vehicle terminal, and the like.
- the radio frequency unit 401 is configured to send and receive data under the control of the processor 410.
- The processor 410 is configured to acquire a visible light image and an infrared light image captured of a face to be verified; and if the image features of the visible light image match the image features of the pre-stored visible light facial image, and the image features of the infrared light image match the image features of the pre-stored infrared light facial image, to determine that verification of the face to be verified succeeds.
- Image feature matching is performed on the image features of the visible light image and the image features of the infrared light image captured of the face to be verified, and verification of the face to be verified is determined to succeed only after both sets of image features are successfully matched, which improves the accuracy of face recognition.
- The radio frequency unit 401 may be configured to receive and transmit signals during information transmission and reception or during a call; specifically, after receiving downlink data from a base station, it passes the data to the processor 410 for processing, and it sends uplink data to the base station.
- radio frequency unit 401 includes, but is not limited to, an antenna, at least one amplifier, a transceiver, a coupler, a low noise amplifier, a duplexer, and the like.
- the radio unit 401 can also communicate with the network and other devices through a wireless communication system.
- the mobile terminal provides the user with wireless broadband Internet access through the network module 402, such as helping the user to send and receive emails, browse web pages, and access streaming media.
- the audio output unit 403 can convert the audio data received by the radio frequency unit 401 or the network module 402 or stored in the memory 409 into an audio signal and output as a sound. Moreover, the audio output unit 403 can also provide audio output (eg, call signal reception sound, message reception sound, etc.) associated with a particular function performed by the mobile terminal 400.
- the audio output unit 403 includes a speaker, a buzzer, a receiver, and the like.
- the input unit 404 is for receiving an audio or video signal.
- The input unit 404 may include a graphics processing unit (GPU) 4041 and a microphone 4042. The graphics processing unit 4041 processes image data of still pictures or video obtained by an image capture device (such as a camera) in a video capture mode or an image capture mode.
- the processed image frame can be displayed on the display unit 406.
- the image frames processed by the graphics processor 4041 may be stored in the memory 409 (or other storage medium) or transmitted via the radio unit 401 or the network module 402.
- the microphone 4042 can receive sound and can process such sound as audio data.
- In the case of a telephone call mode, the processed audio data can be converted into a format that can be transmitted to a mobile communication base station via the radio frequency unit 401 and output.
- the mobile terminal 400 also includes at least one type of sensor 405, such as a light sensor, motion sensor, and other sensors.
- the light sensor includes an ambient light sensor and a proximity sensor, wherein the ambient light sensor can adjust the brightness of the display panel 4061 according to the brightness of the ambient light, and the proximity sensor can close the display panel 4061 when the mobile terminal 400 moves to the ear. / or backlight.
- As a kind of motion sensor, the accelerometer sensor can detect the magnitude of acceleration in all directions (usually three axes), can detect the magnitude and direction of gravity when stationary, and can be used to identify the attitude of the mobile terminal (such as landscape/portrait switching and related games).
- sensor 405 may also include a fingerprint sensor, a pressure sensor, an iris sensor, a molecular sensor, a gyroscope, a barometer, a hygrometer, a thermometer, Infrared sensors and the like are not described here.
- the display unit 406 is for displaying information input by the user or information provided to the user.
- the display unit 406 can include a display panel 4061.
- the display panel 4061 can be configured in the form of a liquid crystal display (LCD), an organic light-emitting diode (OLED), or the like.
- the user input unit 407 can be configured to receive input numeric or character information and to generate key signal inputs related to user settings and function control of the mobile terminal.
- the user input unit 407 includes a touch panel 4071 and other input devices 4072.
- The touch panel 4071, also referred to as a touch screen, can collect touch operations performed by the user on or near it (such as operations performed on or near the touch panel 4071 with a finger, a stylus, or any suitable object or accessory).
- the touch panel 4071 may include two parts of a touch detection device and a touch controller.
- The touch detection device detects the user's touch position and the signal brought by the touch operation, and transmits the signal to the touch controller; the touch controller receives the touch information from the touch detection device, converts it into contact coordinates, sends them to the processor 410, and receives and executes commands sent by the processor 410.
- the touch panel 4071 can be implemented in various types such as resistive, capacitive, infrared, and surface acoustic waves.
- the user input unit 407 may also include other input devices 4072.
- the other input devices 4072 may include, but are not limited to, a physical keyboard, function keys (such as a volume control button, a switch button, etc.), a trackball, a mouse, and a joystick, and are not described herein again.
- the touch panel 4071 can be overlaid on the display panel 4061. After the touch panel 4071 detects a touch operation thereon or nearby, the touch panel 4071 transmits to the processor 410 to determine the type of the touch event, and then the processor 410 according to the touch. The type of event provides a corresponding visual output on display panel 4061.
- Although the touch panel 4071 and the display panel 4061 are shown as two independent components implementing the input and output functions of the mobile terminal, in some embodiments the touch panel 4071 may be integrated with the display panel 4061 to implement the input and output functions of the mobile terminal; this is not limited herein.
- the interface unit 408 is an interface in which an external device is connected to the mobile terminal 400.
- the external device may include a wired or wireless headset port, an external power (or battery charger) port, a wired or wireless data port, a memory card port, a port for connecting a device having an identification module, and an audio input/output. (I/O) port, video I/O port, headphone port, and more.
- the interface unit 408 can be configured to receive input from an external device (eg, data information, power, etc.) and transmit the received input to one or more components within the mobile terminal 400 or can be used at the mobile terminal 400 and externally Data is transferred between devices.
- Memory 409 can be used to store software programs as well as various data.
- the memory 409 may mainly include a storage program area and a storage data area, wherein the storage program area may store an operating system, an application required for at least one function (such as a sound playing function, an image playing function, etc.), and the like; the storage data area may be stored according to Data created by the use of the mobile phone (such as audio data, phone book, etc.).
- Memory 409 can include high-speed random access memory, and can also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other non-volatile solid-state storage device.
- The processor 410 is the control center of the mobile terminal; it connects various parts of the entire mobile terminal using various interfaces and lines, and performs various functions of the mobile terminal and processes data by running or executing software programs and/or modules stored in the memory 409 and calling data stored in the memory 409, thereby monitoring the mobile terminal as a whole.
- The processor 410 may include one or more processing units; optionally, the processor 410 may integrate an application processor and a modem processor, where the application processor mainly handles the operating system, user interface, applications, and the like, and the modem processor mainly handles wireless communication. It can be understood that the modem processor may alternatively not be integrated into the processor 410.
- the mobile terminal 400 may further include a power source 411 (such as a battery) for supplying power to various components.
- The power source 411 may be logically connected to the processor 410 through a power management system, so as to implement functions such as managing charging, discharging, and power consumption through the power management system.
- In addition, the mobile terminal 400 includes some functional modules not shown, which are not described here again.
- Some embodiments of the present disclosure further provide a mobile terminal, including a processor 410, a memory 409, and a computer program stored in the memory 409 and executable on the processor 410, where the computer program, when executed by the processor 410, implements the processes of the above face recognition method embodiment and can achieve the same technical effect; details are not repeated here to avoid repetition.
- Some embodiments of the present disclosure further provide a computer-readable storage medium having a computer program stored thereon, where the computer program, when executed by a processor, implements the processes of the above face recognition method embodiment and can achieve the same technical effect; details are not repeated here to avoid repetition.
- The computer-readable storage medium may be a volatile storage medium or a non-volatile storage medium, or may include both a volatile storage medium and a non-volatile storage medium, such as a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disc.
- the foregoing embodiment method can be implemented by means of software plus a necessary general hardware platform, and of course, can also be implemented by hardware.
- The technical solution of the present disclosure, in essence or the part contributing to the related art, may be embodied in the form of a software product stored in a storage medium (such as a ROM/RAM, a magnetic disk, or a CD-ROM),
- including a number of instructions for causing a terminal (which may be a mobile phone, a computer, a server, an air conditioner, a network device, or the like) to perform the methods described in various embodiments of the present disclosure.
Abstract
The present disclosure discloses an image sensor, a lens module, a mobile terminal, and a face recognition method and apparatus. The image sensor includes a micro-lens layer, a photosensitive element layer, and a filter layer disposed between the micro-lens layer and the photosensitive element layer. Incident light passes through the micro-lens layer and the filter layer in sequence and is then transmitted to the photosensitive element layer, which includes a first photosensitive region corresponding to visible light and a second photosensitive region corresponding to infrared light.
Description
CROSS-REFERENCE TO RELATED APPLICATIONS
This application claims priority to Chinese Patent Application No. 201810083033.5, filed in China on January 29, 2018, the entire contents of which are incorporated herein by reference.
The present disclosure relates to the field of image processing, and in particular to an image sensor, a lens module, a mobile terminal, and a face recognition method and apparatus.
With the application of face recognition technology to intelligent mobile terminals, face recognition has attracted increasing attention. As a very important branch of biometric identification, face recognition has become a popular research area in computer vision and pattern recognition.
Face recognition identifies a person by the distribution of facial features such as the five sense organs and the facial contour. Because these features differ from person to person, face recognition is, compared with other biometric techniques, well suited to non-intrusive use: good recognition results can be achieved without interfering with people's normal behavior.
Face recognition technology applied to mobile terminals cannot accurately determine whether the image captured by the camera is a real face. A malicious party can present a photograph of a face in place of a real one to pass face recognition, so the security of face recognition is poor.
SUMMARY
The present disclosure provides an image sensor, a lens module, a mobile terminal, and a face recognition method and apparatus, to solve the problem that it cannot be accurately determined whether the image captured by the camera is a real face, resulting in low security of face recognition.
In a first aspect, some embodiments of the present disclosure provide an image sensor, including:
a micro-lens layer, a photosensitive element layer, and a filter layer disposed between the micro-lens layer and the photosensitive element layer;
wherein incident light passes through the micro-lens layer and the filter layer in sequence and is then transmitted to the photosensitive element layer, and the photosensitive element layer includes a first photosensitive region corresponding to visible light and a second photosensitive region corresponding to infrared light.
In a second aspect, some embodiments of the present disclosure provide a lens module including the above image sensor.
In a third aspect, some embodiments of the present disclosure provide a mobile terminal including the above lens module.
In a fourth aspect, some embodiments of the present disclosure provide a face recognition method, including:
acquiring a visible light image and an infrared light image captured of a face to be verified;
determining whether image features of the visible light image match image features of a pre-stored visible light facial image, and whether image features of the infrared light image match image features of a pre-stored infrared light facial image; and
if the image features of the visible light image match the image features of the pre-stored visible light facial image, and the image features of the infrared light image match the image features of the pre-stored infrared light facial image, determining that verification of the face to be verified succeeds.
In a fifth aspect, some embodiments of the present disclosure further provide a face recognition apparatus, including:
an obtaining module, configured to acquire a visible light image and an infrared light image captured of a face to be verified;
a first determining module, configured to determine whether image features of the visible light image match image features of a pre-stored visible light facial image, and whether image features of the infrared light image match image features of a pre-stored infrared light facial image; and
a second determining module, configured to determine that verification of the face to be verified succeeds if the image features of the visible light image match the image features of the pre-stored visible light facial image and the image features of the infrared light image match the image features of the pre-stored infrared light facial image.
In a sixth aspect, some embodiments of the present disclosure further provide a mobile terminal, including a processor, a memory, and a computer program stored in the memory and executable on the processor, wherein the processor, when executing the computer program, implements the steps of the face recognition method described above.
Thus, in embodiments of the present disclosure, by matching image features of the infrared light image captured of the face to be verified, it can be determined whether the face in front of the camera is a real face or a photograph of a face, which improves the security and accuracy of face recognition. Moreover, because verification of the face to be verified is determined to succeed only after both the image features of the visible light image and the image features of the infrared light image are successfully matched, the accuracy of face recognition is improved.
To describe the technical solutions in some embodiments of the present disclosure more clearly, the accompanying drawings needed in the description of these embodiments are briefly introduced below. Obviously, the drawings described below are merely some embodiments of the present disclosure, and a person of ordinary skill in the art may derive other drawings from them without creative effort.
FIG. 1 is a first schematic structural diagram of an example of the image sensor of the present disclosure;
FIG. 2 is a second schematic structural diagram of an example of the image sensor of the present disclosure;
FIG. 3 is a third schematic structural diagram of an example of the image sensor of the present disclosure;
FIG. 4 is a flowchart of an example of the face recognition method of the present disclosure;
FIG. 5 is a first schematic structural diagram of an example of the face recognition apparatus of the present disclosure;
FIG. 6 is a second schematic structural diagram of an example of the face recognition apparatus of the present disclosure; and
FIG. 7 is a block diagram of an example of the mobile terminal of the present disclosure.
The technical solutions in some embodiments of the present disclosure are described below clearly and completely with reference to the accompanying drawings. Obviously, the described embodiments are only some, not all, of the embodiments of the present disclosure. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments of the present disclosure without creative effort fall within the protection scope of the present disclosure.
Referring to FIG. 1 to FIG. 3, some embodiments of the present disclosure provide an image sensor, applied to the above face recognition method, including:
a micro-lens layer 11, a photosensitive element layer, and a filter layer disposed between the micro-lens layer 11 and the photosensitive element layer;
wherein incident light passes through the micro-lens layer 11 and the filter layer in sequence and is then transmitted to the photosensitive element layer, and the photosensitive element layer includes a first photosensitive region 131 corresponding to visible light and a second photosensitive region 132 corresponding to infrared light. A visible light image can be obtained by collecting the photosensitive signals in the first photosensitive region 131, and an infrared image can be obtained from the photosensitive signals in the second photosensitive region 132.
Specifically, the above incident light refers to natural light.
The image sensor in some embodiments of the present disclosure can achieve both visible light imaging and infrared light imaging. By matching image features of the infrared light image captured of the face to be verified, it can be determined whether the face in front of the camera is a real face or a photograph of a face, which improves the security and accuracy of face recognition. Moreover, because verification of the face to be verified is determined to succeed only after both the image features of the visible light image and the image features of the infrared light image are successfully matched, the accuracy of face recognition is improved.
Specifically, in some embodiments of the present disclosure, depending on the design of the filter layer and the photosensitive element layer, the image sensor has three implementations. The micro-lens layer 11 is the same in all three, so only the respective filter layers and photosensitive element layers are described below.
The first implementation of the image sensor in some embodiments of the present disclosure is described below. Referring to FIG. 1, in the first implementation, the second photosensitive region 132 is provided with an infrared photodiode whose photosensitive wavelength range is between 780 nm and 1 mm. The filter layer includes: a first filter layer, including an invisible-light filter region 1211 that passes visible light and a first pass region 1212a that passes natural light; and a second filter layer, including a color filter region 1221 that passes visible light and a second pass region 1222a that passes natural light. The invisible-light filter region 1211, the color filter region 1221, and the first photosensitive region 131 correspond to one another; the first pass region 1212a, the second pass region 1222a, and the second photosensitive region 132 correspond to one another.
The invisible-light filter region 1211 is a filter region that blocks invisible light and allows visible light to pass. The first pass region 1212a of the first filter layer allows natural light to pass: no filter structure or filter material blocking natural light is provided at the first pass region 1212a. That is, the light reaching the color filter region 1221 is visible light, while the light reaching the second pass region 1222a of the second filter layer is natural light.
The color filter region 1221 includes monochromatic pass sub-regions for red light, green light, and blue light. Correspondingly, the first photosensitive region 131 includes a red photosensitive sub-region, a green photosensitive sub-region, and a blue photosensitive sub-region, where the red pass sub-region of the color filter region 1221 corresponds to the red photosensitive sub-region of the first photosensitive region 131, the green pass sub-region corresponds to the green photosensitive sub-region, and the blue pass sub-region corresponds to the blue photosensitive sub-region. After visible light reaches any one of the monochromatic pass sub-regions of the color filter region 1221, the component of the visible light with the color corresponding to that sub-region passes through it and reaches the corresponding position on the first photosensitive region 131.
The second pass region 1222a of the second filter layer allows natural light to pass: no filter structure or filter material blocking natural light is provided at the second pass region 1222a. That is, the light reaching the second photosensitive region 132 is natural light.
In this first implementation, the first photosensitive region 131 is provided with visible light photodiodes, including a red photodiode, a green photodiode, and a blue photodiode; the red photodiode is located in the red photosensitive sub-region, the green photodiode in the green photosensitive sub-region, and the blue photodiode in the blue photosensitive sub-region. The photosensitive wavelength range of the red photodiode is between 640 nm and 780 nm, that of the green photodiode between 505 nm and 525 nm, and that of the blue photodiode between 475 nm and 505 nm. The monochromatic light reaching the three photosensitive sub-regions of the first photosensitive region 131 is converted from an optical signal into an electrical signal by the corresponding monochromatic photodiode, and a visible light image is generated from the electrical signals.
Because the photosensitive wavelength range of the infrared photodiode in the second photosensitive region 132 is between 780 nm and 1 mm, it can convert only the infrared component of the natural light. Therefore, the natural light reaching the second photosensitive region 132 yields signal conversion of the infrared light, from which an infrared light image is generated.
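The photodiode wavelength ranges stated above can be expressed as a simple lookup that tells which photodiode in a pixel unit responds to a given wavelength. This is an illustrative sketch only; the range table and function name are constructed from the figures above, and the half-open interval convention at the band edges is an assumption.

```python
# Photosensitive wavelength ranges in nanometers, as stated in the text
# (1 mm = 1,000,000 nm marks the far end of the infrared band).
RANGES = {
    "blue": (475, 505),
    "green": (505, 525),
    "red": (640, 780),
    "infrared": (780, 1_000_000),
}

def responding_diodes(wavelength_nm: float) -> list:
    """Names of the photodiodes whose range covers the wavelength."""
    return [name for name, (lo, hi) in RANGES.items()
            if lo <= wavelength_nm < hi]

print(responding_diodes(500))  # ['blue']
print(responding_diodes(700))  # ['red']
print(responding_diodes(940))  # ['infrared'] - typical IR illumination
```

Note that the stated visible ranges leave a gap between 525 nm and 640 nm; the patent text simply does not assign that band to any photodiode.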
The second implementation of the image sensor in some embodiments of the present disclosure is described below. Referring to FIG. 2, in the second implementation, the filter layer includes: a first filter layer, including an invisible-light filter region 1211 that passes visible light and a third pass region 1212b that passes natural light; and a second filter layer, including a color filter region 1221 that passes visible light and a first filter region 1222b that passes infrared light. The invisible-light filter region 1211, the color filter region 1221, and the first photosensitive region 131 correspond to one another; the third pass region 1212b, the first filter region 1222b, and the second photosensitive region 132 correspond to one another.
The invisible-light filter region 1211 is a filter region that blocks invisible light and allows visible light to pass. The third pass region 1212b of the first filter layer allows natural light to pass: no filter structure or filter material blocking natural light is provided at the third pass region 1212b. That is, the light reaching the color filter region 1221 is visible light, while the light reaching the first filter region 1222b is natural light.
The color filter region 1221 includes monochromatic pass sub-regions for red light, green light, and blue light. Correspondingly, the first photosensitive region 131 includes a red photosensitive sub-region, a green photosensitive sub-region, and a blue photosensitive sub-region, where the red pass sub-region of the color filter region 1221 corresponds to the red photosensitive sub-region of the first photosensitive region 131, the green pass sub-region corresponds to the green photosensitive sub-region, and the blue pass sub-region corresponds to the blue photosensitive sub-region. After visible light reaches any one of the monochromatic pass sub-regions of the color filter region 1221, the component of the visible light with the color corresponding to that sub-region passes through it and reaches the corresponding position on the first photosensitive region 131.
The first filter region 1222b is a filter region that blocks other light and allows only infrared light to pass. Natural light passes through the third pass region 1212b of the first filter layer to the first filter region 1222b, which filters out the visible light in the natural light so that the infrared light enters the second photosensitive region 132.
In this second implementation, the first photosensitive region 131 is provided with visible light photodiodes, and the second photosensitive region 132 is provided with an infrared photodiode; their photosensitive wavelengths differ. The visible light photodiodes in the first photosensitive region 131 include a red photodiode, a green photodiode, and a blue photodiode, located in the red, green, and blue photosensitive sub-regions respectively. The photosensitive wavelength range of the red photodiode is between 640 nm and 780 nm, that of the green photodiode between 505 nm and 525 nm, and that of the blue photodiode between 475 nm and 505 nm. The monochromatic light reaching the three photosensitive sub-regions of the first photosensitive region 131 is converted from an optical signal into an electrical signal by the corresponding monochromatic photodiode, and a visible light image is generated from the electrical signals. The photosensitive wavelength range of the infrared photodiode in the second photosensitive region 132 is the infrared wavelength range, namely 780 nm to 1 mm; the infrared light reaching the second photosensitive region 132 is converted into an electrical signal by the infrared photodiode, and an infrared light image is generated from that signal.
Specifically, in this second implementation, the first photosensitive region 131 and the second photosensitive region 132 may alternatively be provided with photodiodes having the same photosensitive wavelength range.
In this second implementation of the image sensor of the present disclosure, no region filtering visible light is provided on the first filter layer, which facilitates the manufacture of the first and second filter layers. Compared with the image sensor of the first implementation, because only infrared light reaches the second photosensitive region 132, the generated infrared light image has better image quality.
Referring to FIG. 3, which shows the image sensor of the third implementation of the present disclosure, in the third implementation the filter layer of the image sensor includes: a first filter layer, including an invisible-light filter region 1211 that passes visible light and a second filter region 1212c that passes infrared light; and a second filter layer, including a color filter region 1221 that passes visible light and a fourth pass region 1222c that passes infrared light. The invisible-light filter region 1211, the color filter region 1221, and the first photosensitive region 131 correspond to one another; the second filter region 1212c, the fourth pass region 1222c, and the second photosensitive region 132 correspond to one another.
The invisible-light filter region 1211 is a filter region that blocks invisible light and allows visible light to pass; the second filter region 1212c is a filter region that blocks other light and allows only infrared light to pass. That is, the light reaching the color filter region 1221 is visible light, while the light reaching the fourth pass region 1222c of the second filter layer is infrared light.
The fourth pass region 1222c of the second filter layer allows infrared light to pass: no filter structure or filter material blocking light is provided at the fourth pass region 1222c.
The color filter region 1221 includes monochromatic pass sub-regions for red light, green light, and blue light. Correspondingly, the first photosensitive region 131 includes a red photosensitive sub-region, a green photosensitive sub-region, and a blue photosensitive sub-region, where the red pass sub-region of the color filter region 1221 corresponds to the red photosensitive sub-region of the first photosensitive region 131, the green pass sub-region corresponds to the green photosensitive sub-region, and the blue pass sub-region corresponds to the blue photosensitive sub-region. After visible light reaches any one of the monochromatic pass sub-regions of the color filter region 1221, the component of the visible light with the color corresponding to that sub-region passes through it and reaches the corresponding position on the first photosensitive region 131.
In this third implementation, the first photosensitive region 131 is provided with visible light photodiodes, and the second photosensitive region 132 is provided with an infrared photodiode; their photosensitive wavelengths differ. The visible light photodiodes in the first photosensitive region include a red photodiode, a green photodiode, and a blue photodiode, located in the red, green, and blue photosensitive sub-regions respectively. The photosensitive wavelength range of the red photodiode is between 640 nm and 780 nm, that of the green photodiode between 505 nm and 525 nm, and that of the blue photodiode between 475 nm and 505 nm. The monochromatic light reaching the three photosensitive sub-regions of the first photosensitive region 131 is converted from an optical signal into an electrical signal by the corresponding monochromatic photodiode, and a visible light image is generated from the electrical signals. The photosensitive wavelength range of the infrared photodiode in the second photosensitive region 132 is the infrared wavelength range, namely 780 nm to 1 mm; the infrared light reaching the second photosensitive region 132 is converted into an electrical signal by the infrared photodiode, and an infrared light image is generated from that signal.
Specifically, in this third implementation, the first photosensitive region 131 and the second photosensitive region 132 may alternatively be provided with photodiodes having the same photosensitive wavelength range.
Compared with the image sensor of the first implementation, because only infrared light reaches the second photosensitive region 132, the generated infrared light image has better image quality. The image sensor of the third implementation differs from that of the second implementation in the position of the visible light filter layer; this arrangement likewise achieves imaging of both visible light and infrared light.
The image sensors of all three implementations may satisfy the following conditions.
The first condition: the invisible-light filter region 1211 is either an infrared filter coating applied on the color filter region 1221 or a separately provided infrared filter.
If an infrared filter coating applied on the color filter region 1221 is used, then the first pass region 1212a and the second pass region 1222a in the first implementation are both vacant regions or lenses provided with no filter structure or filter material; the third pass region 1212b in the second implementation is a vacant region or a lens provided with no filter structure or filter material; and the fourth pass region 1222c in the third implementation is a vacant region or a lens provided with no filter structure or filter material.
If a separately provided infrared filter is used, likewise, the first pass region 1212a and the second pass region 1222a in the first implementation are both vacant regions or lenses provided with no filter structure or filter material; the third pass region 1212b in the second implementation is a vacant region or a lens provided with no filter structure or filter material; and the fourth pass region 1222c in the third implementation is a vacant region or a lens provided with no filter structure or filter material.
The image sensor may further satisfy one of a second condition and a third condition. The second condition: the photosensitive element layer includes a plurality of pixel units, each of which includes a first photosensitive region 131 and a second photosensitive region 132.
In this case, for one pixel unit, the first photosensitive region 131 consists of three monochromatic sub-pixel units, namely one red sub-pixel unit, one green sub-pixel unit, and one blue sub-pixel unit, and the second photosensitive region 132 is one infrared sub-pixel unit.
The third condition: the photosensitive element layer includes a plurality of pixel units; some of them include both a first photosensitive region 131 and a second photosensitive region 132, and the others include only a first photosensitive region 131.
In this case, for the pixel units that include both a first photosensitive region 131 and a second photosensitive region 132, the first photosensitive region of each such pixel unit includes one red sub-pixel unit, one green sub-pixel unit, and one blue sub-pixel unit, and the second photosensitive region 132 is one infrared sub-pixel unit.
For the pixel units that include only a first photosensitive region 131, the first photosensitive region 131 of each such pixel unit includes two red sub-pixel units, one green sub-pixel unit, and one blue sub-pixel unit.
Moreover, under the third condition, the number of pixel units including both the first photosensitive region 131 and the second photosensitive region 132 is smaller than the number of pixel units including only the first photosensitive region.
Compared with a photosensitive element layer satisfying the second condition, a photosensitive element layer satisfying the third condition yields a better final visible light imaging effect.
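The third-condition layout, in which infrared-capable pixel units are a minority among visible-only units, can be sketched as below. This is a hypothetical arrangement: the `ir_period` spacing and the function name are invented for illustration; the sub-pixel composition of each unit follows the text (R, G, B, IR for mixed units; two red, one green, one blue for visible-only units).

```python
def build_layout(rows: int, cols: int, ir_period: int = 4) -> list:
    """One pixel unit per cell; every ir_period-th unit carries an
    infrared sub-pixel, the rest are visible-only (third condition)."""
    units = []
    for i in range(rows * cols):
        if i % ir_period == 0:
            units.append(("R", "G", "B", "IR"))  # first + second regions
        else:
            units.append(("R", "R", "G", "B"))   # first region only
    return units

layout = build_layout(8, 8)
ir_units = sum(1 for u in layout if "IR" in u)
vis_units = len(layout) - ir_units
print(ir_units, vis_units)  # IR-capable units are the minority: 16 48
```

Because most units devote all four sub-pixels to visible light, the visible light image resolves better than under the second condition, at the cost of a sparser infrared sampling grid.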
With the image sensor provided by some embodiments of the present disclosure, by matching the image features of the visible light image and the image features of the infrared light image captured of the face to be verified, it can be determined whether the face in front of the camera is a real face, which improves the security and accuracy of face recognition. Moreover, because verification of the face to be verified is determined to succeed only after both the image features of the visible light image and the image features of the infrared light image are successfully matched, the accuracy of face recognition is improved.
Some embodiments of the present disclosure further provide a lens module including the above image sensor.
The lens group of the lens module is disposed adjacent to the image sensor; that is, the infrared cut filter is omitted.
Some embodiments of the present disclosure further provide a mobile terminal including the above lens module.
Referring to FIG. 4, some embodiments of the present disclosure provide a face recognition method, including steps 201 to 203.
Step 201: acquiring a visible light image and an infrared light image captured of a face to be verified.
When face recognition is required, the mobile terminal turns on the camera and performs face detection within the camera's shooting area; when a face is detected, the camera photographs the shooting area.
Specifically, the way the camera is controlled to take photographs differs depending on the functions of the camera(s) installed on the mobile terminal; this is described in detail later.
Step 202: determining whether image features of the visible light image match image features of a pre-stored visible light facial image, and whether image features of the infrared light image match image features of a pre-stored infrared light facial image.
In step 202, determining whether the image features of the visible light image match the image features of the pre-stored visible light facial image includes:
Step 2021: extracting a first image feature from the visible light image, and extracting a second image feature from the pre-stored visible light facial image;
Step 2022: comparing the first image feature with the second image feature;
Step 2023: if the comparison result of the first image feature and the second image feature is greater than or equal to a first predetermined threshold, determining that the image features of the visible light image match the image features of the pre-stored visible light facial image; and
Step 2024: if the comparison result of the first image feature and the second image feature is less than the first predetermined threshold, determining that the image features of the visible light image do not match the image features of the pre-stored visible light facial image.
In step 202, determining whether the image features of the infrared light image match the image features of the pre-stored infrared light facial image includes:
Step 2025: extracting a third image feature from the infrared light image, and extracting a fourth image feature from the pre-stored infrared light facial image;
Step 2026: comparing the third image feature with the fourth image feature;
Step 2027: if the comparison result of the third image feature and the fourth image feature is greater than or equal to a second predetermined threshold, determining that the image features of the infrared light image match the image features of the pre-stored infrared light facial image; and
Step 2028: if the comparison result of the third image feature and the fourth image feature is less than the second predetermined threshold, determining that the image features of the infrared light image do not match the image features of the pre-stored infrared light facial image.
In step 2021, the first image feature and the second image feature refer to the facial features of the face to be verified in the visible light image.
In step 2025, the third image feature and the fourth image feature refer to the facial features of the face to be verified in the infrared light image.
Step 203: if the image features of the visible light image match the image features of the pre-stored visible light facial image, and the image features of the infrared light image match the image features of the pre-stored infrared light facial image, determining that verification of the face to be verified succeeds.
If the image features of the visible light image do not match the image features of the pre-stored visible light facial image, and/or the image features of the infrared light image do not match the image features of the pre-stored infrared light facial image, it is determined that verification of the face to be verified fails.
Specifically, a plurality of pre-stored visible light facial images and a plurality of pre-stored infrared light facial images are stored in advance in the mobile terminal. When the image features of the visible light image match the image features of one of the plurality of pre-stored visible light facial images, the image features of the visible light image are considered to match the image features of the pre-stored visible light facial image; similarly, when the image features of the infrared light image match the image features of one of the plurality of pre-stored infrared light facial images, the image features of the infrared light image are considered to match the image features of the pre-stored infrared light facial image.
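The any-of-many enrollment rule above can be sketched as a short loop. This is an illustrative model only: the function names are invented, and the equality-based similarity stand-in replaces whatever feature comparison the terminal actually uses.

```python
def matches_any(image_features, enrolled_features, threshold, similarity):
    """A captured image matches if its features match ANY one of the
    pre-stored (enrolled) facial images."""
    return any(similarity(image_features, f) >= threshold
               for f in enrolled_features)

# Toy similarity: 1.0 when the feature vectors are identical, else 0.0
# (a stand-in for a real feature comparison function).
sim = lambda a, b: 1.0 if a == b else 0.0

enrolled = [[1, 2, 3], [4, 5, 6]]        # two pre-stored feature vectors
print(matches_any([4, 5, 6], enrolled, 0.5, sim))  # True
print(matches_any([7, 8, 9], enrolled, 0.5, sim))  # False
```

The same rule is applied independently to the visible-light enrollment set and the infrared enrollment set before the final two-modality decision.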
本公开的一些实施例提供的人脸识别方法,通过对待验证人脸进行图像采集后的红外光图像的图像特征和可见光图像的图像特征进行图像特征匹配,能够确定出摄像头前的人脸是一张真实的人脸还是人脸照片,提高了人脸识 别的安全性和准确性;并且,由于需要在可见光图像的图像特征和红外光图像的图像特征两种图像特征均匹配成功后才确定对待验证人脸的验证成功,提高了人脸识别的准确性。
更进一步地,本公开的一些实施例中,根据移动终端上安装的摄像头的不同,对待验证人脸的可见光图像和红外光图像进行获取的方式有两种。
其中,当移动终端上安装有可见光拍照摄像头和红外光拍照摄像头时,获取对待验证人脸进行图像采集后的可见光图像和红外光图像的步骤包括:
通过可见光拍照摄像头进行拍照,获取可见光图像;
通过红外光拍照摄像头进行拍照,获取红外光图像。
其中,可见光拍照摄像头的图像传感器中的感光元件层仅包括能对可见光进行感光的第一感光区域;红外光拍照摄像头的图像传感器中的感光元件层仅包括能对红外光进行感光的第二感光区域。也即,在可见光拍照摄像头的图像传感器中的感光元件层的所有像素单元中,只包括可见光感光二极管;在红外光拍照摄像头的图像传感器中的感光元件层的所有像素单元中,只包括红外光感光二极管。
两个摄像头之间相互不发生干扰,能够使得最终成像的可见光图像和红外光图像的成像效果较好。
更进一步地,本公开的一些实施例中,获取对待验证人脸进行图像采集后的可见光图像和红外光图像的步骤包括:
当移动终端处于可见光拍照模式时,通过摄像头拍照,获取摄像头进行拍照后的可见光图像;
当移动终端处于红外光拍照模式时,通过摄像头拍照,获取摄像头进行拍照后的红外光图像。
具体地,在该种方式中,对可见光图像和红外光图像的采集是通过上述的镜头模组进行采集获得的。镜头模组的图像传感器的感光元件层包括对应可见光的第一感光区域和对应红外光的第二感光区域,第一感光区域内设置有可见光感光二极管,第二感光区域内设置有红外光感光二极管。以摄像头的感光元件层中的一个像素单元进行解释说明:在该像素单元中的第一感光区域内设置有可见光感光二极管,具体为一个红色光感光二极管、一个绿色光感光二极管和一个蓝色光感光二极管;在该像素单元中的第二感光区域内设置有一个红外光感光二极管;上述四个感光二极管形成一个像素单元。其中,四个感光二极管均连接有一控制电路,该控制电路具体包括:第一开关管,第一开关管的基极与控制器连接,第一开关管的集电极与感光二极管连接;第二开关管,第一开关管的发射极与第二开关管的集电极连接,第二开关管的基极与控制器连接;第三开关管,第三开关管的基极与第一开关管的发射极和第二开关管的集电极连接,第三开关管的发射极与第二开关管的发射极连接;第四开关管,第四开关管的基极与控制器连接,第四开关管的发射极与第三开关管的集电极连接,第四开关管的集电极连接至输出端。其中,第二开关管、第三开关管和第四开关管处于导通状态。
在处于可见光拍照模式时,控制器向与可见光感光二极管连接的第一开关管发出信号,使得第一开关管导通可见光感光二极管和第三开关管之间的连接,此时,入射至可见光感光二极管处的光信号经由该可见光感光二极管转换为电信号后,依次通过第一开关管、第三开关管和第四开关管后对外输出;在处于可见光拍照模式过程中,控制器不控制与红外光感光二极管连接的第一开关管导通,使得红外光感光二极管无法与其对应的第三开关管之间导通;进而达到只对可见光成像的目的。
在处于红外光拍照模式时,控制器向与红外光感光二极管连接的第一开关管发出信号,使得第一开关管导通红外光感光二极管和第三开关管之间的连接,此时,入射至红外光感光二极管处的光信号经由该红外光感光二极管转换为电信号后,依次通过第一开关管、第三开关管和第四开关管后对外输出;在处于红外光拍照模式过程中,控制器不控制与可见光感光二极管连接的第一开关管导通,使得可见光感光二极管无法与其对应的第三开关管之间导通;进而达到只对红外光成像的目的。
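上述按拍照模式选通不同感光二极管的过程,可用如下示意性 Python 仿真表示(PixelUnit 类、模式取值等均为说明性假设,实际选通由控制器通过第一开关管对各感光二极管分别控制实现):

```python
class PixelUnit:
    def __init__(self):
        # 每个像素单元包含三个可见光感光二极管(R/G/B)和一个红外光感光二极管(IR)
        self.signals = {"R": 0.0, "G": 0.0, "B": 0.0, "IR": 0.0}

    def expose(self, r, g, b, ir):
        # 各感光二极管将入射光信号转换为电信号
        self.signals.update({"R": r, "G": g, "B": b, "IR": ir})

    def read_out(self, mode):
        # 可见光拍照模式下仅导通可见光感光二极管对应的第一开关管;
        # 红外光拍照模式下仅导通红外光感光二极管对应的第一开关管
        if mode == "visible":
            return {k: v for k, v in self.signals.items() if k != "IR"}
        if mode == "infrared":
            return {"IR": self.signals["IR"]}
        raise ValueError("unknown mode")
```

在仿真中切换 mode 即对应控制器在两种拍照模式间的切换,未被选通的感光二极管的电信号不对外输出。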
通过一个摄像头实现可见光成像和红外光成像的方式,能够增大移动终端内其它部件的布置空间,同时,降低移动终端的制造成本。
具体地,本公开的一些实施例中的人脸识别方法,可以用于移动终端的屏幕解锁,在此类实现方式中,当对待验证人脸进行验证成功时,进行屏幕解锁。
或者,本公开的一些实施例中的人脸识别方法还可以应用于移动终端的支付领域,在此类实现方式中,需要移动终端进入到相应的支付界面上,才能进行人脸识别,并在人脸识别成功时,进行支付。
本公开的一些实施例中,通过对待验证人脸进行图像采集后的红外光图像的图像特征和可见光图像的图像特征进行图像特征匹配,能够确定出摄像头前的人脸是一张真实的人脸还是人脸照片,提高了人脸识别的安全性和准确性;并且,由于需要在可见光图像的图像特征和红外光图像的图像特征均匹配成功后才确定对待验证人脸的验证成功,提高了人脸识别的准确性。
参照图5至图7,本公开的一些实施例还提供了一种人脸识别装置300,包括:
获取模块301,用于获取对待验证人脸进行图像采集后的可见光图像和红外光图像;
第一确定模块302,用于确定可见光图像的图像特征是否与预存可见光面部图像的图像特征相匹配,以及红外光图像的图像特征是否与预存红外光面部图像的图像特征相匹配;
第二确定模块303,用于若可见光图像的图像特征与预存可见光面部图像的图像特征相匹配,且红外光图像的图像特征与预存红外光面部图像的图像特征相匹配,则确定对待验证人脸的验证为成功。
参照图6,获取模块301包括:
第一获取单元3011,用于当移动终端处于可见光拍照模式时,通过摄像头拍照,获取摄像头进行拍照后的可见光图像;
第二获取单元3012,用于当移动终端处于红外光拍照模式时,通过摄像头拍照,获取摄像头进行拍照后的红外光图像。
参照图6,第一确定模块302包括:
第一提取单元3021,用于从可见光图像中提取第一图像特征,以及从预存可见光面部图像中提取第二图像特征;
第一比对单元3022,用于对第一图像特征和第二图像特征进行比对;
第一确定单元3023,用于若第一图像特征和第二图像特征的比对结果大于或等于第一预定阈值,则确定可见光图像的图像特征与预存可见光面部图像的图像特征相匹配;
第二确定单元3024,用于若第一图像特征和第二图像特征的比对结果小于第一预定阈值,则确定可见光图像的图像特征与预存可见光面部图像的图像特征不匹配。
参照图6,第一确定模块302还包括:
第二提取单元3025,用于从红外光图像中提取第三图像特征,以及从预存红外光面部图像中提取第四图像特征;
第二比对单元3026,用于对第三图像特征和第四图像特征进行比对;
第三确定单元3027,用于若第三图像特征和第四图像特征的比对结果大于或等于第二预定阈值,则确定红外光图像的图像特征与预存红外光面部图像的图像特征相匹配;
第四确定单元3028,用于若第三图像特征和第四图像特征的比对结果小于第二预定阈值,则确定红外光图像的图像特征与预存红外光面部图像的图像特征不匹配。
本公开的一些实施例提供的人脸识别装置能够实现图4中的方法实施例中移动终端实现的各个过程,为避免重复,这里不再赘述。通过对待验证人脸进行图像采集后的可见光图像的图像特征和红外光图像的图像特征进行图像特征匹配,能够确定出摄像头前的人脸是否为一张真实的人脸,提高了人脸识别的安全性和准确性;并且,由于需要在可见光图像的图像特征和红外光图像的图像特征两种图像的图像特征均匹配成功后才确定对待验证人脸的验证成功,提高了人脸识别的准确性。
图7为实现本公开的一些实施例的一种移动终端的硬件结构示意图。
该移动终端400包括但不限于:射频单元401、网络模块402、音频输出单元403、输入单元404、传感器405、显示单元406、用户输入单元407、接口单元408、存储器409、处理器410、以及电源411等部件。本领域技术人员可以理解,图7中示出的移动终端结构并不构成对移动终端的限定,移动终端可以包括比图示更多或更少的部件,或者组合某些部件,或者不同的部件布置。在本公开的一些实施例中,移动终端包括但不限于手机、平板电脑、笔记本电脑、掌上电脑、车载终端、可穿戴设备、以及计步器等。
其中,射频单元401,用于在处理器410的控制下收发数据;
处理器410,用于获取对待验证人脸进行图像采集后的可见光图像和红外光图像;若可见光图像的图像特征与预存可见光面部图像的图像特征相匹配,且红外光图像的图像特征与预存红外光面部图像的图像特征相匹配,则确定对待验证人脸的验证为成功。
通过对待验证人脸进行图像采集后的可见光图像的图像特征和红外光图像的图像特征两种图像特征进行图像特征匹配,在两种图像特征均匹配成功后才确定对待验证人脸的验证成功,提高了人脸识别的准确性。
应理解的是,本公开的一些实施例中,射频单元401可用于收发信息,或在通话过程中进行信号的接收和发送,具体的,将来自基站的下行数据接收后,交给处理器410处理;另外,将上行数据发送给基站。通常,射频单元401包括但不限于天线、至少一个放大器、收发信机、耦合器、低噪声放大器、双工器等。此外,射频单元401还可以通过无线通信系统与网络和其他设备通信。
移动终端通过网络模块402为用户提供了无线的宽带互联网访问,如帮助用户收发电子邮件、浏览网页和访问流式媒体等。
音频输出单元403可以将射频单元401或网络模块402接收的或者在存储器409中存储的音频数据转换成音频信号并且输出为声音。而且,音频输出单元403还可以提供与移动终端400执行的特定功能相关的音频输出(例如,呼叫信号接收声音、消息接收声音等等)。音频输出单元403包括扬声器、蜂鸣器以及受话器等。
输入单元404用于接收音频或视频信号。输入单元404可以包括图形处理器(Graphics Processing Unit,GPU)4041和麦克风4042,图形处理器4041对在视频捕获模式或图像捕获模式中由图像捕获装置(如摄像头)获得的静态图片或视频的图像数据进行处理。处理后的图像帧可以显示在显示单元406上。经图形处理器4041处理后的图像帧可以存储在存储器409(或其它存储介质)中或者经由射频单元401或网络模块402进行发送。麦克风4042可以接收声音,并且能够将这样的声音处理为音频数据。处理后的音频数据可以在电话通话模式的情况下转换为可经由射频单元401发送到移动通信基站的格式输出。
移动终端400还包括至少一种传感器405,比如光传感器、运动传感器以及其他传感器。具体地,光传感器包括环境光传感器及接近传感器,其中,环境光传感器可根据环境光线的明暗来调节显示面板4061的亮度,接近传感器可在移动终端400移动到耳边时,关闭显示面板4061和/或背光。作为运动传感器的一种,加速计传感器可检测各个方向上(一般为三轴)加速度的大小,静止时可检测出重力的大小及方向,可用于识别移动终端姿态(比如横竖屏切换、相关游戏、磁力计姿态校准)、振动识别相关功能(比如计步器、敲击)等;传感器405还可以包括指纹传感器、压力传感器、虹膜传感器、分子传感器、陀螺仪、气压计、湿度计、温度计、红外线传感器等,在此不再赘述。
显示单元406用于显示由用户输入的信息或提供给用户的信息。显示单元406可包括显示面板4061,可以采用液晶显示器(Liquid Crystal Display,LCD)、有机发光二极管(Organic Light-Emitting Diode,OLED)等形式来配置显示面板4061。
用户输入单元407可用于接收输入的数字或字符信息,以及产生与移动终端的用户设置以及功能控制有关的键信号输入。具体地,用户输入单元407包括触控面板4071以及其他输入设备4072。触控面板4071,也称为触摸屏,可收集用户在其上或附近的触摸操作(比如用户使用手指、触笔等任何适合的物体或附件在触控面板4071上或在触控面板4071附近的操作)。触控面板4071可包括触摸检测装置和触摸控制器两个部分。其中,触摸检测装置检测用户的触摸方位,并检测触摸操作带来的信号,将信号传送给触摸控制器;触摸控制器从触摸检测装置上接收触摸信息,并将它转换成触点坐标,再送给处理器410,接收处理器410发来的命令并加以执行。此外,可以采用电阻式、电容式、红外线以及表面声波等多种类型实现触控面板4071。除了触控面板4071,用户输入单元407还可以包括其他输入设备4072。具体地,其他输入设备4072可以包括但不限于物理键盘、功能键(比如音量控制按键、开关按键等)、轨迹球、鼠标、操作杆,在此不再赘述。
进一步的,触控面板4071可覆盖在显示面板4061上,当触控面板4071检测到在其上或附近的触摸操作后,传送给处理器410以确定触摸事件的类型,随后处理器410根据触摸事件的类型在显示面板4061上提供相应的视觉输出。虽然在图7中,触控面板4071与显示面板4061是作为两个独立的部件来实现移动终端的输入和输出功能,但是在某些实施例中,可以将触控面板4071与显示面板4061集成而实现移动终端的输入和输出功能,具体此处不做限定。
接口单元408为外部装置与移动终端400连接的接口。例如,外部装置可以包括有线或无线头戴式耳机端口、外部电源(或电池充电器)端口、有线或无线数据端口、存储卡端口、用于连接具有识别模块的装置的端口、音频输入/输出(I/O)端口、视频I/O端口、耳机端口等等。接口单元408可以用于接收来自外部装置的输入(例如,数据信息、电力等等)并且将接收到的输入传输到移动终端400内的一个或多个元件或者可以用于在移动终端400和外部装置之间传输数据。
存储器409可用于存储软件程序以及各种数据。存储器409可主要包括存储程序区和存储数据区,其中,存储程序区可存储操作系统、至少一个功能所需的应用程序(比如声音播放功能、图像播放功能等)等;存储数据区可存储根据手机的使用所创建的数据(比如音频数据、电话本等)等。此外,存储器409可以包括高速随机存取存储器,还可以包括非易失性存储器,例如至少一个磁盘存储器件、闪存器件、或其他非易失性固态存储器件。
处理器410是移动终端的控制中心,利用各种接口和线路连接整个移动终端的各个部分,通过运行或执行存储在存储器409内的软件程序和/或模块,以及调用存储在存储器409内的数据,执行移动终端的各种功能和处理数据,从而对移动终端进行整体监控。处理器410可包括一个或多个处理单元;可选的,处理器410可集成应用处理器和调制解调处理器,其中,应用处理器主要处理操作系统、用户界面和应用程序等,调制解调处理器主要处理无线通信。可以理解的是,上述调制解调处理器也可以不集成到处理器410中。
移动终端400还可以包括给各个部件供电的电源411(比如电池),可选的,电源411可以通过电源管理系统与处理器410逻辑相连,从而通过电源管理系统实现管理充电、放电、以及功耗管理等功能。
另外,移动终端400包括一些未示出的功能模块,在此不再赘述。
可选的,本公开的一些实施例还提供一种移动终端,包括处理器410,存储器409,存储在存储器409上并可在所述处理器410上运行的计算机程序,该计算机程序被处理器410执行时实现上述人脸识别方法实施例的各个过程,且能达到相同的技术效果,为避免重复,这里不再赘述。
本公开的一些实施例还提供一种计算机可读存储介质,计算机可读存储介质上存储有计算机程序,该计算机程序被处理器执行时实现上述人脸识别方法实施例的各个过程,且能达到相同的技术效果,为避免重复,这里不再赘述。其中,所述的计算机可读存储介质可以是易失性存储介质或非易失性存储介质,或可包括易失性存储介质和非易失性存储介质两者,如只读存储器(Read-Only Memory,简称ROM)、随机存取存储器(Random Access Memory,简称RAM)、磁碟或者光盘等。
需要说明的是,在本文中,术语“包括”、“包含”或者其任何其他变体意在涵盖非排他性的包含,从而使得包括一系列要素的过程、方法、物品或者装置不仅包括那些要素,而且还包括没有明确列出的其他要素,或者是还包括为这种过程、方法、物品或者装置所固有的要素。在没有更多限制的情况下,由语句“包括一个……”限定的要素,并不排除在包括该要素的过程、方法、物品或者装置中还存在另外的相同要素。
通过以上的实施方式的描述,本领域的技术人员可以清楚地了解到上述实施例方法可借助软件加必需的通用硬件平台的方式来实现,当然也可以通过硬件实现。基于这样的理解,本公开的技术方案本质上或者说对相关技术做出贡献的部分可以以软件产品的形式体现出来,该计算机软件产品存储在一个存储介质(如ROM/RAM、磁碟、光盘)中,包括若干指令用以使得一台终端(可以是手机,计算机,服务器,空调器,或者网络设备等)执行本公开各个实施例所述的方法。
上面结合附图对本公开的实施例进行了描述,但是本公开并不局限于上述的具体实施方式,上述的具体实施方式仅仅是示意性的,而不是限制性的,本领域的普通技术人员在本公开的启示下,在不脱离本公开宗旨和权利要求所保护的范围情况下,还可做出很多形式,均属于本公开的保护之内。
以上所述的是本公开的可选实施方式,应当指出对于本技术领域的普通人员来说,在不脱离本公开所述的原理前提下还可以作出若干改进和润饰,这些改进和润饰也在本公开的保护范围内。
Claims (17)
- 一种图像传感器,包括:微透镜层、感光元件层和设置于所述微透镜层和所述感光元件层之间的滤光层;其中,入射光线依次经过所述微透镜层和所述滤光层后,传输至所述感光元件层,所述感光元件层包括对应可见光的第一感光区域和对应红外光的第二感光区域。
- 根据权利要求1所述的图像传感器,其中,所述第二感光区域内设置有感光波长范围位于780nm至1mm之间的红外光感光二极管,所述滤光层包括:第一滤光层,包括用于通过可见光的不可见光滤光区域以及用于通过自然光的第一通过区域;第二滤光层,包括用于通过可见光的彩色滤光区域以及用于通过自然光的第二通过区域;其中,所述不可见光滤光区域、所述彩色滤光区域和所述第一感光区域相对应,所述第一通过区域、所述第二通过区域和所述第二感光区域相对应。
- 根据权利要求1所述的图像传感器,其中,所述滤光层包括:第一滤光层,包括用于通过可见光的不可见光滤光区域和用于通过自然光的第三通过区域;第二滤光层,包括用于通过可见光的彩色滤光区域和用于通过红外光的第一滤光区域;其中,所述不可见光滤光区域、所述彩色滤光区域和所述第一感光区域相对应,所述第三通过区域、所述第一滤光区域和所述第二感光区域相对应。
- 根据权利要求1所述的图像传感器,其中,所述滤光层包括:第一滤光层,包括用于通过可见光的不可见光滤光区域和用于通过红外光的第二滤光区域;第二滤光层,包括用于通过可见光的彩色滤光区域和用于通过红外光的第四通过区域;其中,所述不可见光滤光区域、所述彩色滤光区域和所述第一感光区域相对应,所述第二滤光区域、所述第四通过区域和所述第二感光区域相对应。
- 根据权利要求3或4所述的图像传感器,其中,所述第一感光区域内设置有可见光感光二极管,所述第二感光区域内设置有红外光感光二极管,其中,所述可见光感光二极管与所述红外光感光二极管的感光波长范围不同。
- 根据权利要求1所述的图像传感器,其中,所述感光元件层包括多个像素单元,其中,每一像素单元均包括所述第一感光区域和所述第二感光区域。
- 根据权利要求1所述的图像传感器,其中,所述感光元件层包括多个像素单元,多个像素单元中的一部分像素单元包括所述第一感光区域和所述第二感光区域,另一部分像素单元仅包括所述第一感光区域。
- 一种镜头模组,包括:根据权利要求1至7中任一项所述的图像传感器。
- 根据权利要求8所述的镜头模组,其中,所述镜头模组的镜片组与图像传感器相邻设置。
- 一种移动终端,包括:根据权利要求8-9中任一项所述的镜头模组。
- 一种人脸识别方法,应用于如权利要求10所述的移动终端,所述人脸识别方法包括:获取对待验证人脸进行图像采集后的可见光图像和红外光图像;确定所述可见光图像的图像特征是否与预存可见光面部图像的图像特征相匹配,以及所述红外光图像的图像特征是否与预存红外光面部图像的图像特征相匹配;若所述可见光图像的图像特征与预存可见光面部图像的图像特征相匹配,且所述红外光图像的图像特征与预存红外光面部图像的图像特征相匹配,则确定对所述待验证人脸的验证为成功。
- 根据权利要求11所述的人脸识别方法,其中,所述获取对待验证人脸进行图像采集后的可见光图像和红外光图像的步骤包括:当移动终端处于可见光拍照模式时,通过摄像头拍照,获取所述摄像头进行拍照后的所述可见光图像;当移动终端处于红外光拍照模式时,通过摄像头拍照,获取所述摄像头进行拍照后的所述红外光图像。
- 根据权利要求11所述的人脸识别方法,其中,确定所述可见光图像的图像特征是否与预存可见光面部图像的图像特征相匹配的步骤包括:从所述可见光图像中提取第一图像特征,以及从所述预存可见光面部图像中提取第二图像特征;对所述第一图像特征和所述第二图像特征进行比对;若所述第一图像特征和所述第二图像特征的比对结果大于或等于第一预定阈值,则确定所述可见光图像的图像特征与预存可见光面部图像的图像特征相匹配;若所述第一图像特征和所述第二图像特征的比对结果小于所述第一预定阈值,则确定所述可见光图像的图像特征与预存可见光面部图像的图像特征不匹配。
- 一种人脸识别装置,包括:获取模块,用于获取对待验证人脸进行图像采集后的可见光图像和红外光图像;第一确定模块,用于确定所述可见光图像的图像特征是否与预存可见光面部图像的图像特征相匹配,以及所述红外光图像的图像特征是否与预存红外光面部图像的图像特征相匹配;第二确定模块,用于若确定所述可见光图像的图像特征与预存可见光面部图像的图像特征相匹配,且确定所述红外光图像的图像特征与预存红外光面部图像的图像特征相匹配,则确定对所述待验证人脸的验证为成功。
- 根据权利要求14所述的人脸识别装置,其中,所述获取模块包括:第一获取单元,用于当移动终端处于可见光拍照模式时,通过摄像头拍照,获取所述摄像头进行拍照后的所述可见光图像;第二获取单元,用于当移动终端处于红外光拍照模式时,通过摄像头拍照,获取所述摄像头进行拍照后的所述红外光图像。
- 根据权利要求14所述的人脸识别装置,其中,所述第一确定模块包括:第一提取单元,用于从所述可见光图像中提取第一图像特征,以及从所述预存可见光面部图像中提取第二图像特征;第一比对单元,用于对所述第一图像特征和所述第二图像特征进行比对;第一确定单元,用于若所述第一图像特征和所述第二图像特征的比对结果大于或等于第一预定阈值,则确定所述可见光图像的图像特征与预存可见光面部图像的图像特征相匹配;第二确定单元,用于若所述第一图像特征和所述第二图像特征的比对结果小于所述第一预定阈值,则确定所述可见光图像的图像特征与预存可见光面部图像的图像特征不匹配。
- 一种移动终端,包括:处理器、存储器及存储在所述存储器上并可在所述处理器上运行的计算机程序,所述计算机程序被所述处理器执行时,实现如权利要求11至13中任一项所述的人脸识别方法的步骤。
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201810083033.5 | 2018-01-29 | ||
CN201810083033.5A CN108345845B (zh) | 2018-01-29 | 2018-01-29 | 图像传感器、镜头模组、移动终端、人脸识别方法及装置 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2019144956A1 true WO2019144956A1 (zh) | 2019-08-01 |
Family
ID=62961693
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/CN2019/073403 WO2019144956A1 (zh) | 2018-01-29 | 2019-01-28 | 图像传感器、镜头模组、移动终端、人脸识别方法及装置 |
Country Status (2)
Country | Link |
---|---|
CN (1) | CN108345845B (zh) |
WO (1) | WO2019144956A1 (zh) |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN114449137A (zh) * | 2020-11-02 | 2022-05-06 | 北京小米移动软件有限公司 | 滤光片结构、拍摄方法、装置、终端及存储介质 |
Families Citing this family (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN108345845B (zh) * | 2018-01-29 | 2020-09-25 | 维沃移动通信有限公司 | 图像传感器、镜头模组、移动终端、人脸识别方法及装置 |
CN108897178A (zh) * | 2018-08-31 | 2018-11-27 | 武汉华星光电技术有限公司 | 彩色滤光片基板及显示面板 |
CN109143704B (zh) * | 2018-09-13 | 2021-03-12 | 合肥京东方光电科技有限公司 | 显示面板及终端设备 |
CN112131906A (zh) * | 2019-06-24 | 2020-12-25 | Oppo广东移动通信有限公司 | 光学指纹传感器及具有其的电子设备 |
CN110532992B (zh) * | 2019-09-04 | 2023-01-10 | 深圳市捷顺科技实业股份有限公司 | 一种基于可见光和近红外的人脸识别方法 |
CN111447423A (zh) * | 2020-03-25 | 2020-07-24 | 浙江大华技术股份有限公司 | 图像传感器、摄像装置及图像处理方法 |
CN113032597A (zh) * | 2021-03-31 | 2021-06-25 | 广东电网有限责任公司 | 一种基于图像处理的输电设备分类方法及系统 |
CN114143427A (zh) * | 2021-11-23 | 2022-03-04 | 歌尔科技有限公司 | 摄像头组件、移动终端和基于摄像头的体温测量方法 |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN104284179A (zh) * | 2013-07-01 | 2015-01-14 | 全视技术有限公司 | 用于提供三维彩色影像的多频带影像传感器 |
CN104394306A (zh) * | 2014-11-24 | 2015-03-04 | 北京中科虹霸科技有限公司 | 用于虹膜识别的多通道多区域镀膜的摄像头模组及设备 |
CN205666883U (zh) * | 2016-03-23 | 2016-10-26 | 徐鹤菲 | 支持近红外光与可见光成像的复合成像系统和移动终端 |
US20170064276A1 (en) * | 2015-05-01 | 2017-03-02 | Duelight Llc | Systems and methods for generating a digital image |
CN108345845A (zh) * | 2018-01-29 | 2018-07-31 | 维沃移动通信有限公司 | 图像传感器、镜头模组、移动终端、人脸识别方法及装置 |
Family Cites Families (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101504721A (zh) * | 2009-03-13 | 2009-08-12 | 北京中星微电子有限公司 | 一种基于面部图像进行身份认证的方法及装置 |
US9348019B2 (en) * | 2012-11-20 | 2016-05-24 | Visera Technologies Company Limited | Hybrid image-sensing apparatus having filters permitting incident light in infrared region to be passed to time-of-flight pixel |
CN105023005B (zh) * | 2015-08-05 | 2018-12-07 | 王丽婷 | 人脸识别装置及其识别方法 |
CN206370880U (zh) * | 2017-01-25 | 2017-08-01 | 徐鹤菲 | 一种双摄像头成像系统和移动终端 |
CN106982329B (zh) * | 2017-04-28 | 2020-08-07 | Oppo广东移动通信有限公司 | 图像传感器、对焦控制方法、成像装置和移动终端 |
2018-01-29: CN application CN201810083033.5A filed; granted as CN108345845B (status: Active)
2019-01-28: PCT application PCT/CN2019/073403 (WO2019144956A1) filed (Application Filing)
Also Published As
Publication number | Publication date |
---|---|
CN108345845A (zh) | 2018-07-31 |
CN108345845B (zh) | 2020-09-25 |
Legal Events

Date | Code | Title | Description
---|---|---|---
| 121 | Ep: the epo has been informed by wipo that ep was designated in this application | Ref document number: 19744377; Country of ref document: EP; Kind code of ref document: A1
| NENP | Non-entry into the national phase | Ref country code: DE
| 122 | Ep: pct application non-entry in european phase | Ref document number: 19744377; Country of ref document: EP; Kind code of ref document: A1