
CN110909568A - Image detection method, apparatus, electronic device, and medium for face recognition - Google Patents

Info

Publication number
CN110909568A
CN110909568A (application number CN201811081015.XA)
Authority
CN
China
Prior art keywords
detection
face
image
feature point
target image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201811081015.XA
Other languages
Chinese (zh)
Inventor
陈劲
胡馨文
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Jingdong Century Trading Co Ltd
Beijing Jingdong Shangke Information Technology Co Ltd
Original Assignee
Beijing Jingdong Century Trading Co Ltd
Beijing Jingdong Shangke Information Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Jingdong Century Trading Co Ltd, Beijing Jingdong Shangke Information Technology Co Ltd filed Critical Beijing Jingdong Century Trading Co Ltd
Priority to CN201811081015.XA priority Critical patent/CN110909568A/en
Publication of CN110909568A publication Critical patent/CN110909568A/en
Pending legal-status Critical Current

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 - Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 - Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 - Human faces, e.g. facial parts, sketches or expressions
    • G06V40/161 - Detection; Localisation; Normalisation

Landscapes

  • Engineering & Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • Image Analysis (AREA)

Abstract

The embodiment of the invention provides an image detection method, an image detection apparatus, an electronic device, and a medium for face recognition, relating to the technical field of image processing. The method comprises the following steps: acquiring a target image, and performing face detection and labeling of face feature points on the target image; performing multiple detections on the target image based on the labeled face feature points, wherein the multiple detections comprise head lowering detection, lateral head detection, face brightness detection, face blurriness detection, and face integrity detection; and when every detection item in the multiple detections is qualified, determining that the target image is qualified. The technical solution of the embodiment of the invention can comprehensively analyze a face image by combining multiple factors that influence the face recognition result, and effectively screen out high-quality sample images.

Description

Image detection method, apparatus, electronic device, and medium for face recognition
Technical Field
The present invention relates to the field of image processing technologies, and in particular, to an image detection method, an image detection apparatus, an electronic device, and a computer-readable storage medium.
Background
With the development of image processing technology, the application field of face recognition technology has become more and more extensive. In face recognition, the quality of a sample image, such as its imaging quality, has a great influence on the accuracy with which a face recognition model recognizes faces. Therefore, before training the face recognition model, the sample images need to be processed to improve the recognition accuracy.
In one existing approach, the content of a sample image is divided into a clothing area, a background area, and a face area; the average brightness of the face area and the brightness difference between the left and right halves of the face are determined; and the quality of the sample image is judged based on these factors. However, in this approach, on the one hand, the pixels of every area of the sample image need to be stored and processed, so the amount of computation is huge, the image processing efficiency is low, and the method is difficult to apply in a real-time detection system; on the other hand, it is difficult to accurately determine whether the face in a sample image is qualified by detecting only its brightness.
It is to be noted that the information disclosed in the above background section is only for enhancement of understanding of the background of the invention and therefore may include information that does not constitute prior art that is already known to a person of ordinary skill in the art.
Disclosure of Invention
An object of embodiments of the present invention is to provide an image detecting method, an image detecting apparatus, an electronic device, and a computer-readable storage medium, which overcome one or more of the problems due to the limitations and disadvantages of the related art, at least to some extent.
According to a first aspect of embodiments of the present invention, there is provided an image detection method for face recognition, including: acquiring a target image, and performing face detection and labeling of face feature points on the target image; performing multiple detections on the target image based on the labeled face feature points, wherein the multiple detections comprise head lowering detection, lateral head detection, face brightness detection, face blurriness detection, and face integrity detection; and when every detection item in the multiple detections is qualified, determining that the target image is qualified.
In some embodiments of the present invention, based on the foregoing scheme, the head lowering detection comprises: determining the ratio of the distance in the vertical direction between a first feature point and a second feature point to the distance in the vertical direction between the second feature point and a third feature point; judging whether the ratio is greater than a first predetermined threshold; and if the ratio is greater than the first predetermined threshold, determining that the target image is unqualified in the head lowering detection, wherein the second feature point is located in the nose tip area, the first feature point is located above the second feature point, and the third feature point is located below the second feature point.
In some embodiments of the present invention, based on the foregoing scheme, the lateral head detection includes: determining the ratio of the distance in the horizontal direction between a fourth feature point and a fifth feature point to the distance in the horizontal direction between the fifth feature point and a sixth feature point; judging whether the ratio is greater than a second predetermined threshold; and if the ratio is greater than the second predetermined threshold, determining that the target image is unqualified in the lateral head detection, wherein the fifth feature point is located on the central axis of the face, and the fourth and sixth feature points are symmetric about the fifth feature point.
In some embodiments of the present invention, based on the foregoing scheme, the face brightness detection includes: selecting a plurality of key points from the face in the target image; determining RGB values of each of the plurality of keypoints; determining Y values of the key points based on the RGB values of the key points, and calculating the average value of the Y values of the key points; and determining that the target image is unqualified in the face brightness detection when the average value is smaller than a third preset threshold value.
In some embodiments of the present invention, based on the foregoing scheme, the blurriness detection comprises: converting the target image into a grayscale image; performing convolution on one channel of the grayscale image with a Laplacian operator, and determining the variance of the convolved grayscale image; judging whether the variance is greater than a variance threshold; and if the variance is not greater than the variance threshold, determining that the target image is unqualified in the blurriness detection.
In some embodiments of the present invention, based on the foregoing scheme, the blurriness detection further includes: determining the variance threshold based on the size of the face detection box of the target image.
In some embodiments of the present invention, based on the foregoing solution, the face integrity detection includes: judging whether the face detection frame is completely on the target image or not; and if the face detection frame is not completely on the target image, determining that the target image is unqualified in the face integrity detection.
In some embodiments of the present invention, based on the foregoing solution, the performing of face detection and labeling of face feature points on the target image includes: when a face cannot be detected or the face feature points cannot be labeled, determining that the target image is unqualified.
According to a second aspect of embodiments of the present invention, there is provided an image detection apparatus for face recognition, including: a labeling unit, used for acquiring a target image and performing face detection and labeling of face feature points on the target image; a detection unit, used for performing multiple detections on the target image based on the labeled face feature points, wherein the multiple detections comprise head lowering detection, lateral head detection, face brightness detection, face blurriness detection, and face integrity detection; and a determining unit, used for determining that the target image is qualified when every detection item in the multiple detections is qualified.
According to a third aspect of embodiments of the present invention, there is provided an electronic apparatus, including: a processor; and a memory having computer readable instructions stored thereon which, when executed by the processor, implement the image detection method as described above in the first aspect.
According to a fourth aspect of embodiments of the present invention, there is provided a computer-readable storage medium having stored thereon a computer program which, when executed by a processor, implements the image detection method as described in the first aspect above.
In the technical solutions provided by some embodiments of the present invention, on the one hand, by performing face detection and labeling of face feature points on the target image, only the pixels at the labeled feature point positions need to be processed, so the amount of computation can be significantly reduced, the image processing efficiency can be improved, and the method can be applied in a real-time detection system. On the other hand, multiple detections are performed on the target image based on the labeled face feature points, and the target image is determined to be qualified only when every detection item is qualified; by combining multiple factors that influence the face recognition result, the face image can be analyzed comprehensively, whether the face in a sample image is qualified can be determined more accurately, and high-quality sample images can be effectively screened out.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the invention, as claimed.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the invention and together with the description, serve to explain the principles of the invention. It is obvious that the drawings in the following description are only some embodiments of the invention, and that for a person skilled in the art, other drawings can be derived from them without inventive effort. In the drawings:
FIG. 1 illustrates a flow diagram of an image detection method according to some embodiments of the invention;
FIG. 2 illustrates a flow diagram of head lowering detection according to some embodiments of the invention;
FIG. 3 illustrates a schematic diagram of head lowering detection according to some embodiments of the invention;
FIG. 4 illustrates sample images with excessive head lowering according to some embodiments of the invention;
FIG. 5 illustrates a flow diagram of lateral head detection according to some embodiments of the invention;
FIG. 6 illustrates a schematic diagram of lateral head detection according to some embodiments of the invention;
FIG. 7 illustrates sample images of a face with an excessive lateral head turn according to some embodiments of the invention;
FIG. 8 illustrates a flow diagram of face luminance detection according to some embodiments of the invention;
FIG. 9 illustrates a schematic diagram of face luminance detection according to some embodiments of the invention;
FIG. 10 illustrates sample images with insufficient face luminance according to some embodiments of the invention;
FIG. 11 illustrates a flow diagram of face blurriness detection according to some embodiments of the present invention;
FIG. 12 illustrates a schematic diagram of face blurriness detection according to some embodiments of the present invention;
FIG. 13 illustrates sample images with excessive face blur according to some embodiments of the invention;
FIG. 14 illustrates a flow diagram of face integrity detection according to some embodiments of the invention;
FIG. 15 illustrates a schematic diagram of face integrity detection according to some embodiments of the invention;
FIG. 16 illustrates sample images with an incomplete face according to some embodiments of the invention;
FIG. 17 shows a schematic flow diagram of an image detection method according to further embodiments of the invention;
FIG. 18 shows a schematic block diagram of an image detection apparatus according to some embodiments of the present invention;
FIG. 19 illustrates a schematic structural diagram of a computer system suitable for implementing the electronic device of an embodiment of the invention.
Detailed Description
Example embodiments will now be described more fully with reference to the accompanying drawings. Example embodiments may, however, be embodied in many different forms and should not be construed as limited to the embodiments set forth herein; rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the concept of example embodiments to those skilled in the art. The same reference numerals denote the same or similar parts in the drawings, and thus, a repetitive description thereof will be omitted.
Furthermore, the described features, structures, or characteristics may be combined in any suitable manner in one or more embodiments. In the following description, numerous specific details are provided to provide a thorough understanding of embodiments of the invention. One skilled in the relevant art will recognize, however, that the invention may be practiced without one or more of the specific details, or with other methods, components, devices, steps, and so forth. In other instances, well-known methods, devices, implementations or operations have not been shown or described in detail to avoid obscuring aspects of the invention.
The block diagrams shown in the figures are functional entities only and do not necessarily correspond to physically separate entities. I.e. these functional entities may be implemented in the form of software, or in one or more hardware modules or integrated circuits, or in different networks and/or processor means and/or microcontroller means.
The flow charts shown in the drawings are merely illustrative and do not necessarily include all of the contents and operations/steps, nor do they necessarily have to be performed in the order described. For example, some operations/steps may be decomposed, and some operations/steps may be combined or partially combined, so that the actual execution sequence may be changed according to the actual situation.
FIG. 1 illustrates a flow diagram of an image detection method according to some embodiments of the invention.
Referring to fig. 1, in step S110, a target image is obtained, and face detection and labeling of face feature points are performed on the target image.
In an example embodiment, the target image is a sample image in a training sample or a test image in a test sample of the face recognition model. After the target image is acquired, face detection is carried out on the target image, and the face, the eyes, the nose, the mouth, the face size and the like in the target image are determined.
Next, feature point labeling may be performed on the face, eyes, nose, mouth, and so on. The face feature points are labeled points used in the various detections performed on the face in the target image, such as feature points at the nose tip, the eyebrow corners, and the like. Further, if no face can be detected or the key face feature points cannot be labeled, the target image is judged to be an unqualified image.
In step S120, multiple detections are performed on the target image based on the labeled face feature points, where the multiple detections include head lowering detection, lateral head detection, face brightness detection, face blurriness detection, and face integrity detection.
In an example embodiment, the corresponding labeled human face feature point is selected based on the detection type to detect the target image. For example, when performing the head lowering detection, three feature points, namely the chin top, the nose top, and the eyebrow center, on the central axis of the face may be selected, and the head lowering detection may be performed on the face based on a ratio of a distance in the vertical direction between the eyebrow center and the nose top to a distance in the vertical direction between the nose top and the chin top. When performing lateral head detection, three feature points, namely a midpoint of the center of the left eye and the center of the right eye in the horizontal direction, a right corner of the left eye and a left corner of the right eye, may be selected, and lateral head detection may be performed based on a ratio of a distance between the right corner of the left eye and the midpoint in the horizontal direction and a distance between the midpoint and the left corner of the right eye in the horizontal direction. When the face brightness detection is performed, a plurality of feature points may be selected at a plurality of portions of the face, and the average value of the brightness of the plurality of feature points may be calculated to perform the face brightness detection.
In step S130, when each of the plurality of detections is qualified, it is determined that the target image is qualified.
In an example embodiment, if the head lowering detection, the lateral head detection, the face brightness detection, the face blurriness detection, and the face integrity detection are all qualified, it is determined that the target image is qualified, and the target image may be used as an image for later face recognition. If one or more of these detections are unqualified, the result of each detection can be output and the target image is determined to be an unqualified image.
According to the image detection method in the example embodiment of fig. 1, on the one hand, by performing face detection and labeling of face feature points on the target image, only the pixels at the labeled feature point positions need to be processed, so the amount of computation can be significantly reduced, the image processing efficiency can be improved, and the method can be applied in a real-time detection system; on the other hand, multiple detections are performed on the target image based on the labeled face feature points, and the target image is determined to be qualified only when every detection item is qualified, so the multiple factors influencing face recognition accuracy can be analyzed comprehensively to determine whether the face in a sample image is qualified, and high-quality sample images can be effectively screened out.
FIG. 2 illustrates a flow diagram of head lowering detection according to some embodiments of the invention.
Referring to fig. 2, in step S210, a ratio of a distance in a vertical direction between a first feature point and a second feature point and a distance in the vertical direction between the second feature point and a third feature point is determined.
In an example embodiment, the second feature point is located in the nose tip area, e.g., the second feature point is the nose tip; the first feature point is located above the second feature point, e.g., the first feature point is the right eyebrow corner; and the third feature point is located below the second feature point, e.g., the third feature point is the chin tip. After the first, second, and third feature points are selected, the distance in the vertical direction between the first and second feature points and the distance in the vertical direction between the second and third feature points are determined, and the ratio of the former to the latter is calculated.
In step S220, it is determined whether the ratio is greater than a first predetermined threshold.
In an example embodiment, the first predetermined threshold is an empirical value corresponding to the particular feature points selected for head lowering detection; for example, when the first feature point is the right eyebrow corner, the second feature point is the nose tip, and the third feature point is the chin tip, the first predetermined threshold is 1.35 (±0.1).
In step S230, if the ratio is determined to be greater than the first predetermined threshold, it is determined that the target image is unqualified in the head lowering detection.
In an example embodiment, if the ratio of the distance in the vertical direction between the first and second feature points to the distance in the vertical direction between the second and third feature points is determined to be greater than the first predetermined threshold, the target image is determined to be unqualified in the head lowering detection. For example, if the first feature point is the right eyebrow corner, the second feature point is the nose tip, and the third feature point is the chin tip, the first predetermined threshold is 1.35 (±0.1): if the ratio is greater than 1.35 (±0.1), the face in the image is judged to be excessively lowered; if the ratio is less than 1.35 (±0.1), the image is judged to be qualified in the head lowering detection. If the face in the image cannot be recognized or labeled, the image is determined to be an unqualified image.
FIG. 3 illustrates a schematic diagram of head lowering detection according to some embodiments of the invention.
Referring to fig. 3, the ratio of the distance in the vertical direction (the y-direction) between the upper feature point (right eyebrow corner) and the middle feature point (nose tip) to the distance in the y-direction between the middle feature point (nose tip) and the lower feature point (chin tip) is calculated, and it is determined whether the obtained ratio is greater than the first predetermined threshold. A head lowering detection experiment based on the right eyebrow corner, nose tip, and chin tip feature points yields a first predetermined threshold of 1.35 (±0.1): when the ratio is greater than 1.35 (±0.1), the face in the image is judged to be excessively lowered; when the ratio is below 1.35 (±0.1), the image is judged to be acceptable in the head lowering detection. Further, if the face in the image cannot be identified and labeled, the image is determined to be an unqualified image.
In fig. 3, the ratio of the vertical distance between the right eyebrow corner and the nose tip to the vertical distance between the nose tip and the chin tip is 1.1 in the left image, below 1.35 (±0.1), so the left image is qualified in the head lowering detection; in the middle image the ratio is 1.4, greater than 1.35 (±0.1), so the middle image is judged unqualified in the head lowering detection; in the right image, the head is lowered so far that the face and the labeled points cannot be detected, so the right image is unqualified in the head lowering detection. Fig. 4 illustrates sample images with excessive head lowering according to some embodiments of the invention.
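As a concrete illustration, a minimal Python sketch of the head lowering check described above is given below. It assumes landmark coordinates are already available as (x, y) pixel tuples from any face landmark detector; the function name and signature are illustrative only, and the 1.35 default threshold is the empirical value cited in the text.

```python
def is_head_lowered(brow_corner, nose_tip, chin_tip, threshold=1.35):
    """Head lowering check: ratio of the brow-to-nose vertical distance
    to the nose-to-chin vertical distance. Points are (x, y) pixel
    tuples with y growing downward, as in image coordinates."""
    upper = abs(nose_tip[1] - brow_corner[1])  # brow corner -> nose tip
    lower = abs(chin_tip[1] - nose_tip[1])     # nose tip -> chin tip
    if lower == 0:
        return True  # degenerate landmarks; treat as unqualified
    return upper / lower > threshold
```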
FIG. 5 illustrates a flow diagram of lateral head detection according to some embodiments of the invention.
Referring to fig. 5, in step S510, a ratio of a distance in the horizontal direction between the fourth feature point and the fifth feature point and a distance in the horizontal direction between the fifth feature point and the sixth feature point is determined.
In an example embodiment, the fifth feature point is located on a central axis of the human face, for example, a midpoint of a center of the left eye and a center of the right eye in a horizontal direction, and the fourth feature point and the sixth feature point are symmetric with respect to the fifth feature point, for example, the fourth feature point is a right corner of the left eye, and the sixth feature point is a left corner of the right eye. After the fourth feature point, the fifth feature point and the sixth feature point are selected, the distance between the fourth feature point and the fifth feature point in the horizontal direction and the distance between the fifth feature point and the sixth feature point in the horizontal direction are determined, and the ratio of the distance between the fourth feature point and the fifth feature point in the horizontal direction and the distance between the fifth feature point and the sixth feature point in the horizontal direction is calculated.
In step S520, it is determined whether the ratio is greater than a second predetermined threshold.
In an exemplary embodiment, the second predetermined threshold is a corresponding empirical value obtained by selecting different feature points for lateral head detection. For example, when the fifth feature point is a midpoint of the center of the left eye and the center of the right eye in the horizontal direction, the fourth feature point is a right corner of the left eye, and the sixth feature point is a left corner of the right eye, the second predetermined threshold is 1.67(± 0.1).
In step S530, if the ratio is determined to be greater than the second predetermined threshold, it is determined that the target image is unqualified in the lateral head detection.
In an example embodiment, if the ratio of the distance in the horizontal direction between the fourth and fifth feature points to the distance in the horizontal direction between the fifth and sixth feature points is greater than the second predetermined threshold, the target image is determined to be unqualified in the lateral head detection. For example, when the fifth feature point is the midpoint between the centers of the left and right eyes in the horizontal direction, the fourth feature point is the right corner of the left eye, and the sixth feature point is the left corner of the right eye, the second predetermined threshold is 1.67 (±0.1): if the ratio is greater than 1.67 (±0.1), the face in the image is judged to be excessively turned to the side; if the ratio is less than 1.67 (±0.1), the image is judged to be qualified in the lateral head detection. If the face in the image cannot be recognized or labeled, the image is determined to be an unqualified image.
FIG. 6 illustrates a schematic diagram of lateral head detection according to some embodiments of the invention.
Referring to fig. 6, the ratio of the distance in the horizontal direction between the left feature point (the right corner of the left eye) and the middle feature point (normally the midpoint between the two eyes in the x-direction) to the distance in the horizontal direction between the middle feature point and the right feature point (the left corner of the right eye) is calculated, and it is determined whether the obtained ratio is greater than the second predetermined threshold. Lateral head detection based on these three feature points yields a second predetermined threshold of 1.67 (±0.1): when the ratio is greater than 1.67 (±0.1), the face in the image is judged to be excessively turned, i.e., turned to the left; if the ratio is below the second predetermined threshold of 1.67 (±0.1), the image is determined to be qualified in the lateral head detection. (Similarly, if the distance between the right feature point and the middle feature point is compared with the distance between the middle feature point and the left feature point, the second predetermined threshold is also 1.67 (±0.1), and a ratio greater than the threshold means the face in the image is turned to the right.)
In fig. 6, the ratio of the horizontal distance between the left and middle feature points to the horizontal distance between the middle and right feature points is 1.1 in the left image, below the second predetermined threshold of 1.67 (±0.1), so the left image is judged qualified in the lateral head detection; in the middle image the ratio is 2.5, greater than 1.67 (±0.1), so the face is judged excessively turned, i.e., turned to the left, and the middle image is judged unqualified in the lateral head detection; in the right image, the face is turned so far that the face and the labeled points cannot be detected, so the right image is an unqualified image. Fig. 7 illustrates sample images of a face with an excessive lateral head turn according to some embodiments of the invention.
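A minimal sketch of this lateral head check follows, in the same hypothetical style as the head lowering sketch. Checking the ratio in both directions, as the parenthetical note above suggests, catches both left and right turns; the 1.67 default threshold is the empirical value from the text.

```python
def is_head_turned(left_eye_inner, eyes_midpoint, right_eye_inner, threshold=1.67):
    """Lateral head check: compare the horizontal distances from the two
    inner eye corners to the midpoint between the eye centers."""
    left = abs(eyes_midpoint[0] - left_eye_inner[0])    # left inner corner -> midpoint
    right = abs(right_eye_inner[0] - eyes_midpoint[0])  # midpoint -> right inner corner
    if min(left, right) == 0:
        return True  # degenerate landmarks; treat as unqualified
    # a left turn inflates left/right, a right turn inflates right/left
    return max(left / right, right / left) > threshold
```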
FIG. 8 illustrates a flow diagram of face luminance detection according to some embodiments of the invention.
Referring to fig. 8, in step S810, a plurality of key points are selected from the face of the person in the target image.
In an example embodiment, a plurality of key points may be selected on the face in the target image; these key points may be the key positions from which features are extracted in the later face recognition model. For example, feature points on the nose, eyebrows, eyes, and mouth may be selected as the key points.
In step S820, RGB values of each of the plurality of keypoints are determined.
In an example embodiment, RGB values for each keypoint of a plurality of keypoints are obtained. For example, a pixel corresponding to each key point is extracted, and the RGB value corresponding to the pixel is used as the RGB value of the key point.
In step S830, Y values of the respective keypoints are determined based on RGB values of the respective keypoints, and an average value of the Y values of the plurality of keypoints is calculated.
In an example embodiment, the RGB values of the respective key points are converted into YUV values, and the Y values are luminance values of the key points, and an average value of the Y values of the respective key points is calculated.
In step S840, it is determined that the target image is not qualified in the face brightness detection when the average value is smaller than a third predetermined threshold.
In an example embodiment, the third predetermined threshold is a threshold obtained from a face brightness detection experiment based on the selected key points. For example, if 10 key points are selected, e.g., one on each of the left and right eyebrows, one on each of the left and right eyes, one in the middle between the eyes, three on the nose, and one at each of the left and right mouth corners, the third predetermined threshold is 75. When the average value is below the third predetermined threshold of 75, the face brightness of the image is judged to be insufficient; when the average value is above the third predetermined threshold, the image is judged to be qualified in the face brightness detection.
FIG. 9 illustrates a schematic diagram of face luminance detection according to some embodiments of the invention.
Referring to fig. 9, 10 key points are selected on the face (one on each of the left and right eyebrows, one on each of the left and right eyes, one in the middle between the eyes, three on the nose, and one at each of the left and right mouth corners). The RGB values of the 10 key points are obtained and converted into YUV values, whose Y values are the luminance values of the corresponding key points; the average of the Y values of the 10 points is then taken. The Y value is calculated by the following formula (1):
Y = 0.257*R + 0.504*G + 0.098*B + 16 (1)
In the example embodiment of fig. 9, the 10 points are selected on the eyes, nose, and mouth, which are the key positions from which features are extracted in the later face recognition model. Moreover, screening images by extracting the brightness at only these 10 key positions greatly reduces the amount of image data that must be processed in later face recognition, improves the speed of image recognition, and reduces the storage pressure on the device. The average brightness of the 10 selected points is:
Y_avg = (Y_1 + Y_2 + … + Y_10) / 10 (2)
experiments show that when the third preset threshold is 75, the optimal brightness detection accuracy can be obtained, when the brightness average value is lower than 75, the brightness of the face is judged to be insufficient, and when the brightness average value is higher than 75, the image is judged to be qualified in the face brightness detection. Similarly, if the face in the image cannot be identified and labeled, the image is classified as a non-compliant image.
In fig. 9, the average face luminance of the left image is 87.30, above the third predetermined threshold, so the left image is qualified in the face luminance detection; the average face luminance of the middle image is 38.60, below the third predetermined threshold, so the middle image is unqualified in the face luminance detection; in the right image the face is so dark that the face and the labeled points cannot be detected, so the right image is an unqualified image. FIG. 10 illustrates sample images with insufficient face luminance according to some embodiments of the invention.
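The brightness check can be sketched in a few lines of Python. The sketch below assumes an OpenCV-style BGR uint8 image and a list of labeled keypoint coordinates; the luma formula is formula (1) above and the default threshold of 75 is the experimentally obtained value from the text, while the function name and signature are illustrative.

```python
import numpy as np

def is_face_too_dark(image_bgr, keypoints, threshold=75.0):
    """Average the luma (Y) value of formula (1) over the labeled
    keypoints. image_bgr is an HxWx3 uint8 array (OpenCV channel
    order); keypoints is an iterable of (x, y) pixel coordinates."""
    luma = []
    for x, y in keypoints:
        b, g, r = image_bgr[y, x].astype(np.float64)
        luma.append(0.257 * r + 0.504 * g + 0.098 * b + 16)  # formula (1)
    return float(np.mean(luma)) < threshold
```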
FIG. 11 illustrates a flow diagram of face blurriness detection according to some embodiments of the present invention.
Referring to fig. 11, in step S1110, the target image is converted into a gray scale image.
In an example embodiment, when the target image is a color image, the target image is subjected to graying processing to obtain a grayscale image of the target image. For example, when the target image is a color image, an average value of three components of R (Red), G (Green), and B (blue) of a pixel of the color image may be calculated by an averaging method, and the average value may be used as a gray value of the pixel. The target image may be grayed in another manner, and the present invention is not limited to this.
In step S1120, one channel of the grayscale image is convolved with a Laplacian operator, and the variance of the convolved grayscale image is determined.
In an example embodiment, the pixel variance of the convolved grayscale image is calculated after convolving one channel of the grayscale image, e.g., the gray values, with the Laplacian operator.
In step S1130, it is determined whether the variance of the grayscale image is greater than a variance threshold.
In an example embodiment, the greater the pixel variance of the grayscale image, the sharper the image is; the smaller the pixel variance, the more blurred the image is. The variance threshold is determined experimentally based on the size of the face detection frame of the image under blur detection.
In step S1140, if the variance is determined to be not greater than the variance threshold, it is determined that the target image is unqualified in the blurriness detection.
In an example embodiment, if the variance of the grayscale image is determined to be not greater than the corresponding variance threshold, indicating that the target image is relatively blurred, the target image is determined to be unqualified in the blurriness detection.
FIG. 12 illustrates a schematic diagram of face blurriness detection according to some embodiments of the present invention.
Referring to fig. 12, the system detects the face in the target image and calculates the blurriness of each image in which a face is detected. When the original image is a color image, it is converted into a grayscale image, a 3×3 Laplacian operator is used to convolve one channel of the image, and the variance of the resulting grayscale image is then calculated. The larger the variance of the grayscale image, the sharper the image; the smaller the variance, the blurrier the image. In addition, different variance thresholds are used for images of different sizes, which makes the results easier to compare; the variance threshold corresponding to each size of face detection frame can be determined experimentally through the following formula (3):
variance threshold = 2000 if the face detection frame area is below 3600; 500 if the area is between 3600 and 7225; thresholds for larger frames are determined analogously (3)
for example, when the size of the face detection frame is smaller than 3600 and the ambiguity is smaller than 2000, the face image is judged to be unqualified; if the size of the face detection frame is smaller than 3600 and the ambiguity is larger than 2000, judging the face image to be qualified; the variance threshold for face detection boxes of other sizes is determined in a similar manner as described above. If the human face cannot be detected, the image is directly judged to be unqualified. The detection success rate of the method is 90% of the test accuracy rate on a single face fuzzy unqualified sample.
In fig. 12, for a face detection frame area of 3600 or more and 7225 or less, the variance threshold is 500. The Laplacian variance of the left image is 1648, greater than 500, so the left image is judged qualified in the face blurriness detection; the variance of the middle image is 342, smaller than the variance threshold of 500, so the middle image is judged unqualified in the face blurriness detection; the face in the right image is so blurred that the face and the labeled points cannot be detected, so the right image is an unqualified face image. FIG. 13 illustrates sample images with excessive face blur according to some embodiments of the invention.
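A minimal sketch of this variance-of-Laplacian check using OpenCV follows. Only the two threshold bands stated above (area below 3600, and area between 3600 and 7225) are grounded in the text; the fallback for larger frames is a placeholder assumption, and the function name and signature are illustrative.

```python
import cv2

def is_face_too_blurry(image_bgr, face_box_area):
    """Blurriness check: variance of the Laplacian of the grayscale
    image, compared against a threshold that depends on the face
    detection frame size."""
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
    variance = cv2.Laplacian(gray, cv2.CV_64F).var()
    if face_box_area < 3600:
        threshold = 2000.0
    elif face_box_area <= 7225:
        threshold = 500.0
    else:
        threshold = 500.0  # assumption: band for larger frames is not stated in the text
    return variance <= threshold  # not greater than the threshold -> too blurry
```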
Figure 14 illustrates a flow diagram for face integrity detection according to some embodiments of the invention.
Referring to fig. 14, in step S1410, it is determined whether the face detection frame is completely on the target image.
In an example embodiment, it may be detected whether the coordinates of the face detection frame exceed the target image, and if the coordinates of the face detection frame exceed the target image, it is determined that the face detection frame is not completely on the target image.
In step S1420, if it is determined that the face detection frame is not completely on the target image, it is determined that the target image is not qualified in the face integrity detection.
In an example embodiment, if it is determined that the face detection frame is not completely on the target image, it indicates that the face on the target image is incomplete, and thus the face detection frame cannot detect a complete face, and it is determined that the target image is not qualified in the face integrity detection.
FIG. 15 illustrates a schematic diagram of face integrity detection according to some embodiments of the invention.
referring to fig. 15, the face integrity detection may be performed through the face detection box, and if the x coordinate on the left side of the face detection box is greater than 0, the left side of the face is in the picture; if the y coordinate on the upper side of the face detection frame is larger than 0, the upper side of the face is in the picture; if the x coordinate on the right side of the face detection frame is smaller than the maximum x coordinate of the image, the right side of the face is in the image; and if the y coordinate of the lower side of the face detection frame is smaller than the maximum y coordinate of the picture, the lower side of the face is in the picture. If the four sides of the face detection frame are all in the image, the image is qualified in the face integrity detection.
In fig. 15, all four sides of the face detection frame in the left image are within the image, so the left image is qualified in the face integrity detection; the face detection frame in the middle image extends beyond the image, so the middle image is unqualified in the face integrity detection; in the right image, so much of the face is out of frame that the face and the labeled points cannot be detected, so the right image is an unqualified image. FIG. 16 illustrates sample images with an incomplete face according to some embodiments of the invention.
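The four boundary rules above reduce to a few comparisons. A minimal sketch, assuming the face detection frame is given as (left, top, right, bottom) pixel coordinates with the origin at the top-left corner of the image:

```python
def is_face_complete(face_box, image_width, image_height):
    """Face integrity check: all four sides of the detection frame must
    lie inside the image, per the four rules described above."""
    left, top, right, bottom = face_box
    return (left > 0 and top > 0
            and right < image_width and bottom < image_height)
```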
FIG. 17 shows a flow diagram of an image detection method according to further embodiments of the present invention.
Referring to fig. 17, in step S1710, an image to be detected is read, and for example, the image to be detected may be read from an image database.
In step S1720, it is determined whether the read image is normal, for example, whether the read image is damaged, and if the read image is determined to be normal, the process proceeds to step S1730; if the reading is determined to be abnormal, the process proceeds to step S1740.
In step S1730, head lowering detection, lateral head detection, face brightness detection, face blurriness detection, and face integrity detection are performed on the image to be detected, and it is determined whether all five detections pass. If all five pass, the process proceeds to step S1750; if not all of them pass, the process proceeds to step S1760.
In step S1740, it is output that the image detection result is unqualified.
In step S1750, the image, which is qualified in the detection and can be used for later face recognition, is output.
In step S1760, it is output that the image detection result is unqualified, together with the pass/fail result of each of the five detections.
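Putting the pieces together, the following sketch wires the five hypothetical helper functions from the earlier sections into the flow of fig. 17. The landmark dictionary keys and the detector interface are assumptions for illustration; any face detector that returns a bounding frame and named landmarks would do.

```python
def detect_image(image_bgr, face_box, landmarks):
    """Run the five checks of fig. 17 on one image; the image is
    qualified only if every item passes. `landmarks` is assumed to be
    a dict of named (x, y) points plus the 10 brightness keypoints."""
    height, width = image_bgr.shape[:2]
    left, top, right, bottom = face_box
    box_area = (right - left) * (bottom - top)
    results = {
        "head_lowering": not is_head_lowered(landmarks["brow_corner"],
                                             landmarks["nose_tip"],
                                             landmarks["chin_tip"]),
        "lateral_head": not is_head_turned(landmarks["left_eye_inner"],
                                           landmarks["eyes_midpoint"],
                                           landmarks["right_eye_inner"]),
        "brightness": not is_face_too_dark(image_bgr, landmarks["luma_points"]),
        "blurriness": not is_face_too_blurry(image_bgr, box_area),
        "integrity": is_face_complete(face_box, width, height),
    }
    return all(results.values()), results  # overall verdict + per-item results
```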
With the image detection method in the exemplary embodiment of the present invention, since only the pixels at the labeled points need to be processed, the amount of computation can be significantly reduced: the processing time on a CPU is extremely short (500 ms) and the memory usage is small (100 MB). In addition, because the sample images are screened with multiple detections, the accuracy of face recognition is improved to 96%.
Furthermore, in an embodiment of the present invention, there is also provided an image detection apparatus for face recognition. Referring to fig. 18, the image detection apparatus 1800 may include: a labeling unit 1810, a detection unit 1820, and a determining unit 1830. The labeling unit 1810 is configured to acquire a target image, perform face detection on the target image, and label the face feature points; the detection unit 1820 is configured to perform multiple detections on the target image based on the labeled face feature points, where the multiple detections include head lowering detection, lateral head detection, face brightness detection, face blurriness detection, and face integrity detection; the determining unit 1830 is configured to determine that the target image is qualified when every detection of the multiple detections is qualified.
In some embodiments of the present invention, based on the foregoing scheme, the detection unit 1820 is configured to: determine the ratio of the distance in the vertical direction between a first feature point and a second feature point to the distance in the vertical direction between the second feature point and a third feature point; judge whether the ratio is greater than a first predetermined threshold; and if the ratio is greater than the first predetermined threshold, determine that the target image is unqualified in the head lowering detection, where the second feature point is located in the nose tip area, the first feature point is located above the second feature point, and the third feature point is located below the second feature point.
In some embodiments of the present invention, based on the foregoing scheme, the detection unit 1820 is configured to: determine the ratio of the distance in the horizontal direction between a fourth feature point and a fifth feature point to the distance in the horizontal direction between the fifth feature point and a sixth feature point; judge whether the ratio is greater than a second predetermined threshold; and if the ratio is greater than the second predetermined threshold, determine that the target image is unqualified in the lateral head detection, where the fifth feature point is located on the central axis of the face, and the fourth and sixth feature points are symmetric about the fifth feature point.
In some embodiments of the present invention, based on the foregoing scheme, the detection unit 1820 is configured to: select a plurality of key points on the face in the target image; determine the RGB values of each of the plurality of key points; determine the Y value of each key point based on its RGB values, and calculate the average of the Y values of the plurality of key points; and determine that the target image is unqualified in the face brightness detection when the average value is smaller than a third predetermined threshold.
In some embodiments of the present invention, based on the foregoing scheme, the detection unit 1820 is configured to: convert the target image into a grayscale image; convolve one channel of the grayscale image with a Laplacian operator, and determine the variance of the convolved grayscale image; judge whether the variance is greater than a variance threshold; and if the variance is not greater than the variance threshold, determine that the target image is unqualified in the blurriness detection.
In some embodiments of the present invention, based on the foregoing solution, the detecting unit 1820 is further configured to: determining the variance threshold based on a size of a face detection box of the target image.
In some embodiments of the present invention, based on the foregoing scheme, the detecting unit 1820 is configured to: judging whether the face detection frame is completely on the target image or not; and if the face detection frame is not completely on the target image, determining that the target image is unqualified in the face integrity detection.
In some embodiments of the present invention, based on the foregoing scheme, the labeling unit 1810 is configured to: when a face cannot be detected or the face feature points cannot be labeled, determine that the target image is unqualified.
Since each functional block of the image detection apparatus 1800 according to the exemplary embodiment of the present invention corresponds to the steps of the exemplary embodiment of the image detection method, the description thereof is omitted here.
In an exemplary embodiment of the present invention, there is also provided an electronic device capable of implementing the above method.
Referring now to FIG. 19, a block diagram of a computer system 1900 suitable for implementing an electronic device of an embodiment of the invention is illustrated. The computer system 1900 of the electronic device shown in fig. 19 is only an example, and should not impose any limitation on the functions and scope of use of the embodiments of the present invention.
As shown in fig. 19, the computer system 1900 includes a Central Processing Unit (CPU)1901, which can perform various appropriate actions and processes in accordance with a program stored in a Read Only Memory (ROM)1902 or a program loaded from a storage section 1908 into a Random Access Memory (RAM) 1903. In the RAM 1903, various programs and data necessary for system operation are also stored. The CPU 1901, ROM 1902, and RAM 1903 are connected to one another via a bus 1904. An input/output (I/O) interface 1905 is also connected to bus 1904.
The following components are connected to the I/O interface 1905: an input section 1906 including a keyboard, a mouse, and the like; an output section 1907 including a display such as a Cathode Ray Tube (CRT), a Liquid Crystal Display (LCD), and the like, and a speaker; a storage section 1908 including a hard disk and the like; and a communication section 1909 including a network interface card such as a LAN card, a modem, or the like. The communication section 1909 performs communication processing via a network such as the internet. Drivers 1910 are also connected to I/O interface 1905 as needed. A removable medium 1911 such as a magnetic disk, an optical disk, a magneto-optical disk, a semiconductor memory, or the like is mounted on the drive 1910 as necessary, so that a computer program read out therefrom is mounted in the storage section 1908 as necessary.
In particular, according to an embodiment of the present invention, the processes described above with reference to the flowcharts may be implemented as computer software programs. For example, embodiments of the invention include a computer program product comprising a computer program embodied on a computer-readable medium, the computer program comprising program code for performing the method illustrated in the flow chart. In such embodiments, the computer program may be downloaded and installed from a network via communications portion 1909 and/or installed from removable media 1911. The above-described functions defined in the system of the present application are executed when the computer program is executed by the Central Processing Unit (CPU) 1901.
It should be noted that the computer readable medium shown in the present invention can be a computer readable signal medium or a computer readable storage medium or any combination of the two. A computer readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples of the computer readable storage medium may include, but are not limited to: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the present invention, a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. In the present invention, however, a computer readable signal medium may include a propagated data signal with computer readable program code embodied therein, for example, in baseband or as part of a carrier wave. Such a propagated data signal may take many forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof. A computer readable signal medium may also be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to: wireless, wire, fiber optic cable, RF, etc., or any suitable combination of the foregoing.
The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present invention. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams or flowchart illustration, and combinations of blocks in the block diagrams or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The units described in the embodiments of the present invention may be implemented by software, or may be implemented by hardware, and the described units may also be disposed in a processor. Wherein the names of the elements do not in some way constitute a limitation on the elements themselves.
As another aspect, the present application also provides a computer-readable medium, which may be included in the electronic device described in the above embodiments, or may exist on its own without being assembled into the electronic device. The computer-readable medium carries one or more programs which, when executed by the electronic device, cause the electronic device to implement the image detection method described in the above embodiments.
For example, the electronic device may implement the steps shown in fig. 1: step S110, acquiring a target image, and performing face detection and labeling of face feature points on the target image; step S120, performing multiple detections on the target image based on the labeled face feature points, the multiple detections including head-lowering detection, side-head detection, face brightness detection, face blurriness detection, and face integrity detection; step S130, when every detection item in the multiple detections is qualified, determining that the target image is qualified.
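As a non-limiting illustration only, the gating logic of steps S110 to S130 might be sketched in Python as follows; the detector interface, the check callables, and every name here are assumptions made for the sketch, not part of the disclosed method. Sketches of the individual detection items appear after the claims below.

# Minimal sketch of the gating flow in steps S110-S130 (assumptions only).
from typing import Callable, Optional, Sequence, Tuple

import numpy as np

FaceBox = Tuple[int, int, int, int]  # assumed (x, y, w, h) convention
DetectResult = Tuple[Optional[FaceBox], Optional[np.ndarray]]


def image_is_qualified(
    image: np.ndarray,
    detect_face_and_landmarks: Callable[[np.ndarray], DetectResult],
    checks: Sequence[Callable[[np.ndarray, FaceBox, np.ndarray], bool]],
) -> bool:
    """Step S110: detect the face and label its feature points.
    Step S120: run every detection item on the labeled points.
    Step S130: the image is qualified only if every item passes."""
    face_box, landmarks = detect_face_and_landmarks(image)
    if face_box is None or landmarks is None:
        # No detectable face or feature points: unqualified (cf. claim 8).
        return False
    return all(check(image, face_box, landmarks) for check in checks)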
It should be noted that although several modules or units of a device for action execution are mentioned in the above detailed description, such a division is not mandatory. Indeed, according to embodiments of the invention, the features and functions of two or more modules or units described above may be embodied in a single module or unit. Conversely, the features and functions of one module or unit described above may be further divided and embodied by a plurality of modules or units.
Through the above description of the embodiments, those skilled in the art will readily understand that the exemplary embodiments described herein may be implemented by software, or by software in combination with the necessary hardware. Accordingly, the technical solution according to the embodiments of the present invention may be embodied in the form of a software product, which may be stored in a non-volatile storage medium (such as a CD-ROM, a USB flash drive, or a removable hard disk) or on a network, and which includes several instructions that enable a computing device (such as a personal computer, a server, a touch terminal, or a network device) to execute the method according to the embodiments of the present invention.
Other embodiments of the invention will be apparent to those skilled in the art from consideration of the specification and practice of the invention disclosed herein. This application is intended to cover any variations, uses, or adaptations of the invention following, in general, the principles of the invention and including such departures from the present disclosure as come within known or customary practice within the art to which the invention pertains. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit of the invention being indicated by the following claims.
It will be understood that the invention is not limited to the precise arrangements described above and shown in the drawings and that various modifications and changes may be made without departing from the scope thereof. The scope of the invention is limited only by the appended claims.

Claims (11)

1. An image detection method for face recognition, comprising:
acquiring a target image, and performing face detection and labeling of face feature points on the target image;
performing multiple detections on the target image based on the labeled face feature points, the multiple detections comprising head-lowering detection, side-head detection, face brightness detection, face blurriness detection, and face integrity detection; and
when every detection item in the multiple detections is qualified, determining that the target image is qualified.
2. The image detection method according to claim 1, wherein the head-lowering detection comprises:
determining a ratio of a vertical distance between a first feature point and a second feature point to a vertical distance between the second feature point and a third feature point;
judging whether the ratio is larger than a first preset threshold; and
if the ratio is larger than the first preset threshold, determining that the target image is unqualified in the head-lowering detection,
wherein the second feature point is located in a nose-tip region, the first feature point is located above the second feature point, and the third feature point is located below the second feature point.
3. The image detection method according to claim 1, wherein the side-head detection comprises:
determining a ratio of a horizontal distance between a fourth feature point and a fifth feature point to a horizontal distance between the fifth feature point and a sixth feature point;
judging whether the ratio is larger than a second preset threshold; and
if the ratio is larger than the second preset threshold, determining that the target image is unqualified in the side-head detection,
wherein the fifth feature point is located on the central axis of the face, and the fourth feature point and the sixth feature point are symmetric with respect to the fifth feature point.
4. The image detection method according to claim 1, wherein the face brightness detection comprises:
selecting a plurality of key points from the face in the target image;
determining an RGB value of each of the plurality of key points;
determining a Y value of each key point based on its RGB value, and calculating the average of the Y values of the plurality of key points; and
when the average is smaller than a third preset threshold, determining that the target image is unqualified in the face brightness detection.
5. The image detection method according to claim 1, wherein the face blurriness detection comprises:
converting the target image into a grayscale image;
performing convolution on the grayscale image with a Laplacian operator, and determining the variance of the convolved grayscale image;
judging whether the variance is larger than a variance threshold; and
if the variance is not larger than the variance threshold, determining that the target image is unqualified in the face blurriness detection.
6. The image detection method according to claim 5, wherein the face blurriness detection further comprises:
determining the variance threshold based on a size of a face detection box of the target image.
7. The image detection method according to claim 1, wherein the face integrity detection comprises:
judging whether a face detection box lies entirely within the target image; and
if the face detection box does not lie entirely within the target image, determining that the target image is unqualified in the face integrity detection.
8. The image detection method according to claim 1, wherein the face detection and the labeling of the face feature points on the target image comprise:
when no face can be detected or the face feature points cannot be labeled, determining that the target image is unqualified.
9. An image detection apparatus for face recognition, comprising:
a labeling unit configured to acquire a target image and to perform face detection and labeling of face feature points on the target image;
a detection unit configured to perform multiple detections on the target image based on the labeled face feature points, the multiple detections comprising head-lowering detection, side-head detection, face brightness detection, face blurriness detection, and face integrity detection; and
a determining unit configured to determine that the target image is qualified when every detection item in the multiple detections is qualified.
10. An electronic device, comprising:
a processor; and
a memory having computer-readable instructions stored thereon which, when executed by the processor, implement the image detection method according to any one of claims 1 to 8.
11. A computer-readable storage medium, on which a computer program is stored which, when executed by a processor, implements the image detection method according to any one of claims 1 to 8.
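Purely by way of illustration, and not as a limitation of the claims, the ratio tests of claims 2 and 3 might be sketched as follows; the landmark indices, the threshold values, and all function names are assumptions made for the sketch.

import numpy as np


def head_lowering_check(landmarks: np.ndarray, top: int, nose: int,
                        bottom: int, first_threshold: float = 1.2) -> bool:
    """Claim 2 sketch: ratio of the vertical distance (top point to nose
    tip) to the vertical distance (nose tip to bottom point); a ratio
    above the first preset threshold marks the image unqualified."""
    upper = abs(landmarks[top, 1] - landmarks[nose, 1])
    lower = abs(landmarks[nose, 1] - landmarks[bottom, 1])
    return upper / max(lower, 1e-6) <= first_threshold  # guard zero distance


def side_head_check(landmarks: np.ndarray, left: int, mid: int,
                    right: int, second_threshold: float = 1.5) -> bool:
    """Claim 3 sketch: the same test along the horizontal axis, using a
    point on the facial central axis and two points symmetric about it."""
    d_left = abs(landmarks[left, 0] - landmarks[mid, 0])
    d_right = abs(landmarks[mid, 0] - landmarks[right, 0])
    return d_left / max(d_right, 1e-6) <= second_threshold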
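For claim 4, one common way to derive a Y (luma) value from an RGB value is the BT.601 weighting shown below; the claim does not fix the conversion, so the weights and the threshold value are assumptions.

import numpy as np


def brightness_check(image_rgb: np.ndarray, keypoints: np.ndarray,
                     third_threshold: float = 60.0) -> bool:
    """Claim 4 sketch: sample the RGB value at each key point, convert it
    to luma Y, and require the mean Y over all key points to reach the
    preset floor."""
    lumas = []
    for x, y in keypoints:  # keypoints assumed as an (N, 2) array of (x, y)
        r, g, b = image_rgb[int(y), int(x)].astype(np.float64)
        lumas.append(0.299 * r + 0.587 * g + 0.114 * b)  # BT.601 luma
    return float(np.mean(lumas)) >= third_threshold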
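For claims 5 and 6, the variance of the Laplacian response is a standard sharpness score; OpenCV is used here for convenience, and the linear scaling of the threshold with the face-box width is only one possible reading of claim 6, not the claimed rule itself.

import cv2
import numpy as np


def blurriness_check(image_bgr: np.ndarray, face_box_width: int,
                     base_threshold: float = 100.0,
                     reference_width: int = 100) -> bool:
    """Claims 5-6 sketch: convolve the grayscale image with a Laplacian
    operator, take the variance of the response, and compare it with a
    threshold scaled by the face detection box size."""
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
    variance = float(cv2.Laplacian(gray, cv2.CV_64F).var())
    threshold = base_threshold * (face_box_width / reference_width)
    return variance > threshold  # not larger -> unqualified, per claim 5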
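Claim 7 reduces to a simple containment test; the (x, y, w, h) box convention below is an assumption.

def integrity_check(image_shape: tuple, face_box: tuple) -> bool:
    """Claim 7 sketch: the face detection box must lie entirely inside
    the target image; face_box is assumed to be (x, y, w, h) in pixels."""
    img_h, img_w = image_shape[:2]
    x, y, w, h = face_box
    return x >= 0 and y >= 0 and x + w <= img_w and y + h <= img_h

Any of these sketches could then be supplied as the check callables to the gating sketch given after the description of fig. 1.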
CN201811081015.XA 2018-09-17 2018-09-17 Image detection method, apparatus, electronic device, and medium for face recognition Pending CN110909568A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811081015.XA CN110909568A (en) 2018-09-17 2018-09-17 Image detection method, apparatus, electronic device, and medium for face recognition

Publications (1)

Publication Number Publication Date
CN110909568A true CN110909568A (en) 2020-03-24

Family ID: 69813280

Country Status (1)

Country Link
CN (1) CN110909568A (en)

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111444856A (en) * 2020-03-27 2020-07-24 广东博智林机器人有限公司 Image analysis method, model training method, device, equipment and storage medium
CN112183421A (en) * 2020-10-09 2021-01-05 江苏提米智能科技有限公司 Face image evaluation method and device, electronic equipment and storage medium
CN112836656A (en) * 2021-02-07 2021-05-25 北京迈格威科技有限公司 Equipment control method and device and image acquisition system
CN114863472A (en) * 2022-03-28 2022-08-05 深圳海翼智新科技有限公司 Multi-stage pedestrian detection method, device and storage medium

Patent Citations (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1924894A (en) * 2006-09-27 2007-03-07 北京中星微电子有限公司 Multiple attitude human face detection and track system and method
US20180225842A1 (en) * 2016-01-21 2018-08-09 Tencent Technology (Shenzhen) Company Limited Method and apparatus for determining facial pose angle, and computer storage medium
CN105608448A (en) * 2016-02-22 2016-05-25 海信集团有限公司 LBP characteristic extraction method based on face key points and LBP characteristic extraction device based on face key points
DE102016104487A1 (en) * 2016-03-11 2017-09-14 Dermalog Identification Systems Gmbh Mobile electronic device with facial recognition
CN108428214A (en) * 2017-02-13 2018-08-21 阿里巴巴集团控股有限公司 A kind of image processing method and device
CN108230293A (en) * 2017-05-31 2018-06-29 深圳市商汤科技有限公司 Determine method and apparatus, electronic equipment and the computer storage media of quality of human face image
CN107330378A (en) * 2017-06-09 2017-11-07 湖北天业云商网络科技有限公司 A kind of driving behavior detecting system based on embedded image processing
CN107679504A (en) * 2017-10-13 2018-02-09 北京奇虎科技有限公司 Face identification method, device, equipment and storage medium based on camera scene
CN107945106A (en) * 2017-11-30 2018-04-20 广东欧珀移动通信有限公司 Image processing method, device, electronic equipment and computer-readable recording medium
CN107945107A (en) * 2017-11-30 2018-04-20 广东欧珀移动通信有限公司 Image processing method, device, computer-readable recording medium and electronic equipment
CN108229330A (en) * 2017-12-07 2018-06-29 深圳市商汤科技有限公司 Face fusion recognition methods and device, electronic equipment and storage medium

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
ZHU Tonghui et al.: "Optimal face acquisition algorithm with multi-camera cooperation", Computer Engineering, vol. 39, no. 10, pages 212-216 *
XU Lisheng et al.: "Multi-orientation face detection based on convolutional neural networks", Information Technology, no. 03, pages 45-49 *

Similar Documents

Publication Publication Date Title
US11410277B2 (en) Method and device for blurring image background, storage medium and electronic apparatus
EP3916627A1 (en) Living body detection method based on facial recognition, and electronic device and storage medium
US9530045B2 (en) Method, system and non-transitory computer storage medium for face detection
US10635946B2 (en) Eyeglass positioning method, apparatus and storage medium
US10740912B2 (en) Detection of humans in images using depth information
TWI435288B (en) Image processing apparatus and method, and program product
US8103058B2 (en) Detecting and tracking objects in digital images
CN110909568A (en) Image detection method, apparatus, electronic device, and medium for face recognition
US11386710B2 (en) Eye state detection method, electronic device, detecting apparatus and computer readable storage medium
WO2023115409A1 (en) Pad detection method and apparatus, and computer device and storage medium
CN112749696B (en) Text detection method and device
US9082019B2 (en) Method of establishing adjustable-block background model for detecting real-time image object
CN110826372A (en) Method and device for detecting human face characteristic points
CN110969046A (en) Face recognition method, face recognition device and computer-readable storage medium
CN112396050B (en) Image processing method, device and storage medium
CN111784658B (en) Quality analysis method and system for face image
CN110414522A (en) A kind of character identifying method and device
EP2919149A2 (en) Image processing apparatus and image processing method
CN110889470A (en) Method and apparatus for processing image
US20240127404A1 (en) Image content extraction method and apparatus, terminal, and storage medium
JP2012048326A (en) Image processor and program
CN110348353B (en) Image processing method and device
CN112329554A (en) Low-resolution image helmet identification method and device
US20240320806A1 (en) Image processing method and apparatus, electronic device and computer readable storage medium
KR20060064801A (en) Apparatus and method for recognizing an image

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination