
CN113128373B - Image processing-based color spot scoring method, color spot scoring device and terminal equipment - Google Patents


Info

Publication number
CN113128373B
Authority
CN
China
Prior art keywords
target area
image
color spot
area image
score
Prior art date
Legal status
Active
Application number
CN202110363504.XA
Other languages
Chinese (zh)
Other versions
CN113128373A
Inventor
乔峤
Current Assignee
Xi'an Rongzhifu Technology Co ltd
Original Assignee
Xi'an Rongzhifu Technology Co ltd
Priority date
Filing date
Publication date
Application filed by Xi'an Rongzhifu Technology Co ltd
Priority to CN202110363504.XA
Publication of CN113128373A
Application granted
Publication of CN113128373B
Legal status: Active


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V40/168 Feature extraction; Face representation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/21 Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214 Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/24 Classification techniques
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/08 Learning methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/40 Extraction of image or video features
    • G06V10/56 Extraction of image or video features relating to colour

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Health & Medical Sciences (AREA)
  • General Engineering & Computer Science (AREA)
  • Evolutionary Computation (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Health & Medical Sciences (AREA)
  • Biomedical Technology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Computational Linguistics (AREA)
  • Multimedia (AREA)
  • Evolutionary Biology (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Biophysics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Human Computer Interaction (AREA)
  • Image Analysis (AREA)

Abstract

The embodiments of the invention disclose a color spot scoring method, a color spot scoring device and a terminal device based on image processing, which enable the terminal device to obtain a more accurate color spot score and thereby effectively improve the accuracy with which the terminal device evaluates facial color spots. The method of the embodiments of the invention comprises the following steps: acquiring an image to be identified; extracting a target area image corresponding to each key area from the image to be identified, where the key areas include at least a face area, a forehead area, a mandible area and an eye area; determining the color spot score corresponding to each target area image using a preset color spot scoring model; and obtaining a color spot score for the image to be identified according to the color spot score corresponding to each target area image.

Description

Image processing-based color spot scoring method, color spot scoring device and terminal equipment
Technical Field
The present invention relates to the field of terminal devices, and in particular to an image-processing-based color spot scoring method, a color spot scoring device and a terminal device.
Background
As quality of life improves, people attach ever more importance to their skin and often hope for an accurate evaluation of its condition, so that targeted care and treatment measures can be taken.
In practice, current methods for evaluating facial color spots fall broadly into sensor-based methods and image-processing-based methods. Sensor-based methods generally rely on contact measurement; they provide only a single function, require complex equipment, cannot be operated at home, and must be performed at professional institutions such as beauty salons and hospitals. Image-processing-based methods generally photograph the skin with an image acquisition device and use computer techniques to process and analyze the skin image to produce an evaluation result.
Most existing image-processing-based methods obtain the number of color spots in an image with traditional image processing techniques and evaluate the skin based on that number, so the evaluation result is often inaccurate.
Disclosure of Invention
The embodiments of the invention provide an image-processing-based color spot scoring method, a color spot scoring device and a terminal device, which enable the terminal device to obtain a more accurate color spot score and thereby effectively improve the accuracy with which the terminal device evaluates facial color spots.
A first aspect of the embodiments of the present invention provides an image-processing-based color spot scoring method, which may include:
acquiring an image to be identified;
extracting a target area image corresponding to each key area from the image to be identified, where the key areas include at least a face area, a forehead area, a mandible area and an eye area;
determining the color spot score corresponding to each target area image by using a preset color spot score model;
and obtaining a color spot score for the image to be identified according to the color spot score corresponding to each target area image.
Optionally, the extracting the target area image corresponding to each key area from the image to be identified includes: determining first face feature points in the image to be identified through a preset algorithm; determining the target area image corresponding to the face area according to the first face feature points; determining second face feature points in the target area image corresponding to the face area through the preset algorithm; and determining the target area images corresponding to the forehead area, the mandible area and the eye area according to the second face feature points.
Optionally, the preset color spot scoring model includes a lightweight convolutional neural network with a plurality of preset color spot levels, and the determining the color spot score corresponding to each target area image using the preset color spot scoring model includes: performing feature extraction on each target area image with the lightweight convolutional neural network to obtain a feature vector of each target area image; determining, according to a normalized exponential function (softmax) and the feature vector, the probability value that the color spot level of each target area image is each preset color spot level; and determining the color spot score corresponding to each target area image according to the probability values.
Optionally, the plurality of preset color spot levels includes a first color spot level, a second color spot level and a third color spot level, and the probability values include a first probability value that the color spot level of each target area image is the first color spot level, a second probability value that it is the second color spot level, and a third probability value that it is the third color spot level; the determining the color spot score corresponding to each target area image according to the probability values includes: obtaining the color spot score corresponding to each target area image according to the first probability value, the second probability value and the third probability value.
Optionally, the obtaining the color spot score corresponding to each target area image according to the first probability value, the second probability value and the third probability value includes: obtaining the color spot score corresponding to each target area image according to a first formula. The first formula is A = αP0 + βP1 - γP2 + η, where A is the color spot score corresponding to each target area image, P0 represents the first probability value, P1 represents the second probability value, P2 represents the third probability value, α represents the score coefficient corresponding to the first probability value, β represents the score coefficient corresponding to the second probability value, γ represents the score coefficient corresponding to the third probability value, η is a score constant, and none of α, β, γ and η is negative.
Optionally, the obtaining the color spot score for the image to be identified according to the color spot score corresponding to each target area image includes: accumulating the color spot scores corresponding to the target area images to obtain a first calculated value, and obtaining the color spot score for the image to be identified according to the number of target area images and the first calculated value; or, acquiring a weight coefficient of the color spot score corresponding to each target area image, and obtaining the color spot score for the image to be identified according to the color spot score corresponding to each target area image and its weight coefficient.
Optionally, the method further comprises: performing gray processing on each target area image to obtain a gray map corresponding to each target area image, and acquiring the position information of the color spot areas in each gray map; or, converting the RGB color space of each target area image into the HSV color space, extracting a first feature map corresponding to the B component from each target area image in the RGB color space, extracting a second feature map corresponding to the S component from each target area image in the HSV color space, determining a target feature map according to the first feature map and the second feature map, and acquiring the position information of the color spot areas in each target feature map; and marking the color spots in the image to be identified according to the position information.
Optionally, the marking the color spots in the image to be identified according to the position information includes: determining, according to the position information, the marked area corresponding to each color spot area in the image to be identified, and marking the boundary of the marked area in a preset manner.
A second aspect of the embodiments of the present invention provides a color spot scoring device, which may include:
an acquisition module, configured to acquire an image to be identified and to extract a target area image corresponding to each key area from the image to be identified, where the key areas include at least a face area, a forehead area, a mandible area and an eye area;
a processing module, configured to determine the color spot score corresponding to each target area image using a preset color spot scoring model;
the acquisition module is further configured to obtain a color spot score for the image to be identified according to the color spot score corresponding to each target area image.
Optionally, the processing module is specifically configured to determine, through a preset algorithm, first face feature points in the image to be identified; determine the target area image corresponding to the face area according to the first face feature points; determine second face feature points in the target area image corresponding to the face area through the preset algorithm; and determine the target area images corresponding to the forehead area, the mandible area and the eye area according to the second face feature points.
Optionally, the preset color spot scoring model includes a lightweight convolutional neural network with a plurality of preset color spot levels; the acquisition module is specifically configured to perform feature extraction on each target area image with the lightweight convolutional neural network to obtain a feature vector of each target area image;
the processing module is specifically configured to determine, according to a normalized exponential function and the feature vector, the probability value that the color spot level of each target area image is each preset color spot level;
the acquisition module is specifically configured to determine the color spot score corresponding to each target area image according to the probability values.
Optionally, the plurality of preset color spot levels includes a first color spot level, a second color spot level and a third color spot level, and the probability values include a first probability value that the color spot level of each target area image is the first color spot level, a second probability value that it is the second color spot level, and a third probability value that it is the third color spot level;
the processing module is specifically configured to obtain the color spot score corresponding to each target area image according to the first probability value, the second probability value and the third probability value.
Optionally, the processing module is specifically configured to obtain the color spot score corresponding to each target area image according to a first formula. The first formula is A = αP0 + βP1 - γP2 + η, where A is the color spot score corresponding to each target area image, P0 represents the first probability value, P1 represents the second probability value, P2 represents the third probability value, α represents the score coefficient corresponding to the first probability value, β represents the score coefficient corresponding to the second probability value, γ represents the score coefficient corresponding to the third probability value, η is a score constant, and none of α, β, γ and η is negative.
Optionally, the processing module is specifically configured to accumulate the color spot scores corresponding to the target area images to obtain a first calculated value, and the acquisition module is specifically configured to obtain the color spot score for the image to be identified according to the number of target area images and the first calculated value; or,
the acquisition module is specifically configured to acquire a weight coefficient of the color spot score corresponding to each target area image, and to obtain the color spot score for the image to be identified according to the color spot score corresponding to each target area image and its weight coefficient.
Optionally, the processing module is further configured to perform gray processing on each target area image to obtain a gray map corresponding to each target area image, and the acquisition module is further configured to acquire the position information of the color spot areas in each gray map; or,
the processing module is further configured to convert the RGB color space of each target area image into the HSV color space, extract a first feature map corresponding to the B component from each target area image in the RGB color space, extract a second feature map corresponding to the S component from each target area image in the HSV color space, and determine a target feature map according to the first feature map and the second feature map; and the acquisition module is further configured to acquire the position information of the color spot areas in each target feature map;
the processing module is further configured to mark the color spots in the image to be identified according to the position information.
Optionally, the processing module is specifically configured to determine, according to the position information, the marked area corresponding to each color spot area in the image to be identified, and to mark the boundary of the marked area in a preset manner.
A third aspect of the embodiments of the present invention provides a color spot scoring device, which may include:
a memory storing executable program code;
and a processor coupled to the memory;
the processor invokes the executable program code stored in the memory, which when executed by the processor causes the processor to implement the method according to the first aspect of the embodiment of the present invention.
In still another aspect, an embodiment of the present invention provides a terminal device, which may include the color spot scoring device according to the second aspect or the third aspect of the embodiments of the present invention.
In yet another aspect, an embodiment of the present invention provides a computer readable storage medium having executable program code stored thereon, the executable program code implementing the method according to the first aspect of the embodiment of the present invention when executed by a processor.
In yet another aspect, embodiments of the present invention disclose a computer program product which, when run on a computer, causes the computer to perform any of the methods disclosed in the first aspect of the embodiments of the present invention.
In yet another aspect, an embodiment of the present invention discloses an application publishing platform, which is configured to publish a computer program product, where the computer program product, when run on a computer, causes the computer to perform any one of the methods disclosed in the first aspect of the embodiment of the present invention.
From the above technical solutions, the embodiments of the present invention have the following advantages:
in the embodiments of the invention, an image to be identified is acquired; a target area image corresponding to each key area is extracted from the image to be identified, where the key areas include at least a face area, a forehead area, a mandible area and an eye area; the color spot score corresponding to each target area image is determined using a preset color spot scoring model; and a color spot score for the image to be identified is obtained according to the color spot score corresponding to each target area image. In other words, the terminal device scores, with the preset color spot scoring model, the target area images extracted from the image to be identified and combines the per-area scores into an overall score. Because the preset color spot scoring model can be trained on a large number of samples, its reliability is high; analyzing the images of the color spot areas with this model therefore allows the terminal device to obtain a highly accurate color spot score, which effectively improves the accuracy with which the terminal device evaluates facial color spots.
Drawings
In order to illustrate the technical solutions of the embodiments of the present invention more clearly, the drawings required in the description of the embodiments and the prior art are briefly introduced below. The drawings described below are obviously only some embodiments of the present invention; other drawings may be obtained from them without creative effort.
FIG. 1 is a schematic diagram of an embodiment of an image-processing-based color spot scoring method in an embodiment of the present invention;
FIG. 2a is a schematic diagram of another embodiment of the image-processing-based color spot scoring method in an embodiment of the present invention;
FIG. 2b is a schematic diagram of an embodiment of the first parameter in an embodiment of the present invention;
FIG. 2c is a schematic diagram of an embodiment of the gray map in an embodiment of the present invention;
FIG. 2d is a schematic diagram of an embodiment of color spot marking in an embodiment of the present invention;
FIG. 2e is a schematic diagram of another embodiment of color spot marking in an embodiment of the present invention;
FIG. 3a is a schematic diagram of another embodiment of the image-processing-based color spot scoring method in an embodiment of the present invention;
FIG. 3b is a schematic diagram of an embodiment of the second parameter in an embodiment of the present invention;
FIG. 3c is a schematic diagram of an embodiment of the residual map in an embodiment of the present invention;
FIG. 4 is a schematic diagram of an embodiment of a color spot scoring device in an embodiment of the present invention;
FIG. 5 is a schematic diagram of another embodiment of the color spot scoring device in an embodiment of the present invention;
FIG. 6 is a schematic diagram of an embodiment of a terminal device in an embodiment of the present invention.
Detailed Description
The embodiments of the invention provide an image-processing-based color spot scoring method, a color spot scoring device and a terminal device, which enable the terminal device to obtain a more accurate color spot score and thereby effectively improve the accuracy with which the terminal device evaluates facial color spots.
In order that those skilled in the art will better understand the present invention, the technical solutions are described below with reference to the accompanying drawings. The described embodiments are obviously only some, not all, of the embodiments of the present invention. All other embodiments obtained by those of ordinary skill in the art based on the embodiments of the present invention without creative effort fall within the scope of protection of the present invention.
It will be appreciated that the terminal devices involved in the embodiments of the present invention may include common hand-held electronic terminal devices with screens, such as mobile phones, smartphones, portable terminals, personal digital assistants (PDA), portable multimedia players (PMP), notebook computers, note pads, wireless broadband (WiBro) terminals, tablet personal computers (PC), smart PCs, point-of-sale (POS) terminals and in-car computers.
The terminal device may also include a wearable device. A wearable device may be worn directly on the user's body or integrated into the user's clothing or accessories as a portable electronic device. A wearable device is not merely a piece of hardware: through software support, data interaction and cloud interaction it can provide powerful intelligent functions, such as computing, positioning and alarm functions, and it can connect to mobile phones and various other terminals. Wearable devices may include, but are not limited to, wrist-worn products (e.g., watches and wristbands), foot-worn products (e.g., shoes, socks or other leg-worn products), head-worn products (e.g., glasses, helmets and headbands), as well as smart clothing, school bags, crutches, accessories and other non-mainstream product forms.
It should be noted that the terms "first," "second," "third," "fourth," and the like in the description and claims of the present invention are used to distinguish between different objects, not to describe a particular sequential or chronological order. The terms "comprises," "comprising," "having," and any variations thereof are intended to cover a non-exclusive inclusion, so that a process, method, system, article or apparatus that comprises a list of steps or elements is not limited to those steps or elements expressly listed, but may include steps or elements not expressly listed or inherent to the process, method, article or apparatus.
It should be noted that the execution body of the embodiments of the present invention may be a color spot scoring device or a terminal device. The technical solution of the invention is described below taking a terminal device as an example.
As shown in FIG. 1, an embodiment of the image-processing-based color spot scoring method according to an embodiment of the present invention may include the following steps:
101. Acquire an image to be identified.
It should be noted that the image to be identified may be an image of the user's face area, or an image of the face together with other parts (for example, the neck and shoulders); the face image may include a forehead area image, a mandible area image and an eye area image. The image to be identified may be captured by a camera in the terminal device or by another photographing means of the terminal device, which is not specifically limited here.
Optionally, the terminal device acquiring the image to be identified may include, but is not limited to, the following implementations:
Implementation 1: the terminal device detects the distance between the user and the terminal device; when the distance is within a preset distance range, the terminal device acquires the image to be identified.
The preset distance range is an interval constructed from a first distance threshold and a second distance threshold. The distance being within the preset distance range means it is greater than the first distance threshold and less than or equal to the second distance threshold.
By way of example, assume the first distance threshold is 10 centimeters (cm) and the second distance threshold is 25 cm, so the preset distance range is (10 cm, 25 cm). The terminal device detects that the distance between the user and the terminal device is 18 cm; since 18 cm lies within the preset distance range (10 cm, 25 cm), the terminal device acquires the image to be identified.
Implementation 2: the terminal device detects the current ambient brightness value; when the current ambient brightness value is within a preset brightness range, the terminal device acquires the image to be identified.
The preset brightness range is an interval constructed from a first brightness threshold and a second brightness threshold. The current ambient brightness value being within the preset brightness range means it is greater than the first brightness threshold and less than or equal to the second brightness threshold.
For example, assume the first brightness threshold is 120 candelas per square meter (cd/m²) and the second brightness threshold is 150 cd/m², so the preset brightness range is (120 cd/m², 150 cd/m²). The terminal device detects that the current ambient brightness value is 136 cd/m²; since 136 cd/m² lies within the preset brightness range (120 cd/m², 150 cd/m²), the terminal device acquires the image to be identified.
It can be understood that an image to be identified acquired within the preset distance range or the preset brightness range is clearer, which facilitates the subsequent processing of the image.
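To make the two gating conditions concrete, the following is a minimal sketch in Python. The threshold constants reproduce the worked examples above; the function names and the idea of combining both checks in one routine are assumptions for illustration, since the patent presents the two implementations as alternatives.

```python
# Sketch of the capture-gating checks from implementations 1 and 2.
# Threshold values follow the worked examples; everything else is assumed.

DIST_RANGE_CM = (10.0, 25.0)             # (first threshold, second threshold]
BRIGHTNESS_RANGE_CD_M2 = (120.0, 150.0)  # cd/m^2

def in_half_open_range(value: float, rng: tuple) -> bool:
    lo, hi = rng
    # "Within the preset range" means greater than the first threshold
    # and less than or equal to the second threshold.
    return lo < value <= hi

def should_capture(distance_cm: float, ambient_cd_m2: float) -> bool:
    return (in_half_open_range(distance_cm, DIST_RANGE_CM)
            and in_half_open_range(ambient_cd_m2, BRIGHTNESS_RANGE_CD_M2))

print(should_capture(18.0, 136.0))  # True, matching both examples above
```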
102. Extract the target area image corresponding to each key area from the image to be identified.
The key areas include at least a face area, a forehead area, a mandible area and an eye area.
Optionally, the terminal device extracting the target area image corresponding to each key area from the image to be identified may include: the terminal device determines first face feature points in the image to be identified through a preset algorithm; determines the target area image corresponding to the face area according to the first face feature points; determines second face feature points in the target area image corresponding to the face area through the preset algorithm; and determines the target area images corresponding to the forehead area, the mandible area and the eye area according to the second face feature points.
It can be understood that, when the image to be identified is already the target area image corresponding to the face area, the terminal device only needs to determine the second face feature points in it through the preset algorithm and determine the target area images corresponding to the forehead area, the mandible area and the eye area according to the second face feature points.
It should be noted that the preset algorithm may be at least one of the cross-platform computer vision library OpenCV (Open Source Computer Vision Library), an edge detection algorithm, the Sobel algorithm and an active contour model; the face feature points can be extracted with the preset algorithm.
Optionally, the terminal device determines a target area image corresponding to the face area according to the first face feature point, where the first face feature point includes a first feature point, a second feature point, a third feature point and a fourth feature point.
For example, the first feature point is a feature point No. 47 of the OpenCV function library, the second feature point is a feature point No. 50, the third feature point is a feature point No. 1, and the fourth feature point is a feature point No. 15. The terminal device may determine the target area image corresponding to the face area according to the ordinate of the 47 th feature point, the ordinate of the 50 th feature point, the abscissa of the 1 st feature point, and the abscissa of the 15 th feature point.
Optionally, the terminal device determining the target area images corresponding to the forehead area, the mandible area and the eye area according to the second face feature points may include: the terminal device detects the highest point X on the left side of the face area and the highest point Y on the right side of the face area, and determines the target area image corresponding to the forehead area according to the highest point X, the highest point Y and first sub-face feature points; determines the target area image corresponding to the mandible area according to second sub-face feature points; and determines the target area image corresponding to the eye area according to third sub-face feature points.
For example, in the OpenCV function library, the first sub-face feature points may include the No. 0 and No. 16 feature points, and the terminal device may determine the target area image corresponding to the forehead area according to the ordinate of the highest point X, the ordinate of the highest point Y, the abscissa of the No. 0 feature point and the abscissa of the No. 16 feature point. The second sub-face feature points may include the No. 3, No. 8, No. 13 and No. 33 feature points, and the terminal device may determine the target area image corresponding to the mandible area according to the ordinate of the No. 3 feature point, the ordinate of the No. 8 feature point, the abscissa of the No. 13 feature point and the abscissa of the No. 33 feature point. The third sub-face feature points may include the No. 17, No. 29, No. 0 and No. 41 feature points, and the terminal device may determine the target area image corresponding to the first eye area (for example, the left eye) according to the ordinate of the No. 17 feature point, the ordinate of the No. 29 feature point, the abscissa of the No. 0 feature point and the abscissa of the No. 41 feature point; the third sub-face feature points may further include the No. 25, No. 29, No. 46 and No. 16 feature points, and the terminal device may determine the target area image corresponding to the second eye area (for example, the right eye) according to the ordinate of the No. 25 feature point, the ordinate of the No. 29 feature point, the abscissa of the No. 46 feature point and the abscissa of the No. 16 feature point.
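As a sketch of this cropping step: each target area is an axis-aligned rectangle whose top and bottom come from the ordinates of two feature points and whose left and right come from the abscissas of two others. The helper below is an illustrative assumption; it only presumes the landmarks are available as an indexable sequence of (x, y) pairs from a detector using the numbering above.

```python
def crop_by_landmarks(img, pts, top_idx, bottom_idx, left_idx, right_idx):
    """Crop the rectangle whose vertical extent comes from the ordinates of
    pts[top_idx] and pts[bottom_idx], and whose horizontal extent comes from
    the abscissas of pts[left_idx] and pts[right_idx].
    img is an H x W (x C) array; pts is a sequence of (x, y) pairs."""
    y0, y1 = sorted((int(pts[top_idx][1]), int(pts[bottom_idx][1])))
    x0, x1 = sorted((int(pts[left_idx][0]), int(pts[right_idx][0])))
    return img[y0:y1, x0:x1]

# Face area per the earlier example: ordinates of feature points No. 47
# and No. 50, abscissas of feature points No. 1 and No. 15.
# face_area = crop_by_landmarks(image, landmarks, 47, 50, 1, 15)
```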
103. Determine the color spot score corresponding to each target area image using a preset color spot scoring model.
It should be noted that the color spot scoring model preset in the terminal device may be a convolutional neural network model obtained by training on a large number of samples, which helps improve the accuracy of the color spot score corresponding to each target area image. The convolutional neural network model may be a two-dimensional or a three-dimensional convolutional neural network model, which is not limited here.
Optionally, the preset color spot scoring model includes a lightweight convolutional neural network with a plurality of preset color spot levels. The terminal device determining the color spot score corresponding to each target area image using the preset color spot scoring model may include: the terminal device performs feature extraction on each target area image with the lightweight convolutional neural network to obtain a feature vector of each target area image; determines, according to the normalized exponential function (softmax) and the feature vector, the probability value that the color spot level of each target area image is each preset color spot level; and determines the color spot score corresponding to each target area image according to the probability values.
It should be noted that a lightweight convolutional neural network, i.e., a miniature neural network, is a neural network model that requires fewer parameters and less computing cost. Because of its small computing overhead, a miniature neural network can be deployed on devices with limited computing resources, such as smartphones, tablet computers or other embedded devices. The feature vector of each target area image may be multi-dimensional.
The probability value for each preset color spot level is a positive number.
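The following sketch shows this inference path with MobileNetV2 (the classification network named below) in PyTorch. The preprocessing pipeline is an assumption: the 512×512 resize mirrors the sample-image scaling mentioned below, and ImageNet normalization matches the ImageNet-pretrained backbone; neither is prescribed by the patent.

```python
import torch
import torch.nn.functional as F
from PIL import Image
from torchvision import models, transforms

# Assumed preprocessing (not prescribed by the patent).
preprocess = transforms.Compose([
    transforms.Resize((512, 512)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

model = models.mobilenet_v2(num_classes=3)  # three preset color spot levels
model.eval()

def spot_level_probs(region_image: Image.Image) -> torch.Tensor:
    """Return (P0, P1, P2): the softmax probabilities that the target area
    image belongs to each preset color spot level."""
    x = preprocess(region_image.convert("RGB")).unsqueeze(0)
    with torch.no_grad():
        logits = model(x)        # feature extraction + classification head
    return F.softmax(logits, dim=1)[0]
```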
It can be understood that the terminal device trains the preset lightweight convolutional neural network as follows:
(1) The terminal device labels the images in the training sample set with color spot levels.
Three color spot levels may be used, with the following labeling standard: for each area image, areas with no color spots or only a few light-colored spots are labeled 0, areas with color spots or spots over a large range are labeled 1, and areas with many dark spots or confluent patches of spots are labeled 2.
It should be noted that the images in the sample set may be of four types: images of the forehead area, the face area, the mandible area and the eye area.
Optionally, the images in the sample set may be scaled to 512×512 pixels, which makes it convenient for the terminal device to label the target area images with color spot levels.
(2) The terminal device trains the network model on the images of each level.
For each type of image, the terminal device uses 80% of the images of each level as the training set and 20% as the test set, and performs model training with the training set of each type to obtain the lightweight convolutional neural network. The terminal device may repeat the network model training several times (e.g., 10 times) to obtain more accurate results.
Optionally, the terminal device may select MobileNetV2 as the classification network and load parameters pre-trained on ImageNet, so that the network model can be deployed well on the terminal device and training converges efficiently.
For example, the parameters set during network model training are as follows: the cross-entropy loss function is used, the batch size is set to 32, the number of iterations is 40, the initial learning rate is 0.0006, and the learning rate decays by a factor of 0.8 every 20 iterations.
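A minimal PyTorch training sketch under exactly these hyperparameters follows. The choice of Adam as the optimizer, the reading of "iterations" as epochs, and the DataLoader being built elsewhere from the 80/20 split are all assumptions; the patent only fixes the loss, batch size, iteration count, learning rate and decay.

```python
import torch
from torch import nn, optim
from torchvision import models

# ImageNet-pretrained MobileNetV2 with its classifier head replaced
# for the three preset color spot levels (requires torchvision >= 0.13).
model = models.mobilenet_v2(weights=models.MobileNet_V2_Weights.IMAGENET1K_V1)
model.classifier[-1] = nn.Linear(model.last_channel, 3)

criterion = nn.CrossEntropyLoss()                      # cross-entropy loss
optimizer = optim.Adam(model.parameters(), lr=0.0006)  # optimizer is assumed
scheduler = optim.lr_scheduler.StepLR(optimizer, step_size=20, gamma=0.8)

def train(train_loader, epochs: int = 40):
    """train_loader is assumed to yield (image, level-label) batches of 32."""
    model.train()
    for _ in range(epochs):
        for images, labels in train_loader:
            optimizer.zero_grad()
            loss = criterion(model(images), labels)
            loss.backward()
            optimizer.step()
        scheduler.step()   # decays the learning rate by 0.8 every 20 epochs
```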
Optionally, the plurality of preset color spot levels includes a first color spot level, a second color spot level and a third color spot level, and the probability values include a first probability value that the color spot level of each target area image is the first color spot level, a second probability value that it is the second color spot level, and a third probability value that it is the third color spot level. The terminal device determining the color spot score corresponding to each target area image according to the probability values includes: the terminal device obtains the color spot score corresponding to each target area image according to the first probability value, the second probability value and the third probability value.
It should be noted that the first probability value, the second probability value, and the third probability value may be the same or different, and are not specifically limited herein.
Optionally, the terminal device obtaining the color spot score corresponding to each target area image according to the first probability value, the second probability value and the third probability value may include: the terminal device obtains the color spot score corresponding to each target area image according to a first formula.
The first formula is A = αP0 + βP1 - γP2 + η.
Here A is the color spot score corresponding to each target area image, P0 represents the first probability value, P1 represents the second probability value, P2 represents the third probability value, α represents the score coefficient corresponding to the first probability value, β represents the score coefficient corresponding to the second probability value, γ represents the score coefficient corresponding to the third probability value, η is a score constant, and none of the values of α, β, γ and η is negative.
Note that P0, P1 and P2 can only be positive numbers. The values of α, β and γ have a certain correspondence with the value of η, and the value of α cannot be 0. When β = 0, η is a first score constant; when γ = 0, η is a second score constant; β and γ cannot both be 0, and the first score constant is greater than the second score constant.
For example, A = 20P0 + 0P1 - 20P2 + 80, or A = 20P0 + 20P1 - 0P2 + 70.
Optionally, the terminal device obtaining the color spot score corresponding to each target area image according to the first probability value, the second probability value and the third probability value may also include: the terminal device obtains the color spot score corresponding to each target area image according to a second formula.
The second formula is A = αP0 - βP1 - γP2 + η.
Note that when α = 0, η is a third score constant, and the values of β and γ cannot be 0. The third score constant is greater than the first score constant and greater than the second score constant.
For example, A = 0P0 - 20P1 - 30P2 + 100.
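As a sketch, the first formula maps the three softmax probabilities to a score in a couple of lines. The default coefficients below reproduce the first worked example (α = 20, β = 0, γ = 20, η = 80); the probability triple in the usage comment is hypothetical.

```python
def spot_score(p0: float, p1: float, p2: float,
               alpha: float = 20.0, beta: float = 0.0,
               gamma: float = 20.0, eta: float = 80.0) -> float:
    """First formula: A = alpha*P0 + beta*P1 - gamma*P2 + eta."""
    return alpha * p0 + beta * p1 - gamma * p2 + eta

# With a hypothetical (P0, P1, P2) = (0.9, 0.08, 0.02):
# 20*0.9 + 0*0.08 - 20*0.02 + 80 = 97.6
print(spot_score(0.9, 0.08, 0.02))  # 97.6
```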
It will be appreciated that each of the plurality of preset color spot levels corresponds to a score value, and the color spot level and the score value are positively correlated: the higher the color spot level, the larger the corresponding score value. For example, the score value of the first color spot level is less than that of the second color spot level, and the score value of the second color spot level is less than that of the third color spot level.
104. Obtain the color spot score for the image to be identified according to the color spot score corresponding to each target area image.
Optionally, obtaining the color spot score for the image to be identified according to the color spot score corresponding to each target area image may include, but is not limited to, the following implementations:
Implementation 1: accumulate the color spot scores corresponding to the target area images to obtain a first calculated value, and obtain the color spot score for the image to be identified according to the number of target area images and the first calculated value.
For example, assuming the face area image scores 60 points, the forehead area image 65 points, the mandible area image 68 points and the eye area image 62 points, the color spot score of the image to be identified is (60 + 65 + 68 + 62) / 4 = 63.75 points.
Implementation 2: acquire a weight coefficient for the color spot score corresponding to each target area image, and obtain the color spot score for the image to be identified according to the color spot score corresponding to each target area image and its weight coefficient.
It should be noted that the weight coefficients may be set in the terminal device before it leaves the factory, or may be set by the user according to experimental data; they are not specifically limited here.
For example, assume the face area image scores 60 points, the forehead area image 65 points, the mandible area image 68 points and the eye area image 62 points, with weight coefficients of 0.3, 0.2, 0.2 and 0.3 respectively; the color spot score of the image to be identified is then 60 × 0.3 + 65 × 0.2 + 68 × 0.2 + 62 × 0.3 = 18 + 13 + 13.6 + 18.6 = 63.2 points.
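Both aggregation rules are one-liners; the sketch below reproduces the two worked examples. The function names are illustrative only.

```python
def overall_score_mean(region_scores: list[float]) -> float:
    """Implementation 1: accumulate, then divide by the number of regions."""
    return sum(region_scores) / len(region_scores)

def overall_score_weighted(region_scores: list[float],
                           weights: list[float]) -> float:
    """Implementation 2: weighted sum with per-region weight coefficients."""
    return sum(s * w for s, w in zip(region_scores, weights))

print(overall_score_mean([60, 65, 68, 62]))             # 63.75
print(overall_score_weighted([60, 65, 68, 62],
                             [0.3, 0.2, 0.2, 0.3]))     # 63.2 (up to float rounding)
```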
In this embodiment of the invention, the terminal device scores, with the preset color spot scoring model, the target area images corresponding to the key areas extracted from the image to be identified, and obtains the color spot score for the image to be identified according to the color spot score corresponding to each target area image. Because the preset color spot scoring model can be trained on a large number of samples and is therefore highly reliable, analyzing the images of the color spot areas with this model allows the terminal device to obtain a highly accurate color spot score, effectively improving the accuracy with which the terminal device evaluates facial color spots.
As shown in FIG. 2a, another embodiment of the image-processing-based color spot scoring method according to an embodiment of the present invention may include the following steps:
201. Acquire an image to be identified.
202. Extract the target area image corresponding to each key area from the image to be identified.
The key areas include at least a face area, a forehead area, a mandible area and an eye area.
203. Determine the color spot score corresponding to each target area image using a preset color spot scoring model.
204. Obtain the color spot score for the image to be identified according to the color spot score corresponding to each target area image.
It should be noted that steps 201 to 204 are similar to steps 101 to 104 shown in FIG. 1 and are not repeated here.
Optionally, after step 204, the method may further include: generating and outputting a skin quality suggestion according to the color spot score of the image to be identified.
The skin quality suggestion means that the terminal device can give the user targeted advice on how to fade the color spots according to the color spot score of the image to be identified.
205. Perform gray processing on each target area image to obtain the gray map corresponding to each target area image.
It may be understood that gray processing means that the terminal device processes the three color components of the target area image: red (R), green (G) and blue (B). Gray processing may include, but is not limited to, the following four methods: the component method, the maximum method, the average method and the weighted average method.
The component method means that the terminal device takes the R, G and B components of the target area image as three target color components: R as a first target color component with gray value N, G as a second target color component with gray value P, and B as a third target color component with gray value Q. The maximum method means that the terminal device takes the color component with the largest brightness value among R, G and B as the maximum target color component, whose gray value is M. The average method means that the terminal device averages the three brightness values of R, G and B to obtain a fourth target color component whose gray value is the average gray value of R, G and B. The weighted average method means that the terminal device takes a weighted average of the three brightness values of R, G and B with different weights to obtain a fifth target color component whose gray value is the weighted average gray value H of R, G and B.
Note that N, P, Q, M and H each denote a gray value derived from R, G and B; they may be the same or different, and are not specifically limited here.
Optionally, the terminal device performing gray processing on each target area image to obtain the corresponding gray map may include: the terminal device performs gray processing on each target area image according to a first parameter, where the first parameter may be an optimal parameter derived from a large amount of experimental data.
FIG. 2b shows an embodiment of the first parameter.
FIG. 2c shows an embodiment of the gray map.
206. In each gray map, acquire the position information of the color spot areas.
It should be noted that the color spots stand out clearly in the gray map, which makes it convenient for the terminal device to acquire the position information of the areas where the color spots are located.
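A minimal OpenCV sketch of steps 205 and 206 follows. Since the patent's own first parameter is only given in FIG. 2b, the sketch substitutes OpenCV's standard weighted-average grayscale conversion, and Otsu thresholding plus contour extraction are assumptions for how the spot positions are read off the gray map.

```python
import cv2

def spot_positions_from_gray(region_bgr):
    """Gray-process a target area image and return bounding boxes of the
    darker spot areas as (x, y, w, h) tuples."""
    # Weighted-average gray processing (stand-in for the first parameter).
    gray = cv2.cvtColor(region_bgr, cv2.COLOR_BGR2GRAY)
    # Spots are darker than the surrounding skin, so threshold inverted;
    # Otsu picks the threshold automatically (an assumption here).
    _, mask = cv2.threshold(gray, 0, 255,
                            cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    return [cv2.boundingRect(c) for c in contours]
```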
207. Mark the color spots in the image to be identified according to the position information.
It should be noted that the image to be identified is a color image.
Optionally, the terminal device marking the color spots in the image to be identified according to the position information may include, but is not limited to, the following implementations (a marking sketch follows the three implementations below):
Implementation 1: the terminal device determines, according to the position information, the marked areas corresponding to the color spot areas in the image to be identified, and marks the boundaries of the marked areas in a preset manner.
The position information may include the coordinate points corresponding to each color spot area, from which the terminal device can determine the corresponding marked area; the preset shape may be a circle, a triangle, a rectangle or the like, and is not specifically limited here.
Implementation 2: the terminal device performs threshold segmentation on the image to be identified to obtain a first image, and marks the color spots in the first image according to the position information.
It should be noted that threshold segmentation means that the terminal device keeps the pixel values within a preset pixel value range and filters out the pixel values outside that range to obtain the first image, and then marks the color spots in the first image according to the position information. This effectively avoids the influence of hair, facial features and the like on the marking result.
The preset pixel value range is an interval constructed from a first pixel threshold and a second pixel threshold: a pixel value within the preset range is greater than the first pixel threshold and less than or equal to the second pixel threshold.
Implementation 3: the terminal device marks the color spots in each target area image according to the position information.
It should be noted that each target area image is a color image.
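The sketch below illustrates implementation 1 with rectangles (one of the preset shapes named above); the drawing color and line thickness are arbitrary assumptions.

```python
import cv2

def mark_spots(image_bgr, boxes, color=(0, 0, 255), thickness=2):
    """Draw a rectangular boundary around each marked area.
    boxes holds (x, y, w, h) tuples, e.g. from spot_positions_from_gray."""
    out = image_bgr.copy()
    for (x, y, w, h) in boxes:
        cv2.rectangle(out, (x, y), (x + w, y + h), color, thickness)
    return out
```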
Optionally, after step 207, the method may further include: the terminal device determining the number of color spots in the user's face area according to the number of coordinate points.
Optionally, FIG. 2d shows an embodiment of color spot marking, and FIG. 2e shows another embodiment of color spot marking.
It should be noted that the drawings in this embodiment may include, but are not limited to: the color spot score, the color spot level and the number of color spots.
In this embodiment of the invention, the terminal device scores, with the preset color spot scoring model, the target area images corresponding to the key areas extracted from the image to be identified, and obtains the color spot score for the image to be identified according to the color spot score corresponding to each target area image. Because the preset color spot scoring model can be trained on a large number of samples and is therefore highly reliable, analyzing the images of the color spot areas with this model allows the terminal device to obtain a highly accurate color spot score, effectively improving the accuracy with which the terminal device evaluates facial color spots and allowing the user to keep track of the appearance of color spots on his or her face in time.
As shown in FIG. 3a, another embodiment of the image-processing-based color spot scoring method according to an embodiment of the present invention may include the following steps:
301. Acquire an image to be identified.
302. Extract the target area image corresponding to each key area from the image to be identified.
The key areas include at least a face area, a forehead area, a mandible area and an eye area.
303. Determine the color spot score corresponding to each target area image using a preset color spot scoring model.
304. Obtain the color spot score for the image to be identified according to the color spot score corresponding to each target area image.
It should be noted that steps 301 to 304 are similar to steps 201 to 204 shown in FIG. 2a and are not repeated here.
305. Convert the RGB color space of each target area image into the HSV color space.
The HSV color space is a color model based on three parameters: hue (H), saturation (S) and value (V, brightness). H is measured as an angle ranging from 0° to 360°, with red at 0°, green at 120° and blue at 240°; S represents how close the color is to the pure spectral color; V represents the brightness of the color.
306. Extract the first feature map corresponding to the B component from each target area image in the RGB color space.
It will be appreciated that, in an image in the RGB color space, most of the pixels of the face area are concentrated in the B component, and the background pixels are fewest in the B component; therefore, the face area is more prominent in the boundary map obtained by performing edge enhancement on the B-component pixels of each target area image in the RGB color space.
307. Extract the second feature map corresponding to the S component from each target area image in the HSV color space.
308. Determine the target feature map according to the first feature map and the second feature map.
Optionally, the determining, by the terminal device, the target feature map according to the first feature map and the second feature map may include: and the terminal equipment processes the first feature map and the second feature map according to the second parameter to obtain a target feature map. Wherein the second parameter may be a best-optimized parameter derived from a large amount of experimental data.
For example, fig. 3b shows a second parameter in an embodiment of the present invention.
Optionally, the determining, by the terminal device, the target feature map according to the first feature map and the second feature map may also include: the terminal device determines a residual map, as the target feature map, from the difference between the S component and the B component.
Optionally, fig. 3c shows an embodiment of the residual map in the embodiment of the present invention.
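As a hedged sketch of steps 306-308 (the patent does not prescribe a concrete implementation, and the file name is a placeholder), the first feature map can be taken as the B channel of the RGB image, the second feature map as the S channel of the HSV image, and the residual map as their saturating difference:

    import cv2

    bgr = cv2.imread("target_area.png")
    rgb = cv2.cvtColor(bgr, cv2.COLOR_BGR2RGB)

    first_map = rgb[:, :, 2]                    # B component of the RGB image
    hsv = cv2.cvtColor(rgb, cv2.COLOR_RGB2HSV)
    second_map = hsv[:, :, 1]                   # S component of the HSV image

    # Residual map from the difference of the S and B components;
    # cv2.subtract saturates at 0 instead of wrapping around.
    residual_map = cv2.subtract(second_map, first_map)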
309. Acquiring the position information of the color spot area in each target feature map.
It should be noted that the color spots stand out clearly in the target feature map, which makes it convenient for the terminal device to acquire the position information of the area where the color spots are located.
Optionally, the obtaining, by the terminal device, the position information of the color spot area in each target feature map may include: the terminal device performs local histogram equalization on each target feature map to obtain a second image corresponding to each target feature map, and acquires the position information of the color spot area in each second image.
In this way, the terminal device enlarges the difference between the color spots and normal skin, which facilitates acquiring the position information of the color spot areas.
Here, the local histogram equalization combines smoothing of the target feature map (image smoothing for short) with sharpening of the target feature map (image sharpening for short). Image smoothing is a low-frequency enhanced spatial filtering technique: it blurs the target feature map and can eliminate noise from it. It generally adopts a simple averaging method, i.e., computing the average brightness over a neighborhood of adjacent pixel points. The neighborhood size is directly related to the smoothing effect: the larger the neighborhood, the stronger the smoothing, but also the greater the loss of edge information in the target feature map, which blurs the second image output later. The terminal device therefore needs to set an appropriate neighborhood size to preserve the definition of the second image. Image sharpening, by contrast, is the inverse operation: a high-frequency enhanced spatial filtering technique. It reduces blur in the target feature map by enhancing the high-frequency components, i.e., the detail edges and contours, and at the same time increases the gray-level contrast, yielding a clearer second image; however, while enhancing detail edges, it also amplifies the noise in the target feature map. For this reason, the terminal device performs local histogram equalization on the target feature map by combining image smoothing and image sharpening to obtain the second image.
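One plausible realization of the above (the concrete operators, kernel sizes and clip limit are illustrative assumptions; the patent only describes the combination of smoothing and sharpening in general terms) is:

    import cv2
    import numpy as np

    def local_equalize(feature_map: np.ndarray) -> np.ndarray:
        # Image smoothing: neighborhood averaging via a Gaussian kernel;
        # a larger kernel smooths more but loses more edge information.
        smoothed = cv2.GaussianBlur(feature_map, (5, 5), 0)

        # Image sharpening: unsharp masking enhances high-frequency detail
        # (edges and contours) at the cost of amplifying noise.
        sharpened = cv2.addWeighted(feature_map, 1.5, smoothed, -0.5, 0)

        # Local histogram equalization (CLAHE here) to enlarge the contrast
        # between color spots and normal skin.
        clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
        return clahe.apply(sharpened)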
310. Marking the color spots in the image to be identified according to the position information.
It should be noted that step 310 is similar to step 207 shown in fig. 2 in this embodiment, and will not be described here again.
In the embodiment of the invention, the terminal device can use a preset color spot scoring model to score the color spots in the target area images corresponding to the key areas extracted from the image to be identified, and obtain the color spot score for the image to be identified according to the color spot score corresponding to each target area image. Since the preset color spot scoring model can be trained on a large number of samples, it is highly reliable; by analyzing the images of the color spot regions with this model, the terminal device can obtain a highly accurate color spot score. This effectively improves the accuracy with which the terminal device evaluates facial color spots and allows the user to keep track of the condition of the color spots on his or her face in a timely manner.
As shown in fig. 4, which is a schematic diagram of an embodiment of a color spot scoring device according to an embodiment of the present invention, the color spot scoring device may include: an acquisition module 401 and a processing module 402.
an acquisition module 401, configured to acquire an image to be identified and extract target area images corresponding to each key area from the image to be identified, wherein each key area at least comprises a face region, a forehead region, a mandible region and an eye region;
a processing module 402, configured to determine the color spot score corresponding to each target area image by using a preset color spot scoring model;
the acquisition module 401 is further configured to obtain the color spot score for the image to be identified according to the color spot score corresponding to each target area image.
Optionally, in some embodiments of the invention,
the processing module 402 is specifically configured to determine, by using a preset algorithm, first face feature points in the image to be identified; determine a target area image corresponding to the face area according to the first face feature points; determine, by using the preset algorithm, second face feature points in the target area image corresponding to the face area; and determine a target area image corresponding to the forehead area, a target area image corresponding to the mandible area and a target area image corresponding to the eye area according to the second face feature points.
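As a purely illustrative sketch (the patent leaves the preset algorithm unspecified, so dlib's 68-point landmark detector and its external model file shape_predictor_68_face_landmarks.dat are assumptions here, as is the input file name), face feature points and the corresponding face region could be obtained as follows:

    import cv2
    import dlib

    detector = dlib.get_frontal_face_detector()
    predictor = dlib.shape_predictor("shape_predictor_68_face_landmarks.dat")

    img = cv2.imread("face.jpg")                 # hypothetical input image
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

    face = detector(gray)[0]                     # first detected face rectangle
    points = predictor(gray, face)               # 68 face feature points

    # Crop the face region; forehead, mandible and eye sub-regions could
    # then be located from subsets of the landmark points.
    face_img = img[face.top():face.bottom(), face.left():face.right()]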
Optionally, in some embodiments of the invention,
the preset color spot scoring model comprises a lightweight convolutional neural network, and the lightweight convolutional neural network comprises a plurality of preset color spot grades; the acquisition module 401 is specifically configured to perform feature extraction on each target area image by using the lightweight convolutional neural network to obtain a feature vector of each target area image;
the processing module 402 is specifically configured to determine, according to the normalized exponential function and the feature vector, a probability value that the color spot level of each target area image is each preset color spot level;
the acquisition module 401 is specifically configured to determine the color spot score corresponding to each target area image according to the probability value.
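The normalized exponential function referred to above is the softmax function; a minimal sketch with three preset color spot levels (the logit values are made up for illustration) is:

    import numpy as np

    def softmax(logits: np.ndarray) -> np.ndarray:
        # Subtract the maximum for numerical stability before exponentiating.
        e = np.exp(logits - logits.max())
        return e / e.sum()

    # Feature-vector-derived logits for three preset levels (illustrative).
    probs = softmax(np.array([2.1, 0.3, -1.2]))   # P0, P1, P2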
Optionally, in some embodiments of the invention,
the plurality of preset color spot levels comprise a first color spot level, a second color spot level and a third color spot level, and the probability values comprise a first probability value that the color spot level of each target area image is the first color spot level, a second probability value that the color spot level of each target area image is the second color spot level, and a third probability value that the color spot level of each target area image is the third color spot level;
the processing module 402 is specifically configured to obtain the color spot score corresponding to each target area image according to the first probability value, the second probability value and the third probability value.
Optionally, in some embodiments of the invention,
the processing module 402 is specifically configured to obtain the color spot score corresponding to each target area image according to a first formula; the first formula is A = αP0 + βP1 - γP2 + η, wherein A is the color spot score corresponding to each target area image, P0 represents the first probability value, P1 represents the second probability value, P2 represents the third probability value, α represents the score coefficient corresponding to the first probability value, β represents the score coefficient corresponding to the second probability value, γ represents the score coefficient corresponding to the third probability value, η is a score constant, and none of α, β, γ and η is a negative number.
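A direct transcription of the first formula as code (the coefficient values below are illustrative assumptions; the patent only requires that α, β, γ and η are not negative) might read:

    def spot_score(p0: float, p1: float, p2: float,
                   alpha: float = 100.0, beta: float = 60.0,
                   gamma: float = 20.0, eta: float = 0.0) -> float:
        # A = alpha*P0 + beta*P1 - gamma*P2 + eta
        return alpha * p0 + beta * p1 - gamma * p2 + eta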
Optionally, in some embodiments of the invention,
the processing module 402 is specifically configured to accumulate the color spot scores corresponding to the target area images to obtain a first calculated value, and the acquisition module 401 is specifically configured to obtain the color spot score for the image to be identified according to the number of the target area images and the first calculated value; or alternatively,
the acquisition module 401 is specifically configured to acquire a weight coefficient of the color spot score corresponding to each target area image, and obtain the color spot score for the image to be identified according to the color spot score corresponding to each target area image and the weight coefficient of the color spot score corresponding to each target area image.
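The two aggregation options just described can be sketched as follows (the weights are illustrative assumptions; in the weighted variant they would typically sum to 1):

    def overall_score_average(scores):
        # First calculated value (sum of per-region scores) divided by
        # the number of target area images.
        return sum(scores) / len(scores)

    def overall_score_weighted(scores, weights):
        # Weighted combination of the per-region color spot scores.
        return sum(s * w for s, w in zip(scores, weights))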
Optionally, in some embodiments of the invention,
the processing module 402 is further configured to perform gray-scale processing on each target area image to obtain a gray-scale map corresponding to each target area image, and the acquisition module 401 is further configured to acquire the position information of the color spot area in each gray-scale map; or alternatively,
the processing module 402 is further configured to convert the RGB color space of each target area image into the HSV color space; extract a first feature map corresponding to the B component from each target area image in the RGB color space; extract a second feature map corresponding to the S component from each target area image in the HSV color space; and determine a target feature map according to the first feature map and the second feature map; the acquisition module 401 is further configured to acquire the position information of the color spot area in each target feature map;
the processing module 402 is further configured to mark the color spots in the image to be identified according to the position information.
Optionally, in some embodiments of the invention,
the processing module 402 is specifically configured to determine, according to the position information, a marking area corresponding to the color spot area in the image to be identified, and mark the boundary of the marking area in a preset manner.
As shown in fig. 5, which is a schematic diagram of another embodiment of the color spot scoring device in the embodiment of the present invention, the color spot scoring device may include: a processor 501 and a memory 502.
In the present embodiment, the processor 501 has the following functions:
acquiring an image to be identified;
extracting target area images corresponding to each key area from the image to be identified; wherein each key region at least comprises a face region, a forehead region, a mandible region and an eye region;
determining the color spot score corresponding to each target area image by using a preset color spot scoring model;
and obtaining the color spot score for the image to be identified according to the color spot score corresponding to each target area image.
Optionally, the processor 501 also has the following functions:
determining first face feature points in the image to be identified through a preset algorithm; determining a target area image corresponding to the face area according to the first face feature points; determining second face feature points in the target area image corresponding to the face area through the preset algorithm; and determining a target area image corresponding to the forehead area, a target area image corresponding to the mandible area and a target area image corresponding to the eye area according to the second face feature points.
Optionally, the processor 501 also has the following functions:
the preset color spot scoring model comprises a lightweight convolutional neural network, and the lightweight convolutional neural network comprises a plurality of preset color spot grades; performing feature extraction on each target area image by using the lightweight convolutional neural network to obtain a feature vector of each target area image; determining, according to the normalized exponential function and the feature vector, a probability value that the color spot level of each target area image is each preset color spot level; and determining the color spot score corresponding to each target area image according to the probability value.
Optionally, the processor 501 also has the following functions:
the plurality of preset color spot levels comprise a first color spot level, a second color spot level and a third color spot level, and the probability values comprise a first probability value that the color spot level of each target area image is the first color spot level, a second probability value that the color spot level of each target area image is the second color spot level, and a third probability value that the color spot level of each target area image is the third color spot level; and obtaining the color spot score corresponding to each target area image according to the first probability value, the second probability value and the third probability value.
Optionally, the processor 501 also has the following functions:
obtaining the color spot score corresponding to each target area image according to a first formula; the first formula is A = αP0 + βP1 - γP2 + η, wherein A is the color spot score corresponding to each target area image, P0 represents the first probability value, P1 represents the second probability value, P2 represents the third probability value, α represents the score coefficient corresponding to the first probability value, β represents the score coefficient corresponding to the second probability value, γ represents the score coefficient corresponding to the third probability value, η is a score constant, and none of α, β, γ and η is a negative number.
Optionally, the processor 501 also has the following functions:
accumulating the color spot scores corresponding to the target area images to obtain a first calculated value, and obtaining the color spot score for the image to be identified according to the number of the target area images and the first calculated value; or, acquiring a weight coefficient of the color spot score corresponding to each target area image, and obtaining the color spot score for the image to be identified according to the color spot score corresponding to each target area image and the weight coefficient of the color spot score corresponding to each target area image.
Optionally, the processor 501 also has the following functions:
performing gray-scale processing on each target area image to obtain a gray-scale map corresponding to each target area image, and acquiring the position information of the color spot area in each gray-scale map; or converting the RGB color space of each target area image into the HSV color space, extracting a first feature map corresponding to the B component from each target area image in the RGB color space, extracting a second feature map corresponding to the S component from each target area image in the HSV color space, determining a target feature map according to the first feature map and the second feature map, and acquiring the position information of the color spot area in each target feature map; and marking the color spots in the image to be identified according to the position information.
Optionally, the processor 501 also has the following functions:
determining a marking area corresponding to the color spot area in the image to be identified according to the position information, and marking the boundary of the marking area in a preset manner.
In the present embodiment, the memory 502 is configured to store the processing procedures and processing results of the processor 501.
Fig. 6 is a schematic diagram of an embodiment of a terminal device according to the embodiment of the present invention; the terminal device may include a color spot scoring device as shown in fig. 4 or fig. 5.
The above embodiments may be implemented in whole or in part by software, hardware, firmware, or any combination thereof. When implemented in software, they may be implemented in whole or in part in the form of a computer program product.
The computer program product includes one or more computer instructions. When the computer instructions are loaded and executed on a computer, the flows or functions according to the embodiments of the present invention are produced in whole or in part. The computer may be a general-purpose computer, a special-purpose computer, a computer network, or another programmable apparatus. The computer instructions may be stored in a computer-readable storage medium or transmitted from one computer-readable storage medium to another; for example, the computer instructions may be transmitted from one website, computer, server, or data center to another website, computer, server, or data center by wire (e.g., coaxial cable, optical fiber, Digital Subscriber Line (DSL)) or wirelessly (e.g., infrared, radio, microwave). The computer-readable storage medium may be any available medium that can be accessed by a computer, or a data storage device such as a server or data center that integrates one or more available media. The available medium may be a magnetic medium (e.g., floppy disk, hard disk, magnetic tape), an optical medium (e.g., DVD), or a semiconductor medium (e.g., Solid State Disk (SSD)), etc.
It will be clear to those skilled in the art that, for convenience and brevity of description, specific working procedures of the above-described systems, apparatuses and units may refer to corresponding procedures in the foregoing method embodiments, which are not repeated herein.
In the several embodiments provided in the present invention, it should be understood that the disclosed systems, devices, and methods may be implemented in other manners. For example, the apparatus embodiments described above are merely illustrative, e.g., the division of the units is merely a logical function division, and there may be additional divisions when actually implemented, e.g., multiple units or components may be combined or integrated into another system, or some features may be omitted or not performed. Alternatively, the coupling or direct coupling or communication connection shown or discussed with each other may be an indirect coupling or communication connection via some interfaces, devices or units, which may be in electrical, mechanical or other form.
The units described as separate units may or may not be physically separate, and units shown as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, each functional unit in the embodiments of the present invention may be integrated in one processing unit, or each unit may exist alone physically, or two or more units may be integrated in one unit. The integrated units may be implemented in hardware or in software functional units.
If the integrated units are implemented in the form of software functional units and sold or used as stand-alone products, they may be stored in a computer-readable storage medium. Based on such understanding, the technical solution of the present invention, in essence, or the part contributing to the prior art, or all or part of the technical solution, may be embodied in the form of a software product stored in a storage medium, including several instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) to perform all or part of the steps of the methods according to the embodiments of the present invention. The aforementioned storage medium includes various media capable of storing program code, such as a USB flash drive, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, or an optical disk.
The above embodiments are only for illustrating the technical solution of the present invention, and not for limiting the same; although the invention has been described in detail with reference to the foregoing embodiments, it will be understood by those of ordinary skill in the art that: the technical scheme described in the foregoing embodiments can be modified or some technical features thereof can be replaced by equivalents; such modifications and substitutions do not depart from the spirit and scope of the technical solutions of the embodiments of the present invention.

Claims (8)

1. A color spot scoring method based on image processing, comprising:
acquiring an image to be identified;
extracting target area images corresponding to each key area from the image to be identified; wherein each key area at least comprises a facial region, a forehead region, a mandibular region and an eye region;
determining the color spot score corresponding to each target area image by using a preset color spot scoring model;
obtaining a color spot score for the image to be identified according to the color spot score corresponding to each target area image;
the preset stain scoring model comprises a lightweight convolutional neural network, the lightweight convolutional neural network comprises a plurality of preset stain grades, and the method for determining the stain score corresponding to each target area image by using the preset stain scoring model comprises the following steps:
Extracting the characteristics of each target area image by using the lightweight convolutional neural network to obtain the characteristic vector of each target area image;
determining the probability value of the color spot level of each target area image as each preset color spot level according to the normalized exponential function and the feature vector;
determining the color spot score corresponding to each target area image according to the probability value;
the step of obtaining the stain score for the image to be identified according to the stain score corresponding to each target area image comprises the following steps:
accumulating the color spot scores corresponding to each target area image to obtain a first calculated value; obtaining a color spot score aiming at the image to be identified according to the number of the target area images and the first calculated value; or alternatively, the first and second heat exchangers may be,
obtaining a weight coefficient of the color spot score corresponding to each target area image; and obtaining the color spot score aiming at the image to be identified according to the color spot score corresponding to each target area image and the weight coefficient of the color spot score corresponding to each target area image.
2. The method according to claim 1, wherein the extracting the target area images corresponding to each key area from the image to be identified comprises:
determining first face feature points in the image to be identified through a preset algorithm;
determining a target area image corresponding to the face area according to the first face feature points;
determining second face feature points in the target area image corresponding to the face area through the preset algorithm;
and determining a target area image corresponding to the forehead area, a target area image corresponding to the mandible area and a target area image corresponding to the eye area according to the second face feature points.
3. The method of claim 1, wherein the plurality of preset color spot levels comprise a first color spot level, a second color spot level and a third color spot level, and the probability values comprise a first probability value that the color spot level of each target area image is the first color spot level, a second probability value that the color spot level of each target area image is the second color spot level, and a third probability value that the color spot level of each target area image is the third color spot level;
the determining the color spot score corresponding to each target area image according to the probability value comprises:
obtaining the color spot score corresponding to each target area image according to the first probability value, the second probability value and the third probability value.
4. The method of claim 3, wherein the obtaining the color spot score corresponding to each target area image according to the first probability value, the second probability value and the third probability value comprises:
obtaining the color spot score corresponding to each target area image according to a first formula;
the first formula is A = αP0 + βP1 - γP2 + η; wherein A is the color spot score corresponding to each target area image, P0 represents the first probability value, P1 represents the second probability value, P2 represents the third probability value, α represents the score coefficient corresponding to the first probability value, β represents the score coefficient corresponding to the second probability value, γ represents the score coefficient corresponding to the third probability value, η is a score constant, and none of α, β, γ and η is a negative number.
5. The method according to claim 1, wherein the method further comprises:
performing gray-scale processing on each target area image to obtain a gray-scale map corresponding to each target area image, and acquiring the position information of the color spot area in each gray-scale map; or alternatively,
converting the RGB color space of each target area image into the HSV color space; extracting a first feature map corresponding to the B component from each target area image in the RGB color space; extracting a second feature map corresponding to the S component from each target area image in the HSV color space; determining a target feature map according to the first feature map and the second feature map; and acquiring the position information of the color spot area in each target feature map;
and marking the color spots in the image to be identified according to the position information.
6. The method of claim 5, wherein the marking the color spots in the image to be identified according to the position information comprises:
determining a marking area corresponding to the color spot area in the image to be identified according to the position information;
marking the boundary of the marking area in a preset manner.
7. A color spot scoring device, comprising:
an acquisition module, configured to acquire an image to be identified and extract target area images corresponding to each key area from the image to be identified, wherein each key area at least comprises a facial region, a forehead region, a mandibular region and an eye region;
a processing module, configured to determine the color spot score corresponding to each target area image by using a preset color spot scoring model;
the acquisition module is further configured to obtain the color spot score for the image to be identified according to the color spot score corresponding to each target area image;
wherein the preset color spot scoring model comprises a lightweight convolutional neural network, and the lightweight convolutional neural network comprises a plurality of preset color spot grades; the processing module is specifically configured to perform feature extraction on each target area image by using the lightweight convolutional neural network to obtain a feature vector of each target area image; determine, according to the normalized exponential function and the feature vector, a probability value that the color spot level of each target area image is each preset color spot level; and determine the color spot score corresponding to each target area image according to the probability value;
the acquisition module is specifically configured to accumulate the color spot scores corresponding to the target area images to obtain a first calculated value, and obtain the color spot score for the image to be identified according to the number of the target area images and the first calculated value; or alternatively,
the acquisition module is specifically configured to acquire a weight coefficient of the color spot score corresponding to each target area image, and obtain the color spot score for the image to be identified according to the color spot score corresponding to each target area image and the weight coefficient of the color spot score corresponding to each target area image.
8. A color spot scoring device, comprising:
a memory storing executable program code;
and a processor coupled to the memory;
wherein the processor invokes the executable program code stored in the memory, and when the executable program code is executed by the processor, the processor implements the method of any one of claims 1-6.
CN202110363504.XA 2021-04-02 2021-04-02 Image processing-based color spot scoring method, color spot scoring device and terminal equipment Active CN113128373B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110363504.XA CN113128373B (en) 2021-04-02 2021-04-02 Image processing-based color spot scoring method, color spot scoring device and terminal equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110363504.XA CN113128373B (en) 2021-04-02 2021-04-02 Image processing-based color spot scoring method, color spot scoring device and terminal equipment

Publications (2)

Publication Number Publication Date
CN113128373A CN113128373A (en) 2021-07-16
CN113128373B true CN113128373B (en) 2024-04-09

Family

ID=76774790

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110363504.XA Active CN113128373B (en) 2021-04-02 2021-04-02 Image processing-based color spot scoring method, color spot scoring device and terminal equipment

Country Status (1)

Country Link
CN (1) CN113128373B (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113128375B (en) * 2021-04-02 2024-05-10 西安融智芙科技有限责任公司 Image recognition method, electronic device, and computer-readable storage medium
CN113743284B (en) * 2021-08-30 2024-08-13 杭州海康威视数字技术股份有限公司 Image recognition method, device, equipment, camera and access control equipment
CN115131822B (en) * 2022-07-06 2024-10-29 上海睿触科技有限公司 Skin stain identification method based on deep learning

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106529429A (en) * 2016-10-27 2017-03-22 中国计量大学 Image recognition-based facial skin analysis system
CN109844804A (en) * 2017-08-24 2019-06-04 华为技术有限公司 A kind of method, apparatus and terminal of image detection
CN110097034A (en) * 2019-05-15 2019-08-06 广州纳丽生物科技有限公司 A kind of identification and appraisal procedure of Intelligent human-face health degree
CN110458829A (en) * 2019-08-13 2019-11-15 腾讯医疗健康(深圳)有限公司 Image quality control method, device, equipment and storage medium based on artificial intelligence
CN110473199A (en) * 2019-08-21 2019-11-19 广州纳丽生物科技有限公司 A kind of detection of color spot acne and health assessment method based on the segmentation of deep learning example
CN111428553A (en) * 2019-12-31 2020-07-17 深圳数联天下智能科技有限公司 Face pigment spot recognition method and device, computer equipment and storage medium
CN112037162A (en) * 2019-05-17 2020-12-04 华为技术有限公司 Facial acne detection method and equipment

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9697595B2 (en) * 2014-11-26 2017-07-04 Adobe Systems Incorporated Content aware fill based on similar images

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106529429A (en) * 2016-10-27 2017-03-22 中国计量大学 Image recognition-based facial skin analysis system
CN109844804A (en) * 2017-08-24 2019-06-04 华为技术有限公司 A kind of method, apparatus and terminal of image detection
CN110097034A (en) * 2019-05-15 2019-08-06 广州纳丽生物科技有限公司 A kind of identification and appraisal procedure of Intelligent human-face health degree
CN112037162A (en) * 2019-05-17 2020-12-04 华为技术有限公司 Facial acne detection method and equipment
CN110458829A (en) * 2019-08-13 2019-11-15 腾讯医疗健康(深圳)有限公司 Image quality control method, device, equipment and storage medium based on artificial intelligence
CN110473199A (en) * 2019-08-21 2019-11-19 广州纳丽生物科技有限公司 A kind of detection of color spot acne and health assessment method based on the segmentation of deep learning example
CN111428553A (en) * 2019-12-31 2020-07-17 深圳数联天下智能科技有限公司 Face pigment spot recognition method and device, computer equipment and storage medium

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Biometric identification via retina scanning with liveness detection using speckle contrast imaging;Nazariy K. Shaydyuk等;《2016 IEEE International Carnahan Conference on Security Technology (ICCST)》;第1-5页 *
基于图像处理的皮肤健康检测研究;李顾全等;《电子世界》(第21期);第5-7页 *

Also Published As

Publication number Publication date
CN113128373A (en) 2021-07-16

Similar Documents

Publication Publication Date Title
CN113128373B (en) Image processing-based color spot scoring method, color spot scoring device and terminal equipment
Aquino et al. vitisBerry: An Android-smartphone application to early evaluate the number of grapevine berries by means of image analysis
Kim et al. Salient region detection via high-dimensional color transform
CN106056064B (en) A kind of face identification method and face identification device
US20160162673A1 (en) Technologies for learning body part geometry for use in biometric authentication
CN107633204A (en) Face occlusion detection method, apparatus and storage medium
CN111062891A (en) Image processing method, device, terminal and computer readable storage medium
CN104123543A (en) Eyeball movement identification method based on face identification
CN109725721B (en) Human eye positioning method and system for naked eye 3D display system
CN110807427A (en) Sight tracking method and device, computer equipment and storage medium
CN110415212A (en) Abnormal cell detection method, device and computer readable storage medium
WO2019061659A1 (en) Method and device for removing eyeglasses from facial image, and storage medium
US20200058136A1 (en) Line-of-sight estimation device, line-of-sight estimation method, and program recording medium
WO2022227547A1 (en) Method and apparatus for image processing, electronic device, and storage medium
CN109886195B (en) Skin identification method based on near-infrared monochromatic gray-scale image of depth camera
CN110036407B (en) System and method for correcting digital image color based on human sclera and pupil
JPWO2017061106A1 (en) Information processing apparatus, image processing system, image processing method, and program
CN112102207A (en) Method and device for determining temperature, electronic equipment and readable storage medium
CN113128374B (en) Sensitive skin detection method and sensitive skin detection device based on image processing
Fathee et al. Iris segmentation in uncooperative and unconstrained environments: state-of-the-art, datasets and future research directions
CN107491718A (en) The method that human hand Face Detection is carried out under different lightness environment
Fathy et al. Benchmarking of pre-processing methods employed in facial image analysis
CN113128372B (en) Blackhead identification method and blackhead identification device based on image processing and terminal equipment
CN107563362B (en) Method, client and system for evaluation operation
CN114648512A (en) Sublingual image analysis method, apparatus, computer device and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant