
CN117274673A - Correlation calculation method of human face and macaque face based on acne - Google Patents


Info

Publication number
CN117274673A
Authority
CN
China
Prior art keywords
face
image
acne
calculating
area
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202311127893.1A
Other languages
Chinese (zh)
Inventor
刘娟秀
闫咏梅
贺涛
刘伟
张静
刘霖
杜晓辉
郝如茜
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
University of Electronic Science and Technology of China
Original Assignee
University of Electronic Science and Technology of China
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date: 2023-09-01
Publication date: 2023-12-22
Application filed by University of Electronic Science and Technology of China
Priority to CN202311127893.1A
Publication of CN117274673A
Legal status: Pending

Classifications

    • G06V 10/764: Image or video recognition using pattern recognition or machine learning; classification, e.g. of video objects
    • G06V 10/774: Processing image or video features in feature spaces; generating sets of training patterns, e.g. bagging or boosting
    • G06N 3/0464: Computing arrangements based on biological models; neural networks; convolutional networks [CNN, ConvNet]
    • G06N 3/08: Computing arrangements based on biological models; neural networks; learning methods
    • G06T 3/4053: Scaling of whole images or parts thereof based on super-resolution, i.e. the output image resolution being higher than the sensor resolution
    • G06T 5/50: Image enhancement or restoration using two or more images, e.g. averaging or subtraction
    • G06T 7/11: Image analysis; segmentation; region-based segmentation
    • G06T 7/136: Image analysis; segmentation or edge detection involving thresholding
    • G06T 7/73: Determining position or orientation of objects or cameras using feature-based methods
    • G06T 2207/10048: Image acquisition modality; infrared image
    • G06T 2207/20081: Special algorithmic details; training or learning
    • G06T 2207/20084: Special algorithmic details; artificial neural networks [ANN]
    • G06T 2207/20221: Special algorithmic details; image fusion or image merging

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Artificial Intelligence (AREA)
  • Computing Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Evolutionary Computation (AREA)
  • General Health & Medical Sciences (AREA)
  • Software Systems (AREA)
  • Databases & Information Systems (AREA)
  • Medical Informatics (AREA)
  • Multimedia (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Molecular Biology (AREA)
  • General Engineering & Computer Science (AREA)
  • Mathematical Physics (AREA)
  • Measurement Of The Respiration, Hearing Ability, Form, And Blood Characteristics Of Living Organisms (AREA)

Abstract

The invention discloses an acne-based method for calculating the correlation between the human face and the macaque face, belonging to the field of digital image processing and deep learning. The method covers image processing, neural networks, multi-modal fusion, contour similarity calculation, and dataset construction and acquisition. By enlarging the dataset, superimposing and fusing the segmentation results of many faces, and fitting contours, the method suppresses individual idiosyncrasies and investigates what the faces have in common. Computing the similarity of the fitted contours then yields a conclusion on whether the acne-prone area of the human face is associated with the furred region of the macaque face.

Description

Correlation calculation method of human face and macaque face based on acne
Technical Field
The invention belongs to the field of digital image processing and deep learning, and in particular relates to an automatic segmentation technique, based on multi-modal-fusion deep learning, for the contours of human facial acne and of the macaque's bare facial area, together with a method for verifying the correlation between the acne distribution and the furred region of the macaque face.
Background
The face is a person's first calling card in daily communication; its appearance directly shapes first impressions, so its importance is self-evident. China's Guidelines for the Treatment of Acne (2019 revision) note that acne is a chronic skin inflammation occurring most often during adolescence: more than 95% of people in China develop acne to some degree, some patients are left with scars, and the condition readily causes anxiety, stress, depression and other negative psychological states, so its treatment deserves attention and standardization.
However, the pathogenesis of acne has not yet been fully elucidated. Its triggers include, but are not limited to, genetic factors, hormone secretion, sebum hypersecretion, and abnormal proliferation of Propionibacterium acnes. Clinical dermatology currently holds that acne begins when the sebaceous glands secrete large amounts of lipids, which lead to inflammation and immune responses driven by abnormal proliferation of hair-follicle microorganisms.
The facial T-zone (the area formed by the forehead and nose) has more sweat-gland pores and sheds oil heavily every day, whereas the skin of the U-zone (the left and right cheeks) is fine and produces little oil. By the pathogenesis above, the T-zone should therefore be the acne-prone area and the U-zone the acne-sparse area. In clinical practice, however, acne occurs with very high probability in the U-zone of severe acne patients, and scars such as acne pits often appear there. Moreover, after observing the acne-prone areas of many people, we found that their outline closely resembles the outline of the macaque's facial fur region. This raises the conjecture that the similarity between the contour of the acne-prone facial area and the macaque fur region is a trace of human genetic evolution.
A new verification method is therefore needed: one that can effectively test this conjecture, supplement the understanding of acne pathogenesis, establish the link to the macaque, and enrich follow-up experiments and diagnosis-and-treatment methods.
Disclosure of Invention
In view of the current clinical conjecture that the facial acne distribution of acne patients resembles the furred region of the macaque face, the invention provides a verification method that combines image color spaces with multi-modal deep learning, so as to describe objectively the correlation between the acne-prone facial area and the macaque fur region.
The technical scheme of the invention is an acne-based method for calculating the correlation between human faces and macaque faces, comprising the following steps:
step 1: acquire face data: capture parallel-polarized spectral images and near-infrared spectral images of faces with a high-resolution camera;
step 2: screen the sample images acquired in step 1, removing images of patients with obvious skin diseases (such as vitiligo or lupus erythematosus), and split the data into a training set and a test set at a ratio of 1:9;
step 3: preprocess the parallel-polarized images screened in step 2;
step 3-1: convert the parallel-polarized images from step 2 to grayscale;
step 3-2: run Dlib (used alongside OpenCV) on the images from step 3-1 to calibrate the facial key points;
step 3-3: from the key points obtained in step 3-2, determine the contours of the eyes, nose and mouth with OpenCV's fillPoly function, obtaining a mask of the facial organs (steps 3-1 to 3-3 are sketched below);
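A minimal sketch of steps 3-1 to 3-3, assuming Dlib's standard 68-point landmark model; the model file name and the organ index ranges follow the common public annotation and are not specified in the patent:

```python
import cv2
import dlib
import numpy as np

detector = dlib.get_frontal_face_detector()
predictor = dlib.shape_predictor("shape_predictor_68_face_landmarks.dat")

def facial_organ_mask(bgr_image):
    """Binary mask covering eyes, nose and mouth (steps 3-1 to 3-3)."""
    gray = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2GRAY)      # step 3-1: grayscale
    mask = np.zeros(gray.shape, dtype=np.uint8)
    for face in detector(gray):                             # step 3-2: key points
        shape = predictor(gray, face)
        pts = np.array([(shape.part(i).x, shape.part(i).y) for i in range(68)],
                       dtype=np.int32)
        # 68-point convention: right eye 36-41, left eye 42-47, nose 27-35, mouth 48-67
        for lo, hi in [(36, 42), (42, 48), (27, 36), (48, 68)]:
            hull = cv2.convexHull(pts[lo:hi])               # organ contour
            cv2.fillPoly(mask, [hull], 255)                 # step 3-3: fill the organ
    return mask
```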
step 4: label facial acne on the near-infrared images obtained in step 2;
step 4-1: superimpose the facial-organ mask from step 3-3 onto the near-infrared image from step 1;
step 4-2: convert the masked image from step 4-1 to the HSV color space and set a threshold range according to the difference between inflamed acne areas and normal skin;
step 4-3: threshold the image from step 4-2 to obtain a segmented image, setting segmented regions to 1 and the background to 0;
step 4-4: compute the connected components of the image from step 4-3 and fit a bounding rectangle to each component;
step 4-5: convert the bounding-rectangle data from step 4-4 into a JSON file;
step 4-6: review the JSON file from step 4-5 against the parallel-polarized image from step 2 in the labelme annotation tool, correcting it so that there are no missed or false labels (steps 4-1 to 4-5 are sketched below);
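A sketch of the rough-labeling pipeline of steps 4-1 to 4-5. The HSV threshold bounds, the minimum-area noise floor, and the labelme-style JSON layout are illustrative assumptions; the patent fixes none of these values:

```python
import cv2
import json
import numpy as np

def rough_acne_labels(nir_bgr, organ_mask, lo=(0, 60, 60), hi=(10, 255, 255),
                      min_area=9, out_path="acne_rough.json"):
    img = nir_bgr.copy()
    img[organ_mask > 0] = 0                                    # step 4-1: mask organs out
    hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)                 # step 4-2: HSV space
    binary = cv2.inRange(hsv, np.array(lo), np.array(hi))     # step 4-3: 0/255 map
    n, _, stats, _ = cv2.connectedComponentsWithStats(binary)  # step 4-4
    shapes = []
    for i in range(1, n):                                      # label 0 is background
        x, y, w, h, area = stats[i]
        if area < min_area:                                    # drop speckle noise
            continue
        shapes.append({"label": "acne", "shape_type": "rectangle",
                       "points": [[int(x), int(y)], [int(x + w), int(y + h)]]})
    with open(out_path, "w") as f:                             # step 4-5: JSON for review
        json.dump({"shapes": shapes}, f, indent=2)
```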
step 5: process the labeled JSON files from step 4 and convert them into PASCAL VOC format annotation files;
step 6: fuse the parallel-polarized image from step 1 with the near-infrared image along the channel axis, changing the image size from (2000, 3000, 3) to (2000, 3000, 6), as sketched below;
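The channel fusion of step 6 is a plain concatenation; a minimal sketch (the fixed 2000 x 3000 size is taken from the step itself):

```python
import numpy as np

def fuse_modalities(polarized, nir):
    """Stack two registered (2000, 3000, 3) images into one (2000, 3000, 6) input."""
    assert polarized.shape == (2000, 3000, 3) and nir.shape == (2000, 3000, 3)
    return np.concatenate([polarized, nir], axis=2)
```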
step 7: build a multi-modal convolutional neural network structure adapted from YOLOv5;
step 8: feed the VOC annotation files from step 5 and the fused images from step 6 into the multi-modal network built in step 7, and train until training succeeds (one generic way to let a pretrained 3-channel detector accept the 6-channel input is sketched below);
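The patent does not say how YOLOv5 is adjusted for 6-channel input. A common generic approach is to widen the first convolution and reuse the pretrained RGB kernels for the extra channels; a hedged PyTorch sketch of that idea (not YOLOv5's actual internals):

```python
import torch
import torch.nn as nn

def widen_first_conv(model: nn.Module, in_channels: int = 6) -> nn.Module:
    """Replace the first Conv2d so a 3-channel detector accepts 6-channel input."""
    for name, module in model.named_modules():
        if isinstance(module, nn.Conv2d):
            new = nn.Conv2d(in_channels, module.out_channels, module.kernel_size,
                            module.stride, module.padding,
                            bias=module.bias is not None)
            with torch.no_grad():
                new.weight[:, :3] = module.weight        # keep pretrained RGB kernels
                new.weight[:, 3:] = module.weight        # duplicate them for the NIR channels
                if module.bias is not None:
                    new.bias.copy_(module.bias)
            parent = model                               # swap the layer in place
            *path, leaf = name.split(".")
            for p in path:
                parent = getattr(parent, p)
            setattr(parent, leaf, new)
            break                                        # only the first conv
    return model
```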
step 9: run the multi-modal YOLOv5-based convolutional network trained in step 8 on the test set from step 2, and additionally save each detected acne region as a binary image in which acne pixels are 1 and background pixels are 0 (rasterization sketched below);
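Turning detections into the {0, 1} mask of step 9 is direct; a sketch, assuming boxes arrive as (x1, y1, x2, y2) pixel coordinates:

```python
import numpy as np

def detections_to_mask(boxes, shape=(2000, 3000)):
    """Rasterize detected acne boxes into a binary mask (step 9)."""
    mask = np.zeros(shape, dtype=np.uint8)
    for x1, y1, x2, y2 in boxes:
        mask[int(y1):int(y2), int(x1):int(x2)] = 1       # acne area = 1
    return mask                                           # background stays 0
```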
step 10: combining the binary images obtained in the step 9;
step 10-1: choose a face template M and adjust the other images to it, so that everyone's facial features are roughly aligned;
step 10-2: convert the parallel-polarized test-set images from step 2 to grayscale;
step 10-3: run Dlib on the images from step 10-2 to calibrate the facial key points;
step 10-4: compute the center O of the bounding rectangle of all key points in each image from step 10-3, and compute its offset from the corresponding center of the template M;
step 10-5: take the topmost and bottommost key points on the central axis of the nose in each image from step 10-3 and compute the angle between their connecting line and the vertical;
step 10-6: compute the length and width of the bounding rectangle of all key points in each image from step 10-3, and compute their ratios to those of the template M's bounding rectangle;
step 10-7: translate the binary image from step 9 by the offset from step 10-4, rotate it by the angle from step 10-5, and scale it by the ratios from step 10-6;
step 10-8: superimpose all binary images processed in step 10-7 to obtain a map of acne occurrence across many people, reducing individual specificity (steps 10-4 to 10-7 are sketched below);
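A sketch of the alignment in steps 10-4 to 10-7 expressed as a single affine warp. The nose-axis landmark indices (27 for the bridge, 30 for the tip) and the averaging of the two scale ratios are illustrative assumptions:

```python
import cv2
import numpy as np

def align_to_template(mask, pts, tpl_center, tpl_size):
    """Translate, rotate and scale one binary mask onto the template M.

    pts        -- (68, 2) landmark array for this face
    tpl_center -- landmark bounding-box center of the template M
    tpl_size   -- (width, height) of the template's landmark bounding box
    """
    x, y, w, h = cv2.boundingRect(pts.astype(np.int32))
    center = (x + w / 2.0, y + h / 2.0)                       # step 10-4
    top, bottom = pts[27], pts[30]                            # nose axis, step 10-5
    angle = np.degrees(np.arctan2(bottom[0] - top[0], bottom[1] - top[1]))
    scale = (tpl_size[0] / w + tpl_size[1] / h) / 2.0         # step 10-6
    M = cv2.getRotationMatrix2D(center, -angle, scale)        # rotate + scale
    M[0, 2] += tpl_center[0] - center[0]                      # then translate
    M[1, 2] += tpl_center[1] - center[1]
    return cv2.warpAffine(mask, M, mask.shape[::-1], flags=cv2.INTER_NEAREST)

# step 10-8: heat = sum of align_to_template(mask_i, pts_i, ...) over all subjects
```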
step 11: fit a contour to the image from step 10-8 to obtain the inner contour of the acne distribution (sketched below);
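One way to realize the contour fitting of step 11 is to keep only the pixels where enough subjects overlap and trace the outer boundary; the consensus fraction is an assumed parameter:

```python
import cv2
import numpy as np

def fitted_acne_contour(heat, frac=0.5):
    """Contour of the common acne region from the stacked map (step 11)."""
    binary = (heat >= frac * heat.max()).astype(np.uint8)     # consensus pixels
    contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    return max(contours, key=cv2.contourArea)                 # largest region
```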
step 12: acquire macaque face data;
step 13: annotate the macaque face data from step 12, marking the boundary between the furred region and the bare facial area;
step 14: train a U-net-based neural network with the files from step 13 until training succeeds (a generic training loop is sketched below);
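A generic PyTorch training loop for the U-net of step 14; the model, data loader, epoch count and learning rate are all assumptions, since the patent only states "train until successful":

```python
import torch
import torch.nn as nn

def train_unet(unet, loader, epochs=50, lr=1e-4, device="cuda"):
    """Train a binary fur / bare-face segmentation model (step 14)."""
    unet.to(device).train()
    opt = torch.optim.Adam(unet.parameters(), lr=lr)
    loss_fn = nn.BCEWithLogitsLoss()                 # masks: 1 = bare face, 0 = fur
    for epoch in range(epochs):
        total = 0.0
        for images, masks in loader:
            opt.zero_grad()
            loss = loss_fn(unet(images.to(device)), masks.to(device))
            loss.backward()
            opt.step()
            total += loss.item()
        print(f"epoch {epoch}: mean loss {total / len(loader):.4f}")
    return unet
```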
step 15: use the U-net-based model trained in step 14 to segment the held-out test set, obtaining the bare facial area of each macaque;
step 16: extract the boundaries of all macaque bare-face areas from step 15 and fuse them into a single filled boundary, reducing individual specificity;
step 16-1: fit a bounding rectangle to each macaque bare-face area from step 15;
step 16-2: define a rectangle template, fixing the rectangle's size and the angle between its central axis and the coordinate axes;
step 16-3: rotate and scale each bounding rectangle from step 16-1, together with the image inside it, until it matches the template defined in step 16-2;
step 16-4: for each bare-face connected component from step 16-3, compute its center point O; then, emulating a polar coordinate system with O as origin, cast 360 rays outward at 1-degree intervals, taking the point where each ray crosses the component's edge as a sampling point;
step 16-5: treat the sampling points from step 16-4 at the same angle as one group of data, and average each group, yielding 360 mean points;
step 16-6: connect the mean points from step 16-5 in order with a smooth curve and fill the result, obtaining the fused macaque bare-face map and reducing individual specificity (steps 16-4 to 16-6 are sketched below);
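A sketch of the ray-casting fusion of steps 16-4 to 16-6. The 1-pixel march step is an assumption, and the masks are assumed pre-aligned by step 16-3 so that their centroids roughly coincide:

```python
import numpy as np

def polar_mean_contour(masks, n_rays=360):
    """Fuse aligned bare-face masks into 360 mean contour points (steps 16-4 to 16-6)."""
    angles = np.deg2rad(np.arange(n_rays))           # one ray per degree
    radii = np.zeros((len(masks), n_rays))
    for k, m in enumerate(masks):
        ys, xs = np.nonzero(m)
        cx, cy = xs.mean(), ys.mean()                # center point O (step 16-4)
        for i, a in enumerate(angles):
            r = 1.0
            while True:                              # march outward until the ray exits
                x, y = int(cx + r * np.cos(a)), int(cy + r * np.sin(a))
                inside = 0 <= x < m.shape[1] and 0 <= y < m.shape[0]
                if not inside or m[y, x] == 0:
                    break
                r += 1.0
            radii[k, i] = r                          # edge sample at this angle
    mean_r = radii.mean(axis=0)                      # step 16-5: 360 mean points
    return np.stack([cx + mean_r * np.cos(angles),   # step 16-6: back to x, y
                     cy + mean_r * np.sin(angles)], axis=1)
```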
step 17: compute the similarity between the human acne distribution contour from step 11 and the macaque bare-face contour from step 16, using three measures (Euclidean distance, cosine distance, and Hausdorff distance); the results objectively evaluate how similar the two contours are (sketched below).
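A sketch of the three contour similarity measures of step 17, assuming both contours are resampled at the same 360 angles and normalized to a common scale beforehand; the patent does not state the normalization:

```python
import numpy as np
from scipy.spatial.distance import directed_hausdorff

def contour_similarity(a, b):
    """Euclidean, cosine and Hausdorff distances between two (N, 2) contours."""
    a, b = np.asarray(a, dtype=float), np.asarray(b, dtype=float)
    euclid = np.linalg.norm(a - b, axis=1).mean()            # mean point-wise distance
    cosine = 1.0 - np.dot(a.ravel(), b.ravel()) / (
        np.linalg.norm(a.ravel()) * np.linalg.norm(b.ravel()))
    hausdorff = max(directed_hausdorff(a, b)[0],             # symmetric Hausdorff
                    directed_hausdorff(b, a)[0])
    return {"euclidean": euclid, "cosine": cosine, "hausdorff": hausdorff}
```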
The invention uses a multi-modal neural network to extract acne features from the fused parallel-polarized and near-infrared images, giving higher recognition accuracy. Image rotation, translation and scaling bring the facial features and cheeks of different people into roughly the same regions; image fusion and a larger sample size reduce the differences between individuals; and computing several evaluation measures makes the similarity assessment more objective and accurate.
Drawings
FIG. 1 is an overall workflow of the present invention.
Fig. 2 is the convolutional neural network flow for intelligent detection of facial acne in accordance with the present invention.
Fig. 3 is a flow chart of facial acne dataset production in accordance with the present invention.
FIG. 4 is a flow chart of the monkey face contour fitting of the present invention.
Detailed Description
In order to make the objects, technical solutions and innovative features of the present invention clearer, the invention is described in more detail below with reference to the accompanying drawings and embodiments. The embodiments described here are for illustration only and do not limit the invention.
Referring to figs. 1 to 4, the invention is a method for exploring the association between the facial acne-prone area and the macaque fur region based on image color and a convolutional neural network. It comprises an automatic segmentation and extraction module for human facial acne, which automatically and intelligently segments and extracts acne from many faces; a macaque bare-face segmentation module, which segments and extracts the bare facial area from many macaque face images; an image-merging contour-fitting module, which fits contours to the fused facial-acne segmentation images and the fused macaque bare-face segmentation images respectively; and a similarity calculation module, which computes the degree of similarity between the acne-based fitted contour and the macaque-face fitted contour.
A method for calculating the association between the facial acne-prone area and the macaque fur region based on image color and a convolutional neural network comprises the following steps:
step 1-1: acquire face data: using a high-resolution camera rig that holds the face in place, capture parallel-polarized spectral images and near-infrared spectral images of many faces under two different light sources;
step 1-2: screen the sample images acquired in step 1-1 and split them into a training set and a test set at a ratio of 1:9;
step 1-3: convert the parallel-polarized images from step 1-2 to grayscale, calibrate the facial key points, and derive masks of the eyes, nose and mouth from the key points;
step 1-4: after superimposing the mask on the near-infrared image, convert the whole image from RGB to the HSV color space and set a threshold range to extract acne coarsely;
step 1-5: manually correct the coarse labels to ensure no missed or false labels, and convert them into PASCAL VOC format files;
step 1-6: train on the input data with the multi-modal convolutional neural network adapted from YOLOv5 until the loss curve flattens;
step 1-7: run the network trained in step 1-6 on the test set from step 1-2 and additionally save each detected acne region as a binary image;
step 1-8: merge the binary images from step 1-7: translate, rotate and scale the other images against a template so that all faces are roughly aligned, then superimpose the binary segmentation masks to obtain a map of acne occurrence across many people, reducing individual specificity;
step 2-1: acquire macaque face data and annotate the bare facial area;
step 2-2: train a U-net-based neural network on the input macaque face data until training succeeds;
step 2-3: run the network trained in step 2-2 on a large number of macaque face images to segment out the bare facial areas;
step 2-4: compute the connected components of all macaque bare-face areas from step 2-3, fit bounding rectangles, and rotate and scale them against a defined rectangle template until all bounding rectangles coincide with it;
step 2-5: compute the center point O of each macaque-face connected component processed in step 2-4; emulating a polar coordinate system with O as origin, cast rays outward at 1-degree intervals, taking the point where each ray crosses the component's edge as the sampling point for that angle;
step 2-6: average the sampling points at each angle from step 2-5 to obtain 360 mean points, connect them in order with a smooth curve and fill the result, obtaining the fitted macaque bare-face area and reducing individual specificity;
step 3: compute the similarity between the human acne distribution contour from step 1-8 and the macaque bare-face area from step 2-6, using three measures (Euclidean distance, cosine distance, and Hausdorff distance); the results objectively evaluate how similar the two contours are.
The invention uses deep learning networks to automatically identify human acne features and the bare-face/fur boundary of the macaque face, making batch processing of data possible and reducing the subjectivity of manual delineation; it increases the sample size and fuses images of human faces and monkey faces by appropriate methods to reduce the differences between individuals; and it computes several evaluation measures so that the similarity assessment is more objective and accurate.

Claims (2)

1. A method of calculating the association of a human face with a macaque face based on acne, the method comprising:
step 1: acquiring face data: capturing parallel-polarized spectral images and near-infrared spectral images of faces with a high-resolution camera;
step 2: screening the sample images acquired in step 1, removing images of patients with obvious skin diseases, and dividing the data into a training set and a testing set at a ratio of 1:9;
step 3: preprocessing the parallel-polarized images screened in step 2;
step 3-1: converting the parallel-polarized images obtained in step 2 to grayscale;
step 3-2: calibrating the facial key points of the images processed in step 3-1 using Dlib with OpenCV;
step 3-3: determining the contours of the eyes, nose and mouth from the key points obtained in step 3-2 using OpenCV's fillPoly function, and obtaining a mask of the facial organs;
step 4: labeling facial acne on the near-infrared images obtained in step 2;
step 4-1: superimposing the facial-organ mask obtained in step 3-3 onto the near-infrared image obtained in step 1;
step 4-2: converting the masked image obtained in step 4-1 to the HSV color space, and setting a threshold range according to the difference between inflamed acne areas and normal skin;
step 4-3: thresholding the image obtained in step 4-2 to obtain a segmented image, with segmented regions set to 1 and the background set to 0;
step 4-4: computing the connected components of the image obtained in step 4-3 and fitting a bounding rectangle to each component;
step 4-5: converting the bounding-rectangle data obtained in step 4-4 into a JSON file;
step 4-6: reviewing the JSON file obtained in step 4-5 against the parallel-polarized image obtained in step 2 in the labelme annotation tool, and correcting it to ensure there are no missed or false labels;
step 5: processing the labeled JSON files obtained in step 4 and converting them into PASCAL VOC format files;
step 6: fusing the parallel-polarized image obtained in step 1 with the near-infrared image along the channel axis, changing the image size from (2000, 3000, 3) to (2000, 3000, 6);
step 7: constructing a multi-modal convolutional neural network structure adapted from YOLOv5;
step 8: feeding the VOC annotation files obtained in step 5 and the fused images obtained in step 6 into the multi-modal network built in step 7, and training until training succeeds;
step 9: running the multi-modal YOLOv5-based convolutional network trained in step 8 on the test set obtained in step 2, and additionally saving each detected acne region as a binary image in which acne pixels are 1 and background pixels are 0;
step 10: merging the binary images obtained in step 9;
step 10-1: determining a face template M and adjusting the other images to it, so that everyone's facial features are roughly aligned;
step 10-2: converting the parallel-polarized test-set images obtained in step 2 to grayscale;
step 10-3: calibrating the facial key points of the images processed in step 10-2 using Dlib with OpenCV;
step 10-4: computing the center O of the bounding rectangle of all key points of each image processed in step 10-3, and computing its offset from the corresponding center of the template M;
step 10-5: taking the topmost and bottommost key points on the central axis of the nose of each image processed in step 10-3, and computing the angle between their connecting line and the vertical;
step 10-6: computing the length and width of the bounding rectangle of all key points of each image processed in step 10-3, and computing their ratios to those of the template M's bounding rectangle;
step 10-7: translating the binary image obtained in step 9 by the offset obtained in step 10-4, rotating it by the angle obtained in step 10-5, and scaling it by the ratios obtained in step 10-6;
step 10-8: superimposing all binary images processed in step 10-7 to obtain a map of acne occurrence across many people, thereby reducing individual specificity;
step 11: fitting a contour to the image obtained in step 10-8 to obtain the inner contour of the acne distribution;
step 12: acquiring macaque face data;
step 13: annotating the macaque face data obtained in step 12, and marking the boundary between the furred region and the bare facial area;
step 14: training a U-net-based neural network with the files obtained in step 13 until training succeeds;
step 15: segmenting the held-out test set with the U-net-based model trained in step 14 to obtain the bare facial area of each macaque;
step 16: extracting the boundaries of all macaque bare-face areas obtained in step 15, fusing them into a single boundary and filling it, thereby reducing individual specificity;
step 17: computing the similarity between the human acne distribution contour obtained in step 11 and the macaque bare-face area obtained in step 16, using three measures (Euclidean distance, cosine distance, and Hausdorff distance), the results being used to objectively evaluate how similar the two contours are.
2. The method for calculating the association between the human face and the macaque face based on acne according to claim 1, wherein the specific method of step 16 is as follows: step 16-1: fitting a bounding rectangle to each macaque bare-face area obtained in step 15;
step 16-2: defining a rectangle template, and fixing the rectangle's size and the angle between its central axis and the coordinate axes;
step 16-3: rotating and scaling each bounding rectangle obtained in step 16-1, together with the image inside it, until it matches the template defined in step 16-2;
step 16-4: computing the center point O of each bare-face connected component obtained in step 16-3; emulating a polar coordinate system with O as origin, casting 360 rays outward at 1-degree intervals, the points where the rays cross the component's edge being the sampling points;
step 16-5: taking the sampling points obtained in step 16-4 at the same angle as one group of data, and averaging each group to obtain 360 mean points;
step 16-6: connecting the mean points obtained in step 16-5 in order with a smooth curve and filling the result to obtain the fused macaque bare-face map, thereby reducing individual specificity.
CN202311127893.1A, filed 2023-09-01 (priority date 2023-09-01): Correlation calculation method of human face and macaque face based on acne. Status: pending. Publication: CN117274673A (en).

Priority Applications (1)

CN202311127893.1A, priority date 2023-09-01, filing date 2023-09-01: Correlation calculation method of human face and macaque face based on acne

Publications (1)

CN117274673A, published 2023-12-22

Family

ID=89215097

Family Applications (1)

CN202311127893.1A, priority date 2023-09-01, filed 2023-09-01: Correlation calculation method of human face and macaque face based on acne (pending)

Country Status (1)

CN: CN117274673A (en)


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination