CN103249353B - Method for determining physical constitutions using integrated information - Google Patents
- Publication number: CN103249353B (Application No. CN201080070478.1)
- Authority: CN (China)
- Prior art keywords: constitution, variables, score
- Legal status: Active
Classifications
- A—HUMAN NECESSITIES; A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE; A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B5/00—Measuring for diagnostic purposes; Identification of persons; A61B5/48—Other medical applications; A61B5/4854—Diagnosis based on concepts of traditional oriental medicine
- A61B5/72—Signal processing specially adapted for physiological signals or for diagnostic purposes; A61B5/7271—Specific aspects of physiological measurement analysis; A61B5/7275—Determining trends in physiological measurement data; Predicting development of a medical condition based on physiological measurements, e.g. determining a risk factor
Abstract
The present invention relates to a method for determining the physical constitutions of the human body as classified in Sasang medical science, and more specifically to a method for determining physical constitutions using integrated information, which can determine constitution more accurately by using comprehensive information from a person's face, surveys, body type, and voice. Existing constitution diagnostic methods determine constitution based only on information from one particular part of the human body and on survey results entered by a user, and are therefore inherently limited in the objectivity and accuracy of their results. The present invention overcomes these limitations by providing a method for determining constitution using integrated information, which can distinguish constitutions more objectively and accurately through the comprehensive use of facial, survey, body-type, and voice information.
Description
Technical Field
The present invention relates to a method for determining human body constitution (physical constitution) as classified in Sasang Constitutional Medicine (SCM), and more particularly to a method for determining constitution using integrated information, which uses information on a person's face, questionnaire, body type, and voice in an integrated manner so that constitution can be determined more accurately.
Background
Traditionally, SCM-based constitution classification stems from the medical theory of the same name, first proposed by Lee Je-ma in 1894. Its main content is that human constitutions can be classified into four types, namely Tae-yangin (greater yang), Tae-eumin (greater yin), So-yangin (lesser yang), and So-eumin (lesser yin), according to differences among the five zang organs and six fu organs of the human body, and that the appropriate therapy varies with constitution even for the same disease or symptom.
Furthermore, SCM-based constitution determinations have conventionally been performed using the following methods: constitution is determined by sensing the pulse, or from the features of appearance, temperament and talent, the characteristics and manner of speech, and pathological syndrome and pharmacology. More recently, a method of determining constitution using the Questionnaire for Sasang Constitution Classification II (QSCC II), certified by the Sasang constitutional medicine society in 1997, has been adopted.
However, since the above conventional methods mainly rely on subjective judgment of a diagnostician, objective reliability of a diagnosis result is limited; studies to determine constitutions based on more objective facts are actively being conducted.
In the prior art, for example, a "Sasang constitution voice analyzer" disclosed in Korean patent application publication No. 10-2004-0028895 (published 4.3.2004) is one example of a study on constitution determination methods.
More specifically, the sound characteristic components representing the characteristics of the individual may be divided into an innate component and an acquired component. The innate component is revealed by the anatomical features of the vocal organs, whereas the acquired component is revealed by the phonetic features obtained during the learning of the language.
Among the components revealed by the anatomical features of the vocal organs, the vocal tract formant frequencies, which can be measured using a sound spectrogram, and the fundamental frequency, which is affected by vocal cord features, are significant individual sound characteristics for determining Sasang constitution.
That is, the formant frequencies found by speech analysis are strongly influenced by the anatomical structure of the vocal tract and the movement of the vocal organs, while the fundamental frequency is mainly determined by the vocal cord structure.
In addition, since the vocal cord structure varies with gender or age, the vocal cord structure is a significant factor for detecting a change in the body state.
Therefore, the above patent document aims to provide a voice analyzer that collects different sound characteristics for each four-quadrant constitution (based on the above voice analysis characteristics), and then objectively diagnoses the constitution.
However, since the method described in the above patent document relies on sound alone, it cannot cope with cases where the voice is abnormal, such as when a person has a cold, or where a person's voice is intentionally altered; because the judgment criterion is limited to a single type of information, the accuracy of the method is limited.
Further, for example, "a Sasang constitution processing apparatus and method using a mobile terminal," described in Korean patent application laid-open No. 10-2005-0110093 (published November 22, 2005), is another example of a conventional constitution determination method.
The method described in this patent document has an object to provide an SCM constitution processing apparatus and method using a mobile terminal, the processing apparatus and method being configured to include: a camera unit configured to capture and input a face in order to determine an SCM constitution; a control unit configured to detect a contour by processing a face capture signal input from the camera unit, and determine an SCM constitution by performing a matching search; an SCM unit configured to record, store and output SCM-based physique analysis reference data and music signal and color signal data suitable for each physique; a keyboard unit configured to input a control instruction to perform an SCM analysis and to determine a physical constitution; and a display unit configured to display and output a questionnaire survey for determining the SCM constitution; the processing apparatus and method analyze an SCM constitution using a mobile terminal and output a sound signal and a light signal suitable for the analyzed constitution, thereby promoting psychological stability and health.
However, the method described in the above patent document determines the constitution based only on the facial contour and the result of the questionnaire input by the user, and thus the objectivity and accuracy of the determination result are also limited.
Thereafter, for example, "a user diagnosis method using body information" as described in Korean patent application laid-open No. 10-2007- is a further example of a conventional constitution determination method.
Although the methods described in these patent documents determine constitution based on fingerprint recognition and iris recognition, respectively, the objectivity and accuracy of their determination results are likewise limited because the determination criteria are again restricted to a single type of physical information.
In order to overcome the above-mentioned limitations, for example, Korean patent application laid-open No. 10-2009-0101557 discloses an "apparatus for automatically diagnosing Sasang constitution," which includes: a camera configured to capture a three-dimensional (3D) image of a subject and to transmit the image in the form of digital information; a questionnaire information input unit configured to receive questionnaire answers in a computerized manner; a database configured to store information for determining Sasang constitution; a control unit configured to determine and output the subject's speech characteristics and manner, appearance, temperament and talent, and pathological syndrome and pharmacology by analyzing and calculating the information transmitted from the camera and the questionnaire information input unit and comparing it with the information in the database; a display configured to display the output of the control unit; and a memory configured to store information from the control unit.
That is, the apparatus for automatically diagnosing Sasang constitution disclosed in the above patent document is characterized in that it can calculate the 3D coordinates of facial standard points using an automatic 3D face detector and then detect morphological features of the face from the calculated coordinates. However, because it determines constitution based only on facial contour information and questionnaire survey results, like the method disclosed in Korean patent application laid-open No. 10-2005-0110093, the objectivity and accuracy of its determination results are also limited.
Therefore, although a new constitution determination method or apparatus that overcomes the problems of the conventional constitution diagnosis methods and can determine constitution more objectively and accurately is desirable, no apparatus or method satisfying these requirements has been provided so far.
Disclosure of Invention
Technical problem
The present invention has been made in view of the above-mentioned problems of the conventional constitution diagnosis method, and it is an object of the present invention to provide a method for determining constitution using integrated information, which uses information on a person's face, questionnaire, body type, and sound in an integrated manner, so that constitution can be determined more accurately.
Technical Solution
In order to achieve the above object, the present invention provides a method for determining constitutions using integrated information, which uses information on a person's face, questionnaire, body type, and voice in an integrated manner so that constitutions can be determined more objectively and accurately, the method comprising the steps of: collecting and processing information about a face; collecting and processing information about questionnaires; collecting and processing information about body types; collecting and processing information about sound; and determining the final constitution by integrating the processing results obtained at the respective steps.
Here, the step of collecting and processing information on the face may include: an extraction step of extracting feature points from a photograph; a screening step of screening subjects for whom both frontal and lateral measurements exist and who fall within the top 90% of the good-sample score (G-score) calculated from each subject's pharmacological score, expert confidence score, questionnaire score, and experience score; an estimation step of correcting the angle of a tilted frontal photograph and estimating a facial reference line; a calculation step of calculating distance, angle, inclination, ratio, area, and curvature information from the corrected photograph; an integration step of integrating the variables automatically generated for each sample and removing any erroneous samples produced during automatic generation; a missing-value filling step of imputing values for variables and samples with missing data; an outlier detection step of detecting values that deviate from the measurement range; and an adjustment step of normalizing each variable and adjusting for the influence of age.
In addition, to correct the pattern in which variable values increase or decrease with age, the adjustment step may include calculating a Z-score normalized within each age range and performing the correction using that Z-score; in this case, the representative value of each fully-normalized variable for a given age uses the mean and standard deviation computed over the range of that age ± 2 years.
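The age adjustment described above can be sketched as follows. This is a minimal illustration, not the patented implementation: the helper names, the use of Python, and the flat `(age, value)` sample layout are assumptions of this sketch; only the ±2-year window and the per-age Z-score come from the text.

```python
from statistics import mean, stdev

def age_window_stats(samples, age, window=2):
    """Mean/SD of a facial variable over reference subjects aged within
    +/- `window` years of the given age (the text's 'age +/- 2' range)."""
    vals = [v for a, v in samples if abs(a - age) <= window]
    return mean(vals), stdev(vals)

def age_adjusted_z(value, age, samples):
    """Z-score of `value` relative to similarly-aged subjects, removing
    the age-driven drift in the variable described in the text."""
    m, s = age_window_stats(samples, age)
    return (value - m) / s
```

With this, a subject's facial variable is judged against peers of a similar age rather than against the whole population, so contour changes caused by aging no longer masquerade as constitutional differences.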
Further, the step of collecting and processing information about questionnaires may comprise: a step of screening examinees who fall within the top 90% of G-scores calculated from pharmacological scores, expert confidence scores, questionnaire scores, and experience scores; a step of extracting significant variables; a step of calculating weights; a step of calculating the constitution-significant-variable score; and a discriminant analysis step of calculating a typical constitution questionnaire score.
The step of extracting the significant variables may include extracting only the variables whose p-value is less than a certain value, using the chi-square statistic of Equation 1 below:

χ² = Σ (Oi - Ei)² / Ei, summed over i = 1, ..., n (Equation 1)

where Oi denotes the observed frequency, Ei denotes the expected frequency, and n represents the number of possible outcomes for each event.
In addition, the step of calculating the weight may include assigning a specific score using log χ².
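The significance screening of Equation 1 and the log χ² weighting can be sketched together. This is a hedged illustration: the text thresholds on a p-value, which this sketch approximates by comparing the χ² statistic against a fixed critical value (3.841, the 5% critical value at one degree of freedom); the function names and the two-outcome example are invented here, not taken from the patent.

```python
import math

def chi_square(observed, expected):
    """Pearson chi-square statistic over n possible outcomes (Equation 1)."""
    return sum((o - e) ** 2 / e for o, e in zip(observed, expected))

def screen_and_weight(observed, expected, critical=3.841):
    """Keep a questionnaire variable only if its chi-square statistic exceeds
    the critical value (3.841 corresponds to p < 0.05 at 1 degree of freedom),
    and weight the kept variable by log(chi-square) as the text describes."""
    x2 = chi_square(observed, expected)
    if x2 <= critical:
        return None          # not significant: drop the variable
    return math.log(x2)      # weight assigned to the significant variable
```

A strongly skewed answer distribution thus survives screening and receives a larger weight, while an answer distribution close to its expected frequencies is discarded.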
If the observed frequency is greater than the expected frequency, a positive value "a" results; if the observed frequency is less than the expected frequency, a negative value "b" results. The step of calculating the constitution-significant-variable score may then be performed using Equation 2 below:
in addition, the discriminant analysis step of calculating a typical questionnaire score can be performed by calculating an a-B score for each of TE, SE, and SY.
Further, the step of collecting and processing information on body types may include: a step of screening examinees who fall within the top 90% of G-scores calculated from pharmacological scores, expert confidence scores, questionnaire scores, and experience scores; an independent-samples t-test step; and a discriminant analysis step.
Here, the independent-samples t-test step may be performed using Equation 3 below:

t = (x̄1 - x̄2) / √(s1²/n1 + s2²/n2) (Equation 3)

where x̄1 is the mean of sample 1, x̄2 is the mean of sample 2, s1² and s2² are the respective sample variances, and n1 and n2 are the sizes of sample 1 and sample 2.
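Equation 3 is the standard unequal-variance (Welch) t statistic and can be computed directly from the two groups of body measurements; the sketch below is illustrative, with function names of our own choosing.

```python
import math
from statistics import mean, variance

def welch_t(sample1, sample2):
    """Independent-samples t statistic with unequal variances (Equation 3):
    t = (mean1 - mean2) / sqrt(s1^2/n1 + s2^2/n2)."""
    m1, m2 = mean(sample1), mean(sample2)
    v1, v2 = variance(sample1), variance(sample2)   # sample variances s1^2, s2^2
    n1, n2 = len(sample1), len(sample2)
    return (m1 - m2) / math.sqrt(v1 / n1 + v2 / n2)
```

A body-type variable whose |t| is large between two constitution groups is a good candidate for the discriminant analysis step that follows.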
In addition, the discriminant analysis step may include proposing a typical body conformation type as a percentage for each of TE, SE, and SY.
Further, the collecting and processing of the information on the sound may include: a step of screening examinees who fall within the top 90% of G scores calculated using pharmacological scores, expert confidence scores, questionnaire survey scores, and experience scores; a step of screening sound feature candidate variables; screening variables and converting the variables for discriminant analysis; and determining and analyzing the constitution.
Further, the step of screening the sound feature candidate variables may include: a step of extracting pitch, intensity, formant, and MDVP parameters; and a step of extracting MFCC parameters.
Here, extracting the pitch, intensity, formant, and MDVP parameters may include: calculating, for each vowel, the pitch, intensity, first and second formants, the associated -3 dB bandwidth values, and the maximum and minimum amplitude values; obtaining pitch and intensity from a sentence and calculating the 10th, 50th, and 90th percentile values along with the ratios and correlations among them; and extracting the MDVP parameters, namely the jitter-family variables (Jitter, RAP, and PPQ) representing the degree of pitch change during the utterance and the shimmer-family variables (ShdB, Shim, and APQ) representing the degree of amplitude change during the utterance, while varying the window size, using the pitch and the maximum/minimum amplitude signal of the designated vowel range.
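The jitter- and shimmer-family MDVP variables measure cycle-to-cycle instability of pitch and amplitude. The sketch below shows only the simplest "local" versions, computed from already-extracted pitch periods and peak amplitudes; the full MDVP parameters mentioned in the text (RAP, PPQ, APQ, ShdB) use smoothed multi-cycle windows and dB scaling, which are omitted here for brevity.

```python
from statistics import mean

def local_jitter(periods):
    """Mean absolute difference between consecutive pitch periods, relative
    to the mean period (a simplified form of the MDVP 'Jitter' measure)."""
    diffs = [abs(a - b) for a, b in zip(periods, periods[1:])]
    return mean(diffs) / mean(periods)

def local_shimmer(amplitudes):
    """The same cycle-to-cycle measure applied to peak amplitudes
    (a simplified form of the MDVP 'Shim' measure)."""
    diffs = [abs(a - b) for a, b in zip(amplitudes, amplitudes[1:])]
    return mean(diffs) / mean(amplitudes)
```

A perfectly periodic vowel yields zero jitter and shimmer; rising values indicate increasing pitch or amplitude perturbation during the utterance.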
Further, the step of extracting the MFCC parameters may be set to extract 12 MFCC parameters and one energy variable over the valid segment range of each vowel.
Further, the steps of screening and converting variables for discriminant analysis may include extracting statistically significant variables by performing an analysis of variance (ANOVA), which tests for differences among the means of two or more populations, together with post-hoc tests; as a result, for males the F0-family variables and MFCC parameters are used as significant variables for determining constitution, while for females the formant-family variables and MFCC parameters are used.
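The one-way ANOVA used to screen sound variables compares between-group and within-group variance. A minimal sketch (the F statistic only, without the post-hoc tests mentioned in the text) might look like this, where each group holds one constitution's values for a candidate variable; the function name and data layout are assumptions of the example.

```python
from statistics import mean

def anova_f(groups):
    """One-way ANOVA F statistic: between-group mean square over
    within-group mean square, across k groups of observations."""
    k = len(groups)
    n = sum(len(g) for g in groups)
    grand = mean(v for g in groups for v in g)
    ss_between = sum(len(g) * (mean(g) - grand) ** 2 for g in groups)
    ss_within = sum((v - mean(g)) ** 2 for g in groups for v in g)
    return (ss_between / (k - 1)) / (ss_within / (n - k))
```

Variables whose F statistic is large across the constitution groups are the ones retained as significant for the discriminant analysis.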
Further, the step of determining and analyzing the constitutions may include performing a normalization conversion so as to adjust for the differences in the value ranges of the variables used for constitution classification, and the normalization of the variables can be performed using Equation 4 below:

Zi = (Xi - mi) / σi (Equation 4)

where i denotes the name of the variable, Zi denotes the normalized constitution classification variable, Xi denotes the constitution classification variable, and mi and σi denote the mean and standard deviation of each variable, respectively.
Further, the step of determining the final constitution may be set to determine, as the final constitution, the constitution having the maximum value obtained by adding probability values of TE, SE, and SY calculated by performing a corresponding algorithm for each constitution.
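The final integration step can be sketched as follows. The structure (summing per-constitution probabilities across the questionnaire, body-type, face, and optional voice results, then applying the 1.2/1.6 thresholds given in the text) follows the description; the function signature and the use of dictionaries are assumptions of this illustration.

```python
def determine_constitution(modal_probs, has_voice):
    """Sum, per constitution (TE, SE, SY), the probability values produced
    by the individual algorithms, and return the constitution with the
    largest sum, but only if that sum clears the threshold given in the
    text: 1.6 when a voice result is present, 1.2 when it is not."""
    totals = {}
    for probs in modal_probs:              # one dict per information source
        for constitution, p in probs.items():
            totals[constitution] = totals.get(constitution, 0.0) + p
    best = max(totals, key=totals.get)
    threshold = 1.6 if has_voice else 1.2
    return best if totals[best] > threshold else None   # None: undecided
```

When no constitution's summed probability clears the threshold, the sketch returns no decision rather than forcing a weakly supported classification.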
Here, the step of determining the final constitution may be set such that, among the individual diagnosis results for questionnaire, body type, face, and voice, a constitution is determined as the final constitution only when the maximum of the summed probability values for the respective constitutions is greater than 1.2 (when no voice result is available) or 1.6 (when a voice result is available).
Advantageous effects
As described above, the present invention provides a method for determining physical constitution using integrated information. The conventional constitution diagnostic methods inevitably suffer from limited objectivity and accuracy because they determine constitution based only on information about a specific part of the human body and on questionnaire results entered by a user; the present method overcomes these disadvantages by using information on the face, questionnaire, body type, and voice in an integrated manner, so that constitution can be determined more objectively and accurately.
Drawings
FIG. 1 is a flow chart illustrating an overall process of a method for determining constitutions using integrated information according to the present invention;
fig. 2 is a flowchart illustrating a step of collecting and processing information on a face in the method for determining a physical constitution using integrated information according to the present invention;
fig. 3 is a graph showing age ranges with respect to variables that vary with age (in the figure, age 15 represents an age range of 15 to 19 years, and, for example, age 30 represents an age range of 30 to 34 years);
fig. 4 is a flowchart schematically illustrating the processing of questionnaire information in the method for determining physical constitutions using integrated information according to the present invention;
FIG. 5 is a flowchart schematically showing a process of information on body types in the method for determining physical constitutions using integrated information according to the present invention;
fig. 6 is a flowchart schematically showing the processing of information on sound in the method for determining physical constitutions using integrated information according to the present invention.
Detailed Description
The method for determining constitutions using integrated information according to the present invention will be described in detail with reference to the accompanying drawings.
Here, it should be noted that the following description is only for the embodiment for implementing the present invention, and the present invention is not limited only to the contents of the embodiment to be described below.
First, referring to fig. 1, an overall process of the method for determining constitutions using integrated information according to the present invention will be described.
That is, the present invention relates to a method for determining the constitution of a human body using integrated information, which uses information on the face, questionnaire, body type, and voice of a person in an integrated manner, so that the constitution can be determined more objectively and accurately. As shown in fig. 1, the method includes: a step S11 of acquiring and processing information about a face; a step S12 of collecting and processing information on questionnaire survey; a step S13 of collecting and processing information on body types; a step S14 of collecting and processing information about sound; and a step S15 of determining a final constitution by integrating the processing results obtained in the above steps.
Next, referring to fig. 2, the processing of information on a face in the method for determining a physical constitution using integrated information according to the present invention will be described below.
Referring to fig. 2, fig. 2 is a flowchart schematically illustrating the processing of face-related information in the method for determining a physical constitution using integrated information according to the present invention.
As shown in fig. 2, the processing of information on a face includes: an extraction step S21 of extracting feature points from a photograph; a screening step S22 of screening subjects for whom both frontal and lateral measurements exist and who fall within the top 90% of the good-sample score (G-score) calculated from each subject's pharmacological score, expert confidence score, questionnaire score, and experience score; an estimation step S23 of correcting the angle of a tilted frontal photograph and estimating its facial reference line; a calculation step S24 of calculating distance, angle, inclination, ratio, area, and curvature information from the corrected photograph; an integration step S25 of integrating the variables automatically generated for each sample and removing any erroneous samples produced during automatic generation; a missing-value filling step S26 of imputing values for variables and samples with missing data; an outlier detection step S27 of detecting values that deviate from the measurement range; and an adjustment step S28 of normalizing each variable and adjusting for the influence of age.
Here, the reason for adjusting the influence of age is that, due to contour change caused by aging, the face of a person exhibits a pattern in which the values of the variables increase or decrease with his or her age, as shown in fig. 3, and thus the pattern needs to be corrected for correct determination.
Then, in order to correct a pattern in which the variable values increase or decrease with age, a Z score that normalizes each age range is calculated, and correction is performed using the Z score; in this case, the representative value of the variable for each age range subjected to the full-range normalization employs the mean value and the standard deviation within the range of ± 2 from the corresponding age.
Further, although in the present embodiment the change with age brings about a large difference in representative values, it is expected that the range of differences between age ranges will decrease once a sufficient number of samples has been collected.
Diagnosis is performed using the facial variables described above; in this case, the discriminant analysis for males and females uses 62 variables corrected for the age pattern, with the model variables chosen by a stepwise selection method.
As a result, in terms of the discriminant function and the determination accuracy obtained with the selected variables, the accuracy was somewhat low: 51.8% for males and 53.5% for females. However, these results are considered somewhat low because they were obtained using the face alone.
Examples
The method for determining constitutions using integrated information according to the present invention will be described in detail with reference to the accompanying drawings.
Here, it should be noted that the following description is only for the embodiment for implementing the present invention, and the present invention is not limited only to the contents of the embodiment to be described below.
First, referring to fig. 1, an overall process of the method for determining constitutions using integrated information according to the present invention will be described.
That is, the present invention relates to a method for determining a physical constitution using integrated information, which uses information on a person's face, questionnaire, body type, and voice in an integrated manner, so that the physical constitution can be determined more objectively and accurately. As shown in fig. 1, the method includes: a step S11 of acquiring and processing information about a face; a step S12 of collecting and processing information on questionnaire survey; a step S13 of collecting and processing information on body types; a step S14 of collecting and processing information about sound; and a step S15 of determining a final constitution by integrating the processing results obtained in the above steps.
Next, referring to fig. 2, the processing of information on a face in the method for determining a physical constitution using integrated information according to the present invention will be described below.
Referring to fig. 2, fig. 2 is a flowchart schematically illustrating a process of information on a face in the method for determining a physical constitution using integrated information according to the present invention.
As shown in fig. 2, the processing of information on a face includes: an extraction step S21 of extracting feature points from the photograph; a screening step S22 of screening a subject who is measured together for the front and the side and falls within the top 90% of the excellent sample score (G-score) calculated using the pharmacological score, expert confidence score, questionnaire survey score, and experience score of the subject; an estimation step S23 of correcting the angle of the tilted front photograph and estimating a face reference line thereof; a calculation step S24 of calculating distance, angle, inclination, ratio, area, and curvature information based on the corrected photograph; an integration step S25 of integrating the variables automatically generated for each sample and removing one or more erroneous samples generated in the automatic generation process; a missing filling step S26 of performing missing filling on one or more variables and samples in which a missing has occurred; a outlier detecting step S27 of detecting one or more values deviating from the measurement range; and an adjusting step S28 of normalizing each variable and adjusting the influence of age.
Here, the reason for adjusting the influence of age is that, due to contour change caused by aging, the face of a person exhibits a pattern in which the values of the variables increase or decrease with his or her age, as shown in fig. 3, and thus the pattern needs to be corrected for correct determination.
Then, in order to correct the pattern in which the variable values increase or decrease with age, a Z score normalized within each age range is calculated, and the correction is performed using this Z score; in this case, the representative value of each variable for a given age range, used for the full-range normalization, is the mean and standard deviation computed over subjects within ±2 years of the corresponding age.
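As an illustrative sketch of this age-windowed normalization (assuming Python with NumPy; the function name and the data are hypothetical, with the ±2-year window taken from the text above):

```python
import numpy as np

def age_adjusted_z(values, ages, window=2):
    """Normalize each measurement against the subjects whose age lies
    within +/- `window` years of the subject's own age, removing the
    increase/decrease pattern that the variable shows with age.
    (Illustrative sketch; names and data are hypothetical.)"""
    values = np.asarray(values, dtype=float)
    ages = np.asarray(ages, dtype=float)
    z = np.empty_like(values)
    for k, (x, a) in enumerate(zip(values, ages)):
        group = values[np.abs(ages - a) <= window]   # age-range peer group
        m, s = group.mean(), group.std()
        z[k] = (x - m) / s if s > 0 else 0.0
    return z
```

After this adjustment, a variable that grows steadily with age no longer separates young from old subjects, so the discriminant analysis sees only constitution-related variation.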
Further, although in the present embodiment age brings about large differences in the representative values, it is expected that the differences between age ranges will decrease once a sufficient number of samples has been collected.
The diagnosis is performed using the facial variables described above; in this case, the discriminant analysis for males and females uses 62 variables corrected for the age pattern, and the variables for the model are selected by a stepwise selection method.
As a result, the determination accuracy, expressed in terms of the discriminant function and the selected variables for males and females, was somewhat low: 51.8% for males and 53.5% for females. However, it was judged that these results were somewhat low because they were obtained using the face alone.
Next, referring to fig. 4, the processing of information on a questionnaire in the method for determining physical constitutions using integrated information according to the present invention will be described.
Fig. 4 is a flowchart schematically illustrating the processing of information on a questionnaire in the method for determining physical constitutions using integrated information according to the present invention.
As shown in fig. 4, the processing of information on questionnaires includes: a step S41 of extracting a significant variable; a step S42 of calculating a weight; a step S43 of calculating a score of a constitutional significant variable; and a discriminant analysis step S44 of calculating a typical questionnaire score.
In FIG. 4, TE represents Taiyin (Tae-eumin), SE represents Shaoyin (So-eumin), and SY represents Shaoyang (So-yangin); the population is divided into Taiyin vs. non-Taiyin, Shaoyin vs. non-Shaoyin, and Shaoyang vs. non-Shaoyang groups.
Here, as described previously, the processing of the information on the questionnaire may further include a step of screening the examinees who fall within the top 90% range using the G-score value, like the processing of the information on the face.
Further, the step S41 of extracting the significant variables includes extracting only variables for which the p value is smaller than a certain value using the following equation 1, where Oi denotes the observed frequency, Ei denotes the expected frequency, and n denotes the number of possible outcomes for each event:

$$\chi^2 = \sum_{i=1}^{n} \frac{(O_i - E_i)^2}{E_i} \qquad (1)$$
Next, as shown in FIG. 4, the step S42 of calculating weights includes assigning a particular score using the equation weight = −log χ²; if the observed frequency Oi is greater than the expected frequency Ei, the weight is positive, while if Oi is less than Ei, the weight is negative.
Next, given that a positive value "a" is obtained if the observed frequency is greater than the expected frequency, and a negative value "b" is obtained if the observed frequency is less than the expected frequency, the step S43 of calculating the score of the constitutional significant variable is performed using the following equation 2:

$$A = \sum_i a_i, \qquad B = \sum_i b_i \qquad (2)$$
Thus, A is the sum of the positive values and B is the sum of the negative values.
Next, the discriminant analysis step S44 of calculating a typical questionnaire score can be performed by calculating an A − B score for each of TE, SE, and SY.
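The questionnaire pipeline above (steps S41 to S44) can be sketched as follows, assuming Python with NumPy and SciPy. The reading of the weight formula "−log X²" as a signed |log χ²| magnitude, and the helper names, are assumptions for illustration only:

```python
import numpy as np
from scipy.stats import chisquare

def score_item(observed, expected, alpha=0.05):
    """Steps S41/S42 for one questionnaire item: chi-square goodness-of-fit
    screen, then signed weights for its response options.  NOTE: taking the
    weight magnitude as |log(chi-square)| is one plausible reading of the
    ambiguously translated 'weight = -log X^2' and is an assumption."""
    observed = np.asarray(observed, dtype=float)
    expected = np.asarray(expected, dtype=float)
    stat, p = chisquare(observed, expected)
    if p >= alpha:                 # not significant: drop the variable
        return p, None
    sign = np.where(observed > expected, 1.0, -1.0)   # O > E -> positive weight
    return p, sign * abs(np.log(stat))

def a_minus_b(weights):
    """Steps S43/S44: A = sum of positive weights, B = sum of negative
    weights; the typicality score for a constitution is A - B."""
    w = np.asarray(weights)
    return float(w[w > 0].sum() - w[w < 0].sum())
```

An item whose response distribution leans strongly toward one constitution thus contributes a large positive A − B score for that constitution, while non-significant items are excluded entirely.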
In addition, using the questionnaire diagnosis model described above, the agreement rate was 64.3% for men and 57.5% for women.
Next, referring to fig. 5, the processing of information on body types in the method for determining physical constitutions using integrated information according to the present invention will be described.
Fig. 5 is a flowchart schematically illustrating the processing of information on body types in the method for determining physical constitutions using integrated information according to the present invention.
As shown in fig. 5, the body type diagnosis model is developed by processing the information on body types as follows: the target group is divided into Taiyin vs. non-Taiyin, Shaoyin vs. non-Shaoyin, and Shaoyang vs. non-Shaoyang groups; significant items are obtained from the widths of 5 regions, the circumferences of 8 regions, the height, the weight, and the Body Mass Index (BMI); and a constitution probability value (a probability representing the typicality of each constitution) is obtained by discriminant analysis of the significant items.
That is, the process of developing the body type model includes an independent-sample T-test step S51 and a discriminant analysis step S52.
Further, as described above, the process of developing the body type model may further include a step of screening examinees who fall within the top 90% range using the G-score, like the processing of information on the face and the processing of information on the questionnaire.
Here, the independent-sample T-test step S51 is performed using the following equation 3, where x̄1 is the mean of sample 1, x̄2 is the mean of sample 2, s1² and s2² are the variances of the respective samples, and n1 and n2 are the sizes of sample 1 and sample 2:

$$t = \frac{\bar{x}_1 - \bar{x}_2}{\sqrt{\dfrac{s_1^2}{n_1} + \dfrac{s_2^2}{n_2}}} \qquad (3)$$
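Equation 3 is the unequal-variance (Welch) form of the t statistic, so the step can be sketched as follows, assuming Python with NumPy and SciPy; the sample data are hypothetical:

```python
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(0)
# hypothetical waist-circumference samples: Taiyin vs. non-Taiyin group
sample1 = rng.normal(95.0, 6.0, size=80)
sample2 = rng.normal(85.0, 6.0, size=120)

# equation 3 evaluated term by term
m1, m2 = sample1.mean(), sample2.mean()
v1, v2 = sample1.var(ddof=1), sample2.var(ddof=1)      # s1^2, s2^2
t_manual = (m1 - m2) / np.sqrt(v1 / len(sample1) + v2 / len(sample2))

# the same statistic from SciPy's unequal-variance (Welch) t-test
t_scipy, p_value = ttest_ind(sample1, sample2, equal_var=False)
```

Items whose p value falls below the significance threshold would then be carried forward into the discriminant analysis step S52.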
here, as shown in fig. 4, variables are proposed for males and females. In the decisive analysis step S52, typical body types are presented as percentages for each of TE, SE, and SY.
Further, using the body type model described above, the agreement rate was 60.8% for men and 57.0% for women.
Next, referring to fig. 6, the processing of information on the voice in the method for determining physical constitutions using integrated information according to the present invention will be described.
Fig. 6 is a flowchart schematically illustrating the processing of information on the voice in the method for determining a physical constitution using integrated information according to the present invention.
As shown in fig. 6, the processing of information on the voice includes extracting a total of 144 voice feature variables from 5 vowels and from sentences, and developing a voice constitution diagnosis model from the extracted voice feature candidate variables.
That is, more specifically, the process of developing the voice constitution diagnosis model includes: a step S61 of screening voice feature candidate variables; a step S62 of screening variables and converting them for discriminant analysis; and a step S63 of determining and analyzing the constitution.
Further, as with the processes described above, the process of developing the voice constitution diagnosis model may further include a step of screening examinees who fall within the top 90% range using the G-score.
Here, the step S61 of screening the voice feature candidate variables includes a step of extracting pitch, intensity, formant, and MDVP parameters, and a step of extracting MFCC parameters. The step of extracting pitch, intensity, formant, and MDVP parameters comprises: calculating, for each vowel, the pitch, intensity, first and second formants, and the associated −3 dB bandwidth values, maximum amplitude values, and minimum amplitude values; obtaining pitch and intensity from the sentence, and calculating the 10th, 50th, and 90th percentile values and the ratios and correlations between them; and extracting the MDVP parameters by extracting the jitter-family variables (Jitter, RAP, and PPQ), which represent the degree of pitch change during the utterance, and the shimmer-family variables (ShdB, Shim, and APQ), which represent the degree of amplitude change during the utterance, while changing the window size, using the pitch and the maximum/minimum amplitude signal of the designated vowel range.
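A minimal sketch of two of the feature computations named above, sentence-level pitch percentiles and a local jitter measure, assuming Python with NumPy; real pitch tracks and period sequences would come from an external pitch tracker, and the function names are hypothetical:

```python
import numpy as np

def sentence_pitch_features(pitch_hz):
    """Sentence-level pitch features as in step S61: 10th/50th/90th
    percentile values of the voiced F0 track, plus ratios between them."""
    f0 = np.asarray(pitch_hz, dtype=float)
    f0 = f0[f0 > 0]                          # drop unvoiced frames (F0 = 0)
    p10, p50, p90 = np.percentile(f0, [10, 50, 90])
    return {"p10": p10, "p50": p50, "p90": p90,
            "p90_over_p10": p90 / p10, "p50_over_p10": p50 / p10}

def jitter_local(periods_s):
    """Local jitter (cf. the MDVP Jitter variable): mean absolute difference
    between consecutive pitch periods, divided by the mean period."""
    T = np.asarray(periods_s, dtype=float)
    return float(np.mean(np.abs(np.diff(T))) / T.mean())
```

The RAP, PPQ, and shimmer-family variables follow the same pattern with different smoothing windows over the period and amplitude sequences.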
Further, the step of extracting the MFCC parameters extracts 12 MFCC parameters and one energy variable for each valid segment range of the vowels.
Next, the step S62 of screening variables and converting them for discriminant analysis includes extracting statistically significant (p < 0.05) variables by performing analysis of variance (ANOVA) and post hoc tests, which verify the differences between the means of two or more populations.
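Step S62's significance screen can be sketched with a one-way ANOVA per candidate variable, assuming Python with NumPy and SciPy (the three group samples are hypothetical; a post hoc test such as Tukey's HSD would follow in practice):

```python
import numpy as np
from scipy.stats import f_oneway

rng = np.random.default_rng(1)
# hypothetical F0 means (Hz) for the three constitution groups
f0_te = rng.normal(118.0, 10.0, size=40)   # Taiyin group
f0_se = rng.normal(130.0, 10.0, size=40)   # Shaoyin group
f0_sy = rng.normal(126.0, 10.0, size=40)   # Shaoyang group

stat, p = f_oneway(f0_te, f0_se, f0_sy)    # one-way ANOVA across the groups
keep_variable = p < 0.05                   # retain only significant variables
```

Only the variables that clear this screen are standardized and passed on to the discriminant analysis in step S63.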
As a result, for males, the F0-family variables and MFCC parameters were used as significant variables for determining constitutions, while for females, the formant-family variables and MFCC parameters were used.
Next, the step S63 of determining and analyzing the constitutions includes performing a standardized conversion of the parameters and calculating a Z score using the following equation 4, in order to adjust for the differences in the value ranges of the constitution classification variables:

$$Z_i = \frac{X_i - m_i}{\sigma_i} \qquad (4)$$
In equation 4 above, i denotes the name of the variable, Zi denotes the normalized constitution classification variable, Xi denotes the constitution classification variable, and mi and σi denote the mean and standard deviation of each variable, respectively.
Further, a discriminant analysis based on the voice diagnosis model was performed using the Z scores calculated as described above; as a result, the agreement rate was 51.7% for men and 43.3% for women.
Next, the integrated constitution diagnosis method, which provides a method for determining the physical constitution by combining the various types of information described above, will be described in detail below.
That is, the present invention aims to provide a method for diagnosing the physical constitution using integrated information, which is designed to determine the constitution more accurately from the individual diagnosis results described above.
First, for the individual diagnosis results on the questionnaire survey, body type, and face, the matching rate and determination accuracy were calculated for the cases in which two or more constitution diagnosis results agreed. When two or more diagnosis results agreed, the matching rate was 85.8% for men and 86.8% for women, while the determination accuracy was 68.5% for men and 67.1% for women.
Further, when all the individual diagnosis results on the questionnaire, body type, and face agreed, the percentage of such cases was less than 50%, but the determination accuracy improved to 80% or more.
Therefore, based on the above results, a method for determining constitutions using integrated information based on the individual constitution diagnosis probabilities can be provided as follows: the constitution with the maximum value obtained by adding the probability values of TE, SE, and SY, each calculated by performing the corresponding algorithm for that constitution, is determined as the final constitution.
Further, in this case, among the individual diagnosis results on the questionnaire, body type, face, and voice, the constitution in question is determined to be the one corresponding to the maximum value only when the maximum of the summed constitution probability values is greater than 1.2 (when no voice result is available) or 1.6 (when a voice result is available).
When, as described above, the constitution is determined to be the one corresponding to the maximum value only if the maximum of the summed constitution probability values is greater than 1.2 (without a voice result) or 1.6 (with a voice result), the determination rate and determination accuracy are both high: the determination rate is 88.1% for males and 83.0% for females, and the determination accuracy is 74.9% for males and 70.8% for females.
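The final determination rule, summing the per-model TE/SE/SY probability values and applying the 1.2/1.6 threshold, can be sketched as follows (assuming Python; the model outputs are hypothetical):

```python
def integrated_constitution(prob_tables, has_voice):
    """Step S15: sum the TE/SE/SY probability values across the individual
    diagnosis models and return the constitution with the largest sum,
    but only if that sum exceeds the threshold from the text
    (1.2 without a voice result, 1.6 with one); otherwise return None."""
    totals = {"TE": 0.0, "SE": 0.0, "SY": 0.0}
    for probs in prob_tables:                  # one dict per model
        for constitution in totals:
            totals[constitution] += probs.get(constitution, 0.0)
    best = max(totals, key=totals.get)
    threshold = 1.6 if has_voice else 1.2
    return best if totals[best] > threshold else None
```

For example, three models each leaning toward TE with about 0.5 sum to roughly 1.5, clearing the 1.2 threshold, whereas a flat probability profile never clears it and the case is left undetermined.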
Therefore, according to the present invention, a method for determining a physical constitution using integrated information can be provided in which the constitution with the maximum sum of the TE, SE, and SY probability values, calculated by performing the corresponding algorithm for each constitution, is determined as the final constitution.
Although the method for determining physical constitutions using integrated information according to the present invention has been described by way of examples of the present invention, the present invention is not limited to the description of examples. It is therefore evident that those skilled in the art to which the present invention pertains may make modifications, variations, combinations, and alterations of the present invention depending on design requirements or other factors.
Industrial applicability
Conventional constitution diagnosis methods determine the constitution based only on information on specific parts of the human body and on questionnaire results input by the user, which inevitably limits the objectivity and accuracy of the determination. Since the present invention overcomes these problems and makes it possible to determine the constitution more objectively and accurately by using information on the face, questionnaire, body type, and voice in an integrated manner, it can be effectively applied to fields related to constitution determination.
Claims (17)
1. A method for determining constitutions using integrated information, which uses information on a person's face, questionnaire, body type, and voice in an integrated manner, comprising the steps of:
collecting and processing information about a face;
collecting and processing information about questionnaires;
collecting and processing information about body types;
collecting and processing information about sound; and
determining a final constitution by integrating the processing results obtained in each step.
2. The method of claim 1, wherein the step of collecting and processing information about the face comprises:
an extraction step of extracting feature points from the photograph;
a screening step of screening subjects whose front and side were both measured and who fall within the top 90% of a G-score calculated using a pharmacological score, an expert confidence score, a questionnaire survey score, and an experience score of the subject;
an estimation step of correcting the angle of the tilted front photograph and estimating a face reference line;
a calculation step of calculating distance, angle, inclination, ratio, area, and curvature information based on the corrected photograph;
an integration step of integrating the variables automatically generated for each sample and removing one or more erroneous samples generated in the automatic generation process;
a missing filling step of performing missing filling on one or more variables and samples in which the missing has occurred;
an outlier detecting step of detecting one or more values deviating from the measurement range; and
an adjustment step of normalizing each variable and adjusting the effect of age.
3. The method according to claim 2, wherein, in order to correct a pattern in which the variable values increase or decrease with age, the adjusting step includes calculating a Z-score normalized for each age range and performing correction using the Z-score,
in this case, the representative value of the variables for each age range subjected to the full-range normalization is set to take the mean value and the standard deviation within ±2 years of the corresponding age.
4. The method of claim 1, wherein the step of collecting and processing information about questionnaires comprises:
a step of screening examinees who fall within the top 90% of G scores calculated using pharmacological scores, expert confidence scores, questionnaire survey scores, and experience scores;
extracting a significant variable;
calculating a weight;
calculating the score of the constitutional significant variable; and
a discriminant analysis step of calculating a typical constitution questionnaire score.
5. The method of claim 4, wherein the step of extracting the significant variables is performed using the following equation 1:

$$\chi^2 = \sum_{i=1}^{n} \frac{(O_i - E_i)^2}{E_i} \qquad (1)$$

wherein Oi denotes the observed frequency, Ei denotes the expected frequency, and n denotes the number of possible outcomes for each event.
6. The method of claim 5, wherein the step of calculating weights comprises assigning a particular score using the following equation: weight = −log χ².
7. The method according to claim 5, wherein, if said observed frequency is greater than said expected frequency, a positive value "a" is obtained, and if said observed frequency is less than said expected frequency, a negative value "b" is obtained, and said step of calculating the constitution significant variable score is performed using the following equation 2:

$$A = \sum_i a_i, \qquad B = \sum_i b_i \qquad (2)$$

where A represents the sum of the positive values, B represents the sum of the negative values, and i represents the order of events.
8. The method according to claim 7, characterized in that said discriminant analysis step of calculating a typical constitution questionnaire score is carried out by calculating the difference between A and B for each of TE, SE, and SY, wherein TE represents the Taiyin (Tae-eumin) type, SE represents the Shaoyin (So-eumin) type, and SY represents the Shaoyang (So-yangin) type.
9. The method of claim 1, wherein the step of collecting and processing information about body types comprises:
a step of screening examinees who fall within the top 90% of G scores calculated using pharmacological scores, expert confidence scores, questionnaire survey scores, and experience scores;
an independent-sample T-test step; and
a discriminant analysis step.
10. The method of claim 9, wherein the independent-sample T-test step is performed using the following equation 3:

$$t = \frac{\bar{x}_1 - \bar{x}_2}{\sqrt{\dfrac{s_1^2}{n_1} + \dfrac{s_2^2}{n_2}}} \qquad (3)$$

wherein x̄1 is the mean of sample 1, x̄2 is the mean of sample 2, s1² and s2² are the variances of the respective samples, and n1 and n2 are the sizes of sample 1 and sample 2, respectively.
11. The method of claim 9, wherein said discriminant analysis step comprises presenting a typical body type as a percentage for each of TE, SE, and SY, wherein TE represents the Taiyin (Tae-eumin) type, SE represents the Shaoyin (So-eumin) type, and SY represents the Shaoyang (So-yangin) type.
12. The method of claim 1, wherein the step of collecting and processing information about sound comprises:
a step of screening examinees who fall within the top 90% of G scores calculated using pharmacological scores, expert confidence scores, questionnaire survey scores, and experience scores;
a step of screening sound feature candidate variables;
screening variables and converting the variables for discriminant analysis; and
a step of determining and analyzing the constitution.
13. The method of claim 12, wherein the steps of screening variables and converting variables for discriminant analysis comprise:
extracting statistically significant variables by performing analysis of variance (ANOVA) and post hoc tests, which verify the differences between the means of two or more populations.
14. The method of claim 12, wherein:
said step of determining and analyzing the constitutions includes performing a normalization transformation so as to adjust the difference of the value ranges of the variables for the constitutional classification; and
the normalized conversion of the variables is performed using the following equation 4:

$$Z_i = \frac{X_i - m_i}{\sigma_i} \qquad (4)$$

where i denotes the name of the variable, Zi denotes the normalized constitution classification variable, Xi denotes the constitution classification variable, and mi and σi denote the mean and standard deviation of each variable, respectively.
15. The method according to claim 1, characterized in that said step of determining a final constitution is configured to determine, as the final constitution, the constitution having the maximum value obtained by adding the probability values of TE, SE, and SY calculated by performing the corresponding algorithm for each constitution, wherein TE represents the Taiyin (Tae-eumin) type, SE represents the Shaoyin (So-eumin) type, and SY represents the Shaoyang (So-yangin) type.
16. The method according to claim 15, wherein in the absence of a result related to a sound, the step of determining the final constitution is set to determine the constitution under consideration as the constitution corresponding to the maximum value only when the maximum value of the sum of the respective constitution probability values is greater than 1.2 among the individual diagnosis results related to questionnaires, body types, faces, and sounds.
17. The method according to claim 15, wherein in the case where there is a result related to sound, the step of determining the final constitution is set to determine the constitution under consideration as the constitution corresponding to the maximum value only when the maximum value of the sum of the respective constitution probability values is greater than 1.6 among the individual diagnosis results related to questionnaire, body type, face, and sound.
Applications Claiming Priority (3)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
KR10-2010-0096068 | 2010-10-01 | ||
KR20100096068A KR101220399B1 (en) | 2010-10-01 | 2010-10-01 | A physical constitution information analysis method using integrated information |
PCT/KR2010/009264 WO2012043935A1 (en) | 2010-10-01 | 2010-12-23 | Method for determining physical constitutions using integrated information |
Publications (2)
Publication Number | Publication Date |
---|---|
CN103249353A CN103249353A (en) | 2013-08-14 |
CN103249353B true CN103249353B (en) | 2015-04-22 |
Family
ID=45893354
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201080070478.1A Active CN103249353B (en) | 2010-10-01 | 2010-12-23 | Method for determining physical constitutions using integrated information |
Country Status (3)
Country | Link |
---|---|
KR (1) | KR101220399B1 (en) |
CN (1) | CN103249353B (en) |
WO (1) | WO2012043935A1 (en) |
Families Citing this family (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
KR101387896B1 (en) * | 2012-09-24 | 2014-04-21 | 한국 한의학 연구원 | Apparatus and method for deciding physical constitution and health |
CN106137128A (en) * | 2016-06-24 | 2016-11-23 | 何颖 | Health detection method |
KR101715567B1 (en) * | 2016-07-21 | 2017-03-13 | 동국대학교 경주캠퍼스 산학협력단 | Method for facial analysis for correction of anthroposcopic errors from Sasang constitutional specialists |
KR102002279B1 (en) * | 2017-04-06 | 2019-07-23 | 한국한의학연구원 | Apparatus for diaagnosing three dimensional face |
KR102156699B1 (en) | 2018-11-14 | 2020-09-17 | 한국한의학연구원 | Composition for determining Soeumin |
KR102181022B1 (en) | 2018-12-03 | 2020-11-19 | 박상순 | Sasang constitution check method and program |
US11779222B2 (en) | 2019-07-10 | 2023-10-10 | Compal Electronics, Inc. | Method of and imaging system for clinical sign detection |
Family Cites Families (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN1120923A (en) * | 1993-09-18 | 1996-04-24 | 孙绍山 | Portable constitution identification and health card and its application |
JPH09231413A (en) * | 1996-02-22 | 1997-09-05 | Suzuki Nobuo | Physical constitution processor |
KR100427243B1 (en) * | 2002-06-10 | 2004-04-14 | 휴먼씽크(주) | Method and apparatus for analysing a pitch, method and system for discriminating a corporal punishment, and computer readable medium storing a program thereof |
KR101141398B1 (en) * | 2004-05-17 | 2012-05-03 | 엘지전자 주식회사 | A method and a apparatus of operating sasang-medical constitutional for mobile phone |
CN1716267A (en) * | 2005-07-21 | 2006-01-04 | 高春平 | Individual stereo health preserving method and device |
KR20070031145A (en) * | 2005-09-14 | 2007-03-19 | 엘지전자 주식회사 | method for diagnosing user using body information |
KR100979506B1 (en) * | 2008-03-24 | 2010-09-02 | 학교법인 동의학원 | Diagnosis device of Sasang constitution |
KR100919838B1 (en) * | 2009-05-14 | 2009-10-01 | 한국 한의학 연구원 | Apparatus for managing health and control method thereof |
- 2010
- 2010-10-01 KR KR20100096068A patent/KR101220399B1/en active IP Right Grant
- 2010-12-23 WO PCT/KR2010/009264 patent/WO2012043935A1/en active Application Filing
- 2010-12-23 CN CN201080070478.1A patent/CN103249353B/en active Active
Also Published As
Publication number | Publication date |
---|---|
WO2012043935A1 (en) | 2012-04-05 |
KR101220399B1 (en) | 2013-01-09 |
CN103249353A (en) | 2013-08-14 |
KR20120034475A (en) | 2012-04-12 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
Al-Nasheri et al. | An investigation of multidimensional voice program parameters in three different databases for voice pathology detection and classification | |
CN103249353B (en) | Method for determining physical constitutions using integrated information | |
Saenz-Lechon et al. | Methodological issues in the development of automatic systems for voice pathology detection | |
CN103730130B (en) | A kind of detection system of pathological voice | |
US20210071401A1 (en) | Smart toilet and electric appliance system | |
CN106073706B (en) | A kind of customized information and audio data analysis method and system towards Mini-mental Status Examination | |
Wang et al. | Towards Automatic Detection of Amyotrophic Lateral Sclerosis from Speech Acoustic and Articulatory Samples. | |
Martínez et al. | Intelligibility assessment and speech recognizer word accuracy rate prediction for dysarthric speakers in a factor analysis subspace | |
CN107657964A (en) | Depression aided detection method and grader based on acoustic feature and sparse mathematics | |
Sáenz-Lechón et al. | Automatic assessment of voice quality according to the GRBAS scale | |
Amara et al. | An improved GMM-SVM system based on distance metric for voice pathology detection | |
Ali et al. | Intra-and inter-database study for Arabic, English, and German databases: do conventional speech features detect voice pathology? | |
Anupam et al. | Preliminary diagnosis of COVID-19 based on cough sounds using machine learning algorithms | |
Perero-Codosero et al. | Modeling obstructive sleep apnea voices using deep neural network embeddings and domain-adversarial training | |
Kim et al. | Automatic estimation of parkinson's disease severity from diverse speech tasks. | |
Cordella et al. | Classification-based screening of Parkinson’s disease patients through voice signal | |
Hammami et al. | Pathological voices detection using support vector machine | |
CN112190253A (en) | Classification method for severity of obstructive sleep apnea | |
Majda-Zdancewicz et al. | Deep learning vs feature engineering in the assessment of voice signals for diagnosis in Parkinson’s disease | |
CN116895287A (en) | SHAP value-based depression voice phenotype analysis method | |
Gidaye et al. | Application of glottal flow descriptors for pathological voice diagnosis | |
Yu et al. | Multidimensional acoustic analysis for voice quality assessment based on the GRBAS scale | |
Vieira et al. | Combining entropy measures and cepstral analysis for pathological voices assessment | |
Kurt et al. | Musical feature based classification of Parkinson's disease using dysphonic speech | |
Campos-Roca et al. | Computational diagnosis of Parkinson's disease from speech based on regularization methods |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
C06 | Publication | ||
PB01 | Publication | ||
C10 | Entry into substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
C14 | Grant of patent or utility model | ||
GR01 | Patent grant |