WO2023189309A1 - Computer program, information processing method, and information processing device - Google Patents
Computer program, information processing method, and information processing device
- Publication number
- WO2023189309A1 (PCT/JP2023/008702)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- information
- face
- subject
- facial
- neurological disorder
- Prior art date
Classifications
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B10/00—Other methods or instruments for diagnosis, e.g. instruments for taking a cell sample, for biopsy, for vaccination diagnosis; Sex determination; Ovulation-period determination; Throat striking implements
- A61B5/00—Measuring for diagnostic purposes; Identification of persons
- A61B5/103—Detecting, measuring or recording devices for testing the shape, pattern, colour, size or movement of the body or parts thereof, for diagnostic purposes
- A61B5/107—Measuring physical dimensions, e.g. size of the entire body or parts thereof
- G—PHYSICS
- G16—INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
- G16H—HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
- G16H50/00—ICT specially adapted for medical diagnosis, medical simulation or medical data mining; ICT specially adapted for detecting, monitoring or modelling epidemics or pandemics
- G16H50/20—ICT specially adapted for medical diagnosis, medical simulation or medical data mining; ICT specially adapted for detecting, monitoring or modelling epidemics or pandemics for computer-aided diagnosis, e.g. based on medical expert systems
Definitions
- the present disclosure relates to a computer program, an information processing method, and an information processing device for estimating the presence or absence of a neurological disorder in a subject.
- In Patent Document 1, a stroke determination device is proposed that acquires a facial image including the subject's face, uses a trained model built in advance by deep learning to determine from the facial image whether the subject has facial nerve paralysis, presents interview questions regarding stroke to the subject, obtains answers to the interview items, and determines the possibility of stroke in the subject based on the presence or absence of facial nerve paralysis and the answers.
- The present disclosure has been made in view of such circumstances, and its object is to provide a computer program, an information processing method, and an information processing device that can be expected to accurately estimate the presence or absence of a neurological disorder in a subject based on facial information acquired from a direction other than the front.
- A computer program according to one aspect causes a computer to execute processing that acquires facial information of a subject detected by a sensor, generates facial structure information of the subject based on the acquired facial information, complements missing portions of the acquired facial information based on the generated facial structure information, and estimates the presence or absence of a neurological disorder in the subject based on the complemented facial information.
- According to this configuration, it can be expected that the presence or absence of a neurological disorder in a subject is accurately estimated based on face information acquired from a direction other than the front.
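- To make this processing flow concrete, the following is a minimal Python sketch of the four claimed steps (acquire, generate structure information, complement, estimate). All names, values, and the toy symmetry check are hypothetical illustrations added for this description, not part of the disclosure.

```python
from dataclasses import dataclass

@dataclass
class FaceInfo:
    """Hypothetical container for face information detected by a sensor."""
    landmarks: dict   # feature name -> (x, y) image coordinates
    frontal: bool     # True if the face was captured from the front

def generate_structure(face: FaceInfo) -> dict:
    # Stand-in for "facial structure information"; in the disclosure this is
    # a 3D model of the subject's face built from a stored reference model.
    return {"landmarks": face.landmarks}

def complement(face: FaceInfo, structure: dict) -> FaceInfo:
    # Stand-in for the complementation step: paste the captured image onto
    # the 3D model and re-render a front view, filling in missing parts.
    return FaceInfo(landmarks=structure["landmarks"], frontal=True)

def estimate_disorder(face: FaceInfo, threshold: float = 5.0) -> bool:
    # Toy symmetry check: a large left/right mouth-corner height difference
    # is treated as suspicious (the actual criteria are described below).
    dy = abs(face.landmarks["mouth_l"][1] - face.landmarks["mouth_r"][1])
    return dy > threshold

def pipeline(face: FaceInfo) -> bool:
    if not face.frontal:
        face = complement(face, generate_structure(face))
    return estimate_disorder(face)

print(pipeline(FaceInfo({"mouth_l": (250, 520), "mouth_r": (410, 540)},
                        frontal=True)))  # True: 20 px offset exceeds 5 px
```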
- FIG. 1 is a schematic diagram for explaining an overview of the information processing system according to the present embodiment.
- FIG. 2 is a block diagram showing the configuration of the server device according to Embodiment 1.
- FIG. 3 is a block diagram showing the configuration of the terminal device according to the present embodiment.
- FIG. 4 is a flowchart for explaining the procedure of the pre-processing performed in the information processing system according to Embodiment 1.
- FIG. 5 is a schematic diagram for explaining the pre-processing performed in the information processing system according to Embodiment 1.
- FIG. 6 is a flowchart illustrating the procedure for estimating the presence or absence of a neurological disorder performed by the server device according to the present embodiment.
- FIG. 7 is a flowchart illustrating the procedure of the facial image complementation processing performed by the server device according to Embodiment 1.
- FIG. 8 is a schematic diagram for explaining the method for estimating the presence or absence of a neurological disorder performed by the server device according to the present embodiment.
- FIG. 9 is a schematic diagram showing an example of the notification screen for the estimation result of a neurological disorder.
- FIG. 10 is a schematic diagram for explaining the configuration of a learning model included in the server device according to Embodiment 2.
- FIG. 11 is a schematic diagram showing an example of shape information output by a shape estimation model.
- FIG. 12 is a flowchart illustrating the procedure of the facial image complementation processing performed by the server device according to Embodiment 2.
- FIG. 1 is a schematic diagram for explaining an overview of an information processing system according to this embodiment.
- The information processing system according to the present embodiment includes a server device 1 that performs processing for estimating the presence or absence of a neurological disorder in a subject, and one or a plurality of terminal devices 3 that perform processing such as photographing the facial image of the subject needed for the estimation processing performed by the server device 1.
- the terminal device 3 is installed in a residence where a subject lives (for example, a living room or a dining room), a nursing care facility, a medical facility, or the like.
- the terminal device 3 is equipped with a camera, and regularly photographs the subject inside the residence, and transmits the photographed images to the server device 1.
- Based on the photographed image of the subject obtained from the terminal device 3, the server device 1 performs processing to estimate whether the subject has a neurological disorder such as stroke (cerebrovascular disorder), cerebral infarction, or facial nerve paralysis.
- The server device 1 estimates the presence or absence of a neurological disorder based on the subject's face included in the photographed image. When, for example, the server device 1 estimates that the subject has a neurological disorder, it transmits this estimation result to the terminal device 3.
- The terminal device 3 that has received this estimation result displays a warning message on its display unit, outputs audio, or the like, so that the target person or other users related to the target person (family members, medical professionals, etc.) can be notified of the risk of a neurological disorder. Note that even when the server device 1 estimates that the subject does not have a neurological disorder, it may transmit this to the terminal device 3 as the estimation result.
- After the information processing system estimates the presence or absence of a neurological disorder based on the image of the subject taken by the camera, and when it is estimated that the subject has a neurological disorder, information on the subject may be detected using a sensor installed in the terminal device 3 other than the camera, such as a distance measuring sensor or a microphone, and the presence or absence of a neurological disorder may be further estimated based on the detected information.
- the terminal device 3 that has received from the server device 1 the estimation result that there is a neurological disorder based on the captured image of the camera detects information using a sensor different from the camera, and transmits the detected information to the server device 1.
- the server device 1 further performs a process of estimating the presence or absence of a neurological disorder in the subject based on the information received from the terminal device 3.
- Detection of information about a subject includes, for example, detection of abnormal biological information, facial paralysis, mental abnormality, fall, agitation or tremor, weakness, abnormal speech, and the like.
- Specific symptoms of bioinformation abnormalities include abnormalities in pulse rate, heart rate variability, breathing, blood oxygen concentration, or blood pressure fluctuations.
- When the server device 1 determines that the subject has a neurological disorder based on the image captured by the camera, and also determines that the subject has a neurological disorder based on the information detected by the other sensor, the server device 1 finally transmits the estimation result indicating that there is a neurological disorder to the terminal device 3.
- Conversely, when the server device 1 determines that the subject has a neurological disorder based on the image captured by the camera but determines that there is no neurological disorder based on the information detected by the other sensor, the server device 1 finally transmits the estimation result indicating that there is no neurological disorder to the terminal device 3.
- Note that this additional estimation of the presence or absence of a neurological disorder may also be performed based on the image of the subject taken by the camera, rather than on information detected by a different sensor.
- In this case, the additional estimation is performed by a method different from the determination based on the subject's face, for example by determining the presence or absence of a neurological disorder from the subject's entire body.
- the additional estimation does not necessarily need to be performed, and the server device 1 may only estimate the presence or absence of a neurological disorder based on the subject's face photographed by the camera of the terminal device 3.
- the information processing system may perform a diagnostic test on the subject in order to understand, for example, the degree of the symptoms of the neurological disorder.
- For example, the server device 1 transmits, to the terminal device 3 of a subject who has been determined to have a neurological disorder, information for performing a diagnostic test for neurological disorders based on a judgment index such as the CPSS (Cincinnati Prehospital Stroke Scale), NIHSS (National Institutes of Health Stroke Scale), or KPSS (Kurashiki Prehospital Stroke Scale).
- the terminal device 3 performs a diagnostic test based on the information received from the server device 1 and transmits the information obtained from the subject to the server device 1.
- the server device 1 makes a determination regarding one or more test items included in the diagnostic test based on the information received from the terminal device 3, and transmits the results of the diagnostic test to the terminal device 3.
- the terminal device 3 displays the result of the diagnostic test received from the server device 1 and notifies the subject. Note that a diagnostic test does not necessarily have to be performed.
- The server device 1 performs various processes in order to accurately perform the first estimation process regarding the presence or absence of a neurological disorder, that is, the estimation based on the image of the subject's face captured by the camera of the terminal device 3.
- the server device 1 estimates the presence or absence of a neurological disorder by checking the left-right symmetry of the subject's face based on the image of the subject's face taken by the camera of the terminal device 3.
- The server device 1 also compares an image of the subject's normal face registered in advance with the image of the subject's face taken by the camera of the terminal device 3, and estimates the presence or absence of a neurological disorder based on the difference.
- the above estimation of the presence or absence of a neurological disorder is based on the premise that the subject's face photographed by the camera of the terminal device 3 is viewed from the front.
- However, the information processing system according to the present embodiment photographs the subject with the camera of the terminal device 3 installed in the subject's residence, and it is not always possible to photograph the subject's face from the front. Therefore, when the image of the subject's face taken by the camera of the terminal device 3 is taken from a direction other than the front (for example, diagonally from the front right or front left) and does not show the entire face of the subject, the server device 1 complements the parts that are not shown (the missing parts), generates a front-view image of the subject's face, and then estimates the presence or absence of a neurological disorder using the methods described above.
- FIG. 2 is a block diagram showing the configuration of the server device 1 according to the first embodiment.
- the server device 1 according to the present embodiment includes a processing section 11, a storage section 12, a communication section (transceiver) 13, and the like. Note that although the present embodiment will be described assuming that the processing is performed by one server device, the processing may be performed in a distributed manner by a plurality of server devices.
- The processing unit 11 is configured using an arithmetic processing device such as a CPU (Central Processing Unit), MPU (Micro-Processing Unit), GPU (Graphics Processing Unit), or quantum processor, together with a ROM (Read Only Memory), RAM (Random Access Memory), and the like. By reading out and executing the server program 12a stored in the storage unit 12, the processing unit 11 performs various processes, such as the process of estimating the presence or absence of a neurological disorder in the subject based on the photographed image acquired from the terminal device 3 and the process of carrying out a diagnostic test on a subject determined to have a neurological disorder.
- the storage unit 12 is configured using a large-capacity storage device such as a hard disk, for example.
- the storage unit 12 stores various programs executed by the processing unit 11 and various data necessary for processing by the processing unit 11.
- the storage unit 12 stores a server program 12a executed by the processing unit 11, and is provided with a reference information storage unit 12b that stores information used for estimation processing of the presence or absence of a neurological disorder.
- The server program (computer program, program product) 12a may be provided in a form recorded on a recording medium 99 such as a memory card or an optical disk, in which case the server device 1 reads the server program 12a from the recording medium 99 and stores it in the storage unit 12.
- the server program 12a may be written into the storage unit 12, for example, during the manufacturing stage of the server device 1.
- the server program 12a may be distributed by another remote server device, and the server device 1 may obtain it through communication.
- The server program 12a may be recorded on the recording medium 99 by a writing device and written into the storage unit 12 of the server device 1.
- the server program 12a may be provided in the form of distribution via a network, or may be provided in the form of being recorded on the recording medium 99.
- the reference information storage unit 12b stores information regarding the subject's face (normal face) obtained and/or generated in advance as reference information.
- the information stored in the reference information storage unit 12b may include, for example, a photographed image of the subject's face taken from the front, a three-dimensional model of the subject's face, and the like.
- the communication unit 13 communicates with various devices via a network N including a mobile phone communication network, a wireless LAN (Local Area Network), the Internet, and the like. In this embodiment, the communication unit 13 communicates with one or more terminal devices 3 via the network N. The communication unit 13 transmits data provided from the processing unit 11 to other devices, and also provides data received from other devices to the processing unit 11.
- the storage unit 12 may be an external storage device connected to the server device 1.
- the server device 1 may be a multicomputer including a plurality of computers, or may be a virtual machine virtually constructed by software.
- The server device 1 is not limited to the above configuration, and may include, for example, a reading unit that reads information stored in a portable storage medium, an input unit that accepts operation input, a display unit that displays an image, and the like.
- In the server device 1 according to the present embodiment, when the processing unit 11 reads out and executes the server program 12a stored in the storage unit 12, a face image acquisition unit 11a, a three-dimensional model generation unit 11b, a face image complementation unit 11c, a neurological disorder estimation unit 11d, a second estimation unit 11e, a diagnostic test processing unit 11f, a notification processing unit 11g, and the like are implemented in the processing unit 11 as software functional units.
- Note that, among the functional units of the processing unit 11, only those related to the process of estimating the presence or absence of a neurological disorder in the subject are illustrated, and functional units related to other processes are omitted.
- The face image acquisition unit 11a performs processing to acquire the image of the target person's face photographed by the terminal device 3 by communicating with the terminal device 3 through the communication unit 13. Further, when, for example, the whole body of the subject is captured in the image taken by the terminal device 3, the face image acquisition unit 11a may detect the subject's face in the captured image and extract a partial image of the detected face. Note that the process of extracting the target person's face image from the photographed image may be performed by the terminal device 3 instead of the server device 1; in this case, the server device 1 acquires from the terminal device 3 the face image extracted from the photographed image.
- The three-dimensional model generation unit 11b performs processing to generate the latest three-dimensional model of the subject's face based on the face image acquired by the face image acquisition unit 11a.
- In the present embodiment, a reference three-dimensional model of the target person's face is created in advance by photographing and measuring the face of the target person, and is stored in the reference information storage unit 12b of the server device 1.
- the subject's face is photographed from multiple angles, and two-dimensional images of the subject's face from various angles are collected. Further, the surface shape of the subject's face is measured with a distance measurement sensor that uses infrared rays, ultrasonic waves, or the like, and a shape model that reproduces the shape of the subject's face in a three-dimensional virtual space is generated. By pasting the collected two-dimensional images onto this shape model, a three-dimensional model serving as a reference of the subject's face is generated.
- the server device 1 stores in advance the generated three-dimensional model and two-dimensional images of the subject's face taken from multiple angles when creating the three-dimensional model in the reference information storage unit 12b.
- Hereinafter, the three-dimensional model of the subject's face stored in the reference information storage unit 12b is referred to as the reference three-dimensional model, and the two-dimensional images are referred to as reference face images.
- the shape model may be generated based on a two-dimensional image of the subject's face taken from one or more angles.
- a trained face mesh learning model may be used to generate a three-dimensional shape model from a two-dimensional image.
- the face mesh learning model is a machine learning model that detects facial key points (feature points) from images, and can output several hundred feature points from a human face in three-dimensional coordinates.
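- The disclosure does not name a specific face mesh model. As one publicly available example of such a model, MediaPipe Face Mesh returns several hundred landmarks with three-dimensional coordinates; the sketch below assumes the mediapipe and opencv-python packages and a hypothetical local image file.

```python
import cv2
import mediapipe as mp

# Load an image of the subject's face (hypothetical file name).
image = cv2.imread("subject_face.jpg")

with mp.solutions.face_mesh.FaceMesh(static_image_mode=True,
                                     max_num_faces=1) as mesh:
    # MediaPipe expects RGB input; OpenCV loads images as BGR.
    result = mesh.process(cv2.cvtColor(image, cv2.COLOR_BGR2RGB))

if result.multi_face_landmarks:
    for lm in result.multi_face_landmarks[0].landmark:
        # x and y are normalized image coordinates; z is relative depth,
        # so each detected face yields several hundred 3D feature points.
        print(lm.x, lm.y, lm.z)
```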
- the reference three-dimensional model of the subject's face is preferably generated as a plurality of models for a plurality of facial expressions, or as a model that can change facial expressions. For this reason, it is preferable that the photographing of the subject's face and the measurement of the surface shape be performed with various facial expressions of the subject. Further, in this embodiment, it is preferable that the various facial expressions include left-right asymmetric facial expressions.
- The three-dimensional model generation unit 11b generates a three-dimensional model of the subject's face by pasting the image of the target person's face acquired by the face image acquisition unit 11a onto the reference three-dimensional model of the target person's face stored in the reference information storage unit 12b. At this time, the three-dimensional model generation unit 11b, for example, compares the face image to be pasted with the face images pasted on the surfaces of the reference three-dimensional models, selects the reference three-dimensional model with the closest facial expression, and pastes the face image onto that reference three-dimensional model.
- The three-dimensional model generated in this way is a model onto which the image of the subject's face taken by the terminal device 3 has been pasted; the parts of the face that are missing from the image taken by the terminal device 3 remain covered by the face image originally pasted on the reference three-dimensional model.
- the face image complementation unit 11c performs processing to complement the missing portions of the image of the subject's face based on the three-dimensional model of the subject's face generated by the three-dimensional model generation unit 11b.
- The face image complementation unit 11c converts the generated three-dimensional model of the face into a two-dimensional image viewed from the front, that is, a two-dimensional image taken by a virtual camera placed in front of the three-dimensional model of the face in the three-dimensional virtual space. By generating this image, a front-view face image of the subject is obtained in which the portions missing from the image captured by the terminal device 3 are complemented.
- The neurological disorder estimation unit 11d performs processing to estimate the presence or absence of a neurological disorder in the subject based on the facial image complemented by the face image complementation unit 11c. Note that if the facial image of the subject obtained from the terminal device 3 is a front-view image, the neurological disorder estimation unit 11d may estimate the presence or absence of a neurological disorder in the subject based on the image obtained from the terminal device 3 without performing the above-mentioned complementation processing.
- In the present embodiment, the neurological disorder estimation unit 11d extracts various feature points, such as the positions of the eyes, mouth, forehead, and cheeks and the angles of the corners of the mouth and the eyebrows, from the facial image of the subject, and compares the extracted feature points on the left and right sides of the face to check the left-right symmetry.
- the neurological disorder estimating unit 11d can calculate, for example, the amount of deviation between the left and right sides of various features of the face, and if the calculated amount of deviation exceeds a predetermined threshold value, it can be estimated that there is a neurological disorder.
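- As a minimal illustration of this check, the sketch below compares the vertical positions of mirrored feature points; the coordinates and the threshold are hypothetical values.

```python
# Hypothetical (x, y) pixel coordinates of mirrored feature pairs,
# given as (right-side point, left-side point); image y grows downward.
PAIRS = {
    "mouth_corner": ((410, 540), (250, 520)),  # right corner 20 px lower
    "eyebrow_end":  ((430, 318), (230, 300)),  # right eyebrow 18 px lower
}

def asymmetric(pairs: dict, threshold: float = 12.0) -> bool:
    """Estimate 'disorder present' if any left/right vertical deviation
    exceeds the threshold."""
    return any(abs(r[1] - l[1]) > threshold for r, l in pairs.values())

print(asymmetric(PAIRS))  # True: both deviations exceed the 12 px threshold
```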
- the neurological disorder estimating unit 11d may also compare, for example, the subject's latest face image and past facial images, and estimate the presence or absence of a neurological disorder based on the difference.
- For this purpose, the server device 1 stores in the reference information storage unit 12b a facial image photographed in advance in the subject's normal state (without a neurological disorder), or information on features extracted from this facial image, or a three-dimensional model generated from this facial image.
- The neurological disorder estimation unit 11d extracts various features from the facial image acquired from the terminal device 3 (or the complemented facial image) and compares them with the features extracted from the normal-state facial image stored in the reference information storage unit 12b.
- the neurological disorder estimating unit 11d can calculate the amount of deviation for the features of both facial images, and if the calculated amount of deviation exceeds a predetermined threshold, it can be estimated that there is a neurological disorder.
- In the present embodiment, the neurological disorder estimation unit 11d compares the latest face image and the past face image of the subject separately for the left half and the right half of the face, and estimates that the subject has a neurological disorder when the amount of deviation exceeds the threshold for only one of the left and right halves. When the amount of deviation does not exceed the threshold on either the left or right side of the face, the neurological disorder estimation unit 11d estimates that the subject does not have a neurological disorder; the same default applies when the amount of deviation exceeds the threshold on both sides. Note that, in the case where the amount of deviation exceeds the threshold on both the left and right sides of the face, the neurological disorder estimation unit 11d may instead estimate that there is a neurological disorder.
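- The decision rule just described can be written compactly as follows; the handling of the both-sides case is selectable, as noted above, and the deviation values in the usage line are hypothetical.

```python
def estimate_from_halves(dev_left: float, dev_right: float,
                         threshold: float = 10.0,
                         both_counts_as_disorder: bool = False) -> bool:
    """Deviation of one half of the face from the subject's normal face
    suggests a (typically unilateral) disorder; deviation on neither side
    suggests none. The both-sides case defaults to 'no disorder' but may
    optionally be treated as a disorder, per the variant described above."""
    left, right = dev_left > threshold, dev_right > threshold
    if left != right:                 # exactly one side deviates
        return True
    if left and right:                # both sides deviate
        return both_counts_as_disorder
    return False                      # neither side deviates

print(estimate_from_halves(dev_left=3.0, dev_right=14.2))  # True
```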
- The second estimation unit 11e additionally estimates the presence or absence of a neurological disorder in the subject using a sensor different from the camera included in the terminal device 3.
- In the present embodiment, the shape of the target person's face is measured using the distance measuring sensor included in the terminal device 3, the server device 1 acquires the measurement result from the terminal device 3, and the second estimation unit 11e estimates the presence or absence of a neurological disorder in the subject by examining the left-right symmetry of the shape of the subject's face or by comparing it with the normal shape.
- Note that the additional estimation by the second estimation unit 11e may be performed using any sensor included in the terminal device 3, and any estimation method based on the information acquired by that sensor may be used.
- The diagnostic test processing unit 11f performs processing to carry out a diagnostic test on the subject using the terminal device 3 when the neurological disorder estimation unit 11d and the second estimation unit 11e estimate that the subject has a neurological disorder.
- In the present embodiment, the diagnostic test processing unit 11f performs a stroke diagnostic test based on a judgment index such as the CPSS, NIHSS, or KPSS.
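- For reference, the published CPSS scores three findings, and any single abnormal finding indicates a high probability of stroke. The following is a minimal sketch of that scoring structure (an illustration of the scale itself, not the patent's implementation).

```python
from dataclasses import dataclass

@dataclass
class CPSSFindings:
    facial_droop: bool     # one side of the face does not move as well
    arm_drift: bool        # one arm drifts downward when both are extended
    abnormal_speech: bool  # slurred speech, wrong words, or unable to speak

def cpss_positive(f: CPSSFindings) -> bool:
    """Per the published scale, any one abnormal finding is a positive
    screen and warrants escalation."""
    return f.facial_droop or f.arm_drift or f.abnormal_speech

print(cpss_positive(CPSSFindings(True, False, False)))  # True
```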
- the diagnostic test may be performed, for example, by displaying a question message on the display unit of the terminal device 3 and receiving an answer from the subject by text input, voice input, or the like.
- Alternatively, a message or the like requesting a predetermined action or facial expression may be displayed on the display unit of the terminal device 3, and the diagnostic test may be performed by photographing the subject performing the requested action or facial expression with the camera and acquiring the captured image.
- the method for implementing the diagnostic test described above is an example, and the present invention is not limited to this, and the diagnostic test processing unit 11f may implement any type of diagnostic test.
- The diagnostic test processing unit 11f transmits information such as the questions or requests of the diagnostic test to the terminal device 3. Based on this information, the terminal device 3 outputs a message such as a question or request related to the diagnostic test, accepts an answer from the subject or photographs the subject's movements, and transmits information such as the received answer or the captured image to the server device 1.
- The diagnostic test processing unit 11f of the server device 1, having received the information regarding the diagnostic test from the terminal device 3, determines whether the subject has a neurological disorder, its degree, and the like based on the received information.
- the notification processing unit 11g performs a process of notifying the estimation results of the neurological disorder estimation unit 11d, the estimation results of the second estimation unit 11e, and/or the diagnosis results of the diagnostic test processing unit 11f.
- the notification processing unit 11g notifies the target person by transmitting information such as these estimation results or diagnosis results to the terminal device 3.
- the notification processing unit 11g may also notify other users related to the target person, such as family members, based on information such as a pre-registered email address or telephone number.
- FIG. 3 is a block diagram showing the configuration of the terminal device 3 according to the present embodiment.
- The terminal device 3 according to the present embodiment includes a processing section 31, a storage section 32, a communication section (transceiver) 33, a display section 34, an operation section 35, a camera 36, a distance measuring sensor 37, and the like.
- The terminal device 3 is a device installed in the residence or the like where the subject whose presence or absence of a neurological disorder is to be estimated lives.
- the terminal device 3 may be, for example, a device fixedly installed in a house or the like, or may be a portable device such as a smartphone or a tablet terminal device mounted on a stand or the like.
- the processing unit 31 is configured using an arithmetic processing device such as a CPU or MPU, a ROM, a RAM, and the like.
- The processing unit 31 reads out and executes the program 32a stored in the storage unit 32, thereby performing various processes such as photographing the target person with the camera 36, detecting information about the target person with the distance measuring sensor 37, and inputting and outputting information for the diagnostic test.
- the storage unit 32 is configured using, for example, a nonvolatile memory element such as a flash memory or a storage device such as a hard disk.
- the storage unit 32 stores various programs executed by the processing unit 31 and various data necessary for processing by the processing unit 31.
- the storage unit 32 stores a program 32a executed by the processing unit 31.
- the program 32a is distributed by a remote server device or the like, and the terminal device 3 acquires it through communication and stores it in the storage unit 32.
- the program 32a may be written into the storage unit 32, for example, during the manufacturing stage of the terminal device 3.
- Alternatively, the program 32a recorded on a recording medium 98 such as a memory card or an optical disk may be read by the terminal device 3 and stored in the storage unit 32.
- the program 32a may be recorded on the recording medium 98 and read by a writing device and written into the storage unit 32 of the terminal device 3.
- the program 32a may be provided in the form of distribution via a network, or may be provided in the form of being recorded on the recording medium 98.
- the communication unit 33 communicates with various devices via a network N including a mobile phone communication network, wireless LAN, the Internet, and the like.
- the communication unit 33 communicates with the server device 1 via the network N.
- the communication unit 33 transmits data provided from the processing unit 31 to other devices, and also provides data received from other devices to the processing unit 31.
- the display unit 34 is configured using a liquid crystal display or the like, and displays various images, characters, etc. based on the processing of the processing unit 31.
- the operation unit 35 accepts user operations and notifies the processing unit 31 of the accepted operations.
- The operation unit 35 receives user operations via input devices such as mechanical buttons or a touch panel provided on the surface of the display unit 34.
- The operation unit 35 may be an input device such as a mouse or a keyboard, and these input devices may be configured to be detachable from the terminal device 3.
- the camera 36 is configured using an image sensor such as a CCD (Charge Coupled Device) image sensor or a CMOS (Complementary Metal Oxide Semiconductor) image sensor.
- The camera 36 provides data of the images (moving images) captured by the image sensor to the processing unit 31.
- the camera 36 may be built into the terminal device 3 or may be configured to be detachable from the terminal device 3.
- The distance measuring sensor 37 is a sensor that measures the distance to an object by emitting, for example, infrared rays, ultrasonic waves, or electromagnetic waves and detecting the reflected waves. For example, a sensor known as LiDAR (Light Detection And Ranging) can be used.
- In the present embodiment, the distance measuring sensor 37 is used for the additional estimation process performed when it is estimated that the subject has a neurological disorder based on the facial image taken by the camera 36.
- the distance sensor 37 can measure the shape of the subject's face, and based on this measurement result, the server device 1 can estimate whether the subject has a neurological disorder.
- Note that the sensor provided in the terminal device 3 for the additional estimation process is not limited to the distance measuring sensor 37; it may be, for example, a sensor that detects sound (a microphone), or any other sensor.
- In the terminal device 3 according to the present embodiment, when the processing unit 31 reads out and executes the program 32a stored in the storage unit 32, a photographing processing unit 31a, a distance measurement processing unit 31b, a diagnostic test processing unit 31c, and the like are implemented in the processing unit 31 as software functional units.
- the photographing processing unit 31a performs a process of photographing the subject using the camera 36 and transmitting the obtained photographed image to the server device 1.
- the photographing processing unit 31a repeatedly performs photographing at a cycle of, for example, several times to several tens of times per second.
- the photographing processing unit 31a continuously performs photographing with the camera 36 regardless of whether or not there is a subject in the house, and transmits the image obtained by photographing to the server device 1.
- However, the photographing processing unit 31a may perform, for example, processing to detect a person in the captured image, detect the person's face, or identify the person captured in the image, and may select only images in which the subject's face is captured and transmit them to the server device 1. Furthermore, the photographing processing unit 31a may extract from the captured image a partial image in which the subject's face is captured and transmit the extracted partial image to the server device 1. Further, when the terminal device 3 is equipped with, for example, a human presence sensor, the photographing processing unit 31a may perform photographing only when someone is near the terminal device 3.
- the distance measurement processing unit 31b performs a process of measuring the surface shape of the target person's face by measuring the distance to the target person's face using the distance measurement sensor 37.
- In the present embodiment, the measurement by the distance measurement processing unit 31b is not performed at all times, but is started when an instruction to perform the additional estimation process is given by the server device 1.
- The distance measurement processing unit 31b grasps the position of the target person's face based on, for example, the image taken by the camera 36, extracts the information corresponding to the position of the face from the distance information measured by the distance measuring sensor 37, and transmits the extracted information to the server device 1 as information on the surface shape of the subject's face.
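- As a minimal sketch of this extraction, assuming the depth frame from the ranging sensor is registered (pixel-aligned) with the camera image, the face region can be cropped from the depth map using the face box found in the camera image; all values below are hypothetical.

```python
import numpy as np

def face_surface(depth_map: np.ndarray, bbox: tuple) -> np.ndarray:
    """Crop the face region from a depth frame.

    depth_map: HxW array of distances measured by the ranging sensor.
    bbox: (x0, y0, x1, y1) face box located in the camera image, assumed
          to be aligned with the depth frame.
    """
    x0, y0, x1, y1 = bbox
    return depth_map[y0:y1, x0:x1]

# Synthetic example: a 480x640 depth frame and a hypothetical face box.
depth = np.full((480, 640), 1.5)                        # flat scene, 1.5 m away
print(face_surface(depth, (200, 100, 360, 300)).shape)  # (200, 160)
```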
- the diagnostic test processing unit 31c performs a diagnostic test regarding neurological disorders by outputting messages and the like to the subject and accepting input from the subject.
- The diagnostic test processing unit 31c displays, for example, a question message given from the server device 1 on the display unit 34, receives input of the subject's answer to this question through the operation unit 35, and transmits the received answer to the server device 1.
- the output of the question and the input of the answer may be performed by voice input/output.
- The diagnostic test processing unit 31c also displays on the display unit 34 a message requesting a movement or facial expression given from the server device 1, photographs with the camera 36 the movement, facial expression, etc. performed by the subject in response to this request, and transmits the captured image (moving image or still image) to the server device 1.
- FIG. 4 is a flowchart for explaining the procedure of pre-processing performed in the information processing system according to the first embodiment.
- FIG. 5 is a schematic diagram for explaining pre-processing performed in the information processing system according to the first embodiment.
- the preprocessing may be performed by the server device 1, the terminal device 3, or one or more devices other than these. In this embodiment, it is assumed that the pre-processing is performed using the server device 1 and the terminal device 3.
- First, the server device 1 photographs the face of the subject in a normal state, which serves as the basis for estimating the presence or absence of a neurological disorder, using, for example, the camera 36 of the terminal device 3 (step S1).
- At this time, a plurality of images of the target person's face viewed from multiple directions are acquired by, for example, manually or automatically moving the terminal device 3 relative to the target person's face, or by having the target person move their face relative to the camera 36 of the terminal device 3.
- In addition, for example, a message requesting the subject to make various facial expressions is displayed on the display unit 34 of the terminal device 3, and a plurality of captured images of faces with various expressions are obtained.
- the various facial expressions include asymmetrical facial expressions.
- In the upper part of FIG. 5, an example of the target person's face images obtained in the process of step S1 is shown.
- Next, the server device 1 measures the three-dimensional shape of the target person's face using, for example, the distance measuring sensor 37 of the terminal device 3 (step S2). For this measurement as well, it is preferable to take measurements with the subject making various facial expressions. Further, the photographing of step S1 and the measurement of step S2 may be performed simultaneously.
- the server device 1 creates a three-dimensional shape model that reproduces the subject's face in a three-dimensional virtual space based on the measurement results in step S2 (step S3).
- the shape model created by the server device 1 in step S3 is a model that reproduces the shape (structure) of the subject's face, and does not reproduce the color or pattern of the surface of the face. In the middle part of FIG. 5, an example of the shape model created in the process of step S3 is shown.
- The server device 1 generates a three-dimensional model of the target person's face by pasting the images of the target person's face photographed in step S1 onto the surface of the shape model created in step S3 (step S4).
- The three-dimensional model generated in step S4 is a model whose orientation can be changed in the three-dimensional virtual space. Furthermore, it is preferable that a plurality of three-dimensional models be generated in association with a plurality of facial expressions of the subject, or that a model whose facial expression can be changed in the three-dimensional virtual space be generated.
- The lower part of FIG. 5 shows an example of the three-dimensional model generated in the process of step S4, in which a plurality of images corresponding to the three-dimensional model turned to a plurality of orientations are arranged horizontally.
- Finally, the server device 1 stores the face images acquired in step S1 and the three-dimensional model generated in step S4 in the reference information storage unit 12b of the storage unit 12 (step S5), and ends the pre-processing.
- These pieces of information stored in the reference information storage unit 12b of the server device 1 are used in the process of complementing the image of the subject's face taken by the terminal device 3 when the process of estimating the presence or absence of a neurological disorder in the subject is performed.
- After the above pre-processing has been performed, the terminal device 3 installed in the target person's residence continuously takes pictures with the camera 36 and continuously transmits the images of the target person obtained by the photographing (images in which the target person's face is captured) to the server device 1.
- the server device 1 acquires an image of the subject's face photographed by the terminal device 3, and performs a process of estimating whether or not the subject has a neurological disorder.
- FIG. 6 is a flowchart showing a procedure for estimating the presence or absence of a neurological disorder performed by the server device 1 according to the present embodiment.
- The face image acquisition unit 11a of the processing unit 11 of the server device 1 communicates with the terminal device 3 through the communication unit 13 and acquires the face image of the target person photographed by the terminal device 3 with the camera 36 (step S21).
- the face image acquisition unit 11a determines whether the face image acquired in step S21 is a front-view image obtained by photographing the subject's face from the front (step S22). If the image is a front view image (S22: YES), the processing unit 11 advances the process to step S24. If the image is not a front-view image (S22: NO), the processing unit 11 performs face image complementation processing (Step S23), and advances the process to Step S24. Note that the face image complementation process performed in step S23 will be described later.
- The neurological disorder estimation unit 11d of the processing unit 11 performs the process of estimating the presence or absence of a neurological disorder in the subject based on the face image acquired from the terminal device 3 or the face image obtained by complementing it (step S24). At this time, the neurological disorder estimation unit 11d compares the left half and the right half of the subject's face captured in the face image and estimates the presence or absence of a neurological disorder based on the left-right symmetry. The neurological disorder estimation unit 11d also compares the face image acquired from the terminal device 3, or the complemented face image, with the face image of the subject in a normal state stored in the reference information storage unit 12b, and estimates the presence or absence of a neurological disorder based on the difference.
- The processing unit 11 determines whether the subject is estimated to have a neurological disorder by the neurological disorder estimation process of step S24 (step S25). When it is estimated that there is no neurological disorder (S25: NO), the processing unit 11 returns the process to step S21 and repeats the processing described above. When it is estimated that there is a neurological disorder (S25: YES), the second estimation unit 11e of the processing unit 11 performs the second estimation process using the distance measuring sensor 37 provided in the terminal device 3 (step S26). In the second estimation process, the second estimation unit 11e, for example, transmits a command to the terminal device 3 to perform measurement with the distance measuring sensor 37, and acquires from the terminal device 3 the measurement result (face information) of the surface shape of the subject's face obtained by the distance measuring sensor 37.
- In the second estimation process, the second estimation unit 11e can estimate the presence or absence of a neurological disorder in the subject using the distance measuring sensor 37, for example by a method such as estimating based on the left-right symmetry of the surface shape of the subject's face, or based on a comparison with the subject's normal facial surface shape.
- the processing unit 11 determines whether the subject is estimated to have a neurological disorder through the second estimation process of step S26 (step S27). If it is estimated that there is no neurological disorder (S27: NO), the processing unit 11 returns the process to step S21 and repeats the above-described process. If it is estimated that the subject has a neurological disorder (S27: YES), the diagnostic test processing unit 11f of the processing unit 11 performs a diagnostic test regarding the neurological disorder by conducting a question and answer session with the subject via the terminal device 3. (Step S28). The diagnostic test processing unit 11f causes the terminal device 3 to output the message, for example, by transmitting a message asking the subject a question to the terminal device 3, and also acquires the answer that the terminal device 3 receives from the subject.
- the diagnostic test processing unit 11f transmits a message requesting the subject to make an action or facial expression to the terminal device 3, causes the terminal device 3 to output the message, and photographs the motion or facial expression performed by the subject in response to this message using the camera 36. An image is acquired from the terminal device 3.
- the diagnostic test processing unit 11f performs these processes multiple times to collect information about the subject, and determines the presence or absence and degree of neurological disorder based on indicators such as CPSS, NIHSS, or KPSS.
- Thereafter, the notification processing unit 11g of the processing unit 11 transmits information on the estimation result of step S24, the estimation result of step S26, and/or the diagnostic test result of step S28 to the terminal device 3, thereby notifying the subject of the results of the estimation and diagnosis regarding the neurological disorder (step S29), and the processing ends.
- Note that the notification processing unit 11g may send the notification to a terminal device other than the terminal device 3 installed at the target person's home, such as a terminal device used by the target person's family or a terminal device used by the target person's doctor.
- FIG. 7 is a flowchart showing the procedure of face image complementation processing performed by the server device 1 according to the first embodiment.
- The face image complementation process shown in this flowchart is the process executed in step S23 when it is determined in the flowchart shown in FIG. 6 that the face image acquired from the terminal device 3 is not a front-view image.
- the three-dimensional model generation unit 11b of the processing unit 11 of the server device 1 reads out the reference three-dimensional model stored in the reference information storage unit 12b (step S41).
- The three-dimensional model generation unit 11b pastes the face image of the target person photographed by the camera 36 of the terminal device 3 onto the reference three-dimensional model read out in step S41, thereby generating a three-dimensional model of the target person's current face (step S42).
- At this time, the three-dimensional model generation unit 11b extracts feature points from the face image and, for example, sets the facial expression of the reference three-dimensional model so that the feature points of the face of the reference three-dimensional model and the feature points of the face image match, or selects the reference three-dimensional model whose feature points best match from among a plurality of reference three-dimensional models with different facial expressions.
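- A minimal sketch of this selection step follows, assuming each stored reference three-dimensional model carries landmarks already aligned to the same coordinate frame as the captured image; the model names and coordinates are hypothetical.

```python
import numpy as np

def select_reference_model(face_pts: np.ndarray, models: dict) -> str:
    """Pick the reference model whose landmark layout is closest to the
    landmarks extracted from the captured face image.

    face_pts: (N, 2) landmark coordinates from the captured image.
    models:   expression name -> (N, 2) landmark coordinates of that
              reference model.
    """
    return min(models,
               key=lambda name: float(np.linalg.norm(models[name] - face_pts)))

# Hypothetical two-landmark example with 'neutral' and 'smile' models.
captured = np.array([[0.0, 0.0], [1.0, 1.1]])
models = {"neutral": np.array([[0.0, 0.0], [1.0, 1.0]]),
          "smile":   np.array([[0.0, 0.2], [1.0, 1.6]])}
print(select_reference_model(captured, models))  # neutral
```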
- Next, the face image complementation unit 11c of the processing unit 11 generates a front-view two-dimensional image from the three-dimensional model of the subject's face generated in step S42, thereby generating a face image in which the missing portions of the face image captured by the terminal device 3 are complemented (step S43), and ends the face image complementation process.
- For example, the face image complementation unit 11c can obtain the front-view face image by setting a virtual camera in front of the three-dimensional model of the subject's face in the three-dimensional virtual space and acquiring a two-dimensional image of the three-dimensional model with this virtual camera.
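- Generating that two-dimensional image amounts to projecting the textured model through a pinhole camera placed in front of the face. The sketch below is a minimal point-splatting illustration with hypothetical values; a real renderer would rasterize mesh triangles with z-buffering.

```python
import numpy as np

def render_front_view(pts: np.ndarray, colors: np.ndarray,
                      f: float = 600.0, size: tuple = (480, 480)) -> np.ndarray:
    """Project textured 3D face vertices into a front-view image.

    pts:    (N, 3) vertices in camera coordinates with z > 0; the virtual
            camera sits at the origin looking along the +z axis.
    colors: (N, 3) uint8 RGB values sampled from the pasted face image.
    """
    h, w = size
    image = np.zeros((h, w, 3), dtype=np.uint8)
    u = (f * pts[:, 0] / pts[:, 2] + w / 2).astype(int)  # pinhole projection
    v = (f * pts[:, 1] / pts[:, 2] + h / 2).astype(int)
    ok = (u >= 0) & (u < w) & (v >= 0) & (v < h)         # clip to the frame
    image[v[ok], u[ok]] = colors[ok]
    return image

# Hypothetical single red vertex 1 m in front of the camera.
print(render_front_view(np.array([[0.0, 0.0, 1.0]]),
                        np.array([[255, 0, 0]], dtype=np.uint8))[240, 240])
```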
- the facial image of the subject to be subjected to face image complementation processing is a face that includes at least one feature point indicating a characteristic such as facial distortion. It is an image. For example, for a subject whose right half of the face has features such as distortion, a face image in which only the left half of the face is photographed is not subject to face image complementation processing. If the face image contains the left half of the face and at least a part of the right half of the face (the part where features such as distortion appear), a reference 3D model can be selected based on the feature points included in this part. Through face image complementation processing, it is possible to generate a front-view face image in which distortion or the like appears on the right half of the face.
- In this case, the face image complementing unit 11c can generate a front-view face image in which the right half of the face is distorted and the left half is undistorted by complementing the undistorted left half of the face.
- FIG. 8 is a schematic diagram for explaining a method for estimating the presence or absence of a neurological disorder performed by the server device 1 according to the present embodiment.
- FIG. 8 shows an example of a face image of a subject who can be estimated to have a neurological disorder. This face image is a photographed image photographed by the camera 36 of the terminal device 3, or an image obtained by supplementing missing portions based on this photographed image.
- the neurological disorder estimating unit 11d of the processing unit 11 of the server device 1 extracts feature points from the facial image of the subject.
- the face image in FIG. 8 shows feature points at the corners of the mouth and the ends of the eyebrows extracted from the face image of the subject.
- The neurological disorder estimating unit 11d divides the subject's face into left and right halves along the median plane (the center line indicated by a dashed line in FIG. 8), and determines the left-right symmetry of the subject's face by comparing the positional relationships of the feature points on the left half and the right half.
- In FIG. 8, for example, when the position of the right corner of the subject's mouth is compared with that of the left corner, the right corner is lower than the left corner. Similarly, the right end of the subject's eyebrow is lower than the left end.
- The neurological disorder estimating unit 11d calculates, for feature points on the left and right sides of the face such as the corners of the mouth and the ends of the eyebrows, the difference in their coordinates or in their distances from the center line, and can estimate that there is a neurological disorder if this difference exceeds a predetermined threshold.
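A minimal sketch of this symmetry check follows; the landmark names, the normalization, and the threshold value are illustrative assumptions, not values given in the patent:

```python
# Hypothetical mirrored landmark pairs: (right-side point, left-side point)
MIRRORED_PAIRS = [("mouth_corner_r", "mouth_corner_l"),
                  ("eyebrow_end_r", "eyebrow_end_l")]

def suspect_neurological_disorder(landmarks, center_x=0.5, threshold=0.04):
    """Return True if any mirrored feature-point pair is too asymmetric.

    landmarks: dict of name -> (x, y), normalized by the face bounding box,
               with the median line assumed vertical at x = center_x.
    Checks both the vertical offset (one side lower than the other) and the
    difference in horizontal distance from the center line.
    """
    for right, left in MIRRORED_PAIRS:
        rx, ry = landmarks[right]
        lx, ly = landmarks[left]
        dy = abs(ry - ly)                                  # height difference
        dx = abs(abs(rx - center_x) - abs(lx - center_x))  # offset from midline
        if max(dy, dx) > threshold:
            return True
    return False

# Example resembling FIG. 8: right mouth corner and eyebrow end sit lower
lm = {"mouth_corner_r": (0.63, 0.74), "mouth_corner_l": (0.37, 0.67),
      "eyebrow_end_r": (0.68, 0.32), "eyebrow_end_l": (0.32, 0.29)}
print(suspect_neurological_disorder(lm))  # True -> disorder suspected
```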
- The neurological disorder estimating unit 11d also compares a reference face image of the subject in a normal state, photographed in advance, with the face image of the subject photographed by the camera 36 of the terminal device 3 or with the face image complemented from it, to estimate whether the subject has a neurological disorder.
- In this case, the neurological disorder estimating unit 11d extracts feature points from the reference face image stored in the reference information storage unit 12b, extracts feature points from the face image acquired from the terminal device 3, and compares the feature points of the two images.
- the neurological disorder estimating unit 11d can estimate that there is a neurological disorder when the difference in the positions of corresponding feature points in both images exceeds a predetermined threshold.
- FIG. 9 is a schematic diagram showing an example of a notification screen of the estimation result of neurological disorder.
- the notification screen shown in FIG. 9 may be displayed, for example, after the estimation process based on the face image is performed and before the diagnostic test is performed, or may be displayed, for example, after the diagnostic test is completed.
- the server device 1 transmits information such as an image and a message for displaying a notification screen to the terminal device 3 at an appropriate timing, and the terminal device 3 that receives the information displays the illustrated notification screen on the display unit 34.
- Based on the information from the server device 1, the terminal device 3 displays, for example, the subject's face image from one week ago and the current face image used for the estimation side by side, as shown in FIG. 9. At this time, if the missing portions of the face image photographed by the camera 36 have been complemented, the terminal device 3 displays the current face image so that the actually photographed portions and the complemented portions can be distinguished. For this distinguishable display, a method such as using different colors or shading may be adopted. In the example shown in FIG. 9, the current face image is displayed with the complemented portion shaded.
- As the result of the estimation of a neurological disorder by the server device 1, the terminal device 3 displays, below the two face images, a message such as "The right corner of the mouth and the end of the eyebrow are lower than the left, and a neurological disorder is suspected. We recommend that you undergo a detailed examination at a hospital."
- The current face image is also displayed with an indication of which parts of the face (which feature points) are the basis for the estimation that the subject has a neurological disorder.
- In the example shown in FIG. 9, the terminal device 3 displays circles surrounding the right corner of the mouth and the right eyebrow superimposed on the face image as the basis for the estimation that the subject has a neurological disorder. Note that the method of displaying the basis portions is not limited to the superimposed circles shown in FIG. 9; any display method may be used.
- In the information processing system according to the present embodiment, the terminal device 3 installed in the subject's home or the like continuously photographs the subject.
- The server device 1 estimates the presence or absence of a neurological disorder in the subject based on the face image photographed by the terminal device 3, and may store the face image used when it estimates that there is no neurological disorder. In this way, the server device 1 can store and collect face images of the subject in a normal state (with no neurological disorder), and can use the collected face images as reference face images in subsequent estimation processing.
- The terminal device 3 may also continuously perform measurements using the distance measuring sensor 37, and the server device 1 may store and collect the measurement results obtained by the distance measuring sensor 37 together with the face images of the subject in a normal state.
- The server device 1 collects the face images taken by the camera 36 and the measurement results of the distance measuring sensor 37 (i.e., the surface shape of the subject's face), and can generate a three-dimensional model of the subject's face based on the collected information. The server device 1 continuously collects this information, periodically generates a three-dimensional model of the subject's face, and stores the collected face images and the generated three-dimensional model in the reference information storage unit 12b as the subject's reference face image and reference three-dimensional model.
- The collection of face images and surface shape information by the server device 1 need not cover all of the information acquired from the terminal device 3; for example, information may be collected about several times a day. The server device 1 stores, for example, several pieces of information selected under various conditions from among the many face images and surface shape measurements obtained by the terminal device 3 in one day. The selection conditions may include, for example, that the subject is viewed from the front, that the brightness of the image exceeds a predetermined threshold, that the ratio of the face to the entire photographed image exceeds a predetermined threshold, or that the subject shows a predetermined facial expression such as a smile; any selection conditions may be adopted. A minimal filter along these lines is sketched below.
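The following sketch illustrates such a selection filter; the dictionary keys and threshold values are assumptions for the example, and the expression label would come from a separate classifier:

```python
def keep_as_reference(sample, min_brightness=90.0, min_face_ratio=0.15,
                      allowed_expressions=("neutral", "smile")):
    """Decide whether one captured sample is kept for reference collection.

    sample: dict with 'is_front_view' (bool), 'brightness' (mean pixel value),
            'face_ratio' (face area / image area), and 'expression' (label).
    """
    return (sample["is_front_view"]
            and sample["brightness"] >= min_brightness
            and sample["face_ratio"] >= min_face_ratio
            and sample["expression"] in allowed_expressions)

# Example: a bright, front-view, smiling capture is kept
print(keep_as_reference({"is_front_view": True, "brightness": 132.0,
                         "face_ratio": 0.28, "expression": "smile"}))  # True
```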
- the generation of the reference three-dimensional model by the server device 1 may be performed at a predetermined period, such as once every several months.
- The server device 1 may overwrite and update the already stored old reference face image and reference 3D model with the new reference face image and reference 3D model, or may add and store the new reference face image and reference 3D model while keeping the old ones.
- In the information processing system according to the present embodiment configured as described above, the server device 1 acquires the face image (face information) of the subject photographed (detected) by the camera 36 (sensor) of the terminal device 3, generates a 3D model of the subject's face (facial structure information) based on the acquired face image, complements the missing portions of the face image based on the generated 3D model, and estimates the presence or absence of a neurological disorder in the subject based on the complemented face image. This allows the system to be expected to accurately estimate the presence or absence of a neurological disorder based on a face image photographed from a direction other than the front.
- Further, since the positional relationship between the subject and the device, such as a camera, that photographs the subject's face is not limited, restrictions on where the device can be installed in a house or the like can be expected to be relaxed, and real-time monitoring and abnormality detection of the subject can be expected.
- In the information processing system according to the present embodiment, the server device 1 stores a reference three-dimensional model of the subject generated in advance in the reference information storage unit 12b, and generates a three-dimensional model of the subject based on the face image photographed by the camera 36 of the terminal device 3 and the reference three-dimensional model stored in the reference information storage unit 12b. The reference 3D model is generated based on multi-view face images (multi-view face information) of the subject's face taken with a camera from two or more directions, and on a shape model (three-dimensional face information) that reproduces the surface shape of the subject's face measured with a distance measuring sensor.
- Note that, instead of generating and storing a reference 3D model in advance, the server device 1 may store the multi-view face images and the shape model for generating the reference 3D model, and generate the reference 3D model as necessary.
- Further, in the information processing system according to the present embodiment, the server device 1 stores and accumulates in the storage unit 12 the face images of the subject taken by the camera 36 of the terminal device 3 and the shape information of the subject's face measured by the distance measuring sensor 37. Based on the accumulated information, the server device 1 can update the reference 3D model and the like stored in the reference information storage unit 12b, and by using the updated reference 3D model, accurate complementation of the missing portions of the face image can be expected.
- Further, the server device 1 estimates whether the subject has a neurological disorder based on the result of comparing the right half of the face image of the subject photographed by the terminal device 3 (or of the face image complemented from it) with the right half of the face image stored in the reference information storage unit 12b, and the result of comparing the left half of that face image with the left half of the face image stored in the reference information storage unit 12b. This allows the information processing system to be expected to accurately estimate the presence or absence of a neurological disorder in the subject.
- The information processing system according to the present embodiment also estimates the presence or absence of a neurological disorder in the subject based on the symmetry of the left half and the right half of the subject's face. For example, the server device 1 can estimate that the subject has a neurological disorder if the right half of the face is lower than the left half, or if the left half of the face is lower than the right half.
- the information processing system according to this embodiment can be expected to accurately estimate the presence or absence of a neurological disorder in a subject.
- Further, in the information processing system according to the present embodiment, when the face image photographed by the terminal device 3 is other than a front-view face image, the complemented front-view face image is displayed on the display unit of the terminal device 3. The terminal device 3 also displays the complemented face image and the reference face image stored in the reference information storage unit 12b side by side; note that the terminal device 3 may instead display the complemented face image and the reference face image superimposed. Further, the terminal device 3 displays, together with the subject's face image, which part is the basis for the estimation result of the presence or absence of a neurological disorder. As a result, the information processing system according to the present embodiment can be expected to present the estimation results of the presence or absence of a neurological disorder to the subject in an easy-to-understand manner.
- Further, in the information processing system according to the present embodiment, when the server device 1 estimates, based on the face image photographed by the camera 36 of the terminal device 3, that the subject has a neurological disorder, the distance measuring sensor 37 of the terminal device 3 measures the surface shape of the subject's face, and based on the measured surface shape, the server device 1 further estimates whether the subject has a neurological disorder. Note that the information processing system may first perform the estimation based on the measurement result of the distance measuring sensor 37 and then perform the estimation based on the image captured by the camera 36. Further, the information processing system according to the present embodiment conducts a diagnostic test on the subject when these estimation processes indicate that the subject has a neurological disorder. With these features, the information processing system according to the present embodiment can be expected to accurately determine the presence or absence and the degree of a neurological disorder in the subject.
- In the present embodiment, the camera 36 is used as the sensor for acquiring the subject's face information in the initial process of estimating the presence or absence of a neurological disorder, but this is only an example, and the sensor is not limited to this.
- the sensor may be, for example, the distance measuring sensor 37, or may be a sensor other than these.
- the distance measuring sensor 37 is used as the second sensor in the process of estimating the presence or absence of a second neurological disorder, but this is merely an example and is not limited to this.
- the second sensor may be, for example, a microphone, an event-driven photographing device that extracts luminance changes, a millimeter wave sensor, an ultrasonic sensor, a thermography camera, or the like, or may be a sensor other than these.
- the second sensor may include multiple types of sensors.
- The information acquired by the second sensor is not limited to face information; it may also be information indicating abnormalities of the subject such as arm fluctuation (body sway or tremor), arm weakness, and speech abnormalities (aphasia or dysarthria).
- In the present embodiment, the server device 1 performs the process of estimating the presence or absence of a neurological disorder in the subject based on the face image taken by the camera of the terminal device 3, but the configuration is not limited to this; the terminal device 3 may perform the estimation process, and in this case the information processing system need not include the server device 1.
- In the information processing system according to the first embodiment, the server device 1 fills in the missing portions of the subject's face image photographed by the terminal device 3 using a reference three-dimensional model and a reference face image stored in advance. In contrast, in the information processing system according to the second embodiment, the server device 1 fills in the missing portions of the subject's face image photographed by the terminal device 3 by using learning models that have undergone machine learning in advance, so-called AI (Artificial Intelligence), without storing reference information such as a reference 3D model and a reference face image in advance.
- FIG. 10 is a schematic diagram for explaining the configuration of a learning model included in the server device 1 according to the second embodiment.
- In the second embodiment, the storage unit 12 is not provided with the reference information storage unit 12b; instead, information regarding trained learning models is stored in the storage unit 12.
- the information regarding the learning model may include, for example, information defining the structure of the learning model and information such as values of internal parameters determined by machine learning.
- The server device 1 according to the second embodiment uses two learning models, the shape estimation model 51 and the complementary model 52, to complement the missing portions of the subject's face image.
- The shape estimation model 51 is a learning model that has undergone machine learning in advance so as to receive a two-dimensional face image as input, estimate the three-dimensional surface shape of the face captured in the face image, and output the estimation result as shape information.
- For example, a trained face mesh learning model may be used as the shape estimation model 51.
- The face mesh learning model is a machine learning model that detects key points (feature points) of a face from an image, and can output several hundred feature points of a human face as three-dimensional coordinates.
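As one concrete example (the patent does not name a specific library, so this choice is an assumption), MediaPipe's Face Mesh solution returns 468 landmarks per face, each with normalized x and y coordinates and a relative depth z, and could be used roughly as follows:

```python
import cv2
import mediapipe as mp

# Face Mesh detects one face and returns 468 three-dimensional key points.
face_mesh = mp.solutions.face_mesh.FaceMesh(static_image_mode=True,
                                            max_num_faces=1)

image = cv2.imread("subject_face.jpg")  # hypothetical input image
results = face_mesh.process(cv2.cvtColor(image, cv2.COLOR_BGR2RGB))

if results.multi_face_landmarks:
    landmarks = results.multi_face_landmarks[0].landmark
    # x, y are normalized to the image size; z is depth relative to the face
    shape_info = [(p.x, p.y, p.z) for p in landmarks]
    print(len(shape_info))  # 468 feature points
```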
- FIG. 11 is a schematic diagram showing an example of shape information output by the shape estimation model 51.
- This figure shows multiple feature points output by a face mesh learning model plotted in a three-dimensional virtual space.
- By plotting these feature points, the shape of a human face is reproduced, and the orientation of the face can be changed in the three-dimensional virtual space.
- Since the face mesh learning model is an existing technology, a detailed explanation of its machine learning method and the like is omitted.
- a learning model other than the face mesh learning model may be adopted.
- Note that the shape estimation model 51 can output shape information that reproduces the distortion of the subject's face, provided that a face image including at least one feature point indicating a characteristic such as facial distortion is input. Conversely, even if the subject has a neurological disorder, when a face image that does not include any feature point indicating a characteristic such as facial distortion is input, the shape estimation model 51 may output shape information that reproduces a normal face without reproducing the facial distortion.
- The complementary model 52 is a learning model that has undergone machine learning in advance so as to receive as input a two-dimensional face image and the shape information estimated from that face image by the shape estimation model 51, and to output a three-dimensional model of the face captured in the face image.
- the three-dimensional model output by the complementary model 52 is a three-dimensional model in which a facial image is pasted onto the input shape information, and missing portions of the facial image are complemented.
- the complementary model 52 may be a learning model such as a DNN (Deep Neural Network) or a CNN (Convolutional Neural Network).
- The complementary model 52 can be generated, for example, by acquiring face images, shape information, and three-dimensional models using the same procedure as the three-dimensional model generation performed as pre-processing in the information processing system according to the first embodiment (see FIG. 5), and performing machine learning using these as training data. Note that since the machine learning processing of these learning models is an existing technology, a detailed explanation is omitted.
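As an illustration only (the patent specifies neither the architecture nor the output representation; the vertex count, layer sizes, and per-vertex RGB texture below are assumptions), a DNN-based complementary model could be sketched in PyTorch as follows:

```python
import torch
import torch.nn as nn

class ComplementaryModel(nn.Module):
    """Sketch: predict a per-vertex RGB texture for a fixed face topology
    from a 2D face image plus the 3D vertex coordinates estimated by the
    shape estimation model. Texture + vertices together form the 3D model."""

    def __init__(self, num_vertices=468):
        super().__init__()
        # CNN image encoder -> one global feature vector per image
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(64, 128, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        # Shape encoder for the flattened (num_vertices, 3) coordinates
        self.shape_fc = nn.Sequential(nn.Linear(num_vertices * 3, 256), nn.ReLU())
        # Decoder: fused features -> one RGB colour per vertex
        self.decoder = nn.Sequential(
            nn.Linear(128 + 256, 512), nn.ReLU(),
            nn.Linear(512, num_vertices * 3), nn.Sigmoid(),
        )

    def forward(self, image, vertices):
        f_img = self.encoder(image)                         # (B, 128)
        f_shape = self.shape_fc(vertices.flatten(1))        # (B, 256)
        rgb = self.decoder(torch.cat([f_img, f_shape], 1))  # (B, V*3)
        return rgb.view(-1, vertices.shape[1], 3)           # texture per vertex

# Training would minimize, e.g., an MSE loss between the predicted texture and
# textures sampled from the 3D models produced in the pre-processing stage.
model = ComplementaryModel()
out = model(torch.rand(2, 3, 128, 128), torch.rand(2, 468, 3))
print(out.shape)  # torch.Size([2, 468, 3])
```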
- FIG. 12 is a flowchart showing the procedure of face image complementation processing performed by the server device 1 according to the second embodiment.
- The three-dimensional model generation unit 11b of the processing unit 11 of the server device 1 according to the second embodiment inputs the face image of the subject photographed by the camera 36 of the terminal device 3 into the shape estimation model 51, which has undergone machine learning in advance (step S61).
- the three-dimensional model generation unit 11b acquires the shape information output by the shape estimation model 51 (step S62).
- Next, the three-dimensional model generation unit 11b inputs the subject's face image and the shape information acquired in step S62 into the complementary model 52 (step S63).
- the three-dimensional model generation unit 11b generates a three-dimensional model of the subject's face by acquiring the three-dimensional model output by the complementary model 52 (step S64).
- The face image complementing unit 11c of the processing unit 11 generates a front-view two-dimensional image from the three-dimensional model of the subject's face acquired in step S64, thereby generating a face image in which the missing portions of the face image captured by the terminal device 3 are complemented (step S65), and the face image complementation process ends.
- In the information processing system according to the second embodiment, similarly to the first embodiment, the server device 1 may store and accumulate the face images taken by the camera 36 of the terminal device 3 and/or the shape information measured by the distance measuring sensor 37 and the like. The accumulated information can be used to retrain the complementary model 52.
- For example, the server device 1 can store information several times a day and retrain the complementary model 52 once a week.
- Further, suppose that the server device 1 complements the face image of the subject photographed by the camera 36 of the terminal device 3 and estimates, based on the complemented face image, that the subject has a neurological disorder, but then, for example within a predetermined time, estimates that there is no neurological disorder based on a front-view face image taken by the camera 36 without performing complementation; in this case, the server device 1 determines that the initial estimation result was incorrect.
- The server device 1 then stores, as information for retraining, information such as the face image used in the incorrect initial estimation process, the face image complemented from it, the measurement result by the distance measuring sensor 37 obtained following the initial estimation process, and the face image used in the subsequent estimation process. By accumulating information in this way whenever the server device 1 determines that an estimation result was incorrect, and using it to retrain the complementary model 52, an improvement in the accuracy of face image complementation by the complementary model 52 can be expected.
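The bookkeeping described here might be sketched as follows; the field names and the time window are assumptions made for illustration:

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class RetrainingBuffer:
    """Collects samples whose complemented-image estimate of 'disorder' was
    contradicted by a later, uncomplemented front-view estimate of 'normal'."""
    window_sec: float = 600.0  # assumed value for the "predetermined time"
    samples: List[dict] = field(default_factory=list)

    def record_if_contradicted(self, first: dict, followup: dict) -> Optional[dict]:
        # first:    {'time', 'raw_image', 'complemented_image', 'range_scan',
        #            'estimate'} from the initial (complemented) estimation
        # followup: {'time', 'raw_image', 'estimate'} from a later front-view
        #           estimation performed without complementation
        if (first["estimate"] == "disorder"
                and followup["estimate"] == "normal"
                and followup["time"] - first["time"] <= self.window_sec):
            sample = {
                "raw_image": first["raw_image"],
                "complemented_image": first["complemented_image"],
                "range_scan": first["range_scan"],
                "followup_image": followup["raw_image"],
            }
            self.samples.append(sample)
            return sample
        return None
```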
- In the information processing system according to the second embodiment configured as described above, the server device 1 includes learning models (the combination of the shape estimation model 51 and the complementary model 52) that have undergone machine learning so as to receive a face image of the subject as input and output a three-dimensional model of the subject.
- The server device 1 generates a three-dimensional model from the subject's face image by inputting the face image photographed by the camera 36 of the terminal device 3 into the learning models and acquiring the three-dimensional model they output. This allows the information processing system according to the second embodiment to be expected to accurately generate a three-dimensional model of the subject and to accurately complement the missing portions of the face image.
Description
FIG. 1 is a schematic diagram for explaining an overview of the information processing system according to the present embodiment. The information processing system according to the present embodiment comprises a server device 1 that performs processing for estimating the presence or absence of a neurological disorder in a subject, and one or more terminal devices 3 that photograph the subject's face images and the like required for the estimation processing performed by the server device 1. In this information processing system, the terminal device 3 is installed in the house where the subject lives (for example, in the living room or dining room), in a nursing care facility, in a medical facility, or the like. The terminal device 3 is equipped with a camera, periodically photographs the subject in the house, and transmits the photographed images to the server device 1.
FIG. 2 is a block diagram showing the configuration of the server device 1 according to the first embodiment. The server device 1 according to the present embodiment comprises a processing unit 11, a storage unit (storage) 12, a communication unit (transceiver) 13, and the like. In the present embodiment, the description assumes that the processing is performed by a single server device, but the processing may be distributed among a plurality of server devices.
In the information processing system according to the first embodiment, acquisition of information on the subject's face in a normal state (a state with no neurological disorder) is performed in advance as pre-processing. FIG. 4 is a flowchart for explaining the procedure of the pre-processing performed in the information processing system according to the first embodiment. FIG. 5 is a schematic diagram for explaining the pre-processing performed in the information processing system according to the first embodiment. The pre-processing may be performed by the server device 1, by the terminal device 3, or by one or more other devices. In the present embodiment, the pre-processing is assumed to be performed using the server device 1 and the terminal device 3.
In the information processing system according to the present embodiment, the terminal device 3 installed in the subject's house continuously takes pictures with the camera 36 and continuously transmits the images of the subject (images of the subject's face) obtained by the photographing to the server device 1. The server device 1 acquires the images of the subject's face taken by the terminal device 3 and performs processing for estimating the presence or absence of a neurological disorder in the subject. FIG. 6 is a flowchart showing the procedure of the processing for estimating the presence or absence of a neurological disorder performed by the server device 1 according to the present embodiment.
In the information processing system according to the present embodiment, when the server device 1 estimates that the subject has a neurological disorder, information is displayed on the terminal device 3 to notify the subject. FIG. 9 is a schematic diagram showing an example of a notification screen of the estimation result of a neurological disorder. The notification screen shown in FIG. 9 may be displayed, for example, after the estimation processing based on the face image is performed and before the diagnostic test is conducted, or after the diagnostic test is completed. The server device 1 transmits information such as images and messages for displaying the notification screen to the terminal device 3 at an appropriate timing, and the terminal device 3 that receives this information displays the illustrated notification screen on the display unit 34.
In the information processing system according to the present embodiment, the terminal device 3 installed in the subject's home or the like continuously photographs the subject. The server device 1 estimates the presence or absence of a neurological disorder in the subject based on the face images of the subject taken by the terminal device 3, and may store the face image used when it estimates that there is no neurological disorder. The server device 1 can store and collect face images of the subject in a normal state (with no neurological disorder) and use the collected face images as reference face images in subsequent estimation processing.
In the information processing system according to the present embodiment configured as described above, the server device 1 acquires the face image (face information) of the subject photographed (detected) by the camera 36 (sensor) of the terminal device 3, generates a three-dimensional model (facial structure information) of the subject's face based on the acquired face image, complements the missing portions of the face image based on the generated three-dimensional model, and estimates the presence or absence of a neurological disorder in the subject based on the complemented face image. This allows the information processing system to be expected to accurately estimate the presence or absence of a neurological disorder in the subject based on a face image taken from a direction other than the front. Further, since the positional relationship between the subject and a device such as a camera that photographs the subject's face is not limited, restrictions on where the device can be installed in a house or the like can be expected to be relaxed, and real-time monitoring and abnormality detection of the subject can be expected.
In the information processing system according to the first embodiment, the server device 1 complements the missing portions of the subject's face image taken by the terminal device 3 using a reference three-dimensional model and a reference face image stored in advance. In contrast, in the information processing system according to the second embodiment, the server device 1 complements the missing portions of the subject's face image taken by the terminal device 3 by using learning models that have undergone machine learning in advance, so-called AI (Artificial Intelligence), without storing reference information such as a reference three-dimensional model and a reference face image in advance.
3 terminal device
11 processing unit
11a face image acquisition unit
11b three-dimensional model generation unit
11c face image complementing unit
11d neurological disorder estimating unit
11e second estimating unit
11f diagnostic test processing unit
11g notification processing unit
12 storage unit
12a server program
12b reference information storage unit
13 communication unit
31 processing unit
31a photographing processing unit
31b distance measurement processing unit
31c diagnostic test processing unit
32 storage unit
32a program
33 communication unit
34 display unit
35 operation unit
36 camera
37 distance measuring sensor
51 shape estimation model
52 complementary model
98, 99 recording medium
N network
Claims (22)
1. A computer program causing a computer to execute processing of: acquiring face information of a subject detected by a sensor; generating facial structure information of the subject based on the acquired face information; complementing, based on the generated facial structure information, missing face information of the subject in the acquired face information; and estimating the presence or absence of a neurological disorder in the subject based on the complemented face information.
2. The computer program according to claim 1, wherein reference facial structure information of the subject generated in advance is acquired from a storage unit, and the facial structure information of the subject is generated based on the face information detected by the sensor and the reference facial structure information stored in the storage unit.
3. The computer program according to claim 2, wherein the reference facial structure information includes multi-view face information acquired by a sensor from two or more directions with respect to the front of the subject's face.
4. The computer program according to claim 2 or 3, wherein the reference facial structure information includes three-dimensional face information in which the three-dimensional shape of the subject's face is acquired by a sensor.
5. The computer program according to any one of claims 2 to 4, wherein the reference facial structure information includes facial structure information generated based on multi-view face information acquired by a sensor from two or more directions with respect to the front of the subject's face, and/or facial structure information generated based on three-dimensional face information in which the three-dimensional shape of the subject's face is acquired by a sensor.
6. The computer program according to any one of claims 2 to 5, wherein acquired face information, or facial structure information generated based on that face information, is stored in the storage unit as the reference facial structure information.
7. The computer program according to any one of claims 2 to 6, wherein the presence or absence of a neurological disorder in the subject is estimated based on a result of comparing the right half of the subject's face based on the complemented face information with the right half of the face based on the reference facial structure information, and a result of comparing the left half of the subject's face based on the complemented face information with the left half of the face based on the reference facial structure information.
8. The computer program according to claim 7, wherein the subject is estimated to have a neurological disorder when the right half of the face based on the complemented face information is lower than the right half of the face based on the reference facial structure information, or when the left half of the face based on the complemented face information is lower than the left half of the face based on the reference facial structure information.
9. The computer program according to any one of claims 1 to 8, wherein the facial structure information of the subject is generated by inputting the face information detected by the sensor into a learning model that has undergone machine learning so as to receive face information of a subject as input and output facial structure information of the subject, and acquiring the facial structure information output by the learning model.
10. The computer program according to any one of claims 1 to 9, wherein the presence or absence of a neurological disorder in the subject is estimated based on the symmetry of the right half and the left half of the subject's face based on the complemented face information.
11. The computer program according to claim 10, wherein the subject is estimated to have a neurological disorder when the right half of the face is lower than the left half, or when the left half of the face is lower than the right half.
12. The computer program according to any one of claims 1 to 11, wherein, after the subject is estimated to have a neurological disorder based on face information obtained by complementing the face information detected by the sensor, if the subject is estimated to have no neurological disorder based on face information detected by the sensor without complementation, the face information used when the subject was estimated to have a neurological disorder is stored in a storage unit.
13. The computer program according to any one of claims 1 to 12, wherein, when the face information detected by the sensor is face information of the subject other than a front view, complemented front-view face information is displayed on a display unit.
14. The computer program according to claim 13, wherein the complemented face information and face information of the subject stored in advance in a storage unit are displayed on the display unit side by side or superimposed.
15. The computer program according to claim 13 or 14, wherein the part that contributed to the estimation result of the presence or absence of the neurological disorder is displayed on the display unit together with the face information.
16. The computer program according to any one of claims 1 to 15, wherein, when the subject is estimated to have a neurological disorder, information on the subject is acquired by a second sensor different from the sensor, and the presence or absence of a neurological disorder in the subject is further estimated based on the information acquired by the second sensor.
17. The computer program according to any one of claims 1 to 15, wherein information on the subject is acquired by a second sensor different from the sensor, the presence or absence of a neurological disorder in the subject is estimated based on the information acquired by the second sensor, and, when the subject is estimated to have a neurological disorder, the presence or absence of a neurological disorder in the subject is further estimated based on the face information acquired by the sensor.
18. The computer program according to any one of claims 1 to 17, wherein a diagnostic test is conducted on the subject when the subject is estimated to have a neurological disorder.
19. The computer program according to any one of claims 1 to 18, wherein the sensor is a camera that captures two-dimensional images.
20. The computer program according to any one of claims 1 to 18, wherein the sensor is a distance measuring sensor.
21. An information processing method, wherein an information processing device: acquires face information of a subject detected by a sensor; generates facial structure information of the subject based on the acquired face information; complements, based on the generated facial structure information, missing face information of the subject in the acquired face information; and estimates the presence or absence of a neurological disorder in the subject based on the complemented face information.
22. An information processing device comprising: an acquisition unit that acquires face information of a subject detected by a sensor; a generation unit that generates facial structure information of the subject based on the acquired face information; a complementation unit that complements, based on the generated facial structure information, missing face information of the subject in the acquired face information; and an estimation unit that estimates the presence or absence of a neurological disorder in the subject based on the complemented face information.
Priority Applications (1)

| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN202380014836.4A (CN118338853A) | 2022-03-28 | 2023-03-08 | Computer program, information processing method, and information processing device |

Applications Claiming Priority (2)

| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| JP2022-052275 | 2022-03-28 | | |
| JP2022052275 | | | |
Publications (1)

| Publication Number | Publication Date |
|---|---|
| WO2023189309A1 | 2023-10-05 |
Family
ID=88201449

Family Applications (1)

| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| PCT/JP2023/008702 | WO2023189309A1 | 2022-03-28 | 2023-03-08 |

Country Status (2)

| Country | Link |
|---|---|
| CN | CN118338853A |
| WO | WO2023189309A1 |
Cited By (2)

| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN117372437A | 2023-12-08 | 2024-01-09 | Anhui Agricultural University | Intelligent detection and quantification method and system for facial nerve paralysis |
| CN117372437B | 2023-12-08 | 2024-02-23 | Anhui Agricultural University | Intelligent detection and quantification method and system for facial nerve paralysis |
Citations (8)

| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| JP2011209916A * | 2010-03-29 | 2011-10-20 | Secom Co Ltd | Face image synthesizing device |
| JP2013097588A * | 2011-11-01 | 2013-05-20 | Dainippon Printing Co Ltd | Three-dimensional portrait creation device |
| KR20190135598A * | 2018-05-29 | 2019-12-09 | Sangmyung University Industry-Academy Cooperation Foundation | Apparatus and method for measuring facial symmetry |
| JP2020107038A * | 2018-12-27 | 2020-07-09 | NEC Corporation | Information processing device, information processing method, and program |
| JP2020199072A | 2019-06-10 | 2020-12-17 | Shiga University of Medical Science | Stroke determination device, method, and program |
| CN112768065A * | 2021-01-29 | 2021-05-07 | Peking University School of Stomatology | Artificial-intelligence-based facial paralysis grading diagnosis method and device |
| WO2022030592A1 * | 2020-08-05 | 2022-02-10 | Panasonic Intellectual Property Management Co., Ltd. | Stroke examination system, stroke examination method, and program |
| WO2022054687A1 * | 2020-09-08 | 2022-03-17 | Terumo Corporation | Program, information processing device, and information processing method |
Also Published As

| Publication number | Publication date |
|---|---|
| CN118338853A | 2024-07-12 |
Legal Events

| Code | Title | Details |
|---|---|---|
| 121 | EP: the EPO has been informed by WIPO that EP was designated in this application | Ref document number: 23779336; Country of ref document: EP; Kind code of ref document: A1 |
| ENP | Entry into the national phase | Ref document number: 2024511608; Country of ref document: JP; Kind code of ref document: A |
| WWE | WIPO information: entry into national phase | Ref document number: 202380014836.4; Country of ref document: CN |
| WWE | WIPO information: entry into national phase | Ref document number: 2023779336; Country of ref document: EP |
| ENP | Entry into the national phase | Ref document number: 2023779336; Country of ref document: EP; Effective date: 2024-10-04 |