
WO2021135557A1 - An artificial intelligence multi-mode imaging analysis device - Google Patents

An artificial intelligence multi-mode imaging analysis device

Info

Publication number
WO2021135557A1
Authority
WO
WIPO (PCT)
Prior art keywords
module
image
human body
lens
light source
Prior art date
Application number
PCT/CN2020/123568
Other languages
English (en)
French (fr)
Inventor
黄国亮
吕文琦
蒋凯
符荣鑫
Original Assignee
清华大学 (Tsinghua University)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 清华大学 (Tsinghua University)
Publication of WO2021135557A1 publication Critical patent/WO2021135557A1/zh

Classifications

    • G - PHYSICS
    • G16 - INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16H - HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
    • G16H50/00 - ICT specially adapted for medical diagnosis, medical simulation or medical data mining; ICT specially adapted for detecting, monitoring or modelling epidemics or pandemics
    • G16H50/30 - ICT specially adapted for medical diagnosis, medical simulation or medical data mining for calculating health indices; for individual health risk assessment
    • A - HUMAN NECESSITIES
    • A61 - MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B - DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B3/00 - Apparatus for testing the eyes; Instruments for examining the eyes
    • A61B3/10 - Objective types, i.e. instruments for examining the eyes independent of the patients' perceptions or reactions
    • A61B3/12 - Objective types for looking at the eye fundus, e.g. ophthalmoscopes
    • A61B5/00 - Measuring for diagnostic purposes; Identification of persons
    • A61B5/72 - Signal processing specially adapted for physiological signals or for diagnostic purposes
    • A61B5/7271 - Specific aspects of physiological measurement analysis
    • A61B5/7275 - Determining trends in physiological measurement data; Predicting development of a medical condition based on physiological measurements, e.g. determining a risk factor
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06N - COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 - Computing arrangements based on biological models
    • G06N3/02 - Neural networks
    • G06N3/04 - Architecture, e.g. interconnection topology
    • G06N3/045 - Combinations of networks
    • G06N3/08 - Learning methods
    • G16H30/00 - ICT specially adapted for the handling or processing of medical images
    • G16H30/20 - ICT specially adapted for handling medical images, e.g. DICOM, HL7 or PACS

Definitions

  • The invention relates to the field of multi-mode imaging technology based on artificial intelligence (AI), and in particular to an artificial intelligence multi-mode imaging analysis device.
  • Inspection diagnosis ("wang" diagnosis) is the first of the four basic diagnostic methods of traditional Chinese medicine: looking, listening and smelling, inquiring, and pulse-taking. By observing the patient's gait, physical signs, facial features, tongue, palm, and eye conditions (the sclera or white of the eye, iris, vitreous body, fundus, etc.), together with image features of other parts of the body, such as color, plaques, blood vessels, dots, and protrusions, diseases of the human viscera are diagnosed and health conditions are analyzed and predicted.
  • "Yellow Emperor's Internal Classic", “Golden Chamber Synopsis”, “Mai Jue Hui Bien” and other ancient medical books have detailed records of "seeing diagnosis", and gradually established the "five rounds and eight outlines” theory.
  • In traditional Chinese medicine, visual diagnosis relies mainly on experienced practitioners (including Zhuang and Vietnamese medicine doctors) observing with the naked eye; changes in white-eye features are recognized through long-accumulated experience and linked to related diseases.
  • As a result, the application of sclera visual diagnosis is very limited.
  • The main instrument for assisting sclera observation is the slit lamp, with which examining one patient takes about 30 minutes, which is inefficient. It also demands experienced doctors and a suitable medical environment: when experience is limited or examination conditions are poor, misdiagnosis is prone to occur, making the method difficult to promote in primary hospitals and in the home.
  • The prior art discloses equipment for imaging the eyes, in-vivo analysis systems and methods for human health status based on shadowless white-eye imaging, and the like, which solve some of the problems of image acquisition and image processing, but a widespread lack of intelligence remains.
  • All stages, including image acquisition, processing, and diagnosis, rely on a high degree of cooperation from the subjects and the participation of professional technical personnel, so the methods are difficult to popularize.
  • Existing technologies focus on the software and hardware design of image acquisition and initial image processing; they do not intelligently and automatically capture and locate the iris, nor perform spherical stitching and distortion correction on the multiple white-eye images acquired from the same eye.
  • The main purpose of the present invention is to provide an artificial intelligence multi-mode imaging analysis device that realizes big data analysis of human body characteristic images and health conditions, meeting the information needs of telemedicine and smart medical care.
  • an artificial intelligence multi-mode imaging analysis device which includes:
  • a human body feature image and video acquisition system, used to collect video signals of various parts of the human body and static images of the human body;
  • a human body characteristic image database, used to store historical image information, image characteristic information, and historical clinical diagnosis information;
  • an artificial intelligence hardware control and data analysis and processing system, used to automatically control the human body feature image and video acquisition system to complete video and image acquisition, and to analyze and compare the results with the information stored in the human body characteristic image database to complete disease warning and/or health status assessment.
  • The device includes a microprocessor that is connected, wired or wirelessly, to MedNet cloud medical big data; the artificial intelligence hardware control and data analysis and processing system and the human body characteristic image database are installed directly, or as mirror images, in the microprocessor for use.
  • the human body characteristic image and video acquisition system includes an optical imaging module, a normal incidence illuminating light source module, an oblique incidence illuminating light source module, and an indication guide module;
  • the optical imaging module is used for image collection or video recording of the part of interest of the human body;
  • the normal incidence illuminating light source module is used to focus the emitted light into the part of interest of the human body;
  • the oblique incidence illuminating light source module is used for dark-field illumination of the part of interest of the human body;
  • the indication guide module is used to generate cursor instructions that guide the subject in adjusting the position and angle of the part of interest.
  • The optical imaging module includes one or more lenses or lens groups that can move back and forth, an imaging detector, and a supporting universal adjustment bracket or pan/tilt head.
  • The lens or lens group is used to dynamically focus on the part of interest of the human body, the imaging detector is used for image collection or video recording of that part, and the supporting universal adjustment bracket or pan/tilt head changes the direction and angle of the lens or lens group and the imaging detector so as to track the movement of the human body.
  • The normal incidence illumination light source module includes one or more lenses or lens groups that can move back and forth, a beam splitter, a polarizer, a coupling collimator, and a light source. The light emitted by the light source passes through the coupling collimator and the polarizer to the beam splitter, and the light reflected by the beam splitter is focused by the lens or lens group into the part of interest of the human body.
  • The oblique incidence illumination light source module includes movable first to fourth light sources that illuminate from four different directions (up, down, left, and right); all or some of the first to fourth light sources provide dark-field illumination of the part of interest of the human body.
  • The indication guide module includes one or more lenses or lens groups that can move back and forth, a beam splitter, two polarizers, a lens, and a liquid crystal display.
  • The liquid crystal display generates a dynamic indicator cursor, which is emitted through the lens and the first polarizer to the beam splitter. Part of the light reflected by the beam splitter is focused by the lens or lens group onto the part of interest; the cursor light reflected back through the lens or lens group is transmitted through the beam splitter to the second polarizer and cut off there, the first polarizer and the second polarizer forming an orthogonal polarization state.
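The orthogonal polarization state can be understood through Malus's law: specularly reflected light keeps the linear polarization imposed by the first polarizer and is attenuated by cos²θ at the second, so a crossed (90°) analyzer extinguishes it, while depolarized light scattered by the part of interest partly passes. A minimal sketch (the function name is illustrative, not from the patent):

```python
import math

def malus_transmission(i0: float, theta_deg: float) -> float:
    """Transmitted intensity of linearly polarized light of intensity i0
    through an analyzer rotated theta_deg from the polarization axis
    (Malus's law: I = I0 * cos^2(theta))."""
    return i0 * math.cos(math.radians(theta_deg)) ** 2

# Crossed polarizers (90 degrees) cut off the still-polarized specular
# reflection; intermediate angles are only partly attenuated.
blocked = malus_transmission(1.0, 90.0)   # effectively zero
partial = malus_transmission(1.0, 45.0)   # ~0.5
```

This is why the cursor's specular reflection never reaches the detector, while diffusely scattered (depolarized) light from the sclera or iris still forms the image.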
  • the artificial intelligence hardware control and data analysis processing system includes a mode selection and illumination control module, an image and video acquisition control module, a feedback guidance module, a target tracking module, a storage module, and a data analysis module;
  • the mode selection and illumination control module is used to control the combined mode of the optical imaging module, the normal incidence illumination light source module, the oblique incidence illumination light source module, and the indication guide module: the optical imaging module is controlled either to be used alone or to be used in combination with any one or more of the normal incidence illumination light source module, the oblique incidence illumination light source module, and the indication guide module;
  • the image and video acquisition control module is used to control the optical imaging module to dynamically collect human motion video signals and human body static images;
  • the target tracking module is used to dynamically track the part of interest of the human body: by dynamically acquiring images of that part, it controls the optical imaging module to automatically change the imaging angle as the human body moves and to automatically adjust its focus, ensuring that the part of interest always stays in the center of the field of view at an essentially unchanged size, so as to track the part of interest;
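The centering-and-constant-size behavior described for the target tracking module can be sketched as one step of a simple proportional controller; the function name, gain, and target fraction below are illustrative assumptions, not the patent's control law:

```python
def tracking_correction(bbox, frame_w, frame_h, target_frac=0.4, gain=0.5):
    """One proportional-control step for keeping a tracked body part
    centered and at a roughly constant apparent size.

    bbox: (x, y, w, h) of the detected part of interest, in pixels.
    Returns (pan, tilt, zoom): pan/tilt corrections in normalized frame
    units, zoom as a multiplicative magnification factor.
    """
    x, y, w, h = bbox
    cx, cy = x + w / 2, y + h / 2
    # Normalized offset of the target from the frame center.
    pan = gain * (cx - frame_w / 2) / frame_w
    tilt = gain * (cy - frame_h / 2) / frame_h
    # Keep the target's apparent width near target_frac of the frame width.
    zoom = (target_frac * frame_w) / max(w, 1)
    return pan, tilt, zoom

# Target sits right of center and appears too small: pan right, zoom in.
pan, tilt, zoom = tracking_correction((420, 220, 80, 80), 640, 480)
```

Run per frame, such corrections drive the pan/tilt head and lens magnification so the part of interest stays centered at constant scale.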
  • the feedback guidance module is used to guide the subject to adjust the position and angle through verbal prompts and/or cursor indications of the instruction guidance module;
  • the storage module is used to store the collected information of the part of interest
  • the data analysis module is used to compare the collected information with the big data stored in the human body characteristic image database through machine learning methods, using metrics such as similarity, image spectrum, entropy, and physical signs, for disease warning and/or health status assessment.
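As a hedged illustration of the comparison metrics named above, the sketch below computes the Shannon entropy of an image patch's gray-level histogram and the cosine similarity of two feature vectors; the actual metrics and thresholds used by the device are not specified here:

```python
import math

def shannon_entropy(pixels):
    """Shannon entropy (in bits) of an 8-bit grayscale pixel sequence,
    computed from its 256-bin histogram."""
    hist = [0] * 256
    for p in pixels:
        hist[p] += 1
    n = len(pixels)
    return -sum((c / n) * math.log2(c / n) for c in hist if c)

def cosine_similarity(a, b):
    """Cosine similarity between two flattened feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

flat = [128] * 64          # uniform patch: zero entropy
varied = list(range(64))   # 64 equally frequent levels: 6 bits
entropy_gap = shannon_entropy(varied) - shannon_entropy(flat)
sim = cosine_similarity([1, 2, 3], [2, 4, 6])  # parallel vectors: ~1.0
```

Entropy differences flag texture changes between a test image and stored references, while cosine similarity scores how closely extracted feature vectors match historical cases.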
  • the artificial intelligence hardware control and data analysis and processing system further includes:
  • the feature extraction module is used to identify the collected information and extract the features of the image;
  • the image stitching and distortion correction module is used to stitch the extracted image features and correct distortion using a pre-established neural network model;
  • the feature labeling module is used, for the stitched and corrected feature image, to identify features and parameters with a pre-trained deep convolutional neural network model, and to label the identified features and parameters;
  • the data analysis module uses a pre-trained network model to predict disease according to the features and parameters of the part of interest and compares them with the big data stored in the human body feature image database, covering one or more of "gait", "blood", "color", "texture", "spots", "dots", "stripes", "blood vessels", "mounds", "pterygium", "metal ring", "moon halo", "mesh", temperature, and physiological and pathological parameters, to provide early warning of the subject's disease and/or a health assessment.
  • the specific process of the image stitching and distortion correction step is:
  • the corner point matching adopts the Harris corner detection algorithm;
  • the feature point association adopts a mutual information method;
  • the image fusion adopts feathering, pyramid, or gradient methods to achieve a smooth transition between the stitched images.
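A minimal numpy sketch of the Harris corner response used for corner matching is shown below; the 3x3 window, central-difference gradients, and k = 0.04 are conventional illustrative choices, not parameters stated in the patent:

```python
import numpy as np

def box3(a):
    """3x3 box filter with edge padding (the structure-tensor window)."""
    h, w = a.shape
    p = np.pad(a, 1, mode="edge")
    return sum(p[i:i + h, j:j + w] for i in range(3) for j in range(3)) / 9.0

def harris_response(img, k=0.04):
    """Harris corner response R = det(M) - k * trace(M)^2, where M is the
    structure tensor built from central-difference image gradients."""
    img = img.astype(float)
    iy, ix = np.gradient(img)
    sxx, syy, sxy = box3(ix * ix), box3(iy * iy), box3(ix * iy)
    det = sxx * syy - sxy * sxy
    tr = sxx + syy
    return det - k * tr * tr

# A bright square on a dark background: corners respond positively,
# straight edges negatively, flat regions not at all.
img = np.zeros((20, 20))
img[5:15, 5:15] = 1.0
R = harris_response(img)
```

Local maxima of R above a threshold give the corner points that the mutual-information step then associates across overlapping eye images.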
  • The present invention can quickly and accurately acquire video signals of human movement, such as gait and physical signs, and characteristic image information of static body parts such as the eyes, face, tongue, and palms, which benefits the comprehensive analysis and evaluation of human health information;
  • the present invention realizes intelligent tracking of human body movement, automatic focusing, automatic adjustment of magnification, and collection of video and image signals, greatly reducing its dependence on the subject's cooperation and on the operator's professional proficiency;
  • the present invention adopts oblique incidence illumination and AI feedback guidance to reduce the interference of reflected light in local human body imaging: language prompts and cursor indications guide the subject to change the observation direction and angle, automatically adjusting the radii of curvature of each part of the eyeball and the position of the pupil so that the reflected images of the oblique incidence illumination sources on each part of the eyeball converge to a point coinciding with the pupil, realizing reflection-free imaging of the sclera, iris, and so on;
  • the present invention uses normal incidence illumination and AI feedback guidance to focus the light emitted by the normal incidence illumination source module into the pupil of the eyeball, illuminating the vitreous body and retina; language prompts and cursor indications guide the subject to automatically adjust the radii of curvature of each part of the eyeball and the position of the eyes while observing the indicating cursor clearly from the front, so that the optical imaging module automatically achieves clear imaging of the fundus, retina, and vitreous body;
  • the present invention allows manual entry of information and automatic storage of the extracted feature information, expert medical advice, health analysis data, and the like, establishing a large database of human body feature images and health status analyses; it adopts intelligent algorithms combining multi-layer neural networks and machine learning to realize multi-mode storage of human body characteristic information and analysis of human health status, including disease risk prediction for healthy people, screening and early warning for sub-healthy people, and rapid non-invasive testing and long-term follow-up health assessment for affected populations (such as those with diabetes, cardio-cerebrovascular disease, lung cancer, polycystic ovary syndrome, AIDS, etc.), providing subjects with disease risk warnings and health management suggestions;
  • the present invention shares cloud resources, has remote multimedia interaction functions, can realize AI big data analysis of human body characteristic images and health conditions, and meets the information development needs of telemedicine and smart medical care.
  • Figure 1 is a schematic structural diagram of an artificial intelligence multi-mode imaging analysis device according to an embodiment of the present invention;
  • Figure 2 is a schematic diagram of the process of collecting information such as gait and physical signs in an embodiment of the present invention;
  • Figure 3 is a schematic diagram of the image acquisition process for the eyes, face, tongue, palms, and other parts of the body in an embodiment of the present invention;
  • Figure 4 is a schematic diagram of the process of realizing imaging without reflected-light interference under AI feedback language prompts and sign guidance in an embodiment of the present invention;
  • Figure 5 is a schematic diagram of the process of collecting images of the fundus, retina, and vitreous body in an embodiment of the present invention;
  • Figure 6 is a working flowchart of AI multi-mode imaging health analysis according to an embodiment of the present invention;
  • Figure 7 is a working flowchart of an embodiment of the present invention used for eye image collection position and distortion correction;
  • Figure 8 is a working flowchart of an embodiment of the present invention used to identify the characteristics of the white-eye image;
  • Figure 9 is a working flowchart of an embodiment of the present invention used for machine-learning health analysis.
  • The artificial intelligence multi-mode imaging analysis device provides the functions of identifying and positioning targets of interest, target tracking, feedback language prompts and sign guidance, automatic focusing, image and video signal collection, image stitching and distortion correction, feature recognition, feature extraction and marking, and data analysis; it can collect human motion video signals and static images of the eyes, face, tongue, palms, and other body parts, and perform image processing, feature recognition and marking, and so on.
  • the device includes:
  • a human characteristic image and video acquisition system CJXT, a human characteristic image database BIGDATA, and an artificial intelligence hardware control and data analysis and processing system AI. Among them:
  • the human body characteristic image and video acquisition system CJXT is used to realize the collection of motion video signals of various parts of the human body and the collection of human body static images.
  • the human body static image collection includes the collection of images of the eyes, face, tongue, palms, and other parts of the body.
  • the human body characteristic image database BIGDATA stores a large amount of historical image information, image characteristic information and historical clinical diagnosis information;
  • The artificial intelligence hardware control and data analysis and processing system AI uses artificial intelligence algorithms combining multi-layer neural networks and machine learning to automatically control the human body feature image and video acquisition system, realizing target tracking, automatic focusing, dynamic magnification, light source control, video capture, image capture, and other functions. It analyzes and compares the acquired information with the information stored in the body characteristic image database BIGDATA, providing early warning of the subject's disease and/or evaluating the subject's health status. It can perform feature recognition, feature extraction, and feature labeling on the collected images and video signals and compare the extracted feature information with the historical information stored in BIGDATA; it can also skip feature extraction and directly compare the similarity, spectrum difference, entropy difference, and so on between the test image and the images stored in the database for disease warning and/or health status assessment.
  • The device includes a microprocessor Processor.
  • The microprocessor Processor provides digital signal storage, analysis and processing, display and language prompt MK, and remote interaction functions, and can be connected, wired or wirelessly, to MedNet cloud medical big data.
  • The artificial intelligence hardware control and data analysis and processing system AI and the human body characteristic image database BIGDATA can be installed directly, or as mirror images, in the microprocessor Processor for use.
  • the human body characteristic image and video acquisition system CJXT includes an optical imaging module, a normal incident illumination light source module, an oblique incident illumination light source module, and an indication guide module.
  • The optical imaging module is used for image acquisition or video recording of the part of interest of the human body. It includes one or more lenses or lens groups L3, an imaging detector CCD, and a matching universal adjustment bracket or pan/tilt head M. The lens or lens group L3 focuses on the part of interest of the human body, and the imaging detector CCD performs image acquisition or video recording of that part. The matching universal adjustment bracket or pan/tilt head carries the lens or lens group L3 and the imaging detector CCD and adjusts their angle or position.
  • The optical imaging module is controlled by the artificial intelligence hardware control and data analysis and processing system AI, which tracks the movement of the human target, automatically adjusts the focus and magnification, and performs video and image signal acquisition, obtaining human gait, body shape, physical signs, and other information.
  • the normal incidence illuminating light source module is used to focus the emitted light into a certain part of the human body, such as the human eye pupil, to illuminate the eyeball vitreous body and retina.
  • The normal incidence illumination light source module includes one or more lenses or lens groups L1 that can move back and forth, a beam splitter SP1, a polarizer P1, a coupling collimator lens L5, and a light source S0. The light emitted by the light source S0 passes through the coupling collimator lens L5 and the polarizer P1 to the beam splitter SP1, and the light reflected by the beam splitter SP1 is focused by the lens or lens group L1 into the pupil of the eyeball, illuminating the vitreous body and retina.
  • the function of the polarizer P1 is to make the light emitted by the light source S0 have the characteristics of linear polarization.
  • The oblique incidence illumination light source module is used to illuminate the human body in a dark field, reducing the impact of illumination reflections on the imaging of the optical imaging module. As shown in Figure 3, it includes four movable light sources S1, S2, S3, and S4 illuminating from four different directions (up, down, left, and right); the artificial intelligence hardware control and data analysis and processing system can control all or some of S1-S4 to provide dark-field illumination of the human body part of interest (such as the sclera or iris).
  • The indication guide module is used to generate cursor instructions on a display screen to guide the subject in adjusting the position and angle of the measured part of interest. It includes one or more lenses or lens groups L2 that can move back and forth, a beam splitter SP2, two polarizers P2 and P3, a lens L4, and a liquid crystal display YJP, where the liquid crystal display YJP generates a dynamic indicator cursor.
  • The cursor light is emitted through the lens L4 and the polarizer P2 in turn to the beam splitter SP2; part of the light reflected by SP2 passes through the lens or lens group L2 to form a virtual image at the 250 mm photopic distance of the human eye, which is then clearly imaged on the retina through the eyeball.
  • The light emitted by the oblique incidence illumination source module illuminates the part of interest (such as the sclera or iris of the human eye) in a dark field. The light reflected or scattered by that part is received by the lens or lens group L2, transmitted through the beam splitter SP2, and passed through the polarizer P3 to the optical imaging module. The polarizer P3 forms an orthogonal polarization state with the polarizer P2, ensuring that the reflected light of the light source S0 through the lens or lens group L1 is completely cut off and never reaches the imaging detector CCD.
  • When the optical imaging module is used with the oblique and/or normal incidence illumination light source modules, the lens or lens group L1, the lens or lens group L2, and the lens or lens group L3 are located on the same optical axis.
  • the artificial intelligence hardware control and data analysis and processing system AI can control the oblique incident illumination light source module and the instruction guide module according to the image feedback detected by the imaging detector CCD, and perform oblique incident dark field illumination from any one or several directions.
  • The angle θ between the light source S (any one of S1, S2, S3, and S4) and the optical axis is variable. The language prompt MK of the microprocessor Processor and/or the indicating cursor (an arrow, a cross, or another sign) guides the subject to change the viewing direction and the angle between the pupil and the optical axis, so that the radii of curvature of each part of the eyeball and the position of the pupil are automatically adjusted and the reflected images of the oblique incidence illumination sources on each part of the eyeball converge to a point coinciding with the pupil, realizing reflection-free imaging of the sclera or iris, as shown in Figure 4(b).
  • The optical imaging module can be used alone, as shown in Figure 2; it can also be used in free combination with any one or more of the normal incidence illumination light source module, the oblique incidence illumination light source module, and the indication guide module, as shown in Figures 3 and 5.
  • the optical imaging module is used in combination with the oblique incident illumination light source module and the indicating guide module.
  • The artificial intelligence hardware control and data analysis and processing system can jointly control the optical imaging module, the oblique incidence illumination light source module, and the indication guide module to achieve dark-field illumination, automatic positioning, automatic focusing, and automatic magnification adjustment of the local body position, performing video and image signal acquisition to obtain image information of the human eye (sclera, iris, vitreous body, etc.), facial features, tongue, palms, and other parts of the body.
  • The optical imaging module is used in combination with the normal incidence illuminating light source module and the indication guide module.
  • The language prompt of the microprocessor and/or the cursor generated on the liquid crystal display YJP of the indication guide module guides the subject to automatically adjust the radii of curvature of each part of the eyeball and the position of the eye, observing the indicating cursor clearly from the front; the artificial intelligence hardware control and data analysis and processing system AI then controls the normal incidence illumination light source module to emit light, which enters the pupil through the lens or lens group L1 and illuminates the vitreous body and retina, so that the optical imaging module achieves clear imaging of the fundus, retina, and vitreous body.
  • The artificial intelligence hardware control and data analysis and processing system AI realizes target-of-interest recognition and positioning, target tracking, feedback language prompts and sign guidance, automatic focusing, image and video signal acquisition, image stitching and distortion correction, feature recognition, feature extraction, feature labeling, data analysis, and other functions. As shown in Figure 6, the system includes a mode selection and light control module, an image and video acquisition control module, a target tracking module, a feedback guidance and automatic focusing module, a feature extraction and marking module, an image stitching, distortion correction, and storage module, and a data analysis module;
  • the mode selection and illumination control module is used to control the combined mode of the normal incidence illumination light source module, the oblique incidence illumination light source module, and the indication guide module;
  • the image and video acquisition control module is used to control the optical imaging module, dynamically collecting human motion video signals and static images of the eyes, face, tongue, palms, and other parts of the body;
  • the target tracking module is used to dynamically track the part of interest of the human body: by automatically and dynamically collecting images of that part, it controls the universal adjustment bracket or pan/tilt head supporting the optical imaging module to automatically change the imaging angle with the movement of the human body, while the lens or lens group L3 of the imaging module automatically adjusts the focus and the magnification, ensuring that the part of interest always stays in the center of the field of view at an essentially constant size, so as to track and measure gait, body shape, physical signs, and other information of the moving human body.
  • The feedback guidance module guides the subject through the language prompt of the microprocessor and the cursor generated on the liquid crystal display YJP of the indication guide module. In the normal incidence illumination mode, the subject automatically adjusts the radii of curvature of each part of the eyeball and the position of the eye and observes the indicating cursor clearly from the front, so that the optical imaging module achieves clear imaging of the fundus, retina, and vitreous body. In the oblique incidence illumination mode, the subject is guided to change the observation direction and the angle between the pupil and the optical axis, automatically adjusting the radii of curvature of each part of the eyeball and the position of the pupil, so that the reflected images of the oblique incidence illumination sources on each part of the eyeball converge to a point coinciding with the pupil, realizing reflection-free imaging of the sclera or iris.
  • the feature extraction module recognizes and extracts features from the acquired images. The feature recognition information includes one or more of "gait", "blood vessels", "color", "texture", "plaques", "spots", "bands", "mounds", "pterygium", "metal ring", "moon halo", "reticulation", temperature, similarity, image spectrum, entropy, and physiological and pathological parameters, which supports a comprehensive assessment of human health;
  • the image stitching and distortion correction module stitches and distortion-corrects the extracted features using a pre-established neural network model (existing technology). The specific steps may include feature extraction, image registration, computation of the homography matrix H, warping, and fusion: a reference image is designated, the pixels of the input images are mapped onto the plane defined by the reference image, and smooth transitions between the stitched images are achieved by image fusion.
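As a rough illustration of the pixel-mapping step just described, the sketch below applies a known 3×3 homography H to pixel coordinates; the function name and the example matrices are illustrative assumptions, not part of the patent:

```python
import numpy as np

def warp_points(H, points):
    """Map Nx2 pixel coordinates onto the reference plane with homography H."""
    pts = np.hstack([points, np.ones((len(points), 1))])  # homogeneous coordinates
    mapped = pts @ H.T
    return mapped[:, :2] / mapped[:, 2:3]                 # de-homogenize

# an identity homography leaves the points unchanged
H = np.eye(3)
print(warp_points(H, np.array([[10.0, 20.0], [30.0, 40.0]])))
```

In a full stitching pipeline H would be estimated from the RANSAC-filtered feature correspondences rather than given directly.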
  • the feature labeling module labels the stitched, corrected feature images. The neural network model may adopt existing technology; the specific steps include framing a portion of the previously stitched whole-eye image with sliding windows of different sizes as an eye-feature candidate region, extracting the visual information associated with the candidate region, and recognizing different eye features with a classifier, which will not be detailed here.
  • the storage module labels and stores the features and parameters of the part of interest;
  • the data analysis module can compare the acquired image or video information with the big-data information stored in the human body feature image database for disease early warning and/or health status assessment; that is, it directly compares the similarity, spectral difference, entropy difference, etc. between the test images and the database-stored images. It can also invoke the features and parameters of the part of interest to predict disease with a pre-trained network model and compare the result with the content stored in the human body feature image database, warning of the subject's disease and/or assessing health; the network model can be a BP neural network optimized and trained on the correlation between eye appearance and disease, which will not be detailed here.
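The direct-comparison path can be illustrated with two of the simple measures named above, histogram entropy and spectral difference. This is a minimal sketch with illustrative function names, not the patent's implementation:

```python
import numpy as np

def image_entropy(img, bins=256):
    """Shannon entropy of the grayscale histogram, in bits."""
    hist, _ = np.histogram(img, bins=bins, range=(0.0, 1.0))
    p = hist / hist.sum()
    p = p[p > 0]                        # drop empty bins to avoid log(0)
    return float(-np.sum(p * np.log2(p)))

def spectral_difference(a, b):
    """Mean absolute difference of the magnitude spectra of two images."""
    return float(np.mean(np.abs(np.abs(np.fft.fft2(a)) - np.abs(np.fft.fft2(b)))))

rng = np.random.default_rng(2)
img = rng.random((64, 64))
print(image_entropy(img))               # high for uniform noise
print(spectral_difference(img, img))    # zero for identical images
```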
  • the corner matching method may use the existing Harris corner detection algorithm, which detects many features and offers rotation invariance, although it is scale-variant. For each point (x, y) of the acquired image the autocorrelation matrix is computed, and Gaussian filtering is then applied at each pixel to obtain a new matrix M, where the discrete two-dimensional zero-mean Gaussian function is G(x, y) = (1 / (2πσ²)) · exp(−(x² + y²) / (2σ²)), and the displacement (u, v) is the displacement of the window at each corner detection step. The corner measure of each point is R = Det(M) − k · (trace(M))²; a local maximum above a set threshold is taken as a corner.
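A minimal sketch of the Harris response just described: gradient products are smoothed with a Gaussian window into the structure tensor M, and R = det(M) − k·trace(M)² is evaluated at every pixel. The smoothing details and parameter values are illustrative assumptions:

```python
import numpy as np

def harris_response(img, k=0.04, sigma=1.0):
    """Harris corner measure R = det(M) - k * trace(M)^2 at every pixel."""
    Iy, Ix = np.gradient(img.astype(float))          # axis 0 = rows (y)
    size = int(3 * sigma) * 2 + 1                    # odd Gaussian window length
    ax = np.arange(size) - size // 2
    g = np.exp(-(ax ** 2) / (2 * sigma ** 2))
    g /= g.sum()

    def smooth(a):                                   # separable Gaussian filtering
        a = np.apply_along_axis(lambda r: np.convolve(r, g, mode="same"), 0, a)
        return np.apply_along_axis(lambda r: np.convolve(r, g, mode="same"), 1, a)

    Sxx, Syy, Sxy = smooth(Ix * Ix), smooth(Iy * Iy), smooth(Ix * Iy)
    det = Sxx * Syy - Sxy ** 2
    trace = Sxx + Syy
    return det - k * trace ** 2

img = np.zeros((20, 20))
img[8:, 8:] = 1.0                                    # one bright corner at (8, 8)
R = harris_response(img)
print(np.unravel_index(np.argmax(R), R.shape))       # near the corner
```

Edges yield a negative R (one dominant eigenvalue), while the corner, where both gradient directions are present, yields the positive maximum.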
  • the feature point association method adopts a mutual information (MI) method.
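The mutual-information association measure can be sketched from a joint intensity histogram of two patches; the bin count and function names here are illustrative assumptions:

```python
import numpy as np

def mutual_information(a, b, bins=16):
    """Mutual information (in nats) between two equally sized image patches."""
    joint, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    pxy = joint / joint.sum()                 # joint distribution
    px = pxy.sum(axis=1, keepdims=True)       # marginals
    py = pxy.sum(axis=0, keepdims=True)
    nz = pxy > 0                              # avoid log(0)
    return float(np.sum(pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])))

rng = np.random.default_rng(0)
patch = rng.random((32, 32))
print(mutual_information(patch, patch))               # high: identical patches
print(mutual_information(patch, rng.random((32, 32))))  # low: unrelated patches
```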
  • image fusion adopts feathering, pyramid, gradient, or similar methods to achieve smooth transitions between the stitched images.
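A minimal sketch of the feathering option for two horizontally overlapping strips (a linear alpha ramp across the overlap; the layout and names are illustrative assumptions):

```python
import numpy as np

def feather_blend(left, right, overlap):
    """Blend two horizontally overlapping strips with a linear feather ramp."""
    h, wl = left.shape
    _, wr = right.shape
    out = np.zeros((h, wl + wr - overlap))
    out[:, :wl - overlap] = left[:, :wl - overlap]   # left-only region
    out[:, wl:] = right[:, overlap:]                 # right-only region
    alpha = np.linspace(1.0, 0.0, overlap)           # weight of the left image
    out[:, wl - overlap:wl] = (alpha * left[:, wl - overlap:]
                               + (1 - alpha) * right[:, :overlap])
    return out

left = np.full((4, 6), 10.0)
right = np.full((4, 6), 20.0)
print(feather_blend(left, right, overlap=4))         # smooth ramp from 10 to 20
```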
  • the BP neural network used in this embodiment is as follows: the health-analysis network has 20 input nodes, two output nodes, and two hidden layers of 10 nodes each. The activation value of node i in layer j is a_i^j = σ(Σ_n ω_{i,n}^j · a_n^{j−1} + b_i^j), where n runs over the nodes of layer j−1, ω is a weight, a is an activation value, and b is a bias; the activation value of every node in layer j is computed in the same way from the activations of layer j−1.
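The forward pass implied by the activation equations above can be sketched as follows; the weight initialization and the sigmoid choice for σ are illustrative assumptions:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def forward(x, layers):
    """Forward pass: each layer holds (W, b); a_j = sigmoid(W @ a_{j-1} + b)."""
    a = x
    for W, b in layers:
        a = sigmoid(W @ a + b)
    return a

rng = np.random.default_rng(1)
# 20 inputs -> 10 -> 10 -> 2 outputs, matching the embodiment's topology
sizes = [20, 10, 10, 2]
layers = [(rng.standard_normal((m, n)) * 0.1, np.zeros(m))
          for n, m in zip(sizes[:-1], sizes[1:])]
y = forward(rng.standard_normal(20), layers)
print(y.shape)  # (2,)
```

Training by backpropagation would update W and b from labeled eye-feature/disease pairs; only the inference direction is shown here.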
  • the human body feature image database BIGDATA stores and manages the acquired human images, video signals, extracted feature information, expert medical-order information, and health analysis data, and stores the feature recognition queues, intermediate data, and feature recognition and labeling results trained by neural networks and machine learning. As shown in Figure 9, it supports manual entry of information and automatic classified storage of the extracted feature information, expert medical-order information, and health analysis data.
  • the artificial intelligence hardware control and data analysis/processing system AI and the human body feature image database BIGDATA can share cloud resources, offer remote multimedia interaction, share cloud data and information resources, and support telemedicine interaction. The resulting AI big-data analysis of human feature images and health status meets the needs of the informatization of telemedicine and smart healthcare.
  • a portion of the acquired human feature images can be set aside for neural network parameter tuning and machine learning of the artificial intelligence hardware control and data analysis/processing system AI, forming trained feature recognition queues, intermediate data, and feature recognition and labeling results, all of which are stored in the database. Combined with the acquired human images, video signals, extracted feature information, expert medical-order information, and health analysis data, this builds the body feature image database BIGDATA for the subjects; the remaining samples then undergo health analysis, and the final results of the subjects' health analysis are given, including health warnings and conditioning advice for the subjects.
  • the process of imaging detection with the artificial intelligence multi-mode imaging device provided by this embodiment, which acquires human motion video signals and static images of the subject's eyes, face, tongue, palms, and other body parts, is described in detail below.
  • the lens L3 automatically focuses and adjusts magnification to track human movement while acquiring video and image signals, obtaining gait, body shape, physical signs, and related information; all information is processed and stored in the storage module, and the data analysis module analyzes and compares it with the information in the body feature image database BIGDATA to obtain the subjects' health status.
  • when image information of the subject's eyes (images of the sclera, iris, vitreous body, etc.), face, tongue, palms, or other body parts is to be acquired, the artificial intelligence hardware control and data analysis/processing system AI jointly controls the optical imaging module, the oblique-incidence illumination light source module, and the instruction guide module: it first activates all or some of light sources S1-S4 of the oblique-incidence illumination light source module for dark-field illumination of the body, then automatically focuses lens L3 of the optical imaging module and adjusts its magnification, and finally guides the subject, via the voice prompt MK of the microprocessor Processor and/or the cursor generated on the liquid crystal display YJP of the instruction guide module, to adjust the position and angle of the measured part of interest.
  • to image the fundus, the artificial intelligence hardware control and data analysis/processing system AI jointly controls the optical imaging module, the normal-incidence illumination light source module, and the instruction guide module: after the microprocessor's guidance, light source S0 of the normal-incidence illumination light source module focuses light into the pupil of the eyeball, illuminating the vitreous body and retina, while lens L3 of the optical imaging module synchronously auto-focuses and adjusts magnification; finally the optical imaging module achieves clear imaging or video recording of the fundus, retina, and vitreous body.

Landscapes

  • Health & Medical Sciences (AREA)
  • Engineering & Computer Science (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Physics & Mathematics (AREA)
  • General Health & Medical Sciences (AREA)
  • Public Health (AREA)
  • Biomedical Technology (AREA)
  • Medical Informatics (AREA)
  • Biophysics (AREA)
  • Theoretical Computer Science (AREA)
  • Molecular Biology (AREA)
  • Data Mining & Analysis (AREA)
  • Artificial Intelligence (AREA)
  • Animal Behavior & Ethology (AREA)
  • Epidemiology (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Pathology (AREA)
  • Veterinary Medicine (AREA)
  • Computational Linguistics (AREA)
  • Computing Systems (AREA)
  • Primary Health Care (AREA)
  • Evolutionary Computation (AREA)
  • General Physics & Mathematics (AREA)
  • Heart & Thoracic Surgery (AREA)
  • Surgery (AREA)
  • Ophthalmology & Optometry (AREA)
  • Radiology & Medical Imaging (AREA)
  • Databases & Information Systems (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physiology (AREA)
  • Psychiatry (AREA)
  • Signal Processing (AREA)
  • Measurement Of The Respiration, Hearing Ability, Form, And Blood Characteristics Of Living Organisms (AREA)
  • Eye Examination Apparatus (AREA)

Abstract

An artificial intelligence multi-mode imaging analysis device, the device comprising: a human body feature image and video acquisition system for capturing motion video signals of various body parts and static images of the human body; a human body feature image database for storing historical image information, image feature information, and historical clinical diagnosis information; and an artificial intelligence hardware control and data analysis/processing system for automatically controlling the human body feature image and video acquisition system to complete video and image capture and for analyzing and comparing the results with the information stored in the human body feature image database to provide disease early warning and/or health status assessment.

Description

An artificial intelligence multi-mode imaging analysis device
Technical Field
The present invention relates to the field of artificial intelligence (AI) based multi-mode imaging, and in particular to an artificial intelligence multi-mode imaging analysis device.
Background Art
"Inspection" ranks first among the four basic diagnostic methods of traditional Chinese medicine: inspection, listening and smelling, inquiry, and palpation. By observing image features of a patient's gait, physical signs, facial appearance, tongue appearance, palm lines, eye appearance (the sclera or "white of the eye", iris, vitreous body, fundus, etc.), and other body parts, such as color, plaques, blood vessels, spots, and protrusions, diseases of the internal organs can be diagnosed and health conditions analyzed and predicted. Classical medical texts such as the Huangdi Neijing, the Jingui Yaolue, and the Maijue Huibian all record inspection diagnosis in detail, and the "five wheels and eight regions" theory was gradually established. The theory of inspection has been further refined in modern times: Zheng Deliang divided the ocular sclera into 14 regions, and Wang Jinjue's "Diagnostics of Eye Inspection and Syndrome Differentiation" divides the white of the eye into 17 regions, each corresponding to different internal organs, so that diseases are diagnosed by observing distinct features in different regions of the white of the eye; this has considerable clinical application value.
At present, eye diagnosis in traditional Chinese medicine (including Zhuang and Tibetan medicine) relies mainly on naked-eye observation by experienced practitioners, who diagnose changes in white-of-the-eye features through long accumulated experience and associate them with related diseases. Scleral eye diagnosis currently sees very limited use: on the one hand, the instrument used to assist scleral observation is the slit lamp, and examining one patient takes about 30 minutes, which is inefficient; on the other hand, high demands are placed on the physician and the clinical environment, misdiagnosis easily occurs with limited experience or poor examination conditions, and the approach is difficult to popularize in primary hospitals and homes.
The prior art discloses devices for imaging the eye and in-vivo human health analysis systems and methods based on shadow-free imaging of the white of the eye, which solve some image acquisition and image processing problems, but they still generally suffer from a low degree of intelligence: acquisition, processing, and diagnosis all depend on a high degree of cooperation from the subject and the participation of professional technicians, so they are hard to popularize. Existing techniques focus on the software and hardware design of image acquisition and on preliminary image processing; they do not intelligently and automatically capture the iris region or use it to locate white-of-the-eye images for spherical stitching and distortion correction, so problems remain such as duplicated eye features across multiple photographs of the same eye and inaccurate eye-feature recognition caused by imaging distortion. An existing method for completely extracting the white-of-the-eye region from a true-color eye image achieves extraction of the white of the eye as a whole, but does not extract, classify, or label the eye features on it, so its contribution to diagnostic efficiency is very limited.
In summary, what is needed is a device that depends little on subject cooperation and professional assistance, is efficient and accurate, is easy to popularize in primary hospitals and homes, and provides one-stop acquisition and processing of images and video of important body parts together with feature recognition and labeling.
Summary of the Invention
In view of the above problems, the main object of the present invention is to provide an artificial intelligence multi-mode imaging analysis device that enables big-data analysis of human feature images and health status, meeting the needs of the informatization of telemedicine and smart healthcare.
To achieve the above object, the technical solution adopted by the present invention is an artificial intelligence multi-mode imaging analysis device, the device comprising:
a human body feature image and video acquisition system for capturing motion video signals of various body parts and static images of the human body;
a human body feature image database for storing historical image information, image feature information, and historical clinical diagnosis information;
an artificial intelligence hardware control and data analysis/processing system for automatically controlling the human body feature image and video acquisition system to complete video and image capture, and for analyzing and comparing the results with the information stored in the human body feature image database to provide disease early warning and/or health status assessment.
Preferably, the device comprises a microprocessor connected by wire or wirelessly to MedNet cloud medical big data; the artificial intelligence hardware control and data analysis/processing system and the human body feature image database are installed directly, or as mirror images, in the microprocessor for use.
Preferably, the human body feature image and video acquisition system comprises an optical imaging module, a normal-incidence illumination light source module, an oblique-incidence illumination light source module, and an instruction guide module;
the optical imaging module is used for image acquisition or video recording of a body part of interest;
the normal-incidence illumination light source module is used to focus the emitted light into the body part of interest;
the oblique-incidence illumination light source module is used for dark-field illumination of the body part of interest;
the instruction guide module is used to generate a cursor indication that guides the subject in adjusting the position and angle of the part of interest.
Preferably, the optical imaging module comprises one or more lenses movable back and forth, an imaging detector, and a matching universal adjustment bracket or pan/tilt head; the lens dynamically focuses on the body part of interest, the imaging detector performs image acquisition or video recording of that part, and the bracket or pan/tilt head changes the direction and angle of the lens and imaging detector to track human movement.
Preferably, the normal-incidence illumination light source module comprises one or more lenses movable back and forth, a beam splitter, a polarizer, a coupling collimating lens, and a light source; light emitted by the light source passes through the coupling collimating lens and the polarizer to the beam splitter, and the light reflected by the beam splitter is focused by the lens into the body part of interest.
Preferably, the oblique-incidence illumination light source module comprises movable first to fourth light sources illuminating from four different directions (above, below, left, and right), all or some of which provide dark-field illumination of the body part of interest.
Preferably, the instruction guide module comprises one or more lenses movable back and forth, a beam splitter, two polarizers, a lens, and a liquid crystal display; the liquid crystal display generates a dynamic indication cursor, which passes in turn through the lens and the first polarizer to the beam splitter; part of the light reflected by the beam splitter is focused by the movable lens onto the body part of interest, and cursor light reflected by that lens is transmitted through the beam splitter to the second polarizer and blocked, the first and second polarizers forming orthogonal polarization states.
Preferably, the artificial intelligence hardware control and data analysis/processing system comprises a mode selection and illumination control module, an image and video acquisition control module, a feedback guidance module, a target tracking module, a storage module, and a data analysis module;
the mode selection and illumination control module controls the combination mode of the optical imaging module, the normal-incidence illumination light source module, the oblique-incidence illumination light source module, and the instruction guide module, whereby the optical imaging module is controlled to be used alone or jointly with any one or more of the normal-incidence illumination light source module, the oblique-incidence illumination light source module, and the instruction guide module in free combination;
the image and video acquisition control module controls the optical imaging module to dynamically capture human motion video signals and static human body images;
the target tracking module dynamically tracks a body part of interest: by dynamically capturing images of that part, it controls the optical imaging module to automatically change the imaging angle as the body moves and automatically focuses the optical imaging module, ensuring that the part of interest stays at the center of the field of view with essentially unchanged size, thereby tracking the body part of interest;
the feedback guidance module guides the subject to adjust position and angle through voice prompts and/or the cursor indication of the instruction guide module;
the storage module stores the information acquired for the part of interest;
the data analysis module compares, by machine learning methods, the acquired information (e.g. similarity, image spectrum, entropy, physical signs) with the big-data information stored in the human body feature image database to provide disease early warning and/or health status assessment.
Preferably, the artificial intelligence hardware control and data analysis/processing system further comprises:
a feature extraction module for recognizing and extracting image features from the acquired information;
an image stitching and distortion correction module for stitching and distortion-correcting the extracted image features using a pre-established neural network model;
a feature labeling module for recognizing the features and parameters of the stitched, corrected feature images using a pre-trained deep convolutional neural network model and labeling the recognized features and parameters;
wherein the data analysis module predicts disease from the features and parameters of the part of interest using a pre-trained network model and compares the result with the big-data information stored in the human body feature image database, such as one or more of "gait", "blood vessels", "color", "texture", "plaques", "spots", "bands", "mounds", "pterygium", "metal ring", "moon halo", "reticulation", temperature, and physiological and pathological parameters, to warn of the subject's disease and/or assess health.
Preferably, the specific process of the image stitching and distortion correction step is:
first, detecting feature points using a corner matching method;
then, associating the feature points and removing unneeded corners using the RANSAC principle;
finally, designating a reference image, mapping the pixels of the input images onto the plane defined by the reference image, and achieving smooth transitions between the stitched images by image fusion;
wherein the corner matching method uses the Harris corner detection algorithm;
the feature point association uses a mutual information method;
the image fusion uses feathering, pyramid, or gradient methods to achieve smooth transitions between the stitched images.
By adopting the above technical solutions, the present invention has the following advantages:
1. The invention can quickly and accurately acquire video signals of human movement, such as gait and physical signs, and multiple static feature images of the eyes, face, tongue, palms, and other body parts, facilitating comprehensive analysis and evaluation of human health information;
2. The invention can intelligently track human movement, automatically focus, and automatically adjust magnification during video and image signal acquisition, greatly reducing dependence on subject cooperation and on the operator's professional skill and proficiency;
3. The invention uses oblique-incidence illumination and AI feedback guidance to reduce the interference of reflected light in localized imaging of the body; voice prompts and cursor indications guide the subject to change viewing direction and angle, automatically adjusting the curvature radius of each part of the eyeball and the position of the pupil so that the reflected images of the oblique-incidence light source from all parts of the eyeball converge to one point coinciding with the pupil, achieving imaging of the sclera, iris, etc. free of reflected-light interference;
4. The invention uses normal-incidence illumination and AI feedback guidance: light from the normal-incidence illumination light source module is focused into the pupil of the eyeball to illuminate the vitreous body and retina, and voice prompts and cursor indications guide the subject to automatically adjust the curvature radius of each part of the eyeball and the position of the eyes and to look straight at the guiding cursor, so that the optical imaging module can automatically achieve clear imaging of the fundus, retina, and vitreous body;
5. The invention supports manual entry of information and automatic storage of extracted feature information, expert medical-order information, and health analysis data, building a large database of human feature images and health analysis. Using an artificial intelligence algorithm combining multilayer neural networks with machine learning, it realizes multi-mode human feature information storage and human health analysis, including disease risk prediction for healthy people, screening and early warning for sub-healthy populations, and rapid non-invasive detection and long-term health tracking for patients with conditions such as diabetes, cardio-cerebrovascular disease, lung cancer, polycystic ovary syndrome, and AIDS, providing subjects with disease risk warning measures and health management advice;
6. The invention shares cloud resources and offers remote multimedia interaction, enabling AI big-data analysis of human feature images and health status and meeting the needs of the informatization of telemedicine and smart healthcare.
Brief Description of the Drawings
To describe the technical solutions of the embodiments of the present invention or the prior art more clearly, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings described below are merely embodiments of the present invention; those of ordinary skill in the art can obtain other drawings from the provided drawings without creative effort.
Fig. 1 is a schematic structural diagram of the artificial intelligence multi-mode imaging analysis device of an embodiment of the present invention;
Fig. 2 is a schematic diagram of the process of acquiring gait, physical-sign, and related information in an embodiment of the present invention;
Fig. 3 is a schematic diagram of the process of acquiring images of the eyes, face, tongue, palms, and other body parts in an embodiment of the present invention;
Fig. 4 is a schematic diagram of the process of achieving imaging free of reflected-light interference under AI feedback voice prompts and cursor guidance in an embodiment of the present invention;
Fig. 5 is a schematic diagram of the process of acquiring images of the fundus, retina, vitreous body, etc. in an embodiment of the present invention;
Fig. 6 is a workflow diagram of AI multi-mode imaging health analysis in an embodiment of the present invention;
Fig. 7 is a workflow diagram of eye-image acquisition positioning and distortion correction in an embodiment of the present invention;
Fig. 8 is a workflow diagram of white-of-the-eye feature recognition and labeling in an embodiment of the present invention;
Fig. 9 is a workflow diagram of machine-learning health analysis in an embodiment of the present invention.
Detailed Description of the Embodiments
Exemplary embodiments of the present invention are described in more detail below with reference to the drawings. Although the drawings show exemplary embodiments of the invention, it should be understood that the invention can be implemented in various forms and should not be limited by the embodiments set forth here; rather, these embodiments are provided so that the invention will be understood more thoroughly and its scope conveyed completely to those skilled in the art.
It should be understood that the terminology used herein is for describing particular example embodiments only and is not intended to be limiting. As used herein, the singular forms "a", "an", and "the" may also include the plural unless the context clearly indicates otherwise. The terms "comprises", "comprising", "containing", and "having" are inclusive and therefore specify the presence of the stated features, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, steps, operations, elements, components, and/or combinations thereof. The method steps, processes, and operations described herein are not to be construed as necessarily requiring their performance in the particular order described or illustrated, unless an order of performance is explicitly stated. It should also be understood that additional or alternative steps may be used.
As shown in Fig. 1, the artificial intelligence multi-mode imaging analysis device provided by this embodiment offers recognition and positioning of targets of interest, target tracking, feedback voice prompts and cursor guidance, automatic focusing, image and video signal acquisition, image stitching and distortion correction, feature recognition, feature extraction and labeling, data analysis, and related functions. It can capture human motion video signals and static images of the eyes, face, tongue, palms, and other body parts, and can perform image processing and feature recognition and labeling. The device comprises:
a human body feature image and video acquisition system CJXT, a human body feature image database BIGDATA, and an artificial intelligence hardware control and data analysis/processing system AI, wherein:
the human body feature image and video acquisition system CJXT captures motion video signals of various body parts and static human body images, the latter including images of the eyes, face, tongue, palms, and other body parts;
the human body feature image database BIGDATA stores a large amount of historical image information, image feature information, and historical clinical diagnosis information;
the artificial intelligence hardware control and data analysis/processing system AI uses an artificial intelligence algorithm combining multilayer neural networks with machine learning to automatically control the human body feature image and video acquisition/processing system, providing target tracking, automatic focusing, dynamic magnification, light source control, and video and image acquisition, and analyzes and compares the results with the information stored in the human body feature image database BIGDATA to warn of the subject's disease and/or assess the subject's health status. Feature recognition, feature extraction, and feature labeling may be performed on the acquired images and video signals, and the extracted feature information analyzed and compared with the historical information stored in BIGDATA; alternatively, without extracting features, disease early warning and/or health status assessment may be performed by directly comparing the similarity, spectral difference, entropy difference, etc. between the test images and the database-stored images.
In some embodiments of the invention, the device comprises a microprocessor Processor with digital signal storage, analysis and processing, voice-prompt display MK, and remote interaction functions; it can connect by wire or wirelessly to MedNet cloud medical big data, and the artificial intelligence hardware control and data analysis/processing system AI and the human body feature image database BIGDATA can be installed directly, or as mirror images, in the microprocessor Processor for use.
In some embodiments of the invention, the human body feature image and video acquisition system CJXT comprises an optical imaging module, a normal-incidence illumination light source module, an oblique-incidence illumination light source module, and an instruction guide module.
Specifically, the optical imaging module is used for image acquisition or video recording of a body part of interest and comprises one or more lenses L3, an imaging detector CCD, and a matching universal adjustment bracket or pan/tilt head M. Lens L3 focuses on the body part of interest, and the imaging detector CCD performs image acquisition or video recording of that part; the matching bracket or pan/tilt head adjusts the angle or position of the mounted lens L3 and imaging detector CCD. As shown in Fig. 2, under control of the artificial intelligence hardware control and data analysis/processing system AI, the optical imaging module can track the moving human body Object, automatically focus, and automatically adjust magnification for video and image signal acquisition, obtaining gait, body shape, physical signs, and related information.
The normal-incidence illumination light source module focuses the emitted light into a specific local body part, for example the pupil of the eyeball, illuminating the vitreous body and retina. It comprises one or more lenses L1 movable back and forth, a beam splitter SP1, a polarizer P1, a coupling collimating lens L5, and a light source S0: light emitted by S0 passes through the coupling collimating lens L5 and polarizer P1 to beam splitter SP1, and the light reflected by SP1 is focused by lens L1 into the pupil, illuminating the vitreous body and retina. Polarizer P1 gives the light emitted by S0 a linear polarization.
The oblique-incidence illumination light source module provides dark-field illumination of the body part of interest, reducing the influence of illumination reflections on the imaging of the optical imaging module. As shown in Fig. 3, it comprises four movable light sources S1, S2, S3, and S4 illuminating from four different directions (above, below, left, and right); under control of the artificial intelligence hardware control and data analysis/processing system, all or some of S1-S4 provide dark-field illumination of the body part of interest (such as the sclera or iris).
The instruction guide module generates a cursor indication on a display screen to guide the subject in adjusting the position, angle, and so on of the measured part of interest. It comprises one or more lenses L2 movable back and forth, a beam splitter SP2, two polarizers P2 and P3, a lens L4, and a liquid crystal display YJP. The display YJP generates a dynamic indication cursor that guides the subject in adjusting the position and angle of the measured part of interest; the cursor light passes in turn through lens L4 and polarizer P2 to beam splitter SP2, and the portion reflected by SP2 forms, through lens L2, a virtual image at the 250 mm near-point viewing distance of the human eye, which the eyeball then images clearly on the retina.
As shown in Fig. 3, under control of the artificial intelligence hardware control and data analysis/processing system AI, the oblique-incidence illumination light source module illuminates the body part of interest (such as the sclera or iris of the human eye) in dark field; light reflected or scattered back from that part is received by lens L2, transmitted through beam splitter SP2, and received by the optical imaging module through polarizer P3. Polarizers P3 and P2 form orthogonal polarization states, ensuring that reflections of light source S0 from lens L1 are completely blocked and never reach the imaging detector CCD. Note that when the optical imaging module is used jointly with the oblique-incidence and/or normal-incidence illumination light source modules, lenses L1, L2, and L3 lie on the same optical axis.
Preferably, based on feedback from the images detected by the imaging detector CCD, the artificial intelligence hardware control and data analysis/processing system AI can control the oblique-incidence illumination light source module and the instruction guide module to provide oblique-incidence dark-field illumination from any one or several directions; the angle Ψ between a light source S (any one of S1, S2, S3, and S4) and the optical axis is variable. Through the voice prompt MK of the microprocessor Processor and/or a cursor (an arrow, a cross, or another mark) generated on the liquid crystal display YJP of the instruction guide module, the subject is guided to change the viewing direction and the angle between the pupil and the optical axis. As shown in Fig. 4(a), the curvature radius of each part of the eyeball and the position of the pupil are thereby adjusted automatically, so that the reflected images of the oblique-incidence illumination light source from all parts of the eyeball converge to one point coinciding with the pupil, achieving imaging of the sclera, iris, etc. free of reflected-light interference, as shown in Fig. 4(b).
Preferably, the optical imaging module can be used alone, as shown in Fig. 2, or jointly with any one or more of the normal-incidence illumination light source module, the oblique-incidence illumination light source module, and the instruction guide module in free combination, as shown in Figs. 3 and 5.
Fig. 3 shows an example of the optical imaging module used jointly with the oblique-incidence illumination light source module and the instruction guide module. By jointly controlling the optical imaging module, the oblique-incidence illumination light source module, and the instruction guide module, the artificial intelligence hardware control and data analysis/processing system can perform dark-field illumination of a local body position, automatic positioning, automatic focusing, and automatic magnification adjustment, acquiring video and image signals to obtain image information of the eyes (images of the sclera, iris, vitreous body, etc.), face, tongue, palms, and other body parts.
Fig. 5 shows an example of the optical imaging module used jointly with the normal-incidence illumination light source module and the instruction guide module. In Fig. 5, the voice prompt of the microprocessor Processor and/or the cursor generated on the liquid crystal display YJP of the instruction guide module guides the subject to automatically adjust the curvature radius of each part of the eyeball and the position of the eyes and to look straight at the cursor of the instruction guide module; the artificial intelligence hardware control and data analysis/processing system AI then controls the normal-incidence illumination light source module to emit light, which lens L1 focuses into the pupil of the eyeball, illuminating the vitreous body and retina, so that the optical imaging module can achieve clear imaging of the fundus, retina, and vitreous body.
In some embodiments of the invention, the artificial intelligence hardware control and data analysis/processing system AI provides recognition and positioning of targets of interest, target tracking, feedback voice prompts and cursor guidance, automatic focusing, image and video signal acquisition, image stitching and distortion correction, feature recognition, feature extraction, feature labeling, data analysis, and related functions. As shown in Fig. 6, the system comprises a mode selection and illumination control module, an image and video acquisition control module, a target tracking module, a feedback guidance and automatic focusing module, a feature extraction and labeling module, an image stitching, distortion correction, and storage module, and a data analysis module;
the mode selection and illumination control module controls the combination mode of the normal-incidence illumination light source module, the oblique-incidence illumination light source module, and the instruction guide module;
the image and video acquisition control module controls the optical imaging module to dynamically capture human motion video signals and static images of the eyes, face, tongue, palms, and other body parts;
the target tracking module dynamically tracks the body part of interest: by automatically and dynamically capturing images of that part, it controls the universal adjustment bracket or pan/tilt head supporting the optical imaging module so that the imaging angle changes automatically as the body moves, while the lens L3 of the optical imaging module automatically focuses and adjusts magnification, keeping the part of interest at the center of the field of view with essentially unchanged size; this enables tracking measurement of gait, body shape, physical signs, and related information during human movement.
The feedback guidance module guides the subject through the voice prompt of the microprocessor and the cursor indication generated on the liquid crystal display YJP of the instruction guide module. In normal-incidence illumination mode, the subject is guided to automatically adjust the curvature radius of each part of the eyeball and the position of the eyes and to look straight at the cursor of the instruction guide module, so that the optical imaging module can clearly image the fundus, retina, and vitreous body; in oblique-incidence illumination mode, the subject is guided to change the viewing direction and the angle between the pupil and the optical axis, automatically adjusting the curvature radius of each part of the eyeball and the position of the pupil so that the reflected images of the oblique-incidence light source from all parts of the eyeball converge to one point coinciding with the pupil, achieving imaging of the sclera, iris, etc. free of reflected-light interference.
The feature extraction module recognizes and extracts features from the acquired images, where the feature recognition information includes one or more of "gait", "blood vessels", "color", "texture", "plaques", "spots", "bands", "mounds", "pterygium", "metal ring", "moon halo", "reticulation", temperature, similarity, image spectrum, entropy, and physiological and pathological parameters, which supports a comprehensive assessment of human health;
The image stitching and distortion correction module stitches and distortion-corrects the extracted features using a pre-established neural network model, as shown in Fig. 7. The neural network model is existing technology; the specific steps may include feature extraction, image registration, computation of the homography matrix H, warping, and fusion. After the eye feature images are input: first, feature points are detected by a corner matching method; then the feature points are associated, and the homography estimate is computed with the RANSAC principle while unneeded corners are removed; finally, a reference image is designated, the pixels of the input images are mapped onto the plane defined by the reference image, and smooth transitions between the stitched images are achieved by image fusion.
The feature labeling module recognizes the specific features and parameters of the stitched, corrected feature images with a pre-trained deep convolutional neural network model and labels the recognized features and parameters, as shown in Fig. 8. The neural network model may adopt existing technology; the specific steps include framing portions of the previously stitched whole-eye image with sliding windows of different sizes as eye-feature candidate regions, extracting the visual information associated with the candidate regions, and recognizing different eye features with a classifier, which will not be detailed here.
The storage module labels and stores the features and parameters of the part of interest;
the data analysis module can compare the acquired image or video information with the big-data information stored in the human body feature image database for disease early warning and/or health status assessment; that is, it directly compares the similarity, spectral difference, entropy difference, etc. between the test images and the database-stored images. It can also invoke the features and parameters of the part of interest to predict disease with a pre-trained network model and compare the result with the content stored in the human body feature image database, warning of the subject's disease and/or assessing health; the network model can be a BP neural network optimized and trained on the correlation between eye appearance and disease, which will not be detailed here.
In some embodiments of the invention, the corner matching method may use the existing Harris corner detection algorithm, which detects many features and offers rotation invariance, although it is scale-variant. The autocorrelation matrix is computed for each point (x, y) of the acquired image, and each pixel is then Gaussian-filtered to obtain a new matrix M, where the discrete two-dimensional zero-mean Gaussian function is
G(x, y) = (1 / (2πσ²)) · exp(−(x² + y²) / (2σ²))    (1)
where the displacement (u, v) is the displacement of the window at each corner detection step.
The corner measure of each point (x, y) is computed as
R = Det(M) − k · (trace(M))²    (2)
Local maxima are compared with a set threshold; a local maximum above the threshold is taken as a corner.
In some embodiments of the invention, the feature point association method adopts a mutual information (MI) method.
In some embodiments of the invention, image fusion adopts feathering, pyramid, gradient, or similar methods to achieve smooth transitions between the stitched images.
In some embodiments of the invention, the BP neural network adopted in this embodiment is as follows:
the neural-network health analysis uses 20 input nodes, two output nodes, and two hidden layers with 10 nodes each. The activation value of node i in layer j is
a_i^j = σ( Σ_n ω_{i,n}^j · a_n^{j−1} + b_i^j )    (3)
In equation (3), n runs over the nodes of layer j−1, ω is a weight, a is an activation value, and b is a bias.
The activation value of a node k in layer j is
a_k^j = σ( Σ_n ω_{k,n}^j · a_n^{j−1} + b_k^j )    (4)
In the above equation, k is a node in layer j; n runs over the nodes of layer j−1; ω is a weight; a is an activation value; b is a bias.
In some embodiments of the invention, the human body feature image database BIGDATA stores and manages the acquired human images and video signals, the extracted feature information, expert medical-order information, and health analysis data; it stores the feature recognition queues, intermediate data, feature recognition and labeling results, and other outputs trained by neural networks and machine learning. As shown in Fig. 9, it supports manual entry of information and automatic classified storage and management of the extracted feature information, expert medical-order information, and health analysis data.
In some embodiments of the invention, the artificial intelligence hardware control and data analysis/processing system AI and the human body feature image database BIGDATA can share cloud resources, offer remote multimedia interaction, share cloud data and information resources, and support telemedicine interaction, enabling AI big-data analysis of human feature images and health status to meet the needs of the informatization of telemedicine and smart healthcare. In use, a portion of the acquired human feature images can be set aside for neural network parameter tuning and machine learning of the AI system, forming trained feature recognition queues, intermediate data, and feature recognition and labeling results, all of which are stored in the database. Together with the acquired human images and video signals, the extracted feature information, expert medical-order information, and health analysis data, this builds the human body feature image database BIGDATA, so that health analysis can be performed on the subjects' remaining samples and final health analysis results given, including health warnings and conditioning advice for the subjects.
The imaging and detection process of the artificial intelligence multi-mode imaging device provided by this embodiment, which acquires human motion video signals and static images of the subject's eyes, face, tongue, palms, and other body parts, and performs image processing, feature recognition and labeling, disease diagnosis, and health analysis, is described in detail below and includes the following:
For human motion video acquisition and acquisition of whole-body or part-of-interest images of the subject, the artificial intelligence hardware control and data analysis/processing system AI controls the universal adjustment bracket or pan/tilt head of the optical imaging module, automatically focuses lens L3 of the optical imaging module, and automatically adjusts its magnification, tracking the moving body while acquiring video and image signals to obtain gait, body shape, physical signs, and related information. All information is processed and stored in the storage module, and the data analysis module analyzes and compares it with the information in the human body feature image database BIGDATA to obtain the subject's health status.
When image information of the subject's eyes (images of the sclera, iris, vitreous body, etc.), face, tongue, palms, or other body parts is needed, the artificial intelligence hardware control and data analysis/processing system AI jointly controls the optical imaging module, the oblique-incidence illumination light source module, and the instruction guide module: it first activates all or some of light sources S1-S4 of the oblique-incidence illumination light source module for dark-field illumination of the part of interest, then automatically focuses lens L3 of the optical imaging module and adjusts its magnification, and finally guides the subject, via the voice prompt MK of the microprocessor Processor and/or the cursor generated on the liquid crystal display YJP of the instruction guide module, to adjust the position and angle of the measured part of interest so that the reflected image of the oblique-incidence light source disappears completely, or converges to one point coinciding with the pupil, yielding a clear image or recorded video of the measured part. Subsequently, through digital image processing, distortion correction, AI feature recognition, feature extraction, and feature labeling, all information is stored in the storage module; the data analysis system analyzes and compares it with the information in the human feature image and health analysis database BIGDATA to obtain the subject's health analysis results.
When image information of the subject's fundus, retina, etc. is needed, the artificial intelligence hardware control and data analysis/processing system AI jointly controls the optical imaging module, the normal-incidence illumination light source module, and the instruction guide module: the voice prompt MK of the microprocessor Processor and/or the cursor generated on the liquid crystal display YJP of the instruction guide module first guides the subject to automatically adjust the curvature radius of each part of the eyeball and the position of the eyes and to look straight at the cursor generated on the display YJP; light source S0 of the normal-incidence illumination light source module is then switched on to focus light into the pupil of the eyeball, illuminating the vitreous body and retina, while lens L3 of the optical imaging module synchronously auto-focuses and adjusts magnification; finally the optical imaging module achieves clear imaging or video recording of the fundus, retina, and vitreous body. Subsequently, through digital image processing, distortion correction, AI feature recognition, feature extraction, and feature labeling, all information is stored in the storage module; the data analysis system analyzes and compares it with the information in the human feature image and health analysis database BIGDATA to obtain the subject's health analysis results.
The above embodiments are merely illustrative of the present invention; the structure, connections, and manufacturing processes of the components may vary, and any equivalent transformation or improvement made on the basis of the technical solution of the present invention shall not be excluded from the protection scope of the invention.

Claims (10)

  1. An artificial intelligence multi-mode imaging analysis device, characterized in that the device comprises:
    a human body feature image and video acquisition system for capturing motion video signals of various body parts and static images of the human body;
    a human body feature image database for storing historical image information, image feature information, and historical clinical diagnosis information;
    an artificial intelligence hardware control and data analysis/processing system for automatically controlling the human body feature image and video acquisition system to complete video and image capture, and for analyzing and comparing the results with the information stored in the human body feature image database to provide disease early warning and/or health status assessment.
  2. The artificial intelligence multi-mode imaging analysis device according to claim 1, characterized in that the device comprises a microprocessor connected by wire or wirelessly to MedNet cloud medical big data, the artificial intelligence hardware control and data analysis/processing system and the human body feature image database being installed directly, or as mirror images, in the microprocessor for use.
  3. The artificial intelligence multi-mode imaging analysis device according to claim 1 or 2, characterized in that the human body feature image and video acquisition system comprises an optical imaging module, a normal-incidence illumination light source module, an oblique-incidence illumination light source module, and an instruction guide module;
    the optical imaging module is used for image acquisition or video recording of a body part of interest;
    the normal-incidence illumination light source module is used to focus the emitted light into the body part of interest;
    the oblique-incidence illumination light source module is used for dark-field illumination of the body part of interest;
    the instruction guide module is used to generate a cursor indication that guides the subject in adjusting the position and angle of the part of interest.
  4. The artificial intelligence multi-mode imaging analysis device according to claim 3, characterized in that the optical imaging module comprises one or more lenses movable back and forth, an imaging detector, and a matching universal adjustment bracket or pan/tilt head; the lens dynamically focuses on the body part of interest, the imaging detector performs image acquisition or video recording of that part, and the bracket or pan/tilt head changes the direction and angle of the lens and imaging detector to track human movement.
  5. The artificial intelligence multi-mode imaging analysis device according to claim 3, characterized in that the normal-incidence illumination light source module comprises one or more lenses movable back and forth, a beam splitter, a polarizer, a coupling collimating lens, and a light source; light emitted by the light source passes through the coupling collimating lens and the polarizer to the beam splitter, and the light reflected by the beam splitter is focused by the lens into the body part of interest.
  6. The artificial intelligence multi-mode imaging analysis device according to claim 3, characterized in that the oblique-incidence illumination light source module comprises movable first to fourth light sources illuminating from four different directions (above, below, left, and right), all or some of which provide dark-field illumination of the body part of interest.
  7. The artificial intelligence multi-mode imaging analysis device according to claim 3, characterized in that the instruction guide module comprises one or more lenses movable back and forth, a beam splitter, two polarizers, a lens, and a liquid crystal display; the liquid crystal display generates a dynamic indication cursor, which passes in turn through the lens and the first polarizer to the beam splitter; part of the light reflected by the beam splitter is focused by the movable lens onto the body part of interest, and cursor light reflected by that lens is transmitted through the beam splitter to the second polarizer and blocked, the first and second polarizers forming orthogonal polarization states.
  8. The artificial intelligence multi-mode imaging analysis device according to claim 3, characterized in that the artificial intelligence hardware control and data analysis/processing system comprises a mode selection and illumination control module, an image and video acquisition control module, a feedback guidance module, a target tracking module, a storage module, and a data analysis module;
    the mode selection and illumination control module controls the combination mode of the optical imaging module, the normal-incidence illumination light source module, the oblique-incidence illumination light source module, and the instruction guide module, whereby the optical imaging module is controlled to be used alone or jointly with any one or more of the normal-incidence illumination light source module, the oblique-incidence illumination light source module, and the instruction guide module in free combination;
    the image and video acquisition control module controls the optical imaging module to dynamically capture human motion video signals and static human body images;
    the target tracking module dynamically tracks a body part of interest: by dynamically capturing images of that part, it controls the optical imaging module to automatically change the imaging angle as the body moves and automatically focuses the optical imaging module, ensuring that the part of interest stays at the center of the field of view with essentially unchanged size, thereby tracking the body part of interest;
    the feedback guidance module guides the subject to adjust position and angle through voice prompts and/or the cursor indication of the instruction guide module;
    the storage module stores the information acquired for the part of interest;
    the data analysis module compares, by machine learning methods, the acquired information with the big-data information stored in the human body feature image database to provide disease early warning and/or health status assessment.
  9. The artificial intelligence multi-mode imaging analysis device according to claim 8, characterized in that the artificial intelligence hardware control and data analysis/processing system further comprises:
    a feature extraction module for recognizing and extracting image features from the acquired information;
    an image stitching and distortion correction module for stitching and distortion-correcting the extracted image features using a pre-established neural network model;
    a feature labeling module for recognizing the features and parameters of the stitched, corrected feature images using a pre-trained deep convolutional neural network model and labeling the recognized features and parameters;
    wherein the data analysis module predicts disease from the features and parameters of the part of interest using a pre-trained network model and compares the result with the big-data information stored in the human body feature image database to warn of the subject's disease and/or assess health.
  10. The artificial intelligence multi-mode imaging analysis device according to claim 9, characterized in that the specific process of the image stitching and distortion correction step is:
    first, detecting feature points using a corner matching method;
    then, associating the feature points and removing unneeded corners using the RANSAC principle;
    finally, designating a reference image, mapping the pixels of the input images onto the plane defined by the reference image, and achieving smooth transitions between the stitched images by image fusion;
    wherein the corner matching method uses the Harris corner detection algorithm;
    the feature point association uses a mutual information method;
    the image fusion uses feathering, pyramid, or gradient methods to achieve smooth transitions between the stitched images.
PCT/CN2020/123568 2019-12-30 2020-10-26 一种人工智能多模成像分析装置 WO2021135557A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201911392016.0 2019-12-30
CN201911392016.0A CN111128382B (zh) 2019-12-30 2019-12-30 Artificial intelligence multi-mode imaging analysis device

Publications (1)

Publication Number Publication Date
WO2021135557A1 true WO2021135557A1 (zh) 2021-07-08

Family

ID=70504781

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2020/123568 WO2021135557A1 (zh) 2019-12-30 2020-10-26 Artificial intelligence multi-mode imaging analysis device

Country Status (2)

Country Link
CN (1) CN111128382B (zh)
WO (1) WO2021135557A1 (zh)

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113379397A (zh) * 2021-07-16 2021-09-10 Machine-learning-based intelligent cloud workflow management and scheduling system
CN113724874A (zh) * 2021-09-08 2021-11-30 Assessment-based test system
CN117316432A (zh) * 2023-10-11 2023-12-29 Morphological information data acquisition device for whole-body inspection diagnosis in traditional Chinese medicine
CN118486089A (zh) * 2024-07-16 2024-08-13 Gait-recognition-based assisted emotion analysis method and system

Families Citing this family (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111128382B (zh) 2019-12-30 2022-06-28 Artificial intelligence multi-mode imaging analysis device
CN113974546A (zh) * 2020-07-27 2022-01-28 Pterygium detection method and mobile terminal
CN115049575A (zh) * 2021-02-26 2022-09-13 Image analysis method and apparatus
CN113180971B (zh) * 2021-04-29 2023-09-22 Traditional Chinese medicine nursing device and control method
CN113390885B (zh) * 2021-08-17 2021-11-09 Device and method for detecting the state of a laser-head cutting protective lens
CN114664410B (zh) * 2022-03-11 2022-11-08 Video-based lesion classification method and apparatus, electronic device, and medium
CN116636808B (zh) * 2023-06-28 2023-10-31 Method and device for analyzing the visual health of a smart-cockpit driver
CN116784804A (zh) * 2023-07-19 2023-09-22 Pre-disease analysis and assessment system

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110021445A (zh) * 2017-08-15 2019-07-16 Medical system based on a VR model
US20190371450A1 (en) * 2018-05-30 2019-12-05 Siemens Healthcare Gmbh Decision Support System for Medical Therapy Planning
CN110619945A (zh) * 2018-06-19 2019-12-27 Characterization of the amount of training for an input of a machine learning network
CN111128382A (zh) 2019-12-30 2020-05-08 Artificial intelligence multi-mode imaging analysis device

Family Cites Families (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
DE10297574B4 (de) * 2001-12-21 2009-09-10 Method and device for eye detection
CN105310646B (zh) * 2015-12-09 2017-09-22 In-vivo human health analysis system based on shadow-free imaging of the white of the eye
CN105426695B (zh) * 2015-12-18 2018-08-03 Iris-based health status detection system
CN206497027U (zh) * 2017-02-28 2017-09-15 Optical-switch-based single-spectrometer polarization frequency-domain optical coherence tomography system
CN107307848B (zh) * 2017-05-27 2021-04-06 Face recognition and skin detection system based on high-speed wide-range scanning optical micro-angiography imaging
CN107680683A (zh) * 2017-10-09 2018-02-09 AI eye health assessment method

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110021445A (zh) * 2017-08-15 2019-07-16 Medical system based on a VR model
US20190371450A1 (en) * 2018-05-30 2019-12-05 Siemens Healthcare Gmbh Decision Support System for Medical Therapy Planning
CN110619945A (zh) * 2018-06-19 2019-12-27 Characterization of the amount of training for an input of a machine learning network
CN111128382A (zh) 2019-12-30 2020-05-08 Artificial intelligence multi-mode imaging analysis device

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113379397A (zh) * 2021-07-16 2021-09-10 Machine-learning-based intelligent cloud workflow management and scheduling system
CN113379397B (zh) * 2021-07-16 2023-09-22 Machine-learning-based intelligent cloud workflow management and scheduling system
CN113724874A (zh) * 2021-09-08 2021-11-30 Assessment-based test system
CN117316432A (zh) * 2023-10-11 2023-12-29 Morphological information data acquisition device for whole-body inspection diagnosis in traditional Chinese medicine
CN118486089A (zh) * 2024-07-16 2024-08-13 Gait-recognition-based assisted emotion analysis method and system

Also Published As

Publication number Publication date
CN111128382B (zh) 2022-06-28
CN111128382A (zh) 2020-05-08

Similar Documents

Publication Publication Date Title
WO2021135557A1 (zh) Artificial intelligence multi-mode imaging analysis device
US8371693B2 (en) Autism diagnosis support apparatus
CN105513077B (zh) System for diabetic retinopathy screening
JP6072798B2 (ja) 幼児や小児における瞳孔赤色反射検査と眼の角膜光反射スクリーニングを文書化し記録するためのシステムおよび方法
CN107184178A (zh) 一种智能便携式手持视力筛查仪及验光方法
CN110448267B (zh) Multi-mode fundus dynamic imaging analysis system and method
US10219693B2 (en) Systems and methods for combined structure and function evaluation of retina
CN102028477A (zh) Device and method for measuring retinal blood oxygen saturation of the fundus
US20190045170A1 (en) Medical image processing device, system, method, and program
WO2021256130A1 (ja) Slit lamp microscope
Besenczi et al. Automatic optic disc and optic cup detection in retinal images acquired by mobile phone
EP3695775B1 (en) Smartphone-based handheld optical device and method for capturing non-mydriatic retinal images
WO2021261103A1 (ja) Slit lamp microscope
Aloudat et al. Histogram analysis for automatic blood vessels detection: First step of IOP
Jansen et al. A torsional eye movement calculation algorithm for low contrast images in video-oculography
US20230301511A1 (en) Slit lamp microscope
TWI839124B (zh) 光學斷層掃描自測系統、光學斷層掃描方法及眼部病變監控系統
JP7540711B2 (ja) Automatic eye movement recording system, computing device, computing method, and program
Talib et al. A General Overview of Retinal Lesion Detecting Devices
CN117373075B (zh) Emotion recognition dataset based on eye feature points and eye region segmentation results
US20220151482A1 (en) Biometric ocular measurements using deep learning
Dong et al. High-speed pupil dynamic tracking algorithm for RAPD measurement equipment utilizing gray-level features
WO2024004455A1 (ja) Ophthalmic information processing device, ophthalmic device, ophthalmic information processing method, and program
CN117530654A (zh) Real-time binocular pupil examination system and detection method
US20220180509A1 (en) Diagnostic tool for eye disease detection

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 20909936

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 20909936

Country of ref document: EP

Kind code of ref document: A1