
CN115312185A - Artificial intelligence-based ophthalmic cataract screening method and system - Google Patents

Artificial intelligence-based ophthalmic cataract screening method and system

Info

Publication number
CN115312185A
Authority
CN
China
Prior art keywords
image
cataract
sub
neural network
screening
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Withdrawn
Application number
CN202210946533.3A
Other languages
Chinese (zh)
Inventor
王皓
李慧
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
AFFILIATED HOSPITAL OF JILIN MEDICAL COLLEGE
Original Assignee
AFFILIATED HOSPITAL OF JILIN MEDICAL COLLEGE
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by AFFILIATED HOSPITAL OF JILIN MEDICAL COLLEGE filed Critical AFFILIATED HOSPITAL OF JILIN MEDICAL COLLEGE
Priority to CN202210946533.3A
Publication of CN115312185A
Legal status: Withdrawn (current)

Classifications

    • G PHYSICS
    • G16 INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16H HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
    • G16H50/00 ICT specially adapted for medical diagnosis, medical simulation or medical data mining; ICT specially adapted for detecting, monitoring or modelling epidemics or pandemics
    • G16H50/20 ICT specially adapted for medical diagnosis, medical simulation or medical data mining; ICT specially adapted for detecting, monitoring or modelling epidemics or pandemics for computer-aided diagnosis, e.g. based on medical expert systems
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/08 Learning methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/0002 Inspection of images, e.g. flaw detection
    • G06T7/0012 Biomedical image inspection
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/10 Segmentation; Edge detection
    • G06T7/11 Region-based segmentation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/97 Determining parameters from multiple pictures
    • G PHYSICS
    • G16 INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16H HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
    • G16H30/00 ICT specially adapted for the handling or processing of medical images
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20081 Training; Learning
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20084 Artificial neural networks [ANN]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20212 Image combination
    • G06T2207/20221 Image fusion; Image merging
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/30 Subject of image; Context of image processing
    • G06T2207/30004 Biomedical image processing
    • G06T2207/30041 Eye; Retina; Ophthalmic

Landscapes

  • Engineering & Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Health & Medical Sciences (AREA)
  • General Physics & Mathematics (AREA)
  • Medical Informatics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Public Health (AREA)
  • Biomedical Technology (AREA)
  • Data Mining & Analysis (AREA)
  • Primary Health Care (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Radiology & Medical Imaging (AREA)
  • Epidemiology (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Evolutionary Computation (AREA)
  • Pathology (AREA)
  • Quality & Reliability (AREA)
  • Artificial Intelligence (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Databases & Information Systems (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Image Analysis (AREA)

Abstract

The invention provides an artificial intelligence-based method and system for screening ophthalmic cataract. The method comprises: acquiring a detection image for cataract, dividing it into a plurality of sub-images, screening each sub-image for suspected cataract image features, and labeling them; merging the sub-images that carry image features into at least one standard image to form a combined image; sequentially inputting each sub-image carrying image features into a first neural network unit, which compares them against a historical feature library to check whether each sub-image shows a cataract characterization and produces a first screening result; inputting the detection image and the combined image, together with the output of the first neural network unit, into independent second neural network units, which take the feature selector as the learning-task training factor and train on the detection image and the combined image respectively to obtain a second screening result; and using the first screening result as an indication to mark the parts of the second screening result that coincide with it, those parts being taken as the cataract screening result.

Description

Artificial intelligence-based ophthalmic cataract screening method and system
Technical Field
The invention relates to the technical field of intelligent cataract-screening detection, and in particular to an artificial intelligence-based ophthalmic cataract screening method and system.
Background
Conventional cataract detection relies on the physician's experience or on electroretinograms (ERGs). In the early stage of cataract, however, the lens shows no crystalline change yet; typically there are only yellow spots (macular spots) or slight retinal detachment, which are difficult to distinguish by manual inspection and are easily confused with organ damage or other lesions; for example, an early symptom of hepatitis B is yellow spots distributed over the eyeball. The initial stage of cataract often coincides with the onset of various conditions, for example small lumps on the macula and retina, slight retinal detachment, or punctate turbidity of the lens (not obvious and generally not judgeable by eye). Early-stage cataract can be treated well, whereas late-stage cataract must be treated surgically, so cataract should be found early and prevented early. Relying on physician experience or on electroretinograms (ERGs) alone, however, tends to overlook early symptoms and delays treatment for some patients.
Disclosure of Invention
In view of the above, the present invention provides an artificial intelligence-based method and system for ophthalmic cataract screening. The technical solution adopted by the invention is as follows:
The artificial intelligence-based ophthalmic cataract screening method comprises the following steps:
acquiring a detection image for cataract, dividing the detection image into a plurality of sub-images according to a set rule, and screening each sub-image for suspected cataract image features and labeling them;
merging the sub-images that carry image features into at least one standard image to form a combined image;
sequentially inputting each sub-image that carries image features into a first neural network unit, comparing against a historical feature library to check whether each sub-image shows a cataract characterization, establishing a first screening result from the cataract characterizations identified in the sub-images, and establishing labeled features in the first screening result from those characterizations;
inputting the detection image and the combined image, together with the output of the first neural network unit, into independent second neural network units; each second neural network unit acquires the labeled features and adds them to a feature selector, takes the feature selector as the learning-task training factor to iteratively train on the detection image and the combined image respectively and obtain a second screening result, uses the first screening result as an indication to mark the parts of the second screening result that coincide with it, and takes the indicated result as the cataract screening result.
The invention also provides an artificial intelligence-based ophthalmic cataract screening system, which comprises:
the image acquisition part is used for acquiring a plurality of groups of detection images of the eyes;
the sample recognizer is used for carrying out primary recognition on a plurality of groups of detection images so as to obtain a detection image with definition meeting the set requirement as a sample image;
the segmentation matrix has a plurality of segmentation units and a deviation-correction part; the sample image is loaded into a standard template, the deviation-correction part corrects the sample image about the eyeball center, and after correction the sample image is segmented into a plurality of sub-images according to a segmentation rule;
the artificial intelligence system has a screener, a combining part, a task-flow manager, an indicating unit, a first neural network unit, and at least two second neural network units constructed with the first neural network unit as the task-flow loop,
the screener is used for receiving each sub-image in sequence and for identifying and processing it, so as to screen out and label suspected cataract image features;
the combining part is used for merging the sub-images that carry image features into at least one standard image to form a combined image;
the task-flow manager sequentially inputs each sub-image carrying image features into the first neural network unit for comparison against the historical feature library, so as to check whether each sub-image shows a cataract characterization, establishes a first screening result from the cataract characterizations, and establishes labeled features in the first screening result from those characterizations;
the task-flow manager also inputs the sample image and the combined image, together with the output of the first neural network unit, into the corresponding second neural network units; each second neural network unit acquires the labeled features and adds them to a feature selector, and takes the feature selector as the learning-task training factor to iteratively train on the detection image and the combined image respectively and obtain a second screening result;
the indicating unit uses the first screening result as an indication, marks the parts of the second screening result that coincide with it, and takes the indicated result as the cataract screening result.
Further, the sample identifier has:
the sliding window is used for respectively scanning the detection images so as to eliminate incomplete or unclear detection images;
and the amplifier, which magnifies the detection images that pass the sliding-window check by a set proportion and then performs a secondary check with the detection window, so as to control image quality and obtain a detection image whose clarity meets the set requirement as the sample image.
Further, the segmentation matrix has:
M × M segmentation units, where M ≥ 2 and M is an integer;
each segmentation unit is a square frame;
the segmentation matrix is embedded in a standard template, and the center of the segmentation matrix formed by the segmentation units coincides with the center of the standard template;
a floating correction cursor is set at the center of the standard template; the sample image is loaded into the standard template, the deviation-correction part first scans and locates the sample image to obtain the eyeball center position in the sample image and moves the eyeball center to the cursor for correction, and after correction the sample image is divided into a plurality of sub-images, one per segmentation unit, according to the segmentation rule.
Further, the screener has:
a recognition unit connected to each segmentation unit in the segmentation matrix, for acquiring the region of the sub-image corresponding to that segmentation unit;
a comparator coupled with an image extractor; the image extractor sequentially acquires image subunits from the sub-images, and the comparator loads the recognition library and compares the image subunits one by one with the standard atlas stored in the recognition library, so as to determine whether an image subunit shows any suspected cataract image feature among yellow spots, crystals, foreign matter and tumors; if so, the image subunit is labeled and stored in a storage module.
Further, the combining section has:
the sorting unit is used for acquiring at least one labeled image subunit and restoring the positions of the image subunits in the sub-images based on the labels to form an initial combined image;
and the combining unit is used for filling the missing parts of the initial combined image from the standard image and combining them to form the combined image.
Further, the first neural network unit loads each sub-image with image features from the storage module and conducts iterative training based on the historical feature library to check whether each sub-image has cataract characterization, a first screening result is established according to the cataract characterization, and a labeling feature is established in the first screening result according to the cataract characterization.
Further, the second neural network units are established as the task unfolds, with the first neural network unit serving as the task-flow loop.
Further, the exchange of labeled features between the second neural network units and the first neural network unit is enabled.
According to the method, a plurality of detection images of the eye are obtained, incomplete or unclear detection images are screened out with the sliding window, and the remaining images are magnified by a set factor to control image quality, so that the detection image whose clarity best meets the set requirement is obtained as the sample image. The sample image is divided by the segmentation matrix into a plurality of sub-images of the same size; the image extractor then sequentially acquires image subunits from each sub-image, and the comparator loads the recognition library and compares the image subunits one by one with the standard atlas stored in it, to determine whether an image subunit shows any suspected cataract image feature among yellow spots, crystals, foreign matter and tumors; if so, the image subunit is labeled and stored in the storage module. The standard atlas is a set of images, judged by experts to be consistent with cataract characterization, taken at different parts of the eye and belonging to different cataract stages. After the image subunits with suspected cataract characterization have been screened out, their positions in the sub-images are restored based on the labels to form an initial combined image, and the missing parts of the initial combined image are filled from the standard image and combined to form the combined image.
The task-flow manager sequentially inputs each sub-image carrying image features into the first neural network unit for comparison against the historical feature library, so as to check whether each sub-image shows a cataract characterization, establishes a first screening result from the cataract characterizations, and establishes labeled features in the first screening result from those characterizations. It also inputs the sample image and the combined image, together with the output of the first neural network unit, into the corresponding second neural network units; each second neural network unit acquires the labeled features and adds them to a feature selector, and takes the feature selector as the learning-task training factor to iteratively train on the detection image and the combined image respectively and obtain a second screening result. The indicating unit uses the first screening result as an indication, marks the parts of the second screening result that coincide with it, and takes the indicated result as the cataract screening result.
In the above, a primary screening for cataract characterization is performed first and suspected characterizations are labeled during it; the sub-images with suspected characterizations are then iteratively trained in the first neural network unit against the historical feature library to check whether each sub-image shows a cataract characterization, a first screening result is established from those characterizations, and labeled features are established in the first screening result from them;
the sample image and the combined image are input into the second neural network units, which are iteratively trained with the feature selector to obtain a second screening result; the first screening result is then used as an indication to mark the parts of the second screening result that coincide with it, and the number of coinciding parts indicates whether a cataract characterization is present. In general, if no parts coincide, the probability of cataract is very small, below three ten-thousandths, and can be neglected; if at least one part coincides, the probability of cataract rises to about sixty percent; and if several parts coincide, the eye has usually entered at least the incipient cataract stage. Screening cataract with this artificial-intelligence image-recognition approach is highly reliable, assists the physician's judgement, and in particular can to a large extent detect the characterizations of early cataract, so that early prevention and early treatment become possible.
Drawings
The invention is illustrated and described in the following drawings by way of example only and without limitation of its scope, in which:
FIG. 1 is a flow chart of a method of the present invention;
FIG. 2 is a detailed process diagram of step 1 in the present invention;
fig. 3 is a schematic diagram of the framework of the invention.
Detailed Description
In order to make the objects, technical solutions, design methods, and advantages of the present invention more apparent, the present invention will be further described in detail by specific embodiments with reference to the accompanying drawings. It should be understood that the specific embodiments described herein are merely illustrative of the invention and are not intended to limit the invention.
Referring to fig. 1 to 3, the present invention provides an artificial intelligence-based screening method for ophthalmic cataract, comprising the following steps:
step 1: acquiring a detection image of cataract, dividing the detection image into a plurality of subimages according to a set rule, screening the image characteristics of suspected cataract from each subimage and marking;
referring to fig. 2, step 1 specifically includes:
(1) Acquire a detection image for cataract, and scan the detection images to eliminate incomplete or unclear ones; magnify the detection image by a set proportion with the amplifier and then perform a secondary check with the detection window to control image quality, so as to obtain a detection image whose clarity meets the set requirement as the sample image;
(2) Embed the segmentation matrix in a standard template so that the center of the segmentation matrix formed by the segmentation units coincides with the center of the standard template; set a floating correction cursor at the center of the standard template, load the sample image into the standard template, have the deviation-correction part first scan and locate the sample image to obtain the eyeball center position, move the eyeball center to the cursor for correction, and after correction divide the sample image into a plurality of sub-images, one per segmentation unit, according to the segmentation rule;
(3) Acquire the region of each sub-image based on its segmentation unit; sequentially acquire image subunits from the sub-images with the image extractor, and load the recognition library with the comparator to compare the image subunits one by one with the standard atlas stored in the recognition library, so as to determine whether an image subunit shows any suspected cataract image feature among yellow spots, crystals, foreign matter and tumors; if so, label the image subunit and store it in the storage module, as sketched below.
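As a concrete illustration of sub-step (3), the sketch below scans a sub-image in fixed-size image subunits and compares each against reference patches of an assumed standard atlas. The patent does not specify the comparison metric, the subunit size, or any threshold, so the normalized cross-correlation, the 32-pixel unit, and the 0.6 threshold are illustrative assumptions rather than the claimed implementation.

```python
# Illustrative sketch of sub-step (3); metric, subunit size and threshold are assumptions.
import numpy as np

def normalized_correlation(patch: np.ndarray, template: np.ndarray) -> float:
    """Normalized cross-correlation between two equally sized grayscale patches."""
    p = (patch - patch.mean()) / (patch.std() + 1e-8)
    t = (template - template.mean()) / (template.std() + 1e-8)
    return float((p * t).mean())

def screen_subimage(sub_image: np.ndarray,
                    atlas: dict[str, list[np.ndarray]],
                    unit: int = 32, threshold: float = 0.6) -> list[dict]:
    """Scan one sub-image in unit x unit image subunits and label every subunit whose
    best match against the atlas (yellow spots, crystals, foreign matter, tumors)
    exceeds the threshold; labeled subunits would then go to the storage module."""
    labels = []
    h, w = sub_image.shape
    for y in range(0, h - unit + 1, unit):
        for x in range(0, w - unit + 1, unit):
            subunit = sub_image[y:y + unit, x:x + unit]
            for feature, templates in atlas.items():
                if not templates:
                    continue
                # atlas templates are assumed to be unit x unit grayscale patches
                score = max(normalized_correlation(subunit, t) for t in templates)
                if score >= threshold:
                    labels.append({"feature": feature, "pos": (y, x),
                                   "score": score, "pixels": subunit})
                    break  # one suspected label per subunit is enough for screening
    return labels
```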
Step 2: merging the sub-images that carry image features into at least one standard image to form a combined image. Step 2 specifically comprises:
(a) acquiring at least one labeled image subunit and restoring the positions of the image subunits in the sub-images with the sorting unit, based on the labels, to form an initial combined image;
(b) filling the missing parts of the initial combined image from the standard image and combining them to form the combined image (see the sketch below).
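A minimal sketch of step 2, assuming each labeled subunit carries its pixels and a position expressed in the full-image coordinate frame (as produced by the screening sketch above); how the patent actually registers sub-image coordinates and selects the standard image is not specified, so both are assumptions.

```python
import numpy as np

def build_combined_image(standard_image: np.ndarray,
                         labelled_subunits: list[dict],
                         unit: int = 32) -> np.ndarray:
    """(a) restore each labeled image subunit to its position to form the initial
    combined image; (b) fill every remaining (missing) part from the standard image."""
    combined = standard_image.copy()                       # missing parts: standard image
    for item in labelled_subunits:
        y, x = item["pos"]
        combined[y:y + unit, x:x + unit] = item["pixels"]  # restored suspected regions
    return combined
```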
Step 3: sequentially inputting each sub-image that carries image features into a first neural network unit, comparing against a historical feature library to check whether each sub-image shows a cataract characterization, establishing a first screening result from the cataract characterizations identified in the sub-images, and establishing labeled features in the first screening result from those characterizations;
Step 4: inputting the detection image and the combined image, together with the output of the first neural network unit, into independent second neural network units; each second neural network unit acquires the labeled features and adds them to a feature selector, takes the feature selector as the learning-task training factor to iteratively train on the detection image and the combined image respectively and obtain a second screening result, uses the first screening result as an indication to mark the parts of the second screening result that coincide with it, and takes the indicated result as the cataract screening result.
The invention also provides an ophthalmic cataract screening system based on artificial intelligence, which comprises:
the image acquisition part is used for acquiring a plurality of groups of detection images of the eyes;
the sample recognizer is used for carrying out primary recognition on a plurality of groups of detection images so as to obtain a detection image with definition meeting the set requirement as a sample image;
the segmentation matrix has a plurality of segmentation units and a deviation-correction part; the sample image is loaded into a standard template, the deviation-correction part corrects the sample image about the eyeball center, and after correction the sample image is segmented into a plurality of sub-images according to a segmentation rule;
the artificial intelligence system has a screener, a combining part, a task-flow manager, an indicating unit, a first neural network unit, and at least two second neural network units constructed with the first neural network unit as the task-flow loop,
the screener is used for receiving each sub-image in sequence and for identifying and processing it, so as to screen out and label suspected cataract image features;
the combining part is used for merging the sub-images that carry image features into at least one standard image to form a combined image;
the task-flow manager sequentially inputs each sub-image carrying image features into the first neural network unit for comparison against the historical feature library, so as to check whether each sub-image shows a cataract characterization, establishes a first screening result from the cataract characterizations, and establishes labeled features in the first screening result from those characterizations;
the task-flow manager also inputs the sample image and the combined image, together with the output of the first neural network unit, into the corresponding second neural network units; each second neural network unit acquires the labeled features and adds them to a feature selector, and takes the feature selector as the learning-task training factor to iteratively train on the detection image and the combined image respectively and obtain a second screening result;
the indicating unit uses the first screening result as an indication, marks the parts of the second screening result that coincide with it, and takes the indicated result as the cataract screening result.
In the above, the result of the first neural network unit's iterative training is used as part of the training factor of the second neural network units, and the feature selector contains not only the historical feature library but also the results of the first neural network unit's continuing iterative training, so the feature selector's accumulated experience is richer.
In the above, the sample identifier has:
the sliding window, which is used for scanning the detection images respectively, so as to eliminate incomplete or unclear detection images;
and the amplifier, which magnifies the detection images that pass the sliding-window check by a set proportion and then performs a secondary check with the detection window, so as to control image quality and obtain a detection image whose clarity meets the set requirement as the sample image.
In the above, the sliding window is used only to assess the quality of the detection image; the gray-scale values of the detection image are deliberately not checked, because in mid- and late-stage cataract the detection image contains crystals whose gray-scale values are high, and a gray-scale check would directly exclude such images. It is therefore sufficient to screen out incomplete or unclear detection images with the sliding window and then magnify the detection image by a set factor to control its quality, so as to obtain the detection image whose clarity best meets the set requirement as the sample image.
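A sketch of how such a sliding-window check might work, under stated assumptions: "unclear" is approximated by a low mean gradient magnitude, "incomplete" by near-constant (blank) windows, and magnification by a simple nearest-neighbor repeat; none of these specifics are given in the patent. In line with the passage above, no gray-scale threshold is applied.

```python
import numpy as np

def window_sharpness(window: np.ndarray) -> float:
    """Mean gradient magnitude of one window, used as a stand-in for clarity."""
    gy, gx = np.gradient(window.astype(float))
    return float(np.hypot(gx, gy).mean())

def passes_quality(image: np.ndarray, win: int = 64,
                   min_sharpness: float = 2.0, scale: int = 2) -> bool:
    """First pass: slide a window over the detection image and reject the image if
    any window is blank (incomplete) or too blurred. Second pass: magnify by a set
    factor and re-check, as described above. No gray-scale check is performed."""
    def scan(img: np.ndarray, step: int, sharp_min: float) -> bool:
        h, w = img.shape
        for y in range(0, h - step + 1, step):
            for x in range(0, w - step + 1, step):
                window = img[y:y + step, x:x + step]
                if window.std() < 1e-3 or window_sharpness(window) < sharp_min:
                    return False
        return True

    if not scan(image, win, min_sharpness):
        return False
    magnified = np.repeat(np.repeat(image, scale, axis=0), scale, axis=1)
    # nearest-neighbor magnification dilutes gradients by roughly the scale factor
    return scan(magnified, win * scale, min_sharpness / scale)
```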
In the above, the segmentation matrix has:
M × M segmentation units, where M ≥ 2 and M is an integer;
each segmentation unit is a square frame; specifically, a square frame with a side length of 2-5 mm may be used, and preferably M is 3, 4 or 5.
the segmentation matrix is embedded in a standard template, and the center of the segmentation matrix formed by the segmentation units coincides with the center of the standard template;
a floating correction cursor is set at the center of the standard template; the sample image is loaded into the standard template, the deviation-correction part first scans and locates the sample image to obtain the eyeball center position in the sample image and moves the eyeball center to the cursor for correction, and after correction the sample image is divided into a plurality of sub-images, one per segmentation unit, according to the segmentation rule, as sketched below.
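A minimal sketch of the deviation correction and M × M segmentation just described. Locating the eyeball center is assumed to be done elsewhere and passed in; np.roll is used as a simple stand-in for shifting the image onto the correction cursor (a real implementation would pad rather than wrap), and M = 4 is only an example from the preferred range.

```python
import numpy as np

def recenter_on_eyeball(image: np.ndarray, eye_center: tuple[int, int]) -> np.ndarray:
    """Deviation correction: shift the sample image so the detected eyeball center
    coincides with the template center (the floating correction cursor)."""
    h, w = image.shape
    dy, dx = h // 2 - eye_center[0], w // 2 - eye_center[1]
    return np.roll(np.roll(image, dy, axis=0), dx, axis=1)  # wraps edges; padding is cleaner

def split_into_grid(image: np.ndarray, m: int = 4) -> list[np.ndarray]:
    """Cut the corrected sample image into an M x M matrix of square sub-images
    (the description prefers M in {3, 4, 5})."""
    side = min(image.shape) // m
    return [image[r * side:(r + 1) * side, c * side:(c + 1) * side]
            for r in range(m) for c in range(m)]
```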
In the above, the screener has:
a recognition unit connected to each segmentation unit in the segmentation matrix, which acquires the region of the sub-image corresponding to that segmentation unit;
a comparator coupled with an image extractor; the image extractor sequentially acquires image subunits from the sub-images, and the comparator loads the recognition library and compares the image subunits one by one with the standard atlas stored in the recognition library, so as to determine whether an image subunit shows any suspected cataract image feature among yellow spots, crystals, foreign matter and tumors; if so, the image subunit is labeled and stored in the storage module. The standard atlas is a set of images, judged by experts to be consistent with cataract characterization, taken at different parts of the eye and belonging to different cataract stages.
In the above, the combining section has:
the sorting unit is used for acquiring at least one labeled image subunit and restoring the positions of the image subunits in the sub-images based on the labels to form an initial combined image;
and the combining unit is used for filling the missing parts of the initial combined image from the standard image and combining them to form the combined image.
In the above, the first neural network unit loads each sub-image carrying image features from the storage module and performs iterative training based on the historical feature library to check whether each sub-image shows a cataract characterization, establishes a first screening result from the characterizations, and establishes labeled features in the first screening result from those characterizations.
In the above, the historical feature library is obtained by manually labeling confirmed detection images from the early, middle and late stages of cataract; it contains a large number of characterization images from the different stages and thus supports experience learning (one possible organization is sketched below).
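The description only says the library consists of confirmed, manually labeled detection images from the early, middle and late stages; one possible (assumed) way to organize such entries is sketched below.

```python
from dataclasses import dataclass
import numpy as np

@dataclass
class FeatureLibraryEntry:
    """One manually labeled, diagnosis-confirmed characterization image."""
    image: np.ndarray          # the labeled detection image (or a cropped region)
    stage: str                 # "early", "middle" or "late"
    characterization: str      # e.g. "yellow spot", "crystal", "foreign matter", "tumor"

# the historical feature library is then simply a collection of such entries,
# grouped by stage to support experience learning
historical_feature_library: dict[str, list[FeatureLibraryEntry]] = {
    "early": [], "middle": [], "late": []
}
```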
In the above, the second neural network units are established as the task unfolds, with the first neural network unit serving as the task-flow loop.
In the above, the exchange of labeled features between the second neural network units and the first neural network unit is enabled.
The principle of the application is as follows: a plurality of detection images of the eye are obtained, incomplete or unclear detection images are screened out with the sliding window, and the remaining images are magnified by a set factor to control image quality, so that the detection image whose clarity best meets the set requirement is obtained as the sample image. The sample image is divided by the segmentation matrix into a plurality of sub-images of the same size; the image extractor then sequentially acquires image subunits from each sub-image, and the comparator loads the recognition library and compares the image subunits one by one with the standard atlas stored in it, to determine whether an image subunit shows any suspected cataract image feature among yellow spots, crystals, foreign matter and tumors; if so, the image subunit is labeled and stored in the storage module. The standard atlas is a set of images, judged by experts to be consistent with cataract characterization, taken at different parts of the eye and belonging to different cataract stages. After the image subunits with suspected cataract characterization have been screened out, their positions in the sub-images are restored based on the labels to form an initial combined image, and the missing parts of the initial combined image are filled from the standard image and combined to form the combined image.
The task-flow manager sequentially inputs each sub-image carrying image features into the first neural network unit for comparison against the historical feature library, so as to check whether each sub-image shows a cataract characterization, establishes a first screening result from the cataract characterizations, and establishes labeled features in the first screening result from those characterizations. It also inputs the sample image and the combined image, together with the output of the first neural network unit, into the corresponding second neural network units; each second neural network unit acquires the labeled features and adds them to the feature selector, and takes the feature selector as the learning-task training factor to iteratively train on the detection image and the combined image respectively and obtain a second screening result. The indicating unit uses the first screening result as an indication, marks the parts of the second screening result that coincide with it, and takes the indicated result as the cataract screening result.
In the above, a primary screening for cataract characterization is performed first and suspected characterizations are labeled during it; the sub-images with suspected characterizations are then iteratively trained in the first neural network unit against the historical feature library to check whether each sub-image shows a cataract characterization, a first screening result is established from those characterizations, and labeled features are established in the first screening result from them;
the sample image and the combined image are input into the second neural network units, which are iteratively trained with the feature selector to obtain a second screening result; the first screening result is then used as an indication to mark the parts of the second screening result that coincide with it, and the number of coinciding parts indicates whether a cataract characterization is present. In general, if no parts coincide, the probability of cataract is very small, below three ten-thousandths, and can be neglected; if at least one part coincides, the probability of cataract rises to about sixty percent; and if several parts coincide, the eye has usually entered at least the incipient cataract stage. Screening cataract with this artificial-intelligence image-recognition approach is highly reliable, assists the physician's judgement, and in particular can to a large extent detect the characterizations of early cataract, so that early prevention and early treatment become possible.
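The figures in the passage above (below three ten-thousandths when no parts coincide, about sixty percent when one coincides, at least the incipient stage when several coincide) translate directly into a small decision rule; the wording of the returned messages is of course illustrative, not part of the patent.

```python
def interpret_overlap(overlap_count: int) -> str:
    """Map the number of regions common to both screening results to the risk
    levels given in the description above."""
    if overlap_count == 0:
        return "probability of cataract below 0.03% - negligible"
    if overlap_count == 1:
        return "probability of cataract rises to about 60% - follow up"
    return "multiple indications - likely at least incipient cataract, refer to a physician"
```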
Having described embodiments of the present invention, the foregoing description is intended to be exemplary, not exhaustive, and not limited to the embodiments disclosed. Many modifications and variations will be apparent to those of ordinary skill in the art without departing from the scope and spirit of the described embodiments. The terminology used herein is chosen in order to best explain the principles of the embodiments, the practical application, or improvements made to the technology in the marketplace, or to enable others of ordinary skill in the art to understand the embodiments disclosed herein.

Claims (9)

1. An artificial intelligence-based ophthalmic cataract screening method, characterized by comprising the following steps:
acquiring a detection image for cataract, dividing the detection image into a plurality of sub-images according to a set rule, and screening each sub-image for suspected cataract image features and labeling them;
merging the sub-images that carry image features into at least one standard image to form a combined image;
sequentially inputting each sub-image that carries image features into a first neural network unit, comparing against a historical feature library to check whether each sub-image shows a cataract characterization, establishing a first screening result from the cataract characterizations identified in the sub-images, and establishing labeled features in the first screening result from those characterizations;
inputting the detection image and the combined image, together with the output of the first neural network unit, into independent second neural network units; each second neural network unit acquires the labeled features and adds them to a feature selector, takes the feature selector as the learning-task training factor to iteratively train on the detection image and the combined image respectively and obtain a second screening result, uses the first screening result as an indication to mark the parts of the second screening result that coincide with it, and takes the indicated result as the cataract screening result.
2. An artificial intelligence-based ophthalmic cataract screening system, characterized by comprising:
an image acquisition part, used for acquiring a plurality of groups of detection images of the eye;
a sample recognizer, used for performing primary recognition on the plurality of groups of detection images so as to obtain a detection image whose clarity meets the set requirement as a sample image;
a segmentation matrix, having a plurality of segmentation units and a deviation-correction part, wherein the sample image is loaded into a standard template, the deviation-correction part corrects the sample image about the eyeball center, and after correction the sample image is segmented into a plurality of sub-images according to a segmentation rule;
an artificial intelligence system, having a screener, a combining part, a task-flow manager, an indicating unit, a first neural network unit, and at least two second neural network units constructed with the first neural network unit as the task-flow loop, wherein
the screener is used for receiving each sub-image in sequence and for identifying and processing it, so as to screen out and label suspected cataract image features;
the combining part is used for merging the sub-images that carry image features into at least one standard image to form a combined image;
the task-flow manager sequentially inputs each sub-image carrying image features into the first neural network unit for comparison against the historical feature library, so as to check whether each sub-image shows a cataract characterization, establishes a first screening result from the cataract characterizations, and establishes labeled features in the first screening result from those characterizations;
the task-flow manager also inputs the sample image and the combined image, together with the output of the first neural network unit, into the corresponding second neural network units; each second neural network unit acquires the labeled features and adds them to a feature selector, and takes the feature selector as the learning-task training factor to iteratively train on the detection image and the combined image respectively and obtain a second screening result;
the indicating unit uses the first screening result as an indication, marks the parts of the second screening result that coincide with it, and takes the indicated result as the cataract screening result.
3. The artificial intelligence based ophthalmic cataract screening system of claim 2, wherein the specimen recognizer has:
a sliding window, used for scanning the detection images respectively so as to eliminate incomplete or unclear detection images;
and an amplifier, which magnifies the detection images that pass the sliding-window check by a set proportion and then performs a secondary check with the detection window, so as to control image quality and obtain a detection image whose clarity meets the set requirement as the sample image.
4. The artificial intelligence based ophthalmic cataract screening system of claim 2, wherein the segmentation matrix has:
M × M segmentation units, where M ≥ 2 and M is an integer;
each segmentation unit is a square frame;
the segmentation matrix is embedded in a standard template, and the center of the segmentation matrix formed by the segmentation units coincides with the center of the standard template;
a floating correction cursor is set at the center of the standard template; the sample image is loaded into the standard template, the deviation-correction part first scans and locates the sample image to obtain the eyeball center position in the sample image and moves the eyeball center to the cursor for correction, and after correction the sample image is divided into a plurality of sub-images, one per segmentation unit, according to the segmentation rule.
5. The artificial intelligence based ophthalmic cataract screening system of claim 2, wherein the screener has:
a recognition unit connected to each segmentation unit in the segmentation matrix, for acquiring the region of the sub-image corresponding to that segmentation unit;
a comparator coupled with an image extractor, wherein the image extractor sequentially acquires image subunits from the sub-images, and the comparator loads the recognition library and compares the image subunits one by one with the standard atlas stored in the recognition library, so as to determine whether an image subunit shows any suspected cataract image feature among yellow spots, crystals, foreign matter and tumors; if so, the image subunit is labeled and stored in a storage module.
6. The artificial intelligence based ophthalmic cataract screening system of claim 2, wherein the combining section has:
a sorting unit, used for acquiring at least one labeled image subunit and restoring the positions of the image subunits in the sub-images based on the labels to form an initial combined image;
and a combining unit, used for filling the missing parts of the initial combined image from the standard image and combining them to form the combined image.
7. The artificial intelligence based ophthalmic cataract screening system of claim 2, wherein the first neural network unit loads each sub-image carrying image features from the storage module and performs iterative training based on the historical feature library to check whether each sub-image shows a cataract characterization, establishes the first screening result from the characterizations, and establishes the labeled features in the first screening result from those characterizations.
8. The artificial intelligence based ophthalmic cataract screening system of claim 2, wherein the second neural network units are established as the task unfolds, with the first neural network unit serving as the task-flow loop.
9. The artificial intelligence based ophthalmic cataract screening system of claim 2 or 8, wherein an exchange of labeled features is enabled between the second neural network unit and the first neural network unit.
CN202210946533.3A 2022-08-08 2022-08-08 Artificial intelligence-based ophthalmic cataract screening method and system Withdrawn CN115312185A (en)

Priority Applications (1)

Application Number: CN202210946533.3A
Priority Date: 2022-08-08
Filing Date: 2022-08-08
Title: Artificial intelligence-based ophthalmic cataract screening method and system (CN115312185A, en)

Applications Claiming Priority (1)

Application Number: CN202210946533.3A
Priority Date: 2022-08-08
Filing Date: 2022-08-08
Title: Artificial intelligence-based ophthalmic cataract screening method and system (CN115312185A, en)

Publications (1)

Publication Number: CN115312185A (en)
Publication Date: 2022-11-08

Family

ID=83859825

Family Applications (1)

Application Number: CN202210946533.3A
Publication: CN115312185A (en)
Status: Withdrawn

Country Status (1)

CN: CN115312185A (en)


Legal Events

PB01: Publication
SE01: Entry into force of request for substantive examination
WW01: Invention patent application withdrawn after publication (application publication date: 20221108)