
CN114898172A - Diabetic retinopathy classification modeling method based on multi-feature DAG network - Google Patents

Diabetic retinopathy classification modeling method based on multi-feature DAG network Download PDF

Info

Publication number
CN114898172A
CN114898172A
Authority
CN
China
Prior art keywords
image
retinal
feature
features
characteristic
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202210362909.6A
Other languages
Chinese (zh)
Other versions
CN114898172B (en
Inventor
方玲玲
乔焕
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Liaoning Normal University
Original Assignee
Liaoning Normal University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Liaoning Normal University filed Critical Liaoning Normal University
Priority to CN202210362909.6A priority Critical patent/CN114898172B/en
Publication of CN114898172A publication Critical patent/CN114898172A/en
Application granted granted Critical
Publication of CN114898172B publication Critical patent/CN114898172B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Images

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/21Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • AHUMAN NECESSITIES
    • A61MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61BDIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B3/00Apparatus for testing the eyes; Instruments for examining the eyes
    • A61B3/10Objective types, i.e. instruments for examining the eyes independent of the patients' perceptions or reactions
    • A61B3/12Objective types, i.e. instruments for examining the eyes independent of the patients' perceptions or reactions for looking at the eye fundus, e.g. ophthalmoscopes
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/25Fusion techniques
    • G06F18/253Fusion techniques of extracted features
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/25Fusion techniques
    • G06F18/254Fusion techniques of classification results, e.g. of results related to same input data
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02TCLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
    • Y02T10/00Road transport of goods or passengers
    • Y02T10/10Internal combustion engine [ICE] based vehicles
    • Y02T10/40Engine management systems

Landscapes

  • Engineering & Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Artificial Intelligence (AREA)
  • Health & Medical Sciences (AREA)
  • Biomedical Technology (AREA)
  • Ophthalmology & Optometry (AREA)
  • Biophysics (AREA)
  • Heart & Thoracic Surgery (AREA)
  • Medical Informatics (AREA)
  • Molecular Biology (AREA)
  • Surgery (AREA)
  • Animal Behavior & Ethology (AREA)
  • General Health & Medical Sciences (AREA)
  • Public Health (AREA)
  • Veterinary Medicine (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a diabetic retinopathy classification modeling method based on a multi-feature DAG network. First, index features of diabetic retinopathy are extracted with different methods, including hemorrhage-spot features, fundus neovascularization features, and retinal varicose-vessel features. Second, an optimized DAG network is built and trained under a continuously adjusted training scheme, so that the extracted features are fused in a multi-feature manner and complex local or global features are formed from local features, restoring the object. Finally, normal versus lesion classification is performed by a softmax classifier. The performance of the classification model was evaluated on the DIARETDB1 data set and on data from the Third People's Hospital of Dalian; the results show classification accuracies of 98.7% on the DIARETDB1 images and 98.5% on the hospital data.

Description

Diabetic retinopathy classification modeling method based on multi-feature DAG network
Technical Field
The invention relates to the field of medical image processing, in particular to a diabetic retinopathy classification modeling method based on a multi-feature DAG network.
Background
Diabetic retinopathy is currently one of the more serious blinding eye diseases. It is graded into six stages: the first three are non-proliferative and the last three are proliferative. In addition to general symptoms such as polydipsia, polyphagia, glycosuria, and elevated blood glucose, diabetic patients show fundus changes, mainly punctate hemorrhages, neovascularization, and varicose vessels on the retinas of both eyes, so the features of each stage of the diabetic retina are of great significance for diagnosing diabetes and estimating its prognosis. In the past, ophthalmologists manually evaluated fundus images against the characteristics of diabetic retinopathy to detect whether the retina was diseased; however, because some patients are in the early stage of diabetic retinopathy and the features are not obvious, inaccurate evaluation can easily cause the optimal treatment window to be missed. Therefore, a fast and accurate automatic identification and classification system for diabetic retinopathy images is needed, one that also recognizes images whose features are not obvious.
At present, classification of retinal images is realized by feature extraction, for example extraction of microaneurysm features, exudate features, and vitreous hemorrhage features; extraction of diabetic retinal hemorrhage-spot features, fundus neovascularization features, and retinal varicose-vessel features has not been performed. Meanwhile, existing studies extract only one or two kinds of feature information, so the classification model cannot learn the features carefully and comprehensively, and classification accuracy remains low.
Disclosure of Invention
The invention aims to solve the technical problems in the prior art and provides a diabetic retinopathy classification modeling method based on a multi-feature DAG network.
The technical solution of the invention is as follows: a classification modeling method for diabetic retinopathy based on a multi-feature DAG network is carried out according to the following steps:
Step 1: preprocess each retina image in the training set to obtain the feature image training set
Step 1.1: obtain the retinal hemorrhage-spot feature image
Extract the green-channel image of the RGB retina image, convert it to grayscale, and divide it into several sub-blocks. Compute the cumulative distribution histogram of each sub-block and set a limit threshold T_c on the histogram:
T_c = max(1, T_d × h × w / S)
where T_d is the iterative adaptive soft threshold, S is the total number of pixels in the image, and h and w are the height and width of the image;
The gray values in the histogram are compared with the limit threshold T_c; the part of the histogram exceeding T_c is redistributed uniformly below the threshold while the total area of the histogram is kept unchanged. Finally, linear interpolation is applied for optimization so that the retinal hemorrhage-spot features stand out, yielding the retinal hemorrhage-spot feature image;
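This contrast-limited equalization resembles CLAHE-style histogram clipping. Below is a minimal numpy sketch for a single sub-block that applies T_c = max(1, T_d × h × w / S) literally; the bin count, the sample value of T_d, and the uniform redistribution of the clipped excess are illustrative assumptions, and the final linear interpolation between sub-blocks is omitted:

```python
import numpy as np

def clip_limit(T_d, h, w, S):
    """T_c = max(1, T_d * h * w / S); T_d is the adaptive soft threshold,
    h and w the image height/width, S the total pixel count."""
    return max(1.0, T_d * h * w / S)

def limited_equalize(block, T_c):
    """Clip the sub-block histogram at T_c, redistribute the clipped excess
    uniformly over all bins (total area unchanged), then equalize via the CDF."""
    hist, _ = np.histogram(block, bins=256, range=(0, 256))
    hist = hist.astype(float)
    excess = np.maximum(hist - T_c, 0.0).sum()
    hist = np.minimum(hist, T_c) + excess / 256.0   # uniform redistribution
    cdf = np.cumsum(hist) / hist.sum()              # normalized CDF
    return (cdf[block] * 255).astype(np.uint8)      # equalize via the CDF

rng = np.random.default_rng(0)
block = rng.integers(0, 256, (64, 64), dtype=np.uint8)  # one grayscale sub-block
h, w = block.shape
out = limited_equalize(block, clip_limit(T_d=4.0, h=h, w=w, S=h * w))
```

In practice each sub-block would be equalized this way and the block boundaries smoothed by bilinear interpolation, as the text describes.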
Step 1.2: obtain the fundus neovascularization feature image
First, for each pixel in the retina image, derivatives are taken by convolution with 8 masks M_q (q = 0, 1, …, 7); each mask M_q responds maximally to one of 8 specific edge directions, and the maximum value G of these responses is taken as the output of the edge-magnitude image:
G = max{|M_0|, |M_1|, |M_2|, |M_3|, |M_4|, |M_5|, |M_6|, |M_7|}
Finally, the image is binarized with the adaptive soft threshold to make the neovascularization features prominent, yielding the fundus neovascularization feature image;
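The text does not spell out the eight masks; the classical compass operator matching this description is the Kirsch operator, which is assumed here. A numpy/scipy sketch of the maximum-response edge magnitude G:

```python
import numpy as np
from scipy.ndimage import convolve

# Standard Kirsch compass kernel (one direction); the other seven are
# rotations of its outer ring. This choice of masks is an assumption.
K0 = np.array([[ 5,  5,  5],
               [-3,  0, -3],
               [-3, -3, -3]])

def kirsch_rotations(k):
    """Generate the 8 compass masks M_0..M_7 by rotating the outer ring."""
    ring_idx = [(0, 0), (0, 1), (0, 2), (1, 2), (2, 2), (2, 1), (2, 0), (1, 0)]
    ring = [k[i] for i in ring_idx]
    masks = []
    for r in range(8):
        m = np.zeros((3, 3), dtype=int)
        for (i, j), v in zip(ring_idx, ring[r:] + ring[:r]):
            m[i, j] = v
        masks.append(m)
    return masks

def edge_magnitude(img):
    """G = max over the 8 directional responses |M_q * img|."""
    responses = [np.abs(convolve(img.astype(float), m)) for m in kirsch_rotations(K0)]
    return np.max(responses, axis=0)

img = np.zeros((9, 9))
img[:, 5:] = 1.0                 # vertical step edge
G = edge_magnitude(img)
```

The binarization step would then threshold G with the adaptive soft threshold.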
Step 1.3: obtain the retinal varicose-vessel feature image
Divide the retina image into N sub-blocks, and iteratively compute the cluster centers C_v of the retinal vessels in the retinopathy image and the corresponding memberships D_pv:
C_v = [ Σ_{p=1..N} (D_pv)^m · x_p ] / [ Σ_{p=1..N} (D_pv)^m ]
D_pv = 1 / Σ_{k=1..C} ( ‖x_p − C_v‖ / ‖x_p − C_k‖ )^{2/(m−1)}
where N is the total number of sub-blocks, C is the number of clusters, x_p (p = 1, 2, …, N) denotes the p-th sub-block, ‖·‖ denotes an arbitrary distance metric, m ∈ [1, ∞) is a weighting exponent, and C_k is the center of the k-th cluster. When the memberships satisfy the following iteration-termination condition, the iteration stops and the local optimum J_m is computed, making the retinal varicose-vessel features prominent and yielding the retinal varicose-vessel feature image:
max_{p,v} | D_pv^{(l+1)} − D_pv^{(l)} | < ε
where l is the number of iteration steps and ε is the error threshold;
J_m = Σ_{p=1..N} Σ_{v=1..C} (D_pv)^m · ‖x_p − C_v‖²
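This step is a fuzzy c-means (FCM) iteration. The sketch below implements the center update C_v, the membership update D_pv, the termination test, and the local optimum J_m; the 1-D sample features, cluster count C = 2, and weighting exponent m = 2 are illustrative assumptions:

```python
import numpy as np

def fcm(X, C=2, m=2.0, eps=1e-5, max_iter=100, seed=0):
    """Fuzzy c-means: alternate the center update C_v and the membership
    update D_pv until max |D^(l+1) - D^(l)| < eps; return centers,
    memberships, and the local optimum J_m."""
    rng = np.random.default_rng(seed)
    N = X.shape[0]
    D = rng.random((N, C))
    D /= D.sum(axis=1, keepdims=True)           # memberships sum to 1 per sample
    for _ in range(max_iter):
        Dm = D ** m
        centers = (Dm.T @ X) / Dm.sum(axis=0)[:, None]              # C_v
        dist = np.linalg.norm(X[:, None, :] - centers[None], axis=2) + 1e-12
        # D_pv = 1 / sum_k (||x_p - C_v|| / ||x_p - C_k||)^(2/(m-1))
        D_new = 1.0 / ((dist[:, :, None] / dist[:, None, :]) ** (2 / (m - 1))).sum(axis=2)
        if np.abs(D_new - D).max() < eps:       # iteration-termination condition
            D = D_new
            break
        D = D_new
    J_m = ((D ** m) * dist ** 2).sum()          # local optimum J_m
    return centers, D, J_m

# Two well-separated 1-D "sub-block" features as a toy example
X = np.array([[0.0], [0.2], [5.0], [5.2]])
centers, D, J_m = fcm(X)
```

In the method itself, X would hold the vessel features of the N sub-blocks rather than this toy data.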
Step 1.4: the obtained retinal feature images are used as the feature image training set;
Step 2: input the feature images of the feature image training set into the DAG network for training
Step 2.1: build the optimized DAG network. The optimized DAG network consists of a trunk, branches, an add layer, a pooling layer (avpool), and a fully connected layer (Full connect). The trunk is divided into five groups, each composed of a convolution layer (Conv), a normalization layer (BN), and an activation-function layer (ReLU); the branches are convolution layers (skipConv); the trunk and the branches both link to the add layer;
Step 2.2: input the images of the feature image training set into the optimized DAG network to realize multi-feature fusion and learning, and output the multi-feature fusion result F_i^add:
F_i^add = (X_i + Y_i + Z_i) * K = X_i * K + Y_i * K + Z_i * K
where K denotes the convolution kernel, * denotes convolution, X_i denotes the retinal hemorrhage-spot features, Y_i the fundus neovascularization features, and Z_i the retinal varicose-vessel features;
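The add-layer identity above is just linearity of convolution: convolving the sum of the feature maps with one kernel K equals summing the individually convolved maps. A quick numerical check with random maps (shapes and values are illustrative):

```python
import numpy as np
from scipy.signal import convolve2d

rng = np.random.default_rng(1)
X, Y, Z = (rng.standard_normal((16, 16)) for _ in range(3))  # three feature maps
K = rng.standard_normal((3, 3))                              # convolution kernel

fused_then_conv = convolve2d(X + Y + Z, K, mode="same")      # (X+Y+Z)*K
conv_then_sum = sum(convolve2d(A, K, mode="same") for A in (X, Y, Z))
```

The two results agree to floating-point precision, which is why the add layer can fuse the three feature streams before (or after) a shared convolution.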
and step 3: fusing the multi-feature results F i add Sending the data to a softmax classifier, calculating the prediction probability of normal or pathological changes according to the following formula, and realizing effective classification of diabetic retinopathy and normal;
Figure BDA0003585877690000031
wherein, R is a prediction probability, i is 1,2, M represents the ith image, M is the total number of retinal images, and e is a parameter;
when Y belongs to (0,0.5), the image is judged to be normal, and when Y belongs to [0.5,1), the image is judged to be pathological change.
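The final stage can be sketched as follows, assuming a two-class score vector (normal, lesion) per image; the 0.5 decision boundary follows the text, and the example scores are illustrative:

```python
import numpy as np

def softmax(scores):
    """R_i = e^{Y_i} / sum_j e^{Y_j}; shifting by the max keeps exp stable."""
    z = np.asarray(scores, dtype=float) - np.max(scores)
    e = np.exp(z)
    return e / e.sum()

def decide(p_lesion):
    """Judge normal for p < 0.5, lesion for p >= 0.5."""
    return "lesion" if p_lesion >= 0.5 else "normal"

probs = softmax([1.2, 3.4])     # [score for normal, score for lesion]
label = decide(probs[1])
```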
First, index features of diabetic retinopathy are extracted with different methods, including hemorrhage-spot features, fundus neovascularization features, and retinal varicose-vessel features. Second, an optimized DAG network is built and trained under a continuously adjusted training scheme, so that the extracted features are fused in a multi-feature manner and complex local or global features are formed from local features, restoring the object. Finally, normal versus lesion classification is performed by a softmax classifier. The performance of the classification model was evaluated on the DIARETDB1 data set and on data from the Third People's Hospital of Dalian; the results show classification accuracies of 98.7% on the DIARETDB1 images and 98.5% on the hospital data.
Drawings
FIG. 1 is a graph showing the result of extracting retinal hemorrhage spot characteristics according to the embodiment of the present invention.
Fig. 2 is a result chart of the embodiment of the invention for extracting the characteristics of the fundus neovascularization.
Fig. 3 is a diagram illustrating the result of extracting the retinal varicose vein feature according to the embodiment of the invention.
Fig. 4 is a diagram of a DAG network structure optimized according to an embodiment of the present invention.
FIG. 5 is an overall flow chart of an embodiment of the present invention.
Detailed Description
The invention discloses a diabetic retinopathy classification modeling method based on a multi-feature DAG network, which is shown in FIG. 5 and comprises the following steps:
Step 1: take the DIARETDB1 data set and the data set of the Third People's Hospital of Dalian (hospital data for short), divide the data sets into a training set and a test set, and preprocess each diabetic retina image in the training set to obtain the feature image training set
Step 1.1: obtain the retinal hemorrhage-spot feature image
Extract the green-channel image of the RGB retina image, convert it to grayscale, and divide it into several sub-blocks. Compute the cumulative distribution histogram of each sub-block and set a limit threshold T_c on the histogram:
T_c = max(1, T_d × h × w / S)
where T_d is the iterative adaptive soft threshold, S is the total number of pixels in the image, and h and w are the height and width of the image;
The gray values in the histogram are compared with the limit threshold T_c; the part of the histogram exceeding T_c is redistributed uniformly below the threshold while the total area of the histogram is kept unchanged. Finally, the transitions between sub-blocks are optimized by linear interpolation so that the retinal hemorrhage-spot features stand out, yielding the retinal hemorrhage-spot feature image, as shown in FIG. 1;
Step 1.2: obtain the fundus neovascularization feature image
First, for each pixel in the retina image, derivatives are taken by convolution with 8 masks M_q (q = 0, 1, …, 7); each mask M_q responds maximally to one of 8 specific edge directions, and the maximum value G of these responses is taken as the output of the edge-magnitude image:
G = max{|M_0|, |M_1|, |M_2|, |M_3|, |M_4|, |M_5|, |M_6|, |M_7|}
Finally, the image is binarized with the adaptive soft threshold to make the neovascularization features prominent, yielding the fundus neovascularization feature image, as shown in FIG. 2;
Step 1.3: obtain the retinal varicose-vessel feature image
Divide the retina image into N sub-blocks, and iteratively compute the cluster centers C_v of the retinal vessels in the retinopathy image and the corresponding memberships D_pv:
C_v = [ Σ_{p=1..N} (D_pv)^m · x_p ] / [ Σ_{p=1..N} (D_pv)^m ]
D_pv = 1 / Σ_{k=1..C} ( ‖x_p − C_v‖ / ‖x_p − C_k‖ )^{2/(m−1)}
where N is the total number of sub-blocks, C is the number of clusters, x_p (p = 1, 2, …, N) denotes the p-th sub-block, ‖·‖ denotes an arbitrary distance metric, m ∈ [1, ∞) is a weighting exponent, and C_k is the center of the k-th cluster. When the memberships satisfy the following iteration-termination condition, the iteration stops and the local optimum J_m is computed, making the retinal varicose-vessel features prominent and yielding the retinal varicose-vessel feature image, as shown in FIG. 3:
max_{p,v} | D_pv^{(l+1)} − D_pv^{(l)} | < ε
where l is the number of iteration steps and ε is the error threshold;
J_m = Σ_{p=1..N} Σ_{v=1..C} (D_pv)^m · ‖x_p − C_v‖²
Step 1.4: the obtained retinal feature images are used as the feature image training set;
Step 2: input the feature images of the feature image training set into the DAG network for training
Step 2.1: build the optimized DAG network. As shown in FIG. 4, the optimized DAG network consists of a trunk, two branches, an add layer, a pooling layer (avpool), and a fully connected layer (Full connect, fc). The trunk is divided into five groups, each composed of a convolution layer, a normalization layer, and an activation-function layer (Conv1-5, BN1-5, ReLU1-5); the two branches are the convolution layers skipConv-1 and skipConv-2; the trunk and both branches link to the add layer simultaneously;
Step 2.2: input the images of the feature image training set into the DAG network to realize multi-feature fusion and learning, and output the multi-feature fusion result F_i^add:
F_i^add = (X_i + Y_i + Z_i) * K = X_i * K + Y_i * K + Z_i * K
where K denotes the convolution kernel, * denotes convolution, X_i denotes the retinal hemorrhage-spot features, Y_i the fundus neovascularization features, and Z_i the retinal varicose-vessel features;
the training parameters and values of the training parameters are shown in table 1.
TABLE 1
Step 3: send the multi-feature fusion result F_i^add to a softmax classifier and compute the prediction probability of normal or lesion according to the following formula, realizing effective classification of diabetic retinopathy versus normal:
R_i = e^{Y_i} / Σ_{j=1..M} e^{Y_j}
where R is the prediction probability, i = 1, 2, …, M indexes the i-th image, M is the total number of diabetic retina images, and e is the base of the natural exponential;
when Y ∈ (0, 0.5) the image is judged normal, and when Y ∈ [0.5, 1) the image is judged to show lesions.
Experiment:
the DIARETDB1 data set and the test set image in the third people hospital data set (hospital data set for short) in Dalian city are input into the model established by the embodiment of the invention, and the model identifies and classifies the retina image of the data set into two categories, namely a normal fundus image and a lesion fundus image. Aiming at the two-classification problem of the retina images, the evaluation indexes of the two-classification problem are the classification accuracy (accuracy), Precision (Precision), Recall (Recall), Specificity (Specificity) and F1-score obtained after the data test set images enter the model to jointly evaluate the performance of the model, so that the model is more stable and reliable. The correlation formula is as follows:
Accuracy = (TP + TN) / (TP + TN + FP + FN)
Precision = TP / (TP + FP)
Recall = TP / (TP + FN)
Specificity = TN / (TN + FP)
F1-score = 2 × Precision × Recall / (Precision + Recall)
in the formula, TP represents the number of correctly classified positive samples; TN represents the number of correctly classified negative examples; FP represents the number of false positive samples in the negative samples; FN represents the number of false marks as negative samples in the positive samples.
To demonstrate the importance of the extracted features, the present method was also run on the hospital data without feature extraction and with only one or two of the three features extracted, and the experimental results were compared. The comparison results are shown in Table 2.
TABLE 2
Meanwhile, to verify the performance of the model of the present invention, the same data set, DIARETDB1, was used for a performance comparison between existing models and the model of the present invention; the comparison results are shown in Table 3.
TABLE 3
Note: '--' indicates that the value was not reported.

Claims (1)

1. A classification modeling method for diabetic retinopathy based on a multi-feature DAG network is characterized by comprising the following steps:
Step 1: preprocess each retina image in the training set to obtain the feature image training set
Step 1.1: obtain the retinal hemorrhage-spot feature image
Extract the green-channel image of the RGB retina image, convert it to grayscale, and divide it into several sub-blocks. Compute the cumulative distribution histogram of each sub-block and set a limit threshold T_c on the histogram:
T_c = max(1, T_d × h × w / S)
where T_d is the iterative adaptive soft threshold, S is the total number of pixels in the image, and h and w are the height and width of the image;
The gray values in the histogram are compared with the limit threshold T_c; the part of the histogram exceeding T_c is redistributed uniformly below the threshold while the total area of the histogram is kept unchanged. Finally, linear interpolation is applied for optimization so that the retinal hemorrhage-spot features stand out, yielding the retinal hemorrhage-spot feature image;
Step 1.2: obtain the fundus neovascularization feature image
First, for each pixel in the retina image, derivatives are taken by convolution with 8 masks M_q (q = 0, 1, …, 7); each mask M_q responds maximally to one of 8 specific edge directions, and the maximum value G of these responses is taken as the output of the edge-magnitude image:
G = max{|M_0|, |M_1|, |M_2|, |M_3|, |M_4|, |M_5|, |M_6|, |M_7|}
Finally, the image is binarized with the adaptive soft threshold to make the neovascularization features prominent, yielding the fundus neovascularization feature image;
Step 1.3: obtain the retinal varicose-vessel feature image
Divide the retina image into N sub-blocks, and iteratively compute the cluster centers C_v of the retinal vessels in the retinopathy image and the corresponding memberships D_pv:
C_v = [ Σ_{p=1..N} (D_pv)^m · x_p ] / [ Σ_{p=1..N} (D_pv)^m ]
D_pv = 1 / Σ_{k=1..C} ( ‖x_p − C_v‖ / ‖x_p − C_k‖ )^{2/(m−1)}
where N is the total number of sub-blocks, C is the number of clusters, x_p (p = 1, 2, …, N) denotes the p-th sub-block, ‖·‖ denotes an arbitrary distance metric, m ∈ [1, ∞) is a weighting exponent, and C_k is the center of the k-th cluster. When the memberships satisfy the following iteration-termination condition, the iteration stops and the local optimum J_m is computed, making the retinal varicose-vessel features prominent and yielding the retinal varicose-vessel feature image:
max_{p,v} | D_pv^{(l+1)} − D_pv^{(l)} | < ε
where l is the number of iteration steps and ε is the error threshold;
J_m = Σ_{p=1..N} Σ_{v=1..C} (D_pv)^m · ‖x_p − C_v‖²
Step 1.4: the obtained retinal feature images are used as the feature image training set;
Step 2: input the feature images of the feature image training set into the DAG network for training
Step 2.1: build the optimized DAG network. The optimized DAG network consists of a trunk, branches, an add layer, a pooling layer (avpool), and a fully connected layer (Full connect). The trunk is divided into five groups, each composed of a convolution layer (Conv), a normalization layer (BN), and an activation-function layer (ReLU); the branches are convolution layers (skipConv); the trunk and the branches both link to the add layer;
Step 2.2: input the images of the feature image training set into the optimized DAG network to realize multi-feature fusion and learning, and output the multi-feature fusion result F_i^add:
F_i^add = (X_i + Y_i + Z_i) * K = X_i * K + Y_i * K + Z_i * K
where K denotes the convolution kernel, * denotes convolution, X_i denotes the retinal hemorrhage-spot features, Y_i the fundus neovascularization features, and Z_i the retinal varicose-vessel features;
and step 3: fusing the results F with multiple features i add Sending the data to a softmax classifier, calculating the prediction probability of normal or pathological changes according to the following formula, and realizing effective classification of diabetic retinopathy and normal;
Figure FDA0003585877680000023
wherein, R is a prediction probability, i is 1,2, M represents the ith image, M is the total number of retinal images, and e is a parameter;
when Y belongs to (0,0.5), the image is judged to be normal, and when Y belongs to [0.5,1), the image is judged to be pathological change.
CN202210362909.6A 2022-04-08 2022-04-08 Multi-feature DAG network-based diabetic retinopathy classification modeling method Active CN114898172B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210362909.6A CN114898172B (en) 2022-04-08 2022-04-08 Multi-feature DAG network-based diabetic retinopathy classification modeling method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210362909.6A CN114898172B (en) 2022-04-08 2022-04-08 Multi-feature DAG network-based diabetic retinopathy classification modeling method

Publications (2)

Publication Number Publication Date
CN114898172A true CN114898172A (en) 2022-08-12
CN114898172B CN114898172B (en) 2024-04-02

Family

ID=82715928

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210362909.6A Active CN114898172B (en) 2022-04-08 2022-04-08 Multi-feature DAG network-based diabetic retinopathy classification modeling method

Country Status (1)

Country Link
CN (1) CN114898172B (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116864109A (en) * 2023-07-13 2023-10-10 中世康恺科技有限公司 Medical image artificial intelligence auxiliary diagnosis system

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108021916A (en) * 2017-12-31 2018-05-11 南京航空航天大学 Deep learning diabetic retinopathy sorting technique based on notice mechanism
CN110210570A (en) * 2019-06-10 2019-09-06 上海延华大数据科技有限公司 The more classification methods of diabetic retinopathy image based on deep learning
WO2019196268A1 (en) * 2018-04-13 2019-10-17 博众精工科技股份有限公司 Diabetic retina image classification method and system based on deep learning

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108021916A (en) * 2017-12-31 2018-05-11 南京航空航天大学 Deep learning diabetic retinopathy sorting technique based on notice mechanism
WO2019196268A1 (en) * 2018-04-13 2019-10-17 博众精工科技股份有限公司 Diabetic retina image classification method and system based on deep learning
CN110210570A (en) * 2019-06-10 2019-09-06 上海延华大数据科技有限公司 The more classification methods of diabetic retinopathy image based on deep learning

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
李琼; 柏正尧; 刘莹芳: "Deep learning classification method for diabetic retinal images", Journal of Image and Graphics (中国图象图形学报), no. 10, 31 October 2018 (2018-10-31), pages 1594-1603 *

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116864109A (en) * 2023-07-13 2023-10-10 中世康恺科技有限公司 Medical image artificial intelligence auxiliary diagnosis system

Also Published As

Publication number Publication date
CN114898172B (en) 2024-04-02

Similar Documents

Publication Publication Date Title
Bilal et al. Diabetic retinopathy detection and classification using mixed models for a disease grading database
CN109635862B (en) Sorting method for retinopathy of prematurity plus lesion
Tian et al. Multi-path convolutional neural network in fundus segmentation of blood vessels
CN109726743B (en) Retina OCT image classification method based on three-dimensional convolutional neural network
CN114926477B (en) Brain tumor multi-mode MRI image segmentation method based on deep learning
CN110751636B (en) Fundus image retinal arteriosclerosis detection method based on improved coding and decoding network
CN112837805B (en) Eyelid topological morphology feature extraction method based on deep learning
Raja et al. Analysis of vasculature in human retinal images using particle swarm optimization based Tsallis multi-level thresholding and similarity measures
CN112419248B (en) Ear sclerosis focus detection and diagnosis system based on small target detection neural network
KR102537470B1 (en) Diagnosis method for Graves’orbitopathy using orbital computed tomography (CT) based on neural network-based algorithm
CN113786185B (en) Static brain network feature extraction method and system based on convolutional neural network
CN115035127B (en) Retina blood vessel segmentation method based on generation type countermeasure network
CN113610118A (en) Fundus image classification method, device, equipment and medium based on multitask course learning
CN114038564A (en) Noninvasive risk prediction method for diabetes
Aranha et al. Deep transfer learning strategy to diagnose eye-related conditions and diseases: An approach based on low-quality fundus images
Zeng et al. Automated detection of diabetic retinopathy using a binocular siamese-like convolutional network
CN114898172B (en) Multi-feature DAG network-based diabetic retinopathy classification modeling method
Bali et al. Analysis of deep learning techniques for prediction of eye diseases: A systematic review
Elsawy et al. A novel network with parallel resolution encoders for the diagnosis of corneal diseases
CN112102234B (en) Ear sclerosis focus detection and diagnosis system based on target detection neural network
CN110610480A (en) MCASPP neural network eyeground image optic cup optic disc segmentation model based on Attention mechanism
CN117058467A (en) Gastrointestinal tract lesion type identification method and system
CN117496217A (en) Premature infant retina image detection method based on deep learning and knowledge distillation
Sharma et al. Automatic glaucoma diagnosis in digital fundus images using convolutional neural network
Balakrishnan et al. A hybrid PSO-DEFS based feature selection for the identification of diabetic retinopathy

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant