CN110751636A - Fundus image retinal arteriosclerosis detection method based on improved coding and decoding network - Google Patents
- Publication number: CN110751636A (application CN201910966530.4A)
- Authority
- CN
- China
- Prior art keywords
- artery
- arterial
- fundus
- retinal arteriosclerosis
- arteriosclerosis
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/0002—Inspection of images, e.g. flaw detection
- G06T7/0012—Biomedical image inspection
- G06T7/10—Segmentation; Edge detection
- G06T7/11—Region-based segmentation
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20081—Training; Learning
- G06T2207/20084—Artificial neural networks [ANN]
- G06T2207/30—Subject of image; Context of image processing
- G06T2207/30004—Biomedical image processing
- G06T2207/30041—Eye; Retina; Ophthalmic
- Y02A90/10—Information and communication technologies [ICT] supporting adaptation to climate change, e.g. for weather forecasting or climate simulation
Abstract
The invention provides a fundus image retinal arteriosclerosis detection method based on an improved coding and decoding network, comprising the following steps: 1) collecting fundus images diagnosed as retinal arteriosclerosis by an ophthalmologist, fitting the pathological blood vessels in the images, and computing reflection-parameter statistics to determine a retinal arteriosclerosis detection threshold; 2) improving a coding and decoding network with Inception Resnet V2 modules and residual attention mechanism modules, and applying it to the segmentation of arterial vessels and arterial reflective bands; 3) screening out an effective region, sampling within it, fitting the samples with a four-segment Gaussian model to obtain the gray-level distribution curve of the vessel cross-section, computing the reflection parameters from the fitting result, and comparing them with the threshold to judge whether the patient suffers from retinal arteriosclerosis. The invention determines a quantitative detection threshold for retinal arteriosclerosis and uses deep learning to overcome the inability of traditional methods to accurately segment fundus arteriovenous vessels and the arterial reflective band, thereby completing retinal arteriosclerosis detection.
Description
Technical Field
The invention relates to a fundus image retinal arteriosclerosis detection method based on an improved coding and decoding network. The method solves the problem that conventional approaches cannot accurately segment fundus arteriovenous vessels and arterial reflective bands, reduces the interference of illumination and other fundus tissue features on reflective-band segmentation, improves the propagation of feature and gradient information, achieves high-accuracy segmentation, determines a retinal arteriosclerosis detection threshold, and thus realizes high-accuracy retinal arteriosclerosis detection. The invention belongs to the fields of image processing, deep learning, and medical imaging.
Background
The state of retinal arteriosclerosis is related to the level of systemic arteriosclerosis. Since fundus vessels are the only blood vessels that can be observed directly and non-invasively, regular screening for retinal arteriosclerosis reveals the degree of systemic arteriosclerosis and helps doctors intervene early as it worsens. Retinal arteriosclerosis affects the caliber of the fundus arterial vessels and the reflectivity of their reflective bands. At present, hospitals mainly use fundus cameras to examine the eye. When arteriosclerosis appears, the arterial reflective band in the fundus image widens and the blood column takes on a bright copper color; as the arteriosclerosis worsens, the vessels show a silver-white reflex.
Clinically, doctors inspect a patient's fundus images by experience and judge arteriosclerosis from whether the arterial reflective band has widened relative to the arterial vessel and whether its reflectivity has increased; the width ratio and gray-scale ratio of the arterial vessel to the arterial reflective band therefore serve as the basis for detection. Because detection requires gray-level fitting of the arterial vessels and reflective bands, both must first be extracted from the fundus image. Although vessel segmentation in fundus images has received wide attention, studies on segmenting arterial vessels and arterial reflective bands are relatively rare. Moreover, veins resemble arteries in shape and course and can interfere with arterial segmentation when image quality is poor, and the arterial reflective band is slender and easily affected by ambient light, so segmenting the arterial vessels and reflective bands is comparatively difficult.
To address these problems, the invention provides a fundus image retinal arteriosclerosis detection method based on an improved coding and decoding network, which can be used for large-scale retinal arteriosclerosis screening.
Disclosure of Invention
The invention provides a method for detecting retinal arteriosclerosis in fundus images based on an improved coding and decoding network. First, fundus images diagnosed as retinal arteriosclerosis by an ophthalmologist are collected; a sampling line perpendicular to the arterial vessel and its reflective band is taken, the pixels on the line are fitted with a four-segment Gaussian model to obtain the gray-level distribution curve of the vessel cross-section, and the reflection parameters (band-width ratio and gray-scale ratio) are computed to determine the quantitative detection threshold for retinal arteriosclerosis. Next, the arteries, veins, and arterial reflective bands in the fundus image training set are labeled separately, which reduces the interference caused by the similarity between arterial and venous features. The improved coding and decoding network extracts multi-scale feature information in the encoder with 4 Inception Resnet V2 modules, each followed by a residual attention mechanism module that enhances target features and improves network performance; the decoder generates sparse feature maps with four upsampling layers and dense feature maps with four convolution layers, and SoftMax classifies each pixel. Then all connected domains of the reflective band are found with a Canny edge detector, the largest connected domain is kept, and 3 sampling lines perpendicular to this region are taken. The 3 groups of sampled pixels are each fitted with the four-segment Gaussian model, yielding the gray-level distribution curves of 3 vessel cross-sections. Finally, the reflection parameters, namely the gray-scale ratio and band-width ratio, of the 3 groups of arterial vessels and reflective bands are computed from the fitting results. Comparing the three groups of reflection parameters with the detection threshold gives three cases: (1) when all three groups are smaller than the threshold, the fundus is judged free of retinal arteriosclerosis; (2) when one of the three groups is larger than the threshold, the fundus is judged to have retinal arteriosclerosis; (3) in all other cases, the fundus is judged as suspected retinal arteriosclerosis.
The technical scheme of the invention comprises the following steps:
Step 1: Collect fundus images diagnosed as retinal arteriosclerosis by ophthalmologists; take a sampling line perpendicular to the arterial vessel and its reflective band, fit the pixels on the line with a four-segment Gaussian model to obtain the gray-level distribution curve of the vessel cross-section, then compute the reflection parameters (band-width ratio and gray-scale ratio) to determine the quantitative detection threshold;
Step 2: Label the arteriovenous vessels and arterial reflective bands in the fundus images with different colors, and cut all images into 224 × 224 blocks to match the network input, forming the training set of the improved coding and decoding network;
Step 3: Segment the arterial vessels and arterial reflective bands of the fundus image with the improved coding and decoding network, and screen out the effective region of the arterial reflective band for subsequent detection;
Step 4: Take three sampling lines perpendicular to the arterial vessel and reflective band in the effective region, fit the pixels on each line with the four-segment Gaussian model to obtain the gray-level distribution curves of 3 vessel cross-sections, and compute the boundary coordinates and mean gray values of the 3 groups of arterial vessels and reflective bands from the fitting results;
Step 5: Compute the band-width ratio and gray-scale ratio of the 3 groups of arterial vessels and reflective bands, compare them with the quantitative detection threshold, and judge whether the patient has retinal arteriosclerosis.
Compared with the prior art, the invention has the beneficial effects that:
By means of deep learning, the invention solves the problem that traditional methods cannot accurately segment fundus arteriovenous vessels and arterial reflective bands, reduces the interference of illumination and other fundus tissue features on reflective-band segmentation, improves the propagation of feature and gradient information, achieves accurate segmentation of arterial vessels and reflective bands, determines the quantitative detection threshold for retinal arteriosclerosis, and realizes high-accuracy retinal arteriosclerosis detection.
Drawings
FIG. 1 is a general framework schematic of the present invention;
FIG. 2 is a structure of an improved codec network;
FIG. 3 is the structure of the Inception Resnet V2 module;
FIG. 4 is a structure of a residual attention mechanism module;
FIG. 5 is a process example of screening the effective region of the arterial reflective zone;
FIG. 6 is an example of gray scale distribution of pixels in arterial blood vessels and arterial reflective bands;
FIG. 7 is an example of Gaussian fitting of arterial blood vessels and arterial reflective bands.
Detailed Description
The present invention will be described in further detail with reference to specific embodiments.
The general framework of the invention is shown in FIG. 1. First, fundus images diagnosed as retinal arteriosclerosis by an ophthalmologist are collected; a sampling line perpendicular to the arterial vessel and its reflective band is taken, the pixels on the line are fitted with a four-segment Gaussian model to obtain the gray-level distribution curve of the vessel cross-section, and the reflection parameters (band-width ratio and gray-scale ratio) are computed to determine the quantitative detection threshold. Second, since no public image database annotates the arteriovenous vessels and arterial reflective bands of fundus images, fundus images from a hospital are collected and labeled manually by an ophthalmologist with an annotation tool to obtain training samples. The improved coding and decoding network extracts multi-scale feature information in the encoder with 4 Inception Resnet V2 modules, each followed by a residual attention mechanism module that enhances target features and improves network performance; the decoder generates sparse feature maps with four upsampling layers and dense feature maps with four convolution layers, SoftMax classifies each pixel, and the network is applied to segmenting the fundus image arterial vessels and reflective bands. The edges of the arterial reflective band are then detected with a Canny operator, its largest connected domain is taken as the effective detection region, and 3 sampling lines perpendicular to the arterial vessels are taken within it. The sampled pixels are fitted with four-segment Gaussian functions to obtain the gray-level distribution curves of 3 vessel cross-sections; the reflection parameters of the 3 groups of arterial vessels and reflective bands are computed from the fitting results and compared with the detection threshold, giving three cases: (1) when all three groups of reflection parameters are smaller than the threshold, the fundus is judged free of retinal arteriosclerosis; (2) when one of the three groups is larger than the threshold, the fundus is judged to have retinal arteriosclerosis; (3) in all other cases, the fundus is judged as suspected retinal arteriosclerosis.
The following describes a specific implementation process of the technical solution of the present invention with reference to the accompanying drawings.
1. Determining retinal arteriosclerosis detection threshold
Thirty fundus images diagnosed as retinal arteriosclerosis by ophthalmologists are collected. In each image a sampling line perpendicular to the arterial vessel and its reflective band is taken, the pixels on the line are fitted with the four-segment Gaussian model to obtain the gray-level distribution curve of the vessel cross-section, and the reflection parameters are computed to determine the quantitative detection thresholds for the band-width ratio and the gray-scale ratio.
Comparing the reflection parameters with the quantitative detection threshold gives three cases: (1) when all three groups of reflection parameters are smaller than the threshold, the fundus is judged free of retinal arteriosclerosis; (2) when one of the three groups is larger than the threshold, the fundus is judged to have retinal arteriosclerosis; (3) in all other cases, the fundus is judged as suspected retinal arteriosclerosis.
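The three-way rule can be sketched in code. This is a minimal illustration, not the patent's implementation: it assumes each sampling line yields a (BR, GR) pair, that the threshold is likewise a pair, and reads case (2) as a group exceeding the threshold in both parameters so that all three branches are reachable. The threshold values below are placeholders, not the patent's values.

```python
def classify_fundus(groups, br_th, gr_th):
    """groups: three (BR, GR) pairs, one per sampling line."""
    below = [br < br_th and gr < gr_th for br, gr in groups]
    above = [br > br_th and gr > gr_th for br, gr in groups]
    if all(below):
        return "no retinal arteriosclerosis"       # case (1): all below threshold
    if any(above):
        return "retinal arteriosclerosis"          # case (2): a group above threshold
    return "suspected retinal arteriosclerosis"    # case (3): everything else

# Hypothetical thresholds for illustration only.
BR_TH, GR_TH = 0.35, 1.5
result = classify_fundus([(0.2, 1.1), (0.25, 1.2), (0.3, 1.3)], BR_TH, GR_TH)
```

A group with, say, only its BR above threshold falls through to the "suspected" branch, which matches the intent of case (3) covering mixed outcomes.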
2. Experimental data
The dataset used in the invention includes 918 training samples of size 224 × 224 and 168 test samples. Because arterial and venous vessels have similar features and the arterial reflective band is slender, both are prone to false or missed detection; and since no public dataset annotates fundus arteriovenous vessels and arterial reflective bands, the arteriovenous vessels and arterial reflective bands in the training samples must be labeled manually with a graphic annotation tool.
3. Improved coding and decoding network
The backbone of the improved network is an encoder-decoder structure. The feature-extraction structure of the encoder is similar to the VGG16 network and consists mainly of 4 convolution blocks, each comprising convolution layers, batch normalization, an activation function, and a pooling layer. The encoder is given a multi-scale input structure by the Inception Resnet V2 modules, whose convolution layers use kernels of size 1 × 1, 3 × 3, 1 × 3, 3 × 1, 1 × 7, and 7 × 1, widening the network and extracting image features at different scales. The encoder down-samples with max pooling to preserve salient features and reduce feature dimensionality, and uses padding to preserve boundary information. In addition, a residual attention mechanism module is added after each max-pooling layer so that the network can select positions to focus on and enhance the feature representation there. The decoder is similar in structure to the SegNet network and uses 4 upsampling blocks, each comprising a convolution layer, an activation function, a batch normalization layer, and an upsampling layer; a SoftMax classifier at the end of the decoder produces the segmentation for the whole network. The structure of the improved coding and decoding network is shown in FIG. 2.
(1) Coding part
The coding part runs from the image input layer to the last pooling layer. At this stage, Inception Resnet V2 modules widen the network to extract multi-scale feature information, and residual attention mechanism modules focus the network on the target region and enhance performance. First, 5 Inception Resnet V2-A modules extract features from the 224 × 224 input; their kernels are 1 × 1 and 3 × 3, where the 1 × 1 convolutions limit the number of input channels and two consecutive 3 × 3 convolutions act like feature extraction with a 5 × 5 kernel, after which a 1 × 1 convolution matches the input and output dimensions and the features are fused through a residual connection. After max pooling, the feature map shrinks to 112 × 112 and enters the residual attention Stage 1 module, a spatial attention module that constrains all channels at each position with L2 regularization and outputs an attention map of matching spatial size. Next come 5 Inception Resnet V2-B modules with kernels of 1 × 1, 1 × 7, and 7 × 1, the latter two together acting like a 7 × 7 kernel; the fused features pass through max pooling to give a 56 × 56 feature map, which is fed to the residual attention Stage 2 module, a channel attention module that, similarly to the SEnet network, constrains all feature values on each channel and outputs a one-dimensional weight vector with one entry per channel. Then 5 Inception Resnet V2-C modules follow with kernels of 1 × 1, 1 × 3, and 3 × 1, the latter two together acting like a 3 × 3 kernel, and max pooling yields a 28 × 28 feature map, followed by the residual attention Stage 3 module, an attention-focusing module that applies a Sigmoid non-linearity over every channel and spatial position. Finally, 5 more Inception Resnet V2-C modules extract high-level features and a max-pooling layer reduces the map to 14 × 14. Since residual attention serves to locate the target in larger feature maps, no attention module is needed once the feature map is small enough.
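The claim that two stacked 3 × 3 convolutions act like one 5 × 5 convolution can be checked numerically: for linear convolution, applying two kernels in sequence equals applying their composition, whose size is 3 + 3 − 1 = 5 (the same argument gives 7 × 7 for the 1 × 7 / 7 × 1 pair). A small sketch using scipy:

```python
import numpy as np
from scipy.signal import convolve2d

rng = np.random.default_rng(0)
img = rng.standard_normal((16, 16))
k1 = rng.standard_normal((3, 3))
k2 = rng.standard_normal((3, 3))

# Two 3x3 passes in sequence...
two_step = convolve2d(convolve2d(img, k1, mode="valid"), k2, mode="valid")
# ...equal one pass with the composed 5x5 kernel (same receptive field).
k5 = convolve2d(k1, k2, mode="full")
one_step = convolve2d(img, k5, mode="valid")

assert k5.shape == (5, 5)
assert np.allclose(two_step, one_step)
```

The identity holds only for the linear part; in the actual network a non-linearity sits between the two 3 × 3 layers, which is exactly why the stacked form is both cheaper and more expressive than a single 5 × 5 layer.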
(2) Decoding section
The decoding part runs from the upsampling layer after the last max-pooling layer to the SoftMax layer, and decodes the encoder's information layer by layer through upsampling layers and convolution layers. The 14 × 14 feature map passes through an upsampling layer (Upsample) to produce a sparse feature map, and a 3 × 3 convolution then produces a dense 28 × 28 feature map; the same upsampling-plus-convolution operation is applied four times in total, and the high-dimensional features at the output are finally sent to SoftMax to classify each pixel.
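One decoder stage (upsampling to a sparse map, then a 3 × 3 convolution to densify it) can be sketched as follows. This is an illustration only: nearest-neighbour repetition and a uniform averaging kernel stand in for the network's learned upsampling and convolution weights.

```python
import numpy as np
from scipy.ndimage import convolve

def decoder_step(feat, kernel):
    """2x upsample to a sparse map, then convolve to densify it."""
    up = np.repeat(np.repeat(feat, 2, axis=0), 2, axis=1)  # sparse map
    return convolve(up, kernel, mode="nearest")            # dense map

feat14 = np.random.default_rng(1).standard_normal((14, 14))
out28 = decoder_step(feat14, np.full((3, 3), 1.0 / 9.0))
assert out28.shape == (28, 28)  # 14 -> 28, as in the first decoder stage
```

Applying the step four times recovers the 224 × 224 input resolution: 14 → 28 → 56 → 112 → 224.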
3.1 Inception Resnet V2 module
In semantic segmentation, objects occupy regions of different sizes in different images, so their position information varies greatly. When information is spread out, a larger convolution kernel suits the image; when it is concentrated, a smaller kernel suits it, which makes choosing a single kernel size difficult. Simply stacking larger convolution layers to extract information makes the network prone to overfitting, hinders the propagation of gradient information through the whole network, and raises the computational cost. The Inception Resnet V2 module applies kernels of several sizes at the same level, widening the network and extracting feature information at multiple scales. To reduce information loss, three module variants are constructed, namely Inception Resnet V2-A, Inception Resnet V2-B, and Inception Resnet V2-C, and residual-network connections are introduced, yielding a more efficient feature extraction method. The structure of the Inception Resnet V2 module is shown in FIG. 3.
3.2 residual error attention mechanism module
The residual attention network is a convolutional neural network built by stacking attention mechanism modules; it combines end-to-end training with a modern feed-forward structure. As the network deepens, the attention of the different modules adapts: each module can both select a position to focus on, capturing a different type of attention in the image, and enhance the feature representation of the object there, improving network performance. The network also adopts residual attention learning, which makes deep convolutional networks trainable. Each residual attention module has 2 branches: a mask branch and a trunk branch. The trunk branch uses pre-activation residual modules and Inception modules as its basic units. The mask branch processes the feature map mainly by down-sampling followed by up-sampling: down-sampling encodes quickly and captures the global features of the map, while up-sampling combines the extracted global high-dimensional features with the features that were not down-sampled, merging high- and low-dimensional features and outputting an attention map of matching size. The feature maps of the two branches are then combined by element-wise multiplication to give the final output:
H_i,c(x) = (1 + M_i,c(x)) · F_i,c(x)
where M_i,c(x) is the feature map output by the mask branch, F_i,c(x) is the trunk-branch output (the residual between output and input, fitted by the deep convolutional network), i ranges over spatial positions, and c is the channel index.
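The output formula is simple to state in code. This sketch assumes mask values in [0, 1], so the "1 +" term preserves an identity path and the mask can only emphasize, never erase, trunk features:

```python
import numpy as np

def residual_attention(trunk, mask):
    """H(x) = (1 + M(x)) * F(x), applied element-wise over every
    channel c and spatial position i."""
    return (1.0 + mask) * trunk

trunk = np.ones((2, 4, 4))   # F: trunk-branch features (C x H x W)
mask = np.zeros((2, 4, 4))   # M: an all-zero mask leaves the trunk unchanged
assert np.array_equal(residual_attention(trunk, mask), trunk)
```

With a plain multiplicative mask (H = M · F), a near-zero mask would wipe out the trunk signal and block gradients; the residual form avoids this, which is what makes deep stacks of attention modules trainable.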
The residual attention mechanism modules come in three types, spatial attention, channel attention, and focused attention; their structure is shown in FIG. 4.
4. Arterial reflective zone effective area screening
To reduce the workload of subsequent arteriosclerosis detection, the segmentation result is screened for the effective region of the arterial reflective band. An example of the screening process is shown in FIG. 5: FIG. 5(a) shows the arterial reflective band extracted from the segmentation result; FIG. 5(b) shows the binarized reflective band; FIG. 5(c) shows the connected domains of the reflective band found with the Canny operator; finally, the largest connected domain of the reflective band is kept, giving the arterial vessel and reflective band to be detected, as shown in FIG. 5(d).
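The screening step keeps only the largest connected domain of the binarized reflective band. A compact sketch, using scipy.ndimage's connected-component labeling in place of the patent's Canny-based search:

```python
import numpy as np
from scipy import ndimage

def largest_component(binary_mask):
    """Keep only the largest connected component of a binary
    segmentation mask (the 'effective region' of the reflective band)."""
    labels, n = ndimage.label(binary_mask)
    if n == 0:
        return np.zeros_like(binary_mask)
    sizes = ndimage.sum(binary_mask, labels, range(1, n + 1))
    return (labels == (np.argmax(sizes) + 1)).astype(binary_mask.dtype)

mask = np.zeros((8, 8), dtype=np.uint8)
mask[0:2, 0:2] = 1            # small blob (4 px) - discarded
mask[4:8, 4:8] = 1            # large blob (16 px) - kept
out = largest_component(mask)
assert out.sum() == 16
```

The retained component then supplies the region in which the 3 perpendicular sampling lines are drawn.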
5. Gauss fitting of arterial blood vessel and arterial reflective band
FIG. 6 shows an example of the gray-level distribution of pixels across an arterial vessel and its reflective band. The upper curve is the radial gray-level distribution of the arterial vessel, the lower curve is its first derivative, the minima C and D of the distribution mark the boundaries of the reflective band, and M is the highest point of the distribution. Because the reflective band lies at the center of the arterial vessel and is brighter than the vessel, the radial gray-level distribution follows a Gaussian profile, and the coordinates of the minima C and D can be found from the first derivative of the distribution curve. The invention therefore describes the cross-sectional gray-level distribution of the arterial vessel and reflective band with a four-segment Gaussian model; the model is not limited by the asymmetry of the distribution and locates the reflective-band boundaries conveniently and accurately.
The four-segment Gaussian model adopted by the invention fits each of the four segments of the cross-section with
f_k(x) = a_k · exp(−(x − b_k)² / (2c_k²)), k = 1, 2, 3, 4
where a_1, a_2, a_3, a_4 are the peak heights of the Gaussian curves, b_1, b_2, b_3, b_4 are the peak coordinates, and c_1, c_2, c_3, c_4 are the standard deviations.
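Each segment of the model can be fitted with standard non-linear least squares. The sketch below fits one segment f_k with scipy.optimize.curve_fit on synthetic data; in practice the four segments (AC, CM, MD, DB) of a sampled cross-section are each fitted this way.

```python
import numpy as np
from scipy.optimize import curve_fit

def gauss(x, a, b, c):
    """One segment of the four-segment model: a*exp(-(x-b)^2/(2c^2))."""
    return a * np.exp(-(x - b) ** 2 / (2 * c ** 2))

# Synthetic cross-section segment: peak height 0.8 at x = 10, std 2.
x = np.linspace(0, 20, 81)
y = gauss(x, 0.8, 10.0, 2.0)
(a, b, c), _ = curve_fit(gauss, x, y, p0=[1.0, 8.0, 1.0])
assert abs(a - 0.8) < 1e-3 and abs(b - 10.0) < 1e-3 and abs(abs(c) - 2.0) < 1e-3
```

Once the fit is in hand, the boundary points C and D are read off as the minima of the first derivative of the fitted curve, as described above.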
To describe the radial gray-level distribution of the arterial vessel accurately, the invention samples the vessel under test three times and performs a Gaussian fit on each sample; Gaussian-fitting examples are shown in FIG. 7. FIG. 7(a) shows the sampling of the arterial vessel: a sampling line perpendicular to the vessel is chosen and the sampled pixels undergo the four-segment Gaussian fit. FIG. 7(b) shows the gray values of the sampled pixels fitted with the four Gaussian segments. FIG. 7(c) shows the first derivative of the fitted curve, from which the coordinates of the minima C and D are computed. The regions in the fitted gray-level distribution of FIG. 7(b) are defined in Table 1: points A and B are the arterial vessel boundaries, A− and B+ are the retinal background, C and D are the minima of the distribution (the reflective-band boundaries), CM and MD form the reflective-band region, and M is the peak of segment CD.
TABLE 1 Definition of arterial vessel and arterial reflective band regions in the gray-level profile
Region | Region definition |
x ≤ A or x ≥ B | Background (A−, B+) |
A < x < C or D < x < B | Arterial blood vessel (AC, DB) |
C < x ≤ M or M < x < D | Arterial reflective band (CM, MD) |
6. Retinal arteriosclerosis detection
The reflection parameters of the arterial vessel and reflective band comprise the band-width ratio BR and the gray-scale ratio GR. The boundary coordinates of the vessel and reflective band are obtained from the fitting results; the widths and mean gray levels of the vessel and reflective band are then computed on each of the three sampling lines, and finally the gray-scale ratio and band-width ratio are calculated. When retinal arteriosclerosis occurs, the reflective band widens and its reflectivity increases, so the reflection parameters increase; the group with the largest reflection parameters among the three is therefore selected as the patient's parameters for arteriosclerosis detection. The reflection parameters are computed as follows:
Band-width ratio: BR = (X_D - X_C) / (X_B - X_A)

Gray-level ratio: GR = Ȳ_CD / Ȳ_AB, with Ȳ_CD = (1/n_1) Σ_{i=C..D} Y_i and Ȳ_AB = (1/n_2) Σ_{i=A..B} Y_i

where X_A, X_B, X_C and X_D are the boundary coordinates of the vessel and of the arterial reflective band, Y_i is the gray value at each sampling point, n_1 and n_2 are the numbers of pixels of the arterial reflective band and of the arterial vessel respectively, X_D - X_C is the width of the arterial reflective band, X_B - X_A is the width of the arterial vessel, Ȳ_CD is the mean gray value of the arterial reflective band, and Ȳ_AB is the mean gray value of the arterial vessel.
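Given the fitted boundary coordinates, BR and GR reduce to a few array operations. A minimal numpy sketch under one assumption the excerpt leaves open, namely that the vessel mean is taken over the whole A..B span including the reflective band (function and variable names are illustrative):

```python
import numpy as np

def reflection_parameters(x, gray, A, B, C, D):
    """Band-width ratio BR and gray-level ratio GR on one sampling line.

    A, B index the vessel boundaries and C, D the reflective-band
    boundaries (A < C < D < B) into the coordinate array x and the
    gray-value array gray."""
    BR = (x[D] - x[C]) / (x[B] - x[A])   # reflective-band width / vessel width
    Y_CD = gray[C:D + 1].mean()          # mean gray of the reflective band
    Y_AB = gray[A:B + 1].mean()          # mean gray of the arterial vessel
    GR = Y_CD / Y_AB
    return BR, GR
```

On an idealized sampling line with vessel boundaries at pixels 2 and 17 and band boundaries at 7 and 12, BR comes out as 5/15 ≈ 0.33.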
53 fundus images provided by a hospital were used as test images, comprising 16 normal fundus images, 14 fundus images suspected of retinal arteriosclerosis, and 23 fundus images with retinal arteriosclerosis. The invention was applied to detect retinal arteriosclerosis in these images. The three groups of reflection parameters of each fundus image under test are compared with the detection threshold, giving three cases: (1) when all three groups of reflection parameters are smaller than the threshold, the fundus is determined to be free of retinal arteriosclerosis; (2) when one of the three groups of reflection parameters is larger than the threshold, the fundus is determined to have retinal arteriosclerosis; (3) in all other cases, the fundus is determined to be suspected of retinal arteriosclerosis. The detection accuracy was 93.7% for normal fundus images, 92.8% for fundus images suspected of retinal arteriosclerosis, and 91.3% for fundus images with retinal arteriosclerosis, for an average detection accuracy of 92.6%. In conclusion, the invention determines a retinal arteriosclerosis detection threshold and provides a fundus image retinal arteriosclerosis detection method based on an improved coding and decoding network that accurately segments the retinal arteriovenous vessels and the arterial reflective band. It overcomes the inability of traditional methods to segment the fundus arteriovenous vessels and the arterial reflective band accurately, reduces the interference of illumination and other fundus tissues with the segmentation of the arterial reflective band, improves the propagation of feature and gradient information, achieves high-accuracy segmentation, and accomplishes retinal arteriosclerosis detection.
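Taken literally, the three-case rule is a small function. A sketch with a single illustrative scalar threshold per parameter group (the function name is hypothetical; under this literal reading, case (3) only catches the remaining boundary situations where no parameter strictly exceeds the threshold but not all fall strictly below it):

```python
def classify_fundus(params, threshold):
    """Three-case decision over the three groups of reflection
    parameters, following the rule stated in the text."""
    if all(p < threshold for p in params):       # case (1): all below
        return "no retinal arteriosclerosis"
    if any(p > threshold for p in params):       # case (2): one group exceeds
        return "retinal arteriosclerosis"
    return "suspected retinal arteriosclerosis"  # case (3): otherwise
```

For example, with a threshold of 0.5, parameter groups [0.2, 0.3, 0.25] yield a normal verdict, while [0.2, 0.6, 0.3] yield an arteriosclerosis verdict.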
While the foregoing describes preferred embodiments of the present invention, it is not intended to limit the scope of the invention; the embodiments are described to assist those skilled in the art in practicing the invention. Those skilled in the art may readily make further modifications and improvements without departing from the spirit and scope of the invention, and the scope of protection is therefore defined solely by the appended claims, including all alternatives and equivalents falling within their spirit and scope.
Claims (6)
1. A fundus image retinal arteriosclerosis detection method based on an improved coding and decoding network comprises the following steps:
step 1: collecting fundus images diagnosed by ophthalmologists as showing retinal arteriosclerosis, taking a sampling line perpendicular to the arterial vessels and arterial reflective bands in the fundus images, fitting the pixels on the sampling line with a four-segment Gaussian model to obtain the gray-level distribution curve of the vessel cross-section, and then calculating the reflection parameters, namely the band-width ratio and the gray-level ratio, to determine the quantitative detection threshold for retinal arteriosclerosis;
step 2: labeling the arteriovenous vessels and the arterial reflective bands in the fundus images with different colors, and dividing all fundus images into 224 × 224 image blocks, according to the input requirements of the network, for use as the training set of the improved coding and decoding network;
step 3: segmenting the arterial vessels and arterial reflective bands of the fundus image with the improved coding and decoding network, and screening out the effective arterial reflective-band region used for subsequent detection;
step 4: taking three sampling lines perpendicular to the arterial vessels and arterial reflective bands within the effective region, fitting the pixels on each sampling line with the four-segment Gaussian model to obtain the gray-level distribution curves of the 3 vessel cross-sections, and calculating the boundary coordinates and mean gray values of the 3 groups of arterial vessels and arterial reflective bands from the fitting results;
step 5: calculating the reflection parameters, namely the band-width ratio and the gray-level ratio, of the 3 groups of arterial vessels and arterial reflective bands, comparing them with the quantitative detection threshold for retinal arteriosclerosis, and determining whether the patient suffers from retinal arteriosclerosis.
2. The fundus image retinal arteriosclerosis detection method based on an improved coding and decoding network as claimed in claim 1, wherein in step 1, fundus images diagnosed by ophthalmologists as showing retinal arteriosclerosis are collected; a sampling line perpendicular to the arterial vessels and arterial reflective bands in the fundus images is taken; the pixels on the sampling line are fitted with a four-segment Gaussian model to obtain the gray-level distribution curve of the vessel cross-section; the reflection parameters, namely the band-width ratio and the gray-level ratio, are then calculated; and the quantitative detection threshold for retinal arteriosclerosis is determined.
3. The fundus image retinal arteriosclerosis detection method based on an improved coding and decoding network as claimed in claim 1, wherein in step 2, the arteriovenous vessels and arterial reflective bands in the fundus images are labeled with different colors so as to reduce the interference of venous vessels with the segmentation.
4. The fundus image retinal arteriosclerosis detection method based on an improved coding and decoding network as claimed in claim 1, wherein in step 3, multi-scale feature information is extracted in the convolution blocks of the encoding part by means of 4 Inception-ResNet V2 modules, and a residual attention module is added after each feature extraction module to enhance the target features and improve network performance; in the decoding part, four upsampling layers generate sparse feature maps and four convolution layers then generate dense feature maps, and the classification of each pixel is realized through SoftMax; the network is applied to the segmentation of the arteriovenous vessels and arterial reflective bands of the fundus image, and the effective arterial reflective-band region used for subsequent detection is then obtained by screening.
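The residual attention and per-pixel SoftMax stages of claim 4 can be sketched numerically. The mask design below, (1 + M) · T with a sigmoid mask M so attention can only amplify the identity path, follows the common residual-attention formulation and is an assumption, since this excerpt does not detail the module:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def residual_attention(trunk, mask_logits):
    """Residual attention: modulate trunk features T by (1 + M) * T,
    where M is a [0, 1] mask, so features are enhanced rather than
    suppressed below the identity path."""
    return (1.0 + sigmoid(mask_logits)) * trunk

def pixel_classes(feature_maps):
    """Per-pixel classification (claim 4's final step): SoftMax over
    the channel axis of a (classes, H, W) map, then argmax to a label
    map, e.g. 0 = background, 1 = arteriovenous vessel,
    2 = arterial reflective band (class assignment illustrative)."""
    z = feature_maps - feature_maps.max(axis=0, keepdims=True)  # numerical stability
    probs = np.exp(z) / np.exp(z).sum(axis=0, keepdims=True)
    return probs.argmax(axis=0)
```

With a zero mask, residual attention multiplies trunk features by 1.5 (1 plus sigmoid(0) = 0.5), illustrating the amplify-only behavior.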
5. The fundus image retinal arteriosclerosis detection method based on an improved coding and decoding network as claimed in claim 1, wherein in step 4, three sampling lines perpendicular to the arterial vessels and arterial reflective bands are taken within the effective region, the pixels on each sampling line are fitted with the four-segment Gaussian model to obtain the gray-level distribution curves of the 3 vessel cross-sections, and the boundary coordinates and mean gray values of the 3 groups of arterial vessels and arterial reflective bands are calculated from the fitting results.
6. The fundus image retinal arteriosclerosis detection method based on an improved coding and decoding network as claimed in claim 1, wherein in step 5, the reflection parameters of the 3 groups of arterial vessels and arterial reflective bands are calculated and compared with the detection threshold, giving three cases: (1) when all three groups of reflection parameters are smaller than the threshold, it is determined that no retinal arteriosclerosis is present in the fundus; (2) when one of the three groups of reflection parameters is larger than the threshold, it is determined that retinal arteriosclerosis is present in the fundus; (3) in all other cases, the fundus is determined to be suspected of retinal arteriosclerosis.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910966530.4A CN110751636B (en) | 2019-10-12 | 2019-10-12 | Fundus image retinal arteriosclerosis detection method based on improved coding and decoding network |
Publications (2)
Publication Number | Publication Date |
---|---|
CN110751636A true CN110751636A (en) | 2020-02-04 |
CN110751636B CN110751636B (en) | 2023-04-21 |
Family
ID=69278073
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201910966530.4A Active CN110751636B (en) | 2019-10-12 | 2019-10-12 | Fundus image retinal arteriosclerosis detection method based on improved coding and decoding network |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN110751636B (en) |
Cited By (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111415342A (en) * | 2020-03-18 | 2020-07-14 | 北京工业大学 | Attention mechanism fused automatic detection method for pulmonary nodule image of three-dimensional convolutional neural network |
CN111415361A (en) * | 2020-03-31 | 2020-07-14 | 浙江大学 | Method and device for estimating brain age of fetus and detecting abnormality based on deep learning |
CN111932554A (en) * | 2020-07-31 | 2020-11-13 | 青岛海信医疗设备股份有限公司 | Pulmonary blood vessel segmentation method, device and storage medium |
CN112347977A (en) * | 2020-11-23 | 2021-02-09 | 深圳大学 | Automatic detection method, storage medium and device for induced pluripotent stem cells |
CN112927236A (en) * | 2021-03-01 | 2021-06-08 | 南京理工大学 | Clothing analysis method and system based on channel attention and self-supervision constraint |
CN113192074A (en) * | 2021-04-07 | 2021-07-30 | 西安交通大学 | Artery and vein automatic segmentation method suitable for OCTA image |
CN113269783A (en) * | 2021-04-30 | 2021-08-17 | 北京小白世纪网络科技有限公司 | Pulmonary nodule segmentation method and device based on three-dimensional attention mechanism |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN107657612A (en) * | 2017-10-16 | 2018-02-02 | 西安交通大学 | Suitable for full-automatic the retinal vessel analysis method and system of intelligent and portable equipment |
CN108986124A (en) * | 2018-06-20 | 2018-12-11 | 天津大学 | In conjunction with Analysis On Multi-scale Features convolutional neural networks retinal vascular images dividing method |
US20190014982A1 (en) * | 2017-07-12 | 2019-01-17 | iHealthScreen Inc. | Automated blood vessel feature detection and quantification for retinal image grading and disease screening |
CN109658385A (en) * | 2018-11-23 | 2019-04-19 | 上海鹰瞳医疗科技有限公司 | Eye fundus image judgment method and equipment |
CN110276356A (en) * | 2019-06-18 | 2019-09-24 | 南京邮电大学 | Eye fundus image aneurysms recognition methods based on R-CNN |
Non-Patent Citations (2)
Title |
---|
HUO Yanjiao; YANG Lihong; CUI Rui; WEI Wenbin: "Comparison of confocal scanning laser ophthalmoscopy and color fundus photography in the detection of retinal and choroidal lesions" * |
HUANG Wenbo: "Research on detection methods for relevant targets in color retinal fundus images" * |
Also Published As
Publication number | Publication date |
---|---|
CN110751636B (en) | 2023-04-21 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN111340789B (en) | Fundus retina blood vessel identification and quantification method, device, equipment and storage medium | |
CN110751636B (en) | Fundus image retinal arteriosclerosis detection method based on improved coding and decoding network | |
US11562491B2 (en) | Automatic pancreas CT segmentation method based on a saliency-aware densely connected dilated convolutional neural network | |
Liu et al. | A framework of wound segmentation based on deep convolutional networks | |
CN112150428A (en) | Medical image segmentation method based on deep learning | |
CN114926477B (en) | Brain tumor multi-mode MRI image segmentation method based on deep learning | |
CN106940816A (en) | Connect the CT image Lung neoplasm detecting systems of convolutional neural networks entirely based on 3D | |
CN109389584A (en) | Multiple dimensioned rhinopharyngeal neoplasm dividing method based on CNN | |
CN113239755B (en) | Medical hyperspectral image classification method based on space-spectrum fusion deep learning | |
CN111882566B (en) | Blood vessel segmentation method, device, equipment and storage medium for retina image | |
CN111161287A (en) | Retinal vessel segmentation method based on symmetric bidirectional cascade network deep learning | |
CN108629769B (en) | Fundus image optic disk positioning method and system based on optimal brother similarity | |
CN111951288A (en) | Skin cancer lesion segmentation method based on deep learning | |
Qin et al. | A review of retinal vessel segmentation for fundus image analysis | |
Wang et al. | EAR-U-Net: EfficientNet and attention-based residual U-Net for automatic liver segmentation in CT | |
CN113643353B (en) | Measurement method for enhancing resolution of vascular caliber of fundus image | |
CN115471470A (en) | Esophageal cancer CT image segmentation method | |
Sun et al. | Automatic detection of retinal regions using fully convolutional networks for diagnosis of abnormal maculae in optical coherence tomography images | |
CN115471512A (en) | Medical image segmentation method based on self-supervision contrast learning | |
Biswal et al. | Robust retinal optic disc and optic cup segmentation via stationary wavelet transform and maximum vessel pixel sum | |
CN110992309B (en) | Fundus image segmentation method based on deep information transfer network | |
Kumar et al. | Automated white corpuscles nucleus segmentation using deep neural network from microscopic blood smear | |
Tan et al. | Lightweight pyramid network with spatial attention mechanism for accurate retinal vessel segmentation | |
Bhuvaneswari et al. | Contrast enhancement of retinal images using green plan masking and whale optimization algorithm | |
Gupta et al. | Comparative study of different machine learning models for automatic diabetic retinopathy detection using fundus image |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||