CN111179275B - Medical ultrasonic image segmentation method - Google Patents
Info
- Publication number
- CN111179275B (application CN201911409096.6A)
- Authority
- CN
- China
- Prior art keywords
- data
- layer
- image
- convolution
- enhancement
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/10—Segmentation; Edge detection
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T5/00—Image enhancement or restoration
- G06T5/90—Dynamic range modification of images or parts thereof
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10132—Ultrasound image
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20081—Training; Learning
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20084—Artificial neural networks [ANN]
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20212—Image combination
- G06T2207/20221—Image fusion; Image merging
-
- Y—GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
- Y02—TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
- Y02T—CLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
- Y02T10/00—Road transport of goods or passengers
- Y02T10/10—Internal combustion engine [ICE] based vehicles
- Y02T10/40—Engine management systems
Landscapes
- Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Image Analysis (AREA)
- Image Processing (AREA)
- Ultra Sonic Diagnosis Equipment (AREA)
Abstract
The invention belongs to the technical field of deep-learning computer vision and medical information processing, and in particular relates to a medical ultrasonic image segmentation method. The disclosed method builds on a general image segmentation neural network model and integrates several techniques, including a multiple-input multiple-output design, dilated (hole) convolution, and data augmentation for small medical samples. It mainly addresses the difficulties of small-sample learning, low ultrasonic image contrast, and blurred nodule edges, so as to obtain an optimal segmentation strategy.
Description
Technical Field
The invention belongs to the technical field of deep-learning computer vision and medical information processing, and in particular relates to a medical ultrasonic image segmentation method.
Background
With advances in science and technology, medical imaging has developed considerably, and ultrasonic imaging is particularly valuable in preventive diagnosis and treatment because it is simple to operate, involves no radiation damage, and is low-cost. Segmentation of regions of interest in medical images is currently the basis of image analysis and lesion recognition. Clinically, ultrasonic images are still widely segmented by hand: an experienced clinician manually delineates the region of interest according to professional knowledge. However, manual segmentation is time-consuming and highly dependent on the doctor's expertise and experience, and the blurred edges and low contrast typical of ultrasonic images make regions difficult to distinguish by eye. Automatic and efficient segmentation of ultrasound images has therefore become an urgent need.
In recent years, deep neural network models, namely convolutional neural networks (CNNs), have provided strong technical support for improving the segmentation performance of biomedical images. A convolutional neural network automatically learns both low-level visual features and high-level semantic features from an image, avoiding the complex manual design and extraction of image features required by traditional algorithms. However, a conventional CNN cannot propagate low-level features to higher layers in a reasonable way. The semantic segmentation model U-Net fuses low-dimensional and high-dimensional feature channels through skip connections and related methods, achieving good segmentation results.
Disclosure of Invention
The invention aims to provide an ultrasonic image segmentation scheme for ultrasonic medical image processing, based on a deep-learning network, Multi-related-Unet (MD-Unet), so as to obtain better segmentation performance.
The technical scheme adopted by the invention is as follows:
a medical ultrasound image segmentation method comprising the steps of:
step 1, preprocessing ultrasonic image data to be segmented to obtain training set data and verification set data;
step 2, performing data augmentation on the training set and verification set data, comprising:
1) increasing the volume of training data using offline augmentation: applying rotation and horizontal flipping to achieve a 10-fold expansion;
2) improving the generalization of the network model using online augmentation: applying rotation, scale, zoom, translation and colour-contrast transformations through an online iterator, which increases data diversity while reducing memory pressure;
step 3, constructing a multi-input multi-output dilated-convolution (hole convolution) U-shaped network, which comprises:
1) a multi-input downsampling module: the downsampling module has 4 layers in total; following a multi-scale image idea, the input data is scaled into four sets of data at size ratios 8:4:2:1, which are fused with the first, second, third and fourth downsampling layers respectively; the downsampling module uses convolution layers and max-pooling layers to acquire low-level features, producing feature maps in turn; each layer uses 3×3 convolution kernels with dilated (hole) convolution at rate r=2, i.e. a gap is inserted into the conventional convolution kernel to enlarge the receptive field, and the numbers of convolution kernels in the first to fourth layers are 32, 64, 128 and 256 respectively;
2) an upsampling module: the upsampling module uses deconvolution (transposed convolution) as the upsampling operation; it progressively enlarges the feature maps and reduces the number of channels, finally producing a prediction map of the same size as the input data; each layer uses 3×3 convolution kernels, and the numbers of convolution kernels in the first to fourth layers are 256, 128, 64 and 32 respectively;
3) a deeply supervised multi-output module: the label is resized 4 times to form four sets of data at ratios 8:4:2:1, which are used in turn as the training labels of the output layers of the 4 upsampling levels;
step 4, inputting the training set data into the constructed U-shaped network for training to obtain a learned convolutional neural network model, and adjusting parameters on the verification set until the optimal model and its corresponding parameters are obtained, yielding the trained U-shaped network;
and step 5, inputting the preprocessed ultrasonic image data to be segmented into a trained U-shaped network to obtain a segmentation result of each pixel.
The beneficial effects of the invention are as follows: the invention provides a segmentation method for ultrasonic medical images that builds on a general image segmentation neural network model and integrates several techniques, including a multiple-input multiple-output design, dilated (hole) convolution, and data augmentation for small medical samples. It mainly addresses the difficulties of small-sample learning, low ultrasonic image contrast and blurred nodule edges, so as to obtain an optimal segmentation strategy.
Drawings
Fig. 1 is a schematic diagram of a medical image segmentation method according to an embodiment of the present invention.
Fig. 2 is a schematic diagram of a data processing module in step 1 according to an embodiment of the present invention.
Fig. 3 is a schematic diagram of a data enhancement module in step 2 according to an embodiment of the present invention.
Fig. 4 is a schematic diagram showing the overall structure of the MD-Unet of step 3 according to the embodiment of the present invention.
Fig. 5 shows the accuracy and loss of the training and verification sets according to an embodiment of the present invention, where panel (a) shows the training- and verification-set loss curves obtained by training the MD-Unet network, and panel (b) shows the corresponding accuracy.
Fig. 6 shows an original label and the segmented image according to an embodiment of the present invention, where the left side is the label image and the right side is the segmentation result.
Detailed Description
The invention is described in detail below with reference to the drawings and a simulation example:
The invention provides a segmentation method applied to thyroid-nodule ultrasonic images. It comprises 5 steps, corresponding to 5 modules: data set acquisition, image preprocessing, network model construction, network training, and network testing and evaluation; a flow chart of the method is shown in Figure 1. In this embodiment, the specific steps are as follows:
1. Preprocess the ultrasonic image data to be segmented to obtain training, verification and test set data; the data processing flow is shown in Figure 2 (an illustrative code sketch of this pipeline is given after the list below).
1) Remove privacy information and instrument markings from the medical images, and screen out original ultrasonic images that have not been manually annotated by imaging physicians;
2) Manually annotate the labels under the guidance of a sonographer;
3) Enhance image quality while preserving detailed texture features:
3-1) reduce noise and non-uniform speckle using adaptive mean filtering;
3-2) apply the two morphological operations of opening and closing to enhance the filtering;
3-3) apply histogram equalization;
3-4) apply Sobel-operator edge enhancement;
4) Divide the data into training, verification and test sets at a ratio of 6:2:2;
5) Convert the images to grayscale and normalize the scale so that the resolution is unified to 256×256;
6) Binarize the data labels and normalize them into the [0,1] interval.
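As an illustration only, the preprocessing chain above could be implemented roughly as follows in Python with OpenCV; the kernel sizes, the plain mean filter standing in for the adaptive mean filter, and the edge-blending weights are assumptions, not values taken from the patent.

```python
# Hedged sketch of the preprocessing pipeline described in step 1.
import cv2
import numpy as np

def preprocess_image(path, size=256):
    img = cv2.imread(path, cv2.IMREAD_GRAYSCALE)             # de-colour to grayscale
    img = cv2.blur(img, (3, 3))                               # speckle/noise reduction (mean-filter stand-in)
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (3, 3))
    img = cv2.morphologyEx(img, cv2.MORPH_OPEN, kernel)       # morphological opening
    img = cv2.morphologyEx(img, cv2.MORPH_CLOSE, kernel)      # morphological closing
    img = cv2.equalizeHist(img)                               # histogram equalization
    sx = cv2.Sobel(img, cv2.CV_16S, 1, 0, ksize=3)            # Sobel edge enhancement
    sy = cv2.Sobel(img, cv2.CV_16S, 0, 1, ksize=3)
    edges = cv2.addWeighted(cv2.convertScaleAbs(sx), 0.5,
                            cv2.convertScaleAbs(sy), 0.5, 0)
    img = cv2.addWeighted(img, 0.8, edges, 0.2, 0)            # blend edges back in (assumed weights)
    img = cv2.resize(img, (size, size))                       # scale normalization to 256x256
    return img.astype(np.float32) / 255.0                     # normalize to [0, 1]

def preprocess_label(path, size=256):
    lab = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    lab = cv2.resize(lab, (size, size), interpolation=cv2.INTER_NEAREST)
    return (lab > 127).astype(np.float32)                     # binarize into {0, 1}
```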
2. Perform data augmentation on the small-sample training data; the flow is shown in Figure 3.
Deep-learning results depend closely on the quality and quantity of data, but medical samples are difficult to collect and the data volume is small. To avoid overfitting, improve segmentation precision and compensate for the shortage of small-sample data, two augmentation modes are combined (an illustrative sketch follows the list below).
1) Offline augmentation increases the volume of training data, mainly by rotation and horizontal flipping, giving a 10-fold expansion.
2) Online augmentation improves the generalization of the network model, mainly through rotation, scale, zoom, translation and colour-contrast transformations; using an online iterator increases data diversity while reducing memory pressure.
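A minimal sketch of the two-stage augmentation, assuming Python/NumPy/OpenCV; the rotation angles, flip scheme and contrast-jitter range are illustrative choices, and the scale/translation transforms are omitted for brevity.

```python
# Hedged sketch of offline (10-fold) and online (iterator) augmentation.
import random
import numpy as np
import cv2

def rotate(img, angle, interp=cv2.INTER_LINEAR):
    h, w = img.shape[:2]
    m = cv2.getRotationMatrix2D((w / 2, h / 2), angle, 1.0)
    return cv2.warpAffine(img, m, (w, h), flags=interp)

def offline_augment(image, label):
    """Return 10 image/label pairs: 5 assumed rotation angles, each also flipped."""
    out = []
    for angle in (-20, -10, 0, 10, 20):
        ri = rotate(image, angle)
        rl = rotate(label, angle, cv2.INTER_NEAREST)          # keep labels binary
        out.append((ri, rl))
        out.append((np.fliplr(ri).copy(), np.fliplr(rl).copy()))
    return out

def online_generator(images, labels, batch_size=8):
    """Yield batches with random rotation and contrast jitter applied on the fly."""
    while True:
        idx = np.random.choice(len(images), batch_size)
        batch_x, batch_y = [], []
        for i in idx:
            angle = random.uniform(-15, 15)
            img = rotate(images[i], angle)
            lab = rotate(labels[i], angle, cv2.INTER_NEAREST)
            if random.random() < 0.5:                         # random contrast change
                img = np.clip(img * random.uniform(0.8, 1.2), 0, 1)
            batch_x.append(img)
            batch_y.append(lab)
        yield np.stack(batch_x), np.stack(batch_y)
```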
3. Construct the multi-input multi-output dilated-convolution U-shaped network; the overall structure is shown in Figure 4 (an illustrative architecture sketch is given after this subsection).
1) Multi-input downsampling module
The multi-input downsampling module is shown in the left half of the U-shaped network in Figure 4.
1-1) First, following the multi-scale image idea of the multi-input design, the input data is scaled into four sets of data at size ratios 8:4:2:1, which are fused with the first, second, third and fourth downsampling layers respectively.
1-2) The downsampling module has 4 layers; convolution layers and max-pooling layers acquire the low-level features, producing feature maps with progressively more channels and smaller size. Each layer uses 3×3 convolution kernels with dilated (hole) convolution at rate r=2, i.e. a gap is inserted into the conventional convolution kernel to enlarge the image receptive field. The numbers of convolution kernels in the first to fourth layers are 32, 64, 128 and 256 respectively.
2) Upsampling module
The structure of the upsampling module is shown in the right half of the U-shaped network in Figure 4. The upsampling module has 4 layers in total and uses deconvolution (transposed convolution) as the upsampling operation. It progressively enlarges the feature maps and reduces the number of channels, finally producing a prediction map of the same size as the input data. Each layer uses 3×3 convolution kernels, and the numbers of convolution kernels in the first to fourth layers are 256, 128, 64 and 32 respectively.
3) Deeply supervised multi-output module
The label is resized 4 times to form four sets of data at ratios 8:4:2:1, which are used in turn as the training labels of the output layers of the 4 upsampling levels.
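For illustration, a PyTorch sketch of such a multi-input, deeply supervised dilated U-shaped network is given below; the presence of a bottleneck block, the fusion of scaled inputs by channel concatenation, bilinear interpolation for the scaled input copies, and sigmoid prediction heads are assumptions made to obtain a runnable example, not details fixed by the patent.

```python
# Hedged PyTorch sketch of a multi-input, multi-output dilated U-shaped network.
import torch
import torch.nn as nn
import torch.nn.functional as F

def conv_block(in_ch, out_ch, dilation=2):
    # Two 3x3 dilated convolutions (r = 2); padding keeps the spatial size.
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, 3, padding=dilation, dilation=dilation),
        nn.ReLU(inplace=True),
        nn.Conv2d(out_ch, out_ch, 3, padding=dilation, dilation=dilation),
        nn.ReLU(inplace=True),
    )

class MDUNet(nn.Module):
    def __init__(self, in_ch=1, n_classes=1):
        super().__init__()
        chs = [32, 64, 128, 256]                      # encoder widths, layers 1-4
        self.enc1 = conv_block(in_ch, chs[0])
        # Encoder layers 2-4 additionally receive a downscaled copy of the input.
        self.enc2 = conv_block(chs[0] + in_ch, chs[1])
        self.enc3 = conv_block(chs[1] + in_ch, chs[2])
        self.enc4 = conv_block(chs[2] + in_ch, chs[3])
        self.pool = nn.MaxPool2d(2)
        self.bottleneck = conv_block(chs[3], 512)     # bottleneck width is an assumption
        # Decoder: transposed convolutions double the size; widths 256, 128, 64, 32.
        self.up4 = nn.ConvTranspose2d(512, chs[3], 2, stride=2)
        self.dec4 = conv_block(chs[3] * 2, chs[3])
        self.up3 = nn.ConvTranspose2d(chs[3], chs[2], 2, stride=2)
        self.dec3 = conv_block(chs[2] * 2, chs[2])
        self.up2 = nn.ConvTranspose2d(chs[2], chs[1], 2, stride=2)
        self.dec2 = conv_block(chs[1] * 2, chs[1])
        self.up1 = nn.ConvTranspose2d(chs[1], chs[0], 2, stride=2)
        self.dec1 = conv_block(chs[0] * 2, chs[0])
        # One 1x1 prediction head per decoder level for deep supervision.
        self.heads = nn.ModuleList(
            [nn.Conv2d(c, n_classes, 1) for c in (chs[3], chs[2], chs[1], chs[0])]
        )

    def forward(self, x):
        # Scaled copies of the input at ratios 8:4:2:1 (full, 1/2, 1/4, 1/8 size).
        x2 = F.interpolate(x, scale_factor=0.5, mode="bilinear", align_corners=False)
        x4 = F.interpolate(x, scale_factor=0.25, mode="bilinear", align_corners=False)
        x8 = F.interpolate(x, scale_factor=0.125, mode="bilinear", align_corners=False)
        e1 = self.enc1(x)
        e2 = self.enc2(torch.cat([self.pool(e1), x2], dim=1))
        e3 = self.enc3(torch.cat([self.pool(e2), x4], dim=1))
        e4 = self.enc4(torch.cat([self.pool(e3), x8], dim=1))
        b = self.bottleneck(self.pool(e4))
        d4 = self.dec4(torch.cat([self.up4(b), e4], dim=1))
        d3 = self.dec3(torch.cat([self.up3(d4), e3], dim=1))
        d2 = self.dec2(torch.cat([self.up2(d3), e2], dim=1))
        d1 = self.dec1(torch.cat([self.up1(d2), e1], dim=1))
        # Four side outputs at 1/8, 1/4, 1/2 and full size, matching labels resized 8:4:2:1.
        return [torch.sigmoid(h(f)) for h, f in zip(self.heads, (d4, d3, d2, d1))]
```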
4. Input the training set data into the designed network for training to obtain the learned convolutional neural network model (an illustrative training-loop sketch is given after the following items).
1) Record the loss and segmentation accuracy of each training run.
2) Adjust the parameters and retrain the network according to the loss and accuracy on the verification set, until the best model and its corresponding parameters are selected.
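A hedged sketch of such a training loop: the deep-supervision loss sums a binary cross-entropy term per side output against the label resized to the matching scale, and the checkpoint with the lowest verification-set loss is kept. The optimizer, learning rate and epoch count are assumptions, not values taken from the patent.

```python
# Hedged training-loop sketch with deep supervision and verification-set model selection.
import torch
import torch.nn.functional as F

def deep_supervision_loss(outputs, label):
    # label: float tensor of shape (N, 1, H, W) with values in {0, 1}.
    loss = 0.0
    for out in outputs:                                       # side outputs, coarse to fine
        target = F.interpolate(label, size=out.shape[-2:], mode="nearest")
        loss = loss + F.binary_cross_entropy(out, target)
    return loss

def train(model, train_loader, val_loader, epochs=100, lr=1e-3, ckpt="best_md_unet.pt"):
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    best_val = float("inf")
    for epoch in range(epochs):
        model.train()
        for x, y in train_loader:
            opt.zero_grad()
            loss = deep_supervision_loss(model(x), y)
            loss.backward()
            opt.step()
        model.eval()
        with torch.no_grad():
            val = sum(deep_supervision_loss(model(x), y).item() for x, y in val_loader)
        if val < best_val:                                    # keep the best model on the verification set
            best_val = val
            torch.save(model.state_dict(), ckpt)
```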
5. Input the preprocessed ultrasonic image data to be segmented into the learned convolutional neural network model to obtain the segmentation result for each pixel, as sketched below.
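A minimal inference sketch under the same assumptions (the MDUNet class and preprocessing helpers above are illustrative names): the trained model yields a per-pixel foreground probability, which is thresholded at 0.5 to give the segmentation mask.

```python
# Hedged inference sketch: preprocessed 256x256 image in, per-pixel mask out.
import torch

def segment(model, image_256x256):
    model.eval()
    with torch.no_grad():
        x = torch.from_numpy(image_256x256).float()[None, None]   # shape (1, 1, 256, 256)
        prob = model(x)[-1]                                        # finest, full-resolution output
        return (prob.squeeze().numpy() > 0.5).astype("uint8")      # per-pixel segmentation result
```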
The final effect of practising the invention is shown in Figures 5 and 6. Figure 5 shows the accuracy and loss of the training and verification sets for an embodiment of the present invention: panel (a) shows the training- and verification-set loss curves obtained by training the MD-Unet network, and panel (b) shows the corresponding accuracy. Figure 6 shows an original label and the segmented image: the left side is the label image and the right side is the segmentation result.
Claims (2)
1. A medical ultrasound image segmentation method, comprising the steps of:
step 1, preprocessing ultrasonic image data to be segmented to obtain training set data and verification set data;
step 2, performing data augmentation on the training set and verification set data, comprising:
1) increasing the volume of training data using offline augmentation: applying rotation and horizontal flipping to achieve a 10-fold expansion;
2) improving the generalization of the network model using online augmentation: applying rotation, scale, zoom, translation and colour-contrast transformations through an online iterator, which increases data diversity while reducing memory pressure;
step 3, constructing a multi-input multi-output dilated-convolution (hole convolution) U-shaped network, which comprises:
1) a multi-input downsampling module: the downsampling module has 4 layers in total; following a multi-scale image idea, the input data is scaled into four sets of data at size ratios 8:4:2:1, which are fused with the first, second, third and fourth downsampling layers respectively; the downsampling module uses convolution layers and max-pooling layers to acquire low-level features, producing feature maps in turn; each layer uses 3×3 convolution kernels with dilated (hole) convolution at rate r=2, i.e. a gap is inserted into the conventional convolution kernel to enlarge the receptive field, and the numbers of convolution kernels in the first to fourth layers are 32, 64, 128 and 256 respectively;
2) an upsampling module: the upsampling module uses deconvolution (transposed convolution) as the upsampling operation; it progressively enlarges the feature maps and reduces the number of channels, finally producing a prediction map of the same size as the input data; each layer uses 3×3 convolution kernels, and the numbers of convolution kernels in the first to fourth layers are 256, 128, 64 and 32 respectively;
3) a deeply supervised multi-output module: the label is resized 4 times to form four sets of data at ratios 8:4:2:1, which are used in turn as the training labels of the output layers of the 4 upsampling levels;
step 4, inputting the training set data into the constructed U-shaped network for training to obtain a learned convolutional neural network model, and adjusting parameters on the verification set until the optimal model and its corresponding parameters are obtained, yielding the trained U-shaped network;
and step 5, inputting the preprocessed ultrasonic image data to be segmented into a trained U-shaped network to obtain a segmentation result of each pixel.
2. The medical ultrasound image segmentation method according to claim 1, wherein, regarding the data augmentation and the dilated-convolution U-shaped network module, the data augmentation comprises:
1) improving the utilization of the data through offline augmentation of the original data;
2) adopting online augmentation of the original data, further improving the robustness of the network while reducing the memory pressure of the server;
and the dilated-convolution U-shaped network module comprises:
1) scaling the image data through the multi-input module and fusing it with the downsampling layers, so as to further improve image utilization and the network's ability to extract image features;
2) adding dilated-convolution layers in the downsampling and upsampling processes, so as to enlarge the receptive field and alleviate the loss of image detail caused by convolution.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201911409096.6A CN111179275B (en) | 2019-12-31 | 2019-12-31 | Medical ultrasonic image segmentation method |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201911409096.6A CN111179275B (en) | 2019-12-31 | 2019-12-31 | Medical ultrasonic image segmentation method |
Publications (2)
Publication Number | Publication Date |
---|---|
CN111179275A CN111179275A (en) | 2020-05-19 |
CN111179275B true CN111179275B (en) | 2023-04-25 |
Family
ID=70650617
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201911409096.6A Active CN111179275B (en) | 2019-12-31 | 2019-12-31 | Medical ultrasonic image segmentation method |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN111179275B (en) |
Families Citing this family (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111861929A (en) * | 2020-07-24 | 2020-10-30 | 深圳开立生物医疗科技股份有限公司 | Ultrasonic image optimization processing method, system and device |
CN111915626B (en) * | 2020-08-14 | 2024-02-02 | 东软教育科技集团有限公司 | Automatic segmentation method, device and storage medium for heart ultrasonic image ventricular region |
CN113034507A (en) * | 2021-05-26 | 2021-06-25 | 四川大学 | CCTA image-based coronary artery three-dimensional segmentation method |
CN113610859B (en) * | 2021-06-07 | 2023-10-31 | 东北大学 | Automatic thyroid nodule segmentation method based on ultrasonic image |
CN113920129A (en) * | 2021-09-16 | 2022-01-11 | 电子科技大学长三角研究院(衢州) | Medical image segmentation method and device based on multi-scale and global context information |
CN116824146B (en) * | 2023-07-05 | 2024-06-07 | 深圳技术大学 | Small sample CT image segmentation method, system, terminal and storage medium |
Family Cites Families (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20080081998A1 (en) * | 2006-10-03 | 2008-04-03 | General Electric Company | System and method for three-dimensional and four-dimensional contrast imaging |
US8398549B2 (en) * | 2010-02-16 | 2013-03-19 | Duke University | Ultrasound methods, systems and computer program products for imaging contrasting objects using combined images |
- 2019-12-31: CN application CN201911409096.6A filed; granted as CN111179275B (status: Active)
Patent Citations (13)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2018028255A1 (en) * | 2016-08-11 | 2018-02-15 | 深圳市未来媒体技术研究院 | Image saliency detection method based on adversarial network |
CN107680678A (en) * | 2017-10-18 | 2018-02-09 | 北京航空航天大学 | Based on multiple dimensioned convolutional neural networks Thyroid ultrasound image tubercle auto-check system |
CN108090904A (en) * | 2018-01-03 | 2018-05-29 | 深圳北航新兴产业技术研究院 | A kind of medical image example dividing method and device |
CN108268870A (en) * | 2018-01-29 | 2018-07-10 | 重庆理工大学 | Multi-scale feature fusion ultrasonoscopy semantic segmentation method based on confrontation study |
CN108734694A (en) * | 2018-04-09 | 2018-11-02 | 华南农业大学 | Thyroid tumors ultrasonoscopy automatic identifying method based on faster r-cnn |
CN108898606A (en) * | 2018-06-20 | 2018-11-27 | 中南民族大学 | Automatic division method, system, equipment and the storage medium of medical image |
CN109064455A (en) * | 2018-07-18 | 2018-12-21 | 清华大学深圳研究生院 | A kind of classification method of the breast ultrasound Image Multiscale fusion based on BI-RADS |
CN109191476A (en) * | 2018-09-10 | 2019-01-11 | 重庆邮电大学 | The automatic segmentation of Biomedical Image based on U-net network structure |
CN109671086A (en) * | 2018-12-19 | 2019-04-23 | 深圳大学 | A kind of fetus head full-automatic partition method based on three-D ultrasonic |
CN109816657A (en) * | 2019-03-03 | 2019-05-28 | 哈尔滨理工大学 | A kind of brain tumor medical image cutting method based on deep learning |
CN110415253A (en) * | 2019-05-06 | 2019-11-05 | 南京大学 | A kind of point Interactive medical image dividing method based on deep neural network |
CN110189334A (en) * | 2019-05-28 | 2019-08-30 | 南京邮电大学 | The medical image cutting method of the full convolutional neural networks of residual error type based on attention mechanism |
CN110570431A (en) * | 2019-09-18 | 2019-12-13 | 东北大学 | Medical image segmentation method based on improved convolutional neural network |
Non-Patent Citations (1)
Title |
---|
Research on breast ultrasound image tumour segmentation based on a residual-learning U-shaped convolutional neural network; Liang Shu; South China University of Technology; full text * |
Also Published As
Publication number | Publication date |
---|---|
CN111179275A (en) | 2020-05-19 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN111161273B (en) | Medical ultrasonic image segmentation method based on deep learning | |
CN111179275B (en) | Medical ultrasonic image segmentation method | |
CN111145170B (en) | Medical image segmentation method based on deep learning | |
CN109886273B (en) | CMR image segmentation and classification system | |
CN109063712B (en) | Intelligent diagnosis method for multi-model liver diffuse diseases based on ultrasonic images | |
CN110428432B (en) | Deep neural network algorithm for automatically segmenting colon gland image | |
CN109859215B (en) | Automatic white matter high signal segmentation system and method based on Unet model | |
CN112132817B (en) | Retina blood vessel segmentation method for fundus image based on mixed attention mechanism | |
CN111161271A (en) | Ultrasonic image segmentation method | |
CN113344933B (en) | Glandular cell segmentation method based on multi-level feature fusion network | |
CN111080591A (en) | Medical image segmentation method based on combination of coding and decoding structure and residual error module | |
CN111583285A (en) | Liver image semantic segmentation method based on edge attention strategy | |
CN116309648A (en) | Medical image segmentation model construction method based on multi-attention fusion | |
CN114511502A (en) | Gastrointestinal endoscope image polyp detection system based on artificial intelligence, terminal and storage medium | |
CN111476794B (en) | Cervical pathological tissue segmentation method based on UNET | |
CN112785593A (en) | Brain image segmentation method based on deep learning | |
CN116168052A (en) | Gastric cancer pathological image segmentation method combining self-adaptive attention and feature pyramid | |
CN115661029A (en) | Pulmonary nodule detection and identification system based on YOLOv5 | |
CN116758336A (en) | Medical image intelligent analysis system based on artificial intelligence | |
Kareem et al. | Skin lesions classification using deep learning techniques | |
CN113269778B (en) | Image weak supervision segmentation method based on iteration | |
Huang et al. | DBFU-Net: Double branch fusion U-Net with hard example weighting train strategy to segment retinal vessel | |
Upadhyay et al. | Learning multi-scale deep fusion for retinal blood vessel extraction in fundus images | |
CN115410032A (en) | OCTA image classification structure training method based on self-supervision learning | |
Jalali et al. | VGA‐Net: Vessel graph based attentional U‐Net for retinal vessel segmentation |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | |
SE01 | Entry into force of request for substantive examination | |
GR01 | Patent grant | |