CN112541871B - Training method of low-dose image denoising network and denoising method of low-dose image - Google Patents
- Publication number: CN112541871B
- Application number: CN202011437368.6A
- Authority: CN (China)
- Prior art keywords: low-dose image; denoising network; image denoising
- Legal status: Active
Classifications
- G06T5/70 — Denoising; Smoothing (under G06T5/00 — Image enhancement or restoration)
- G06T2207/10081 — Computed x-ray tomography [CT]
- G06T2207/10104 — Positron emission tomography [PET]
- G06T2207/10108 — Single photon emission computed tomography [SPECT]
- G06T2207/20081 — Training; Learning
- G06T2207/20084 — Artificial neural networks [ANN]
- G06T2207/20221 — Image fusion; Image merging
- G06T2207/30016 — Brain
- G06T2207/30061 — Lung
Abstract
The invention provides a training method for a low-dose image denoising network, a denoising method for low-dose images, a computer device, and a storage medium. The training method comprises the following steps: acquiring a training data set, where the training data set comprises a plurality of input parameter sets and each input parameter set comprises a low-dose image of an anatomical structure, an attribute of the anatomical structure, and a standard-dose image; establishing a low-dose image denoising network comprising an attribute fusion module, a spatial information fusion module, and a generation module; and training the low-dose image denoising network with the training data set to obtain the parameters of the network. Because the attribute of the anatomical structure is used as an input to the low-dose image denoising network, anatomical information is fused into the image reconstruction process; the trained network can therefore adapt to different anatomical structures, which improves robustness and guarantees the quality of the reconstructed image.
Description
Technical Field
The present invention relates to the field of image reconstruction, and in particular to a training method for a low-dose image denoising network, a denoising method for low-dose images, a computer device, and a storage medium.
Background
Computed tomography (CT) is an important imaging technique for obtaining internal structural information of an object nondestructively. It offers high resolution, high sensitivity, and multi-slice acquisition, is among the most widely installed types of medical imaging equipment in China, and is applied across many fields of clinical examination. However, because CT scanning relies on X-rays, radiation dose has become an increasing concern as the potential hazards of radiation are better understood. The ALARA (As Low As Reasonably Achievable) principle requires that the radiation dose delivered to a patient be reduced as far as possible while still meeting the needs of clinical diagnosis. As the dose decreases, however, more noise appears during imaging and image quality deteriorates. Developing new low-dose CT imaging methods that preserve image quality while reducing harmful radiation therefore has significant scientific value and application prospects in medical diagnosis. Because different anatomical sites differ greatly in structure, existing low-dose CT imaging methods that ignore these anatomical differences exhibit poor robustness.
Disclosure of Invention
To address these deficiencies in the prior art, the invention provides a training method for a low-dose image denoising network, a denoising method for low-dose images, a computer device, and a storage medium, in which the attribute of the anatomical structure is fused into the image reconstruction process, improving both the robustness of the denoising method and the quality of the reconstructed image.
The specific technical solution provided by the invention is as follows. A training method for a low-dose image denoising network is provided, the training method comprising:
acquiring a training data set comprising a plurality of input parameter sets, each input parameter set comprising a low-dose image of an anatomical structure, an attribute of the anatomical structure, and a standard-dose image;
establishing a low-dose image denoising network, wherein the low-dose image denoising network comprises an attribute fusion module, a plurality of spatial information fusion modules and a generation module which are sequentially cascaded;
and training the low-dose image denoising network with the training data set, obtaining the parameters of the low-dose image denoising network, and updating the low-dose image denoising network.
Further, the attribute fusion module comprises a weight prediction unit, a first feature extraction unit and a first fusion unit, wherein the weight prediction unit is used for obtaining a weight mask corresponding to an anatomical structure according to the attribute, the first feature extraction unit is used for extracting features of the low-dose image, and the first fusion unit is used for fusing the weight mask with the features of the low-dose image to obtain weight features.
Further, the weight prediction unit comprises a plurality of convolution layers and a plurality of activation functions, and the convolution layers and the activation functions are alternately cascaded in turn.
Further, the weight prediction unit further includes a splicing layer, where the splicing layer is configured to splice outputs of convolution layers having the same number of output channels from the plurality of convolution layers.
Further, the spatial information fusion module comprises a second feature extraction unit, a third feature extraction unit and a second fusion unit, wherein the second feature extraction unit is used for extracting spatial information of the weight features, the third feature extraction unit is used for extracting image features of the weight features, and the second fusion unit is used for fusing the spatial information with the image features.
Further, training the low-dose image denoising network with the training data set, obtaining the parameters of the low-dose image denoising network, and updating the low-dose image denoising network includes:
Inputting the low-dose images and the attributes in the input parameter sets into the low-dose image denoising network to obtain a plurality of output images;
Constructing a loss function according to the plurality of output images and standard dose images in the plurality of input parameter sets respectively;
And optimizing the loss function, obtaining parameters of the low-dose image denoising network and updating the low-dose image denoising network.
Further, the loss function is:

$$\mathrm{Loss}(\theta) = \frac{1}{n}\sum_{i=1}^{n}\left|G(x_i; a_i; \theta) - y_i\right|$$

where $\theta$ denotes the network parameters of the low-dose image denoising network, $\mathrm{Loss}(\theta)$ denotes the loss function, $n$ denotes the number of input parameter sets in the training data set, $G(x_i; a_i; \theta)$ denotes the $i$-th output image, and $y_i$ denotes the standard-dose image in the $i$-th input parameter set.
The invention also provides a denoising method for low-dose images, which comprises the following step: inputting the low-dose image to be denoised into a low-dose image denoising network obtained by the training method of the low-dose image denoising network described above, to obtain a reconstructed low-dose image.
The invention also provides a computer device comprising a memory, a processor and a computer program stored on the memory, the processor executing the computer program to implement the training method as claimed in any one of the preceding claims.
The invention also provides a computer readable storage medium having stored thereon computer instructions which when executed by a processor implement a training method as described in any of the above.
According to the training method of the low-dose image denoising network, the attribute of the anatomical structure is used as the input of the low-dose image denoising network, so that the attribute of the anatomical structure is fused into the image reconstruction process, the low-dose image denoising network obtained through training can be suitable for different anatomical structures, the robustness is improved, and the quality of the reconstructed image is guaranteed.
Drawings
The technical solution and other advantageous effects of the present invention will be made apparent by the following detailed description of the specific embodiments of the present invention with reference to the accompanying drawings.
FIG. 1 is a flowchart of the training method of the low-dose image denoising network in the first embodiment of the present invention;
FIG. 2 is a schematic diagram of a low-dose image denoising network according to an embodiment of the present invention;
FIG. 3 is a schematic diagram of a weight prediction unit according to an embodiment of the present invention;
FIG. 4 is a flowchart of step S3 in the first embodiment of the present invention;
FIGS. 5a to 5c are schematic diagrams of a standard-dose image, a low-dose image, and an output image, respectively, in the first embodiment of the present invention;
FIG. 6 is a schematic diagram of a training system for a low-dose image denoising network according to a second embodiment of the present invention;
fig. 7 is a schematic structural diagram of a computer device in a fourth embodiment of the present invention.
Detailed Description
Hereinafter, embodiments of the present invention will be described in detail with reference to the accompanying drawings. This invention may, however, be embodied in many different forms and should not be construed as limited to the specific embodiments set forth herein. Rather, these embodiments are provided to explain the principles of the invention and its practical application so that others skilled in the art will be able to understand the invention for various embodiments and with various modifications as are suited to the particular use contemplated. In the drawings, like numbers will be used to indicate like elements throughout.
The training method of the low-dose image denoising network provided by the invention comprises the following steps:
acquiring a training data set comprising a plurality of input parameter sets, each input parameter set comprising a low-dose image of an anatomical structure, an attribute of the anatomical structure, and a standard-dose image;
establishing a low-dose image denoising network, wherein the low-dose image denoising network comprises an attribute fusion module, a spatial information fusion module and a generation module;
and training the low-dose image denoising network by using the training data set, obtaining parameters of the low-dose image denoising network and updating the low-dose image denoising network.
According to the training method of the low-dose image denoising network, the attribute of the anatomical structure is used as the input of the low-dose image denoising network, so that the attribute of the anatomical structure is fused into the image reconstruction process, the low-dose image denoising network obtained through training can be suitable for different anatomical structures, the robustness is improved, and the quality of the reconstructed image is guaranteed.
In the following, the training method of the low-dose image denoising network, the denoising method of the low-dose image, the computer device, and the storage medium of the present application are described in detail through several specific embodiments with reference to the accompanying drawings, taking CT images as an example. The CT image is used only as an example and does not limit the application field of the present application; the method can also be applied to other medical imaging modalities such as PET and SPECT.
Example 1
Referring to fig. 1, the training method of the low dose image denoising network in this embodiment includes the steps of:
s1, acquiring a training data set, wherein the training data set comprises a plurality of input parameter sets, and each input parameter set comprises a low-dose image, an attribute and a standard-dose image of an anatomical structure;
S2, establishing a low-dose image denoising network, wherein the low-dose image denoising network comprises an attribute fusion module, a spatial information fusion module and a generation module;
And S3, training the low-dose image denoising network by using the training data set, obtaining parameters of the low-dose image denoising network and updating the low-dose image denoising network.
Specifically, in step S1, the training data set in this embodiment is:

$$D = \{(x_1, y_1), (x_2, y_2), \ldots, (x_i, y_i), \ldots, (x_n, y_n)\},$$

where $n$ is the number of input parameter sets in the training data set, $x_i$ is the low-dose image in the $i$-th input parameter set, and $y_i$ is the standard-dose image in the $i$-th input parameter set. The $n$ low-dose images $\{x_1, x_2, \ldots, x_n\}$ comprise low-dose CT images of different anatomical sites, i.e., they have different attributes, and a pair $x_i$ and $y_i$ with the same subscript denotes a low-dose CT image and a standard-dose CT image of the same anatomical site. The anatomical sites may include the skull, orbit, paranasal sinus, neck, lung cavity, abdomen, pelvis (male), pelvis (female), knee, lumbar spine, etc.
It should be noted that the low-dose images and standard-dose images in the training data set are selected from sample data sets commonly used in the art and are not specifically limited here.
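For concreteness, the following is a minimal sketch, in PyTorch (assumed here as the implementation framework; the patent does not prescribe one), of how such a training set could be organized. All class and variable names are illustrative, not from the patent.

```python
# Hypothetical container for the training set D = {(x_1, y_1), ..., (x_n, y_n)}:
# each sample pairs a low-dose image x_i and a standard-dose image y_i of the
# same anatomical site with that site's one-hot attribute vector a_i.
import torch
from torch.utils.data import Dataset

class LowDoseTrainingSet(Dataset):
    def __init__(self, low_dose, standard_dose, attributes):
        # low_dose, standard_dose: lists of HxW tensors; attributes: list of
        # one-hot tensors identifying the anatomical site of each pair.
        assert len(low_dose) == len(standard_dose) == len(attributes)
        self.x, self.y, self.a = low_dose, standard_dose, attributes

    def __len__(self):
        return len(self.x)

    def __getitem__(self, i):
        # Add a channel axis: the network's first convolution expects 1 channel.
        return self.x[i].unsqueeze(0), self.a[i], self.y[i].unsqueeze(0)
```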
Referring to fig. 2, the low-dose image denoising network constructed in the embodiment includes an attribute fusion module 1, a plurality of spatial information fusion modules 2 and a generation module 3, which are sequentially cascaded. The attribute fusion module 1 is used for fusing the attribute of the low-dose image with the feature of the low-dose image to generate a weight feature. The spatial information fusion modules 2 are used for acquiring spatial information and image characteristics of the weight characteristics and generating spatial information fusion characteristics according to the spatial information and the image characteristics. The generation module 3 is used for generating a standard dose image according to the spatial information fusion characteristic.
Specifically, the attribute fusion module 1 includes a weight prediction unit 11, a first feature extraction unit, and a first fusion unit 13. The weight prediction unit 11 is configured to generate a weight mask of the anatomical structure according to the attribute of the anatomical structure, the first feature extraction unit is configured to extract features of the low dose image of the anatomical structure, and the first fusion unit 13 is configured to fuse the weight mask with the features of the low dose image.
Referring to fig. 3, the weight prediction unit 11 includes a plurality of convolution layers 111, a plurality of activation functions 112, and the plurality of convolution layers 111 and the plurality of activation functions 112 are alternately cascaded in sequence. The attributes of the anatomy are compressed and expanded across the channels by a plurality of convolution layers 111 to obtain a weight mask for a predetermined number of channels. After the convolution operation is performed by the convolution layer 111, nonlinear processing is further required to be performed on the data after the convolution operation by the activation function 112.
The attribute of the anatomical structure in this embodiment is encoded using one-hot encoding: for each anatomical structure, only the attribute bit corresponding to that structure is 1, and all other attribute bits are 0. For example, if the anatomical structures are the skull, orbit, paranasal sinus, neck, and lung cavity, then {0, 1, 0, 0, 0} represents the attribute of the orbit, and so on.
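A minimal sketch of this encoding, using the five example structures above (the embodiment itself lists ten anatomical sites); the function name is illustrative:

```python
# One-hot attribute encoding: exactly one bit set per anatomical structure.
STRUCTURES = ["skull", "orbit", "paranasal_sinus", "neck", "lung_cavity"]

def encode_attribute(structure: str) -> list[float]:
    return [1.0 if s == structure else 0.0 for s in STRUCTURES]

assert encode_attribute("orbit") == [0.0, 1.0, 0.0, 0.0, 0.0]
```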
In order to retain more context information, the weight prediction unit 11 in this embodiment further includes a stitching layer 113. The splicing layer 113 is used to splice the output data of the convolution layers 111 having the same number of channels among the plurality of convolution layers 111.
The exemplary weight prediction unit 11 shown in fig. 3 includes 7 convolution layers 111, 7 activation functions 112, and 2 splicing layers 113. The parameters of the weight prediction unit 11 are shown in Table 1:

Table 1: Parameters of the weight prediction unit

| Unit | Convolution kernel | Input channels | Output channels |
|---|---|---|---|
| First convolution layer | 1×1 | 10 | 64 |
| Second convolution layer | 1×1 | 64 | 32 |
| Third convolution layer | 1×1 | 32 | 16 |
| Fourth convolution layer | 1×1 | 16 | 32 |
| Fifth convolution layer | 1×1 | 64 | 64 |
| Sixth convolution layer | 1×1 | 128 | 64 |
| Seventh convolution layer | 1×1 | 64 | 64 |
Because the first convolution layer 111 and the fifth convolution layer 111 both have 64 output channels, a splicing layer 113 is cascaded between the fifth activation function 112 and the sixth convolution layer 111. The splicing layer 113 may use any of several splicing methods; to reduce computational complexity, this embodiment uses the simplest image splicing method. For example, if the output of the first activation function 112 is 512×512×64 and the output of the fifth activation function 112 is also 512×512×64, the spliced output is 512×512×128. Similarly, the second convolution layer 111 and the fourth convolution layer 111 both have 32 output channels, and a splicing layer 113 is cascaded between the fourth activation function 112 and the fifth convolution layer 111. The first to sixth activation functions 112 are ReLU functions and the seventh activation function 112 is a Sigmoid function; the unit finally generates a weight mask with 64 channels.
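A sketch of the unit in PyTorch, wired from Table 1 and the splicing description above; the exact wiring of the two skips is inferred from the text, so treat it as an assumption rather than the patented implementation:

```python
import torch
import torch.nn as nn

class WeightPredictionUnit(nn.Module):
    # Seven 1x1 convolutions with ReLU (Sigmoid on the last) and two
    # channel-concatenation skips between layers of equal output width.
    def __init__(self, n_attributes: int = 10, mask_channels: int = 64):
        super().__init__()
        conv = lambda c_in, c_out: nn.Conv2d(c_in, c_out, kernel_size=1)
        self.conv1, self.conv2 = conv(n_attributes, 64), conv(64, 32)
        self.conv3, self.conv4 = conv(32, 16), conv(16, 32)
        self.conv5, self.conv6 = conv(64, 64), conv(128, 64)
        self.conv7 = conv(64, mask_channels)
        self.relu, self.sigmoid = nn.ReLU(), nn.Sigmoid()

    def forward(self, a: torch.Tensor) -> torch.Tensor:
        # a: one-hot attribute broadcast to an (N, 10, H, W) map.
        f1 = self.relu(self.conv1(a))
        f2 = self.relu(self.conv2(f1))
        f3 = self.relu(self.conv3(f2))
        f4 = self.relu(self.conv4(f3))
        f5 = self.relu(self.conv5(torch.cat([f2, f4], 1)))  # 32+32 -> 64 in
        f6 = self.relu(self.conv6(torch.cat([f1, f5], 1)))  # 64+64 -> 128 in
        return self.sigmoid(self.conv7(f6))                 # 64-channel mask
```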
Referring again to fig. 2, the first feature extraction unit includes a convolution layer 12, the convolution kernel of the convolution layer 12 has a size of 3×3, the number of input channels is 1, the number of output channels is 64, and features of the low dose image are extracted by the convolution layer 12.
The first fusion unit 13 includes a multiplier 131, a splicing layer 132, and a convolution layer 133. The multiplier 131 multiplies the weight mask element-wise with the features of the low-dose image to obtain features carrying the attribute information, and the splicing layer 132 splices the features carrying the attribute information with the features of the low-dose image, which better preserves the original image information and avoids loss of image information.
The convolution kernel of the convolution layer 133 has a size of 3×3 with 128 input channels and 64 output channels; the convolution layer 133 convolves the output of the splicing layer 132 to obtain the weight features, in which the weight mask and the features of the low-dose image are fused.
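Continuing the sketch above, the attribute fusion module 1 could be assembled as follows; padding=1 is an assumption made here to keep the spatial size constant (the patent does not state the padding):

```python
class AttributeFusionModule(nn.Module):
    def __init__(self):
        super().__init__()
        self.predict = WeightPredictionUnit()             # unit 11
        self.extract = nn.Conv2d(1, 64, 3, padding=1)     # convolution layer 12
        self.fuse = nn.Conv2d(128, 64, 3, padding=1)      # convolution layer 133

    def forward(self, x, a_map):
        feats = self.extract(x)        # features of the low-dose image
        mask = self.predict(a_map)     # anatomy-specific weight mask
        gated = mask * feats           # multiplier 131: element-wise product
        return self.fuse(torch.cat([gated, feats], 1))  # splice 132 + conv 133
```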
Each spatial information fusion module 2 includes a second feature extraction unit 21, a third feature extraction unit 22, and a second fusion unit 23. The second feature extraction unit 21 is used for extracting spatial information of the weight features, the third feature extraction unit 22 is used for extracting image features of the weight features, and the second fusion unit 23 is used for fusing the spatial information with the image features.
Specifically, the second feature extraction unit 21 includes two convolution layers 211 and two activation functions 212, alternately cascaded in sequence; the spatial information of the weight features is extracted by the two convolution layers 211. After each convolution operation by a convolution layer 211, nonlinear processing is applied to the convolved data by the corresponding activation function 212. The first activation function 212 is a ReLU function and the second is a Sigmoid function, which constrains the output of the second feature extraction unit 21 to lie between 0 and 1.
The third feature extraction unit 22 includes two convolution layers 221 and one activation function 222, the activation function 222 being connected to the two convolution layers 221, respectively, and extracting the image features of the weight features through the two convolution layers 221. After the convolution operation performed by the first convolution layer 221, nonlinear processing is further required for the convolved data by the activation function 222. Wherein the activation function 222 is a ReLU function.
The second fusion unit 23 includes a multiplier 231, a splicing layer 232, a convolution layer 233, and an adder 234. The multiplier 231 multiplies the spatial information element-wise with the image features output by the third feature extraction unit 22 to obtain features carrying the spatial information, and the splicing layer 232 splices these features with the image features output by the third feature extraction unit 22, which better preserves the original image information and avoids loss of image information. The convolution layer 233 convolves the data spliced by the splicing layer 232, and the adder 234 fuses the output of the convolution layer 233 with the data input to the multiplier 231, finally yielding image features fused with the spatial information.
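The following sketch of one spatial information fusion module 2 follows this description; since Table 2's channel settings are not reproduced in this text, 64 channels and 3×3 kernels are assumed, and the residual add is taken over the module input:

```python
class SpatialInfoFusionModule(nn.Module):
    def __init__(self, ch: int = 64):
        super().__init__()
        conv = lambda: nn.Conv2d(ch, ch, 3, padding=1)
        self.spatial = nn.Sequential(conv(), nn.ReLU(), conv(), nn.Sigmoid())  # unit 21
        self.image = nn.Sequential(conv(), nn.ReLU(), conv())                  # unit 22
        self.fuse = nn.Conv2d(2 * ch, ch, 3, padding=1)                        # conv 233

    def forward(self, w):
        s = self.spatial(w)        # spatial information, constrained to (0, 1)
        f = self.image(w)          # image features of the weight features
        out = self.fuse(torch.cat([s * f, f], 1))  # multiplier 231 + splice 232
        return out + w             # adder 234: residual fusion with the input
```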
To better capture the characteristics of the low-dose image, this embodiment constructs a deeper network structure by cascading a plurality of spatial information fusion modules 2; preferably, the number of spatial information fusion modules 2 in this embodiment is 15. It should be noted that fig. 2 shows only the case of 3 spatial information fusion modules 2 by way of example, which does not limit their number.
The parameters of the spatial information fusion module 2 in this embodiment are given in Table 2; of course, these parameters may be set according to actual needs and are shown here only as an example.

Table 2: Parameters of the spatial information fusion module
The image is processed by the plurality of spatial information fusion modules 2 to obtain the spatial-information-fused features, from which the generation module 3 finally generates the standard-dose image. The generation module 3 includes an adder 31 and a convolution layer 32. The adder 31 fuses the data output by the last spatial information fusion module 2 with the data output by the attribute fusion module 1, which better preserves the original image information and avoids loss of image information. The convolution layer 32 then reconstructs the fused data to obtain the standard-dose image.
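Assembling the sketches above, a full network along the lines of FIG. 2 might look like this; the single-channel output convolution of the generation module is an assumption:

```python
class LowDoseDenoiser(nn.Module):
    def __init__(self, n_modules: int = 15):
        super().__init__()
        self.attr = AttributeFusionModule()
        self.cascade = nn.Sequential(
            *[SpatialInfoFusionModule() for _ in range(n_modules)])
        self.reconstruct = nn.Conv2d(64, 1, 3, padding=1)    # convolution layer 32

    def forward(self, x, a):
        # Broadcast the one-hot attribute vector to a constant spatial map.
        a_map = a[:, :, None, None].expand(-1, -1, x.shape[2], x.shape[3])
        w = self.attr(x, a_map)
        return self.reconstruct(self.cascade(w) + w)         # adder 31 + conv 32
```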
Referring to fig. 4, in step S3, training the low dose image denoising network using the training data set to obtain parameters of the low dose image denoising network and update the low dose image denoising network specifically includes the steps of:
s31, inputting low-dose images and attributes in a plurality of input parameter sets into a low-dose image denoising network to obtain a plurality of output images;
S32, constructing a loss function according to the plurality of output images and standard dose images in the plurality of input parameter sets respectively;
And S33, optimizing the loss function, obtaining parameters of the low-dose image denoising network and updating the low-dose image denoising network.
Specifically, in step S32, the loss function constructed from the plurality of output images and the standard-dose images in the plurality of input parameter sets is:

$$\mathrm{Loss}(\theta) = \frac{1}{n}\sum_{i=1}^{n}\left|G(x_i; a_i; \theta) - y_i\right|$$

where $\theta$ denotes the network parameters of the low-dose image denoising network, $\mathrm{Loss}(\theta)$ denotes the loss function, $n$ denotes the number of input parameter sets in the training data set, $G(x_i; a_i; \theta)$ denotes the $i$-th output image, and $y_i$ denotes the standard-dose image in the $i$-th input parameter set.
In this embodiment, the absolute-value difference is used as the loss function, which increases the differentiation between regions of the image so that the boundaries between regions in the reconstructed image are clearer.
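As an illustrative one-line PyTorch rendering of this loss, matching the formula above:

```python
def l1_loss(outputs: torch.Tensor, targets: torch.Tensor) -> torch.Tensor:
    # Mean absolute difference between output and standard-dose images.
    return (outputs - targets).abs().mean()
```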
In step S33, the loss function is minimized to obtain the optimized network parameters. In this embodiment the Adam optimization algorithm is adopted; one iteration of the algorithm proceeds as follows:
Compute the gradient: $g = \nabla_\theta \mathrm{Loss}(\theta)$;

Biased first-moment estimate: $s^{(k+1)} = \rho_1 s^{(k)} + (1 - \rho_1)\, g$;

Biased second-moment estimate: $r^{(k+1)} = \rho_2 r^{(k)} + (1 - \rho_2)\, g \odot g$;

Bias-corrected first moment: $\hat{s} = s^{(k+1)} / (1 - \rho_1^{\,k+1})$;

Bias-corrected second moment: $\hat{r} = r^{(k+1)} / (1 - \rho_2^{\,k+1})$;

Parameter update value: $\Delta\theta = -\varepsilon\, \hat{s} / (\sqrt{\hat{r}} + \delta)$;

Update the network parameters: $\theta = \theta + \Delta\theta$.
It is then judged whether the number of iterations equals the preset termination count: if so, the updated network parameters $\theta$ are output; if not, the next iteration proceeds until the number of iterations equals the preset termination count. The termination count may be set according to actual needs and is not limited here.
In the above optimization algorithm, the initial conditions for the first iteration are the initial network parameters $\theta$, $k = 0$, $s^{(0)} = 0$, and $r^{(0)} = 0$; $\nabla$ denotes the gradient operator; $\rho_1$ has default value 0.9 and $\rho_2$ has default value 0.999; $k$ is the number of iterations; $\varepsilon$ is the learning rate, with default value 0.0001; and $\delta$ is a small constant with default value $10^{-8}$.
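A sketch of a single Adam iteration exactly as listed above, written for one parameter tensor; in practice torch.optim.Adam would be used instead:

```python
def adam_step(theta, g, s, r, k, eps=1e-4, rho1=0.9, rho2=0.999, delta=1e-8):
    s = rho1 * s + (1 - rho1) * g                    # biased first moment
    r = rho2 * r + (1 - rho2) * g * g                # biased second moment
    s_hat = s / (1 - rho1 ** (k + 1))                # bias-corrected first moment
    r_hat = r / (1 - rho2 ** (k + 1))                # bias-corrected second moment
    delta_theta = -eps * s_hat / (r_hat.sqrt() + delta)
    return theta + delta_theta, s, r                 # theta = theta + delta_theta
```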
The present embodiment constructs the loss function from the absolute-value errors between the plurality of output images and the standard-dose images in the plurality of input parameter sets; however, the loss function may also be constructed in other ways, for example from the mean square error between the plurality of output images and the corresponding standard-dose images.
In step S33, a suitable optimization method may be selected according to the actual application. For example, when the low-dose image denoising network of this embodiment is used for supervised learning, the Adam optimization method is adopted to optimize the loss function; when the network is used within a generative adversarial model, the SGD optimization method is adopted instead.
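An end-to-end sketch of steps S31 to S33 in the supervised setting, using the illustrative classes defined earlier (batch size and epoch count are arbitrary choices, not from the patent):

```python
from torch.utils.data import DataLoader

def train(model, dataset, epochs: int = 100, lr: float = 1e-4):
    loader = DataLoader(dataset, batch_size=4, shuffle=True)
    optimizer = torch.optim.Adam(model.parameters(), lr=lr,
                                 betas=(0.9, 0.999), eps=1e-8)
    for _ in range(epochs):
        for x, a, y in loader:
            out = model(x, a)                 # S31: forward pass through network
            loss = (out - y).abs().mean()     # S32: absolute-value loss
            optimizer.zero_grad()
            loss.backward()                   # S33: optimize the loss function
            optimizer.step()
    return model
```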
After optimization, the updated low-dose image denoising network is obtained. Because the attribute of the anatomical structure is used as an input to the network, anatomical information is fused into the image reconstruction process; the trained network can therefore adapt to different anatomical structures, which improves robustness and ensures the quality of the reconstructed image. Referring to fig. 5a to 5c, which show a standard-dose image, a low-dose image, and an output image of this embodiment by way of example, the output image reconstructed by the low-dose image denoising network retains image details well, and the reconstructed image has high definition.
Example two
Referring to fig. 6, the present embodiment provides a training system of a low dose image denoising network, which includes a training data set acquisition module 100, a network construction module 101, and a training module 102.
The training data set acquisition module 100 is configured to acquire a training data set, where the training data set comprises a plurality of input parameter sets and each input parameter set comprises a low-dose image, an attribute, and a standard-dose image of an anatomical structure. The training data set in this embodiment is:

$$D = \{(x_1, y_1), (x_2, y_2), \ldots, (x_i, y_i), \ldots, (x_n, y_n)\},$$

where $n$ is the number of input parameter sets in the training data set, $x_i$ is the low-dose image in the $i$-th input parameter set, and $y_i$ is the standard-dose image in the $i$-th input parameter set. The $n$ low-dose images $\{x_1, x_2, \ldots, x_n\}$ comprise low-dose CT images of different anatomical sites, i.e., they have different attributes, and a pair $x_i$ and $y_i$ with the same subscript denotes a low-dose CT image and a standard-dose CT image of the same anatomical site. The anatomical sites may include the skull, orbit, paranasal sinus, neck, lung cavity, abdomen, pelvis (male), pelvis (female), knee, lumbar spine, etc.
It should be noted that, the low dose image and the standard dose image in the training data set for training in this embodiment are selected from the sample data set commonly used in the art, and are not specifically limited herein.
The network construction module 101 is configured to establish a low dose image denoising network, where the low dose image denoising network includes an attribute fusion module, a spatial information fusion module, and a generation module.
The training module 102 is configured to train the low-dose image denoising network by using the training data set, obtain parameters of the low-dose image denoising network, and update the low-dose image denoising network.
Example III
The embodiment provides a denoising method of a low-dose image, which comprises the following steps: and inputting the low-dose image to be denoised into a low-dose image denoising network obtained by the training method of the low-dose image denoising network in the first embodiment to obtain a reconstructed low-dose image.
It should be noted that the denoising method in this embodiment has two implementations. In the first, the low-dose image denoising network already trained in the first embodiment is used directly as the denoising network, and the low-dose image to be denoised is input into it to obtain the reconstructed low-dose image. In the second, the low-dose image denoising network is first trained with the training method of the first embodiment, and the low-dose image to be denoised is then input into the newly trained network to obtain the reconstructed low-dose image.
The denoising method is applicable to different anatomical structures and better preserves the details of the original image, so that the reconstructed image is clearer.
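A minimal inference sketch for this denoising method, continuing the illustrative classes above (the checkpoint file name is hypothetical):

```python
@torch.no_grad()
def denoise(model, low_dose_image, attribute):
    model.eval()
    x = low_dose_image.unsqueeze(0).unsqueeze(0)   # (1, 1, H, W)
    a = attribute.unsqueeze(0)                     # (1, n_attributes)
    return model(x, a).squeeze()

# Usage (illustrative):
# model = LowDoseDenoiser()
# model.load_state_dict(torch.load("denoiser.pt"))  # hypothetical checkpoint
# restored = denoise(model, ct_slice, attribute_vector)
```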
Example IV
Referring to fig. 7, the present embodiment provides a computer device, including a processor 200 and a memory 201, and a computer program stored on the memory 201, where the processor 200 executes the computer program to implement the training method according to the first embodiment.
The memory 201 may include a high-speed random access memory (Random Access Memory, RAM) and may also include a non-volatile memory (non-volatile memory), such as at least one disk memory.
The processor 200 may be an integrated circuit chip with signal processing capabilities. In implementation, the steps of the training method described in the first embodiment may be carried out by integrated logic circuits in hardware within the processor 200 or by instructions in software form. The processor 200 may also be a general-purpose processor, including a central processing unit (CPU), a network processor (NP), etc., or a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, or discrete hardware components.
The memory 201 is used for storing a computer program, and the processor 200 executes the computer program after receiving the execution instruction to implement the training method according to the first embodiment.
The present embodiment also provides a computer storage medium, in which a computer program is stored, and the processor 200 is configured to read and execute the computer program stored in the computer storage medium, so as to implement the training method according to the first embodiment.
In the above embodiments, it may be implemented in whole or in part by software, hardware, firmware, or any combination thereof. When implemented in software, may be implemented in whole or in part in the form of a computer program product. The computer program product includes one or more computer instructions. When loaded and executed on a computer, produces a flow or function in accordance with embodiments of the present invention, in whole or in part. The computer may be a general purpose computer, a special purpose computer, a computer network, or other programmable apparatus. The computer instructions may be stored in a computer storage medium or transmitted from one computer storage medium to another computer storage medium, for example, from one website, computer, server, or data center to another website, computer, server, or data center by a wired (e.g., coaxial cable, fiber optic, digital Subscriber Line (DSL)) or wireless (e.g., infrared, wireless, microwave, etc.). The computer storage media may be any available media that can be accessed by a computer or a data storage device such as a server, data center, or the like that contains an integration of one or more available media. The usable medium may be a magnetic medium (e.g., floppy disk, hard disk, tape), an optical medium (e.g., DVD), or a semiconductor medium (e.g., solid State Drive (SSD)), etc.
Embodiments of the present invention are described with reference to flowchart illustrations and/or block diagrams of methods, apparatus, and computer program products according to embodiments of the invention. It will be understood that each flow and/or block of the flowchart illustrations and/or block diagrams, and combinations of flows and/or blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
The foregoing is merely illustrative of the embodiments of this application and it will be appreciated by those skilled in the art that variations and modifications may be made without departing from the principles of the application, and it is intended to cover all modifications and variations as fall within the scope of the application.
Claims (9)
1. A training method for a low dose image denoising network, the training method comprising:
Acquiring a training data set, wherein the training data set comprises a plurality of input parameter sets, and each input parameter set comprises a low-dose image, an attribute and a standard-dose image of an anatomical part;
establishing a low-dose image denoising network, wherein the low-dose image denoising network comprises an attribute fusion module, a plurality of spatial information fusion modules and a generation module which are sequentially cascaded;
Training the low-dose image denoising network by using the training data set to obtain parameters of the low-dose image denoising network and updating the low-dose image denoising network;
the attribute fusion module comprises a weight prediction unit, a first feature extraction unit and a first fusion unit, wherein the weight prediction unit is used for obtaining a weight mask corresponding to an anatomical structure according to attributes, the first feature extraction unit is used for extracting features of the low-dose image, and the first fusion unit is used for fusing the weight mask with the features of the low-dose image to obtain weight features.
2. The training method of claim 1, wherein the weight prediction unit comprises a plurality of convolution layers and a plurality of activation functions, the plurality of convolution layers and the plurality of activation functions being alternately cascaded in sequence.
3. The training method of claim 2, wherein the weight prediction unit further comprises a splicing layer for splicing outputs of convolution layers having the same number of output channels among the plurality of convolution layers.
4. The training method according to claim 2, wherein the spatial information fusion module includes a second feature extraction unit for extracting spatial information of the weight feature, a third feature extraction unit for extracting image features of the weight feature, and a second fusion unit for fusing the spatial information with the image features.
5. The training method of any one of claims 1 to 4, wherein training the low-dose image denoising network using the training data set, obtaining parameters of the low-dose image denoising network and updating the low-dose image denoising network, comprises:
Inputting the low-dose images and the attributes in the input parameter sets into the low-dose image denoising network to obtain a plurality of output images;
Constructing a loss function according to the plurality of output images and standard dose images in the plurality of input parameter sets respectively;
And optimizing the loss function, obtaining parameters of the low-dose image denoising network and updating the low-dose image denoising network.
6. The training method of claim 5, wherein the loss function is:

$$\mathrm{Loss}(\theta) = \frac{1}{n}\sum_{i=1}^{n}\left|G(x_i; a_i; \theta) - y_i\right|$$

where $\theta$ denotes the network parameters of the low-dose image denoising network, $\mathrm{Loss}(\theta)$ denotes the loss function, $n$ denotes the number of input parameter sets in the training data set, $G(x_i; a_i; \theta)$ denotes the $i$-th output image, and $y_i$ denotes the standard-dose image in the $i$-th input parameter set.
7. A method of denoising a low dose image, the method comprising: inputting a low-dose image to be denoised into a low-dose image denoising network obtained by the training method of the low-dose image denoising network according to any one of claims 1 to 6, and obtaining a reconstructed low-dose image.
8. A computer device comprising a memory, a processor and a computer program stored on the memory, wherein the processor executes the computer program to implement the training method of any of claims 1-6.
9. A computer readable storage medium having stored thereon computer instructions which, when executed by a processor, implement the training method of any of claims 1 to 6.
Priority Applications (2)

| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN202011437368.6A (CN112541871B) | 2020-12-07 | 2020-12-07 | Training method of low-dose image denoising network and denoising method of low-dose image |
| PCT/CN2020/136210 (WO2022120883A1) | 2020-12-07 | 2020-12-14 | Training method for low-dose image denoising network and denoising method for low-dose image |

Applications Claiming Priority (1)

| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN202011437368.6A (CN112541871B) | 2020-12-07 | 2020-12-07 | Training method of low-dose image denoising network and denoising method of low-dose image |
Publications (2)

| Publication Number | Publication Date |
|---|---|
| CN112541871A (en) | 2021-03-23 |
| CN112541871B (en) | 2024-07-23 |
Family

ID=75019870

Family Applications (1)

| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| CN202011437368.6A (CN112541871B, active) | Training method of low-dose image denoising network and denoising method of low-dose image | 2020-12-07 | 2020-12-07 |

Country Status (2)

| Country | Link |
|---|---|
| CN (1) | CN112541871B (en) |
| WO (1) | WO2022120883A1 (en) |
Families Citing this family (5)

| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN113298900B * | 2021-04-30 | 2022-10-25 | 北京航空航天大学 | Processing method based on low signal-to-noise ratio PET image |
| CN113256752B * | 2021-06-07 | 2022-07-26 | 太原理工大学 | Low-dose CT reconstruction method based on double-domain interleaving network |
| CN115526857A * | 2022-09-26 | 2022-12-27 | 深圳先进技术研究院 | PET image denoising method, terminal device and readable storage medium |
| CN116385319B * | 2023-05-29 | 2023-08-15 | 中国人民解放军国防科技大学 | Radar image speckle filtering method and device based on scene cognition |
| CN117541481B * | 2024-01-09 | 2024-04-05 | 广东海洋大学 | Low-dose CT image restoration method, system and storage medium |
Citations (2)

| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN110992290A * | 2019-12-09 | 2020-04-10 | 深圳先进技术研究院 | Training method and system for low-dose CT image denoising network |
| CN111179366A * | 2019-12-18 | 2020-05-19 | 深圳先进技术研究院 | Low-dose image reconstruction method and system based on anatomical difference prior |
Family Cites Families (3)

| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| WO2019019199A1 * | 2017-07-28 | 2019-01-31 | Shenzhen United Imaging Healthcare Co., Ltd. | System and method for image conversion |
| WO2019147767A1 * | 2018-01-24 | 2019-08-01 | Rensselaer Polytechnic Institute | 3-D convolutional autoencoder for low-dose CT via transfer learning from a 2-D trained network |
| CN111325686B * | 2020-02-11 | 2021-03-30 | 之江实验室 | Low-dose PET three-dimensional reconstruction method based on deep learning |
Also Published As

| Publication number | Publication date |
|---|---|
| CN112541871A (en) | 2021-03-23 |
| WO2022120883A1 (en) | 2022-06-16 |
Legal Events

| Code | Title |
|---|---|
| PB01 | Publication |
| SE01 | Entry into force of request for substantive examination |
| GR01 | Patent grant |