CN118447123B - Nuclear magnetic resonance image artifact removal method and system - Google Patents
- Publication number: CN118447123B (application CN202410902714.5A)
- Authority: CN (China)
- Prior art keywords: image, artifact, training, model, data
- Legal status: Active (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Classifications
- G06T11/008: Specific post-processing after tomographic reconstruction, e.g. voxelisation, metal artifact correction
- G06N3/0455: Auto-encoder networks; Encoder-decoder networks
- G06N3/08: Learning methods (neural networks)
- G06T3/4038: Image mosaicing, e.g. composing plane images from plane sub-images
Abstract
The invention provides a nuclear magnetic resonance image artifact removal method and system. The method comprises: normalizing historical artifact-free MRI data images; performing artifact simulation on the artifact-free normalized images and storing the artifact-free normalized images together with the simulated artifact images in a training data set; inputting the training data set into a first training model for training to obtain a large-parameter artifact removal model; inputting the training data set into a second training model for training to obtain a small-parameter artifact removal model; and inputting a test data set into the small-parameter artifact removal model for artifact removal processing to obtain artifact-removed image data. The method can effectively remove or reduce artifacts in nuclear magnetic resonance images, has strong universality, and preserves the authenticity of the result.
Description
Technical Field
The invention belongs to the technical field of image artifact removal, and particularly relates to a nuclear magnetic resonance image artifact removal method and system.
Background
Magnetic Resonance Imaging (MRI) is a safe, noninvasive medical imaging technique that is widely used in modern hospitals and clinics and is one of the important diagnostic tools. Because it effectively captures the tissue structures of the body, it is widely used to diagnose brain and spinal abnormalities and disorders.
The information at each point of an MRI image is determined by frequency and phase encoding; when the received signal is subject to external interference, image artifacts appear. Typical examples include artifacts associated with image processing (folding, chemical shift, truncation, partial volume, and data errors); artifacts related to hardware (non-uniform magnetic field, non-uniform radio frequency, and non-uniform gradient field); patient-related artifacts (physiological motion, patient motion, metal implants, chemical shift, and magnetic susceptibility); and environment-related artifacts (radio-frequency leakage/interference, moving metal, abrupt temperature changes, and other factors). In most cases the artifacts affect the physician's interpretation of the image, which is important for disease diagnosis, so it is important to identify these artifacts and to try to eliminate or reduce them.
Once artifact data appear, the technician can rescan, typically by changing the frequency-encoding direction, increasing the bandwidth, increasing the FOV, reducing the slice thickness, and so on, to obtain an artifact-free or artifact-reduced MRI image. However, this clearly increases the scan time and requires the patient to remain stationary for a long period, which may cause discomfort; moreover, existing artifact removal methods cannot remove or reduce the generated artifacts well, which affects diagnosis and treatment efficiency.
Disclosure of Invention
In order to solve the technical problems, the invention provides a nuclear magnetic resonance image artifact removal method and a system, which are used for solving the technical problems in the prior art.
In a first aspect, the present invention provides the following technical solution: a nuclear magnetic resonance image artifact removal method, including:
Acquiring a historical artifact-free MRI data image, and carrying out normalization processing on the historical artifact-free MRI data image to obtain an artifact-free normalized image;
Performing artifact simulation on the artifact-free normalized image to obtain a simulated artifact image, and storing the artifact-free normalized image and the simulated artifact image into a training data set;
Constructing a first training model, and inputting the training data set into the first training model for training to obtain a large-parameter artifact removal model;
Constructing a second training model based on the large-parameter artifact removal model and parameters thereof, and inputting the training data set into the second training model for training to obtain a small-parameter artifact removal model;
Acquiring a test data set, and inputting the test data set into the small-parameter artifact removal model to perform artifact removal processing so as to obtain artifact-removed image data;
The step of constructing a first training model, and inputting the training data set into the first training model for training to obtain a large-parameter artifact removal model comprises the following steps:
Constructing an LLM model and a VQ-GAN model, inserting the LLM model between an encoder and a decoder in the VQ-GAN model to obtain a first training model, inputting the training data set into the VQ-GAN model, and encoding the training data set through the encoder in the VQ-GAN model to obtain high-dimensional perception features in a continuous space;
calculating the similarity among the data in the training dataset through a medical big model PubMedCLIP to obtain a plurality of candidate words, and storing the plurality of candidate words into a codebook;
Performing discretization processing on the high-dimensional perception features by replacing the pre-trained fixed codebook in the LLM model with the codebook, so as to obtain discretized features; inputting the discretized features into the decoder for restoration to obtain high-dimensional restoration features; and performing iterative training on the first training model based on the high-dimensional restoration features, the high-dimensional perception features and a pyramid semantic loss to obtain the large-parameter artifact removal model, where the pyramid semantic loss is defined in terms of the similarity scores computed by the medical large model PubMedCLIP, the text embedding of each embedding layer, the function that generates text embeddings from tokens, the codebook, the number of embedding layers, the similarity of each training sample, and the candidate words in the codebook.
Compared with the prior art, the invention has the following beneficial effects. First, a historical artifact-free MRI data image is acquired and normalized to obtain an artifact-free normalized image. Artifact simulation is then performed on the artifact-free normalized image to obtain a simulated artifact image, and the artifact-free normalized image and the simulated artifact image are stored in a training data set. A first training model is constructed and the training data set is input into it for training to obtain a large-parameter artifact removal model; a second training model is then constructed based on the first training model and its parameters, and the training data set is input into it for training to obtain a small-parameter artifact removal model. Finally, a test data set is acquired and input into the small-parameter artifact removal model for artifact removal processing to obtain artifact-removed image data. In this way, artifacts in nuclear magnetic resonance images can be effectively removed or reduced, and the method has strong universality while preserving the authenticity of the result.
Preferably, the step of performing artifact simulation on the artifact-free normalized image to obtain a simulated artifact image includes:
performing Fourier transform on the artifact-free normalized image to obtain corresponding K space sequence data;
performing rotation and translation on the K-space sequence data to obtain first processed image data, where the transformation acts on the K-space sequence data as a complex-valued function parameterized by a rotation centre, a first, second and third weight, rotation angles about three directions, displacement amounts along three directions, and the position of the sequence data within the three directions of the vector space;
randomly selecting one piece of sequence data from the K-space sequence data as reference data, selecting a plurality of pieces of sequence data on the two sides adjacent to the reference data as data to be spliced according to the position of the reference data in the sequence, and respectively intercepting part of the images in the data to be spliced and the reference data and performing image splicing to obtain second processed image data;
Storing the first processed image data and the second processed image data into a data set to be processed, and performing inverse Fourier transform on the data in the data set to be processed to obtain an image set to be processed;
and performing impurity removal and Gaussian blur processing on the image set to be processed to obtain a simulated artifact image.
Preferably, the step of performing impurity removal and Gaussian blur processing on the image set to be processed to obtain the simulated artifact image includes:
carrying out abnormality screening on the image set to be processed to obtain a screened image set;
placing a discrete approximation of a Gaussian function on each pixel point of each image in the screened image set to obtain discrete pixel points;
multiplying each discrete pixel point element by element with the pixel values of the pixel points in its neighbourhood to obtain a plurality of pixel products;
and adding the pixel products to obtain a blur value of the discrete pixel point, and adding Gaussian blur to each pixel point of each image in the screened image set based on the blur value to obtain a simulated artifact image.
Preferably, the step of constructing a second training model based on the large-parameter artifact removal model and parameters thereof, and inputting the training data set into the second training model for training to obtain the small-parameter artifact removal model includes:
constructing an MRI automatic encoder capable of encoding an MRI image into continuous perception features and discrete text marks through the first training model so as to obtain a second training model;
Performing loss calculation on the training data set through the frozen MRI automatic encoder to obtain a dabs loss, where the dabs loss is computed from the data in the continuous space of the artifact-free normalized image and of the simulated artifact image output by the frozen MRI automatic encoder, together with the corresponding data in the discrete space obtained through the quantizer;
and carrying out a weighted sum of the dabs loss, the L1 loss and the perceptual loss to obtain a final loss function, and carrying out iterative training on the second training model based on the final loss function to obtain a small-parameter artifact removal model.
Preferably, the step of acquiring a test data set and inputting the test data set into the small-parameter artifact removal model for artifact removal processing to obtain artifact-removed image data includes:
obtaining a test data set, inputting the test data set into a residual block in the small-parameter artifact removal model to perform first feature extraction so as to obtain a feature image, wherein the small-parameter artifact removal model sequentially comprises a first convolution block, a downsampling tower, a second double-domain convolution block, an upsampling tower and a second convolution block;
inputting the characteristic image into a downsampling tower in the small-parameter artifact removal model for secondary characteristic extraction to obtain first high-dimensional characteristic information, wherein the downsampling tower comprises a plurality of groups of alternately arranged double-domain residual blocks and first double-domain convolution blocks;
inputting the first high-dimensional characteristic information into the second double-domain convolution block to sequentially perform Fourier transformation, complex-to-real conversion, convolution operation, real-to-complex conversion and inverse Fourier transformation so as to obtain second high-dimensional characteristic information;
And sequentially inputting the second high-dimensional characteristic information into an up-sampling tower and a second convolution block to perform characteristic recovery so as to obtain artifact-removed image data.
In a second aspect, the present invention provides a system for removing an artifact of a nuclear magnetic resonance image, where the system adopts the method for removing an artifact of a nuclear magnetic resonance image, and the system includes:
The processing module is used for acquiring a historical artifact-free MRI data image, and carrying out normalization processing on the historical artifact-free MRI data image to obtain an artifact-free normalized image;
the simulation module is used for carrying out artifact simulation on the artifact-free normalized image to obtain a simulated artifact image, and storing the artifact-free normalized image and the simulated artifact image into a training data set;
the first training module is used for constructing a first training model, and inputting the training data set into the first training model for training so as to obtain a large-parameter artifact removal model;
The second training module is used for constructing a second training model based on the large-parameter artifact removal model and parameters thereof, and inputting the training data set into the second training model for training so as to obtain a small-parameter artifact removal model;
And the artifact removal module is used for acquiring a test data set, and inputting the test data set into the small-parameter artifact removal model for artifact removal processing so as to obtain artifact removal image data.
Preferably, the artifact removal module includes:
The first extraction submodule is used for acquiring a test data set, inputting the test data set into a residual block in the small-parameter artifact removal model to perform first feature extraction so as to obtain a feature image, and the small-parameter artifact removal model sequentially comprises a first convolution block, a downsampling tower, a second double-domain convolution block, an upsampling tower and a second convolution block;
The second extraction sub-module is used for inputting the characteristic image into a downsampling tower in the small-parameter artifact removal model to perform second characteristic extraction so as to obtain first high-dimensional characteristic information, wherein the downsampling tower comprises a plurality of groups of alternately arranged double-domain residual blocks and first double-domain convolution blocks;
The conversion submodule is used for inputting the first high-dimensional characteristic information into the second double-domain convolution block to sequentially perform Fourier transformation, complex to real conversion, convolution operation, real to complex conversion and inverse Fourier transformation so as to obtain second high-dimensional characteristic information;
And the restoration submodule is used for sequentially inputting the second high-dimensional characteristic information into the up-sampling tower and the second convolution block to perform characteristic restoration so as to obtain artifact-removed image data.
In a third aspect, the present invention provides a computer, including a memory, a processor, and a computer program stored in the memory and executable on the processor, where the processor implements the method for removing a nuclear magnetic resonance image artifact as described above when executing the computer program.
In a fourth aspect, the present invention provides a storage medium having a computer program stored thereon, the computer program, when executed by a processor, implementing a method for removing a nuclear magnetic resonance image artifact as described above.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present invention, the drawings that are needed in the embodiments or the description of the prior art will be briefly introduced below, and it is obvious that the drawings in the following description are only some embodiments of the present invention, and that other drawings can be obtained according to these drawings without inventive effort for a person skilled in the art.
Fig. 1 is a flowchart of a method for removing artifacts in a nuclear magnetic resonance image according to a first embodiment of the present invention;
Fig. 2 is a structural diagram of a first training model according to an embodiment of the present invention;
Fig. 3 is a structural diagram of a second training model according to an embodiment of the present invention;
Fig. 4 is a structural diagram of a small-parameter artifact removal model according to an embodiment of the present invention;
Fig. 5 is a comparison diagram before and after artifact removal according to the first embodiment of the present invention;
Fig. 6 is a block diagram of a system for removing artifacts in a nuclear magnetic resonance image according to a second embodiment of the present invention;
Fig. 7 is a schematic hardware structure of a computer according to another embodiment of the invention.
Embodiments of the present invention will be further described below with reference to the accompanying drawings.
Detailed Description
Embodiments of the present invention are described in detail below, examples of which are illustrated in the accompanying drawings, wherein like or similar reference numerals refer to like or similar elements or elements having like or similar functions throughout. The embodiments described below by referring to the drawings are exemplary and intended to illustrate embodiments of the invention and should not be construed as limiting the invention.
In the description of the embodiments of the present invention, it should be understood that the terms "length," "width," "upper," "lower," "front," "rear," "left," "right," "vertical," "horizontal," "top," "bottom," "inner," "outer," and the like indicate orientations or positional relationships based on the orientation or positional relationships shown in the drawings, merely to facilitate description of the embodiments of the present invention and simplify description, and do not indicate or imply that the devices or elements referred to must have a specific orientation, be configured and operated in a specific orientation, and thus should not be construed as limiting the present invention.
Furthermore, the terms "first," "second," and the like, are used for descriptive purposes only and are not to be construed as indicating or implying a relative importance or implicitly indicating the number of technical features indicated. Thus, a feature defining "a first" or "a second" may explicitly or implicitly include one or more such feature. In the description of the embodiments of the present invention, the meaning of "plurality" is two or more, unless explicitly defined otherwise.
In the embodiments of the present invention, unless explicitly specified and limited otherwise, the terms "mounted," "connected," "secured" and the like are to be construed broadly and include, for example, a permanent connection, a removable connection, or an integral formation; a mechanical or electrical connection; a direct connection or an indirect connection through an intermediate medium; and an internal communication between two elements or an interaction relationship between two elements. The specific meaning of the above terms in the embodiments of the present invention will be understood by those of ordinary skill in the art according to specific circumstances.
Example 1
In a first embodiment of the present invention, as shown in fig. 1, a method for removing artifacts in a nuclear magnetic resonance image includes:
s1, acquiring a historical artifact-free MRI data image, and carrying out normalization processing on the historical artifact-free MRI data image to obtain an artifact-free normalized image;
Specifically, the historical artifact-free MRI data images here may be obtained from multiple acquisitions of ordinary 2D MRI scan image data; the normalization formula used in this step is the common min-max (maximum-minimum) normalization formula.
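The following is a minimal sketch of such min-max normalization, assuming each slice is scaled independently to [0, 1] (the per-slice scaling range is an assumption; the text only states that a common maximum-minimum formula is used):

```python
import numpy as np

def min_max_normalize(image: np.ndarray) -> np.ndarray:
    """Scale an MRI slice to [0, 1] with min-max normalization."""
    lo, hi = float(image.min()), float(image.max())
    if hi == lo:                       # constant image: avoid division by zero
        return np.zeros_like(image, dtype=np.float32)
    return ((image - lo) / (hi - lo)).astype(np.float32)
```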
S2, carrying out artifact simulation on the artifact-free normalized image to obtain a simulated artifact image, and storing the artifact-free normalized image and the simulated artifact image into a training data set;
Specifically, the step S2 includes:
s21, carrying out Fourier transform on the artifact-free normalized image to obtain corresponding K space sequence data.
S22, performing rotation and translation on the K-space sequence data to obtain first processed image data, where the transformation acts on the K-space sequence data as a complex-valued function parameterized by a rotation centre, a first, second and third weight, rotation angles about three directions, displacement amounts along three directions, and the position of the sequence data within the three directions of the vector space;
Specifically, because artifacts degrade image quality and deform structures, simulation schemes of different intensities, that is, different rotation angles and displacements, can be adopted; the first processed image data is obtained after the original K-space sequence data is rotated and translated, and the specific artifacts simulated may be motion artifacts, cross artifacts, truncation artifacts, chemical shift artifacts, and the like.
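The exact transformation formula is not reproduced here; the sketch below shows one common way to approximate the described rotation-plus-translation perturbation of K-space lines for motion-artifact simulation. The per-line random angles, the phase-ramp translation model, and the fraction of corrupted phase-encoding lines are illustrative assumptions, not the patent's formula.

```python
import numpy as np
from scipy.ndimage import rotate

def simulate_motion_artifact(image, max_angle_deg=3.0, max_shift_px=2.0,
                             corrupt_fraction=0.3, seed=0):
    """Corrupt a fraction of phase-encoding lines with rotated/translated K-space data."""
    rng = np.random.default_rng(seed)
    kspace = np.fft.fftshift(np.fft.fft2(image))            # clean K-space
    h, w = image.shape
    ky = np.fft.fftshift(np.fft.fftfreq(h))[:, None]        # cycles/pixel along rows
    kx = np.fft.fftshift(np.fft.fftfreq(w))[None, :]        # cycles/pixel along columns

    for row in rng.choice(h, size=int(corrupt_fraction * h), replace=False):
        angle = rng.uniform(-max_angle_deg, max_angle_deg)
        dy, dx = rng.uniform(-max_shift_px, max_shift_px, size=2)
        # rotation in the image domain corresponds to a rotation of K-space
        moved_k = np.fft.fftshift(np.fft.fft2(rotate(image, angle, reshape=False, order=1)))
        # translation in the image domain corresponds to a linear phase ramp in K-space
        moved_k *= np.exp(-2j * np.pi * (ky * dy + kx * dx))
        kspace[row, :] = moved_k[row, :]                     # splice in the corrupted line

    return np.abs(np.fft.ifft2(np.fft.ifftshift(kspace)))
```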
S23, randomly selecting a sequence data from the K space sequence data as reference data, selecting a plurality of sequence data as data to be spliced on two adjacent sides of the sequence data according to the position of the reference data in the sequence, and respectively intercepting part of images in the data to be spliced and the reference data and performing image splicing to obtain second processed image data;
Specifically, after the first processed image data is obtained, a stitching scheme is used to obtain the second processed image so as to preserve the diversity of the data as much as possible. For example, one piece of data is selected on each side of the reference data as data to be spliced; the data to be spliced on the left and right and the reference data are each divided into three equal parts; the left third is taken from the data to be spliced on the left of the reference data, the middle third from the reference data, and the right third from the data to be spliced on the right; the three parts are then stitched together to obtain the second processed image data. The number of pieces of data to be spliced can be increased sequentially to obtain a plurality of second processed image data.
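A small sketch of the three-way splice described above, assuming that each piece of "sequence data" is a 2D K-space array and that the equal-thirds split runs along the second axis (both are assumptions made only for illustration):

```python
import numpy as np

def splice_kspace(left: np.ndarray, reference: np.ndarray, right: np.ndarray) -> np.ndarray:
    """Take the left third from the left neighbour, the middle third from the reference,
    and the right third from the right neighbour, then stitch them together."""
    assert left.shape == reference.shape == right.shape
    w = reference.shape[1]
    a, b = w // 3, 2 * w // 3
    spliced = reference.copy()          # middle third stays from the reference
    spliced[:, :a] = left[:, :a]        # left third
    spliced[:, b:] = right[:, b:]       # right third
    return spliced
```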
S24, storing the first processed image data and the second processed image data into a data set to be processed, and performing inverse Fourier transform on the data in the data set to be processed to obtain the image set to be processed;
Specifically, after the data set to be processed is obtained, the data it contains are still sequence data in K-space, so they are converted back into image-domain sequences by inverse Fourier transform, yielding the image set to be processed.
S25, performing impurity removal and Gaussian blur processing on the image set to be processed to obtain a simulated artifact image;
Wherein, the step S25 includes:
S251, carrying out abnormal screening on the image set to be processed to obtain a screened image set;
in particular, this step is mainly used to screen out some obviously unreasonable simulated images.
S252, placing the discrete approximation of the Gaussian function on each pixel point of each image in the screening image set to obtain discrete pixel points;
Specifically, the discrete approximation of the Gaussian function is typically a two-dimensional matrix.
S253, multiplying the discrete pixel points by pixel values of the pixel points in the neighborhood range of the discrete pixel points element by element to obtain a plurality of pixel products.
And S254, adding the pixel products to obtain a fuzzy value of the discrete pixel points, and adding Gaussian blur for each pixel point of each image in the screening image set based on the fuzzy value to obtain a simulation artifact image.
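A sketch of steps S252-S254 under the assumption of a small, odd-sized discrete Gaussian kernel and reflective border padding (kernel size and sigma are illustrative choices, not values stated in the patent):

```python
import numpy as np

def gaussian_kernel(size: int = 5, sigma: float = 1.0) -> np.ndarray:
    """Discrete approximation of a 2D Gaussian, normalized to sum to 1 (S252)."""
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    k = np.exp(-(xx ** 2 + yy ** 2) / (2.0 * sigma ** 2))
    return k / k.sum()

def gaussian_blur(image: np.ndarray, size: int = 5, sigma: float = 1.0) -> np.ndarray:
    """Multiply the kernel element-wise with each pixel's neighbourhood and sum (S253-S254)."""
    k = gaussian_kernel(size, sigma)
    pad = size // 2
    padded = np.pad(image, pad, mode="reflect")
    out = np.empty_like(image, dtype=np.float32)
    for i in range(image.shape[0]):
        for j in range(image.shape[1]):
            out[i, j] = np.sum(padded[i:i + size, j:j + size] * k)
    return out
```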
S3, constructing a first training model, and inputting the training data set into the first training model for training to obtain a large-parameter artifact removal model;
As shown in fig. 2, the first training model includes an LLM model and a VQ-GAN model. The LLM model mainly corresponds to the quantizer in the figure and is mainly used to add more medical-image information to the high-dimensional information output by the encoder, so that the result recovered by the decoder is significantly improved. The VQ-GAN model mainly corresponds to the encoder and decoder in the figure: the encoder encodes the image to generate high-dimensional perceptual features in a continuous space, and the decoder performs feature recovery on the high-dimensional features. The medical large model shown in the figure is specifically the visual-language large model PubMedCLIP, which is based on a public medical knowledge base, and the LLM Codebook is the codebook.
Wherein, the step S3 includes:
S31, constructing an LLM model and a VQ-GAN model, inserting the LLM model between an encoder and a decoder in the VQ-GAN model to obtain a first training model, inputting the training data set into the VQ-GAN model, and encoding the training data set through the encoder in the VQ-GAN model to obtain high-dimensional perception features in a continuous space;
Specifically, the first training process needs to be carried out in a continuous feature space and a discrete token-embedding space. It is implemented by a pre-trained Large Language Model (LLM)-guided MRI (nuclear magnetic resonance) auto-encoder trained on artifact-free images; the auto-encoder is built on a VQ-GAN architecture consisting of an encoder, a decoder and a quantizer, and the VQ-GAN is a generative model that incorporates data-compression techniques.
S32, calculating the similarity among the data in the training data set through the medical large model PubMedCLIP to obtain a plurality of candidate words, and storing the plurality of candidate words into a codebook;
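As a hedged illustration of how a CLIP-style medical model could score candidate words against training images, the sketch below uses the Hugging Face CLIP classes; the checkpoint path, the candidate vocabulary, and the top-k selection are placeholders, since this text does not specify how PubMedCLIP is invoked or how the candidate words are chosen.

```python
import torch
from transformers import CLIPModel, CLIPProcessor

# Placeholder path -- substitute the actual PubMedCLIP weights.
model = CLIPModel.from_pretrained("path/to/pubmedclip")
processor = CLIPProcessor.from_pretrained("path/to/pubmedclip")

def rank_candidate_words(image, candidate_words, top_k=32):
    """Score candidate words against an MRI slice and keep the best top_k for the codebook."""
    inputs = processor(text=candidate_words, images=image,
                       return_tensors="pt", padding=True)
    with torch.no_grad():
        out = model(**inputs)
    sims = out.logits_per_image.squeeze(0)        # similarity of the image to each word
    best = torch.topk(sims, k=min(top_k, len(candidate_words)))
    return [candidate_words[i] for i in best.indices.tolist()]
```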
Specifically, regarding the codebook: in the common encoder + decoder structure, the decoder directly restores the high-dimensional features obtained by the encoder. After the quantizer is added, the high-dimensional features output by the encoder are first processed by the quantizer, that is, they are discretized into a discrete representation. Using the quantizer obtained after pre-training is equivalent to adding more medical-image information to the high-dimensional information output by the encoder, so the result restored by the decoder is significantly improved.
S33, replacing the codebook with the pre-trained fixed codebook in the LLM model to perform discretization processing on the high-dimensional perception features and obtain discretized features; inputting the discretized features into the decoder for restoration to obtain high-dimensional restoration features; and performing iterative training on the first training model based on the high-dimensional restoration features, the high-dimensional perception features and a pyramid semantic loss to obtain the large-parameter artifact removal model, where the pyramid semantic loss is defined in terms of the similarity scores computed by the medical large model PubMedCLIP, the text embedding of each embedding layer, the function that generates text embeddings from tokens, the codebook, the number of embedding layers, the similarity of each training sample, and the candidate words in the codebook.
Specifically, the codebook learned in the above steps is replaced by a pre-trained fixed codebook derived from the LLM model; this codebook is built by computing similarity scores with PubMedCLIP to form different candidate words, so the resulting fixed codebook carries information closer to medical images.
At the same time, hierarchical semantic information is learned in the quantizer using the pyramid semantic loss. This is done inside the quantizer, where the high-dimensional perceptual features in continuous space go through nearest-neighbour lookups against the codebook; without this loss, the discrete representation obtained by the quantizer would lack specific semantic information.
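A minimal sketch of the nearest-neighbour lookup a quantizer performs against a fixed codebook is shown below; the feature layout, the distance metric, and the straight-through gradient trick are assumptions, and the patent's quantizer may differ.

```python
import torch

def quantize(z: torch.Tensor, codebook: torch.Tensor):
    """Map continuous features z of shape (N, D) to their nearest codebook entries (K, D)."""
    dists = torch.cdist(z, codebook)          # (N, K) pairwise distances
    indices = dists.argmin(dim=1)             # nearest candidate word for each feature
    z_q = codebook[indices]                   # discretized features
    # straight-through estimator so gradients still reach the encoder
    z_q = z + (z_q - z).detach()
    return z_q, indices
```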
S4, constructing a second training model based on the large-parameter artifact removal model and parameters thereof, and inputting the training data set into the second training model for training to obtain a small-parameter artifact removal model;
As shown in fig. 3, the encoder is the encoder from the first training model. Specifically, after obtaining the LLM-guided MRI automatic encoder capable of encoding MRI images into continuous perceptual features and discrete text labels, the dabs loss is defined and the frozen automatic encoder is used to compute it on the artifact-free image and the artifact image, thereby training the network to obtain the final small-parameter artifact removal model. In other words, the encoder and quantizer obtained by training the first training model are used to train a model with a smaller number of parameters, yielding the final small-parameter artifact removal model.
wherein, the step S4 includes:
S41, constructing an MRI automatic encoder capable of encoding an MRI image into continuous perception features and discrete text marks through the first training model so as to obtain a second training model.
Specifically, the MRI automatic encoder is the encoder in the training of the first training model.
S42, performing loss calculation on the training data set through the frozen MRI automatic encoder to obtain the dabs loss, which is computed from the data in the continuous space of the artifact-free normalized image and of the simulated artifact image output by the frozen MRI automatic encoder, together with the corresponding data in the discrete space obtained through the quantizer.
S43, carrying out a weighted sum of the dabs loss, the L1 loss and the perceptual loss to obtain a final loss function, and carrying out iterative training on the second training model based on the final loss function to obtain the small-parameter artifact removal model.
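The sketch below illustrates one plausible reading of S42-S43: the dabs term penalizes the distance between the frozen encoder's continuous and discrete features for the network output and the artifact-free target, and the three terms are combined by a weighted sum. The weights, the use of L1 distances inside the dabs term, and the argument layout are assumptions rather than the patent's exact formula.

```python
import torch
import torch.nn.functional as F

def final_loss(pred, target, frozen_encoder, quantizer,
               w_dabs=1.0, w_l1=1.0, w_perc=0.1, perceptual_fn=None):
    """Weighted sum of dabs loss, L1 loss and perceptual loss (weights are assumptions)."""
    with torch.no_grad():                    # clean-image features never need gradients
        z_clean = frozen_encoder(target)
        q_clean = quantizer(z_clean)
    z_art = frozen_encoder(pred)             # encoder weights stay frozen, but gradients
    q_art = quantizer(z_art)                 # still flow back into the de-artifact network
    dabs = F.l1_loss(z_art, z_clean) + F.l1_loss(q_art, q_clean)
    l1 = F.l1_loss(pred, target)
    perc = perceptual_fn(pred, target) if perceptual_fn is not None else pred.new_zeros(())
    return w_dabs * dabs + w_l1 * l1 + w_perc * perc
```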
S5, acquiring a test data set, and inputting the test data set into the small-parameter artifact removal model to perform artifact removal processing so as to obtain artifact removal image data.
Wherein, the step S5 includes:
As shown in fig. 4, S51, a test data set is acquired, the test data set is input into a residual block in the small-parameter artifact removal model to perform a first feature extraction, so as to obtain a feature image, and the small-parameter artifact removal model sequentially includes a first convolution block, a downsampling tower, a second double-domain convolution block, an upsampling tower, and a second convolution block.
S52, inputting the characteristic image into a downsampling tower in the small-parameter artifact removal model for second characteristic extraction to obtain first high-dimensional characteristic information, wherein the downsampling tower comprises a plurality of groups of alternately arranged double-domain residual blocks and first double-domain convolution blocks.
Residual connections are added between the double-domain residual blocks and the first double-domain convolution blocks. The downsampling tower and the upsampling tower each include several groups of alternately arranged double-domain residual blocks and first double-domain convolution blocks, and the two towers are connected to each other by skip connections. For the first convolution block, frequency-domain information is obtained by Fourier transform and image-domain information by inverse Fourier transform; the frequency-domain information is a vector matrix whose complex values have their real and imaginary parts mapped onto two channels, and the corresponding frequency-domain feature map is obtained by convolution. For the second convolution block the procedure is the reverse.
And S53, inputting the first high-dimensional characteristic information into the second double-domain convolution block to sequentially perform Fourier transformation, complex to real conversion, convolution operation, real to complex conversion and inverse Fourier transformation so as to obtain second high-dimensional characteristic information.
And S54, sequentially inputting the second high-dimensional characteristic information into an up-sampling tower and a second convolution block for characteristic recovery so as to obtain artifact-removed image data.
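A sketch of the frequency-domain processing chain of S53: Fourier transform, complex-to-real conversion onto two channel groups, convolution, real-to-complex conversion, and inverse Fourier transform. The channel layout and kernel size are assumptions made for illustration.

```python
import torch
import torch.nn as nn

class FrequencyDomainConv(nn.Module):
    """FFT -> complex-to-real (2C channels) -> conv -> real-to-complex -> inverse FFT."""
    def __init__(self, channels: int, kernel_size: int = 3):
        super().__init__()
        self.conv = nn.Conv2d(2 * channels, 2 * channels, kernel_size,
                              padding=kernel_size // 2)

    def forward(self, x: torch.Tensor) -> torch.Tensor:      # x: (B, C, H, W), real-valued
        k = torch.fft.fft2(x)                                 # Fourier transform
        real_imag = torch.cat([k.real, k.imag], dim=1)        # complex -> real: 2C channels
        real_imag = self.conv(real_imag)                      # convolution in the frequency domain
        real, imag = torch.chunk(real_imag, 2, dim=1)         # real -> complex
        return torch.fft.ifft2(torch.complex(real, imag)).real  # inverse Fourier transform
```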
Specifically, as shown in fig. 5, the comparison demonstrates that the nuclear magnetic resonance image artifact removal method provided by the invention can effectively remove artifacts in nuclear magnetic resonance images and thus preserve the authenticity of the result.
Compared with the prior art, the nuclear magnetic resonance image artifact removal method provided by the embodiment of the invention has the following beneficial effects. First, a historical artifact-free MRI data image is acquired and normalized to obtain an artifact-free normalized image. Artifact simulation is then performed on the artifact-free normalized image to obtain a simulated artifact image, and the artifact-free normalized image and the simulated artifact image are stored in a training data set. A first training model is constructed and the training data set is input into it for training to obtain a large-parameter artifact removal model; a second training model is then constructed based on the first training model, and the training data set is input into it for training to obtain a small-parameter artifact removal model. Finally, a test data set is acquired and input into the small-parameter artifact removal model for artifact removal processing to obtain artifact-removed image data. In this way, artifacts in nuclear magnetic resonance images can be effectively removed or reduced, and the method has strong universality while preserving the authenticity of the result.
Example two
As shown in fig. 6, in a second embodiment of the present invention, there is provided a nuclear magnetic resonance image artifact removal system, which adopts the nuclear magnetic resonance image artifact removal method according to the first embodiment; the system includes:
the processing module 1 is used for acquiring a historical artifact-free MRI data image, and carrying out normalization processing on the historical artifact-free MRI data image to obtain an artifact-free normalized image;
The simulation module 2 is used for carrying out artifact simulation on the artifact-free normalized image to obtain a simulated artifact image, and storing the artifact-free normalized image and the simulated artifact image into a training data set;
the first training module 3 is used for constructing a first training model, and inputting the training data set into the first training model for training to obtain a large-parameter artifact removal model;
the second training module 4 is configured to construct a second training model based on the large-parameter artifact removal model and parameters thereof, and input the training data set into the second training model for training to obtain a small-parameter artifact removal model;
And the artifact removal module 5 is used for acquiring a test data set, and inputting the test data set into the small-parameter artifact removal model for artifact removal processing so as to obtain artifact removal image data.
The simulation module 2 includes:
The space transformation sub-module is used for carrying out Fourier transformation on the artifact-free normalized image so as to obtain corresponding K space sequence data;
A rotation-translation sub-module for performing rotation and translation on the K-space sequence data to obtain first processed image data, where the transformation acts on the K-space sequence data as a complex-valued function parameterized by a rotation centre, a first, second and third weight, rotation angles about three directions, displacement amounts along three directions, and the position of the sequence data within the three directions of the vector space;
The splicing sub-module is used for randomly selecting one piece of sequence data from the K-space sequence data as reference data, selecting a plurality of pieces of sequence data on the two sides adjacent to the reference data as data to be spliced according to the position of the reference data in the sequence, and respectively intercepting part of the images in the data to be spliced and the reference data and splicing the images to obtain second processed image data;
the inverse transformation submodule is used for storing the first processed image data and the second processed image data into a data set to be processed, and performing inverse Fourier transformation on the data in the data set to be processed to obtain the image set to be processed;
and the blurring sub-module is used for carrying out impurity removal and Gaussian blurring on the image to be processed so as to obtain a simulation artifact image.
The blurring submodule includes:
the screening unit is used for carrying out abnormal screening on the image set to be processed so as to obtain a screened image set;
A discrete unit for placing a discrete approximation of a Gaussian function on each pixel of each image in the set of screening images to obtain discrete pixels;
the multiplication unit is used for multiplying the discrete pixel points by the pixel values of the pixel points in the neighborhood range of the discrete pixel points element by element to obtain a plurality of pixel products;
And the accumulation unit is used for adding the pixel products to obtain a fuzzy value of the discrete pixel points, and adding Gaussian blur to each pixel point of each image in the screening image set based on the fuzzy value to obtain a simulation artifact image.
The first training module 3 comprises:
The encoding submodule is used for constructing an LLM model and a VQ-GAN model, inserting the LLM model between an encoder and a decoder in the VQ-GAN model to obtain a first training model, inputting the training data set into the VQ-GAN model, and encoding the training data set through the encoder in the VQ-GAN model to obtain high-dimensional perception features in a continuous space;
The codebook submodule is used for calculating the similarity among the data in the training data set through the medical large model PubMedCLIP to obtain a plurality of candidate words, and storing the plurality of candidate words into a codebook;
The first training sub-module is used for performing discretization processing on the high-dimensional perception features by replacing the codebook with the pre-trained fixed codebook in the LLM model to obtain discretized features, inputting the discretized features into the decoder for restoration to obtain high-dimensional recovery features, and performing iterative training on the first training model based on the high-dimensional recovery features, the high-dimensional perception features and a pyramid semantic loss to obtain a large-parameter artifact removal model, where the pyramid semantic loss is defined in terms of the similarity scores computed by the medical large model PubMedCLIP, the text embedding of each embedding layer, the function that generates text embeddings from tokens, the codebook, the number of embedding layers, the similarity of each training sample, and the candidate words in the codebook.
The second training module 4 comprises:
A construction sub-module for constructing an MRI automatic encoder capable of encoding an MRI image into continuous perception features and discrete text labels through the first training model to obtain a second training model;
A loss sub-module for performing loss calculation on the training data set through the frozen MRI automatic encoder to obtain the dabs loss, which is computed from the data in the continuous space of the artifact-free normalized image and of the simulated artifact image output by the frozen MRI automatic encoder, together with the corresponding data in the discrete space obtained through the quantizer;
A second training sub-module for carrying out a weighted sum of the dabs loss, the L1 loss and the perceptual loss to obtain a final loss function, and carrying out iterative training on the second training model based on the final loss function to obtain a small-parameter artifact removal model.
The de-artifacting module 5 includes:
The first extraction submodule is used for acquiring a test data set, inputting the test data set into a residual block in the small-parameter artifact removal model to perform first feature extraction so as to obtain a feature image, and the small-parameter artifact removal model sequentially comprises a first convolution block, a downsampling tower, a second double-domain convolution block, an upsampling tower and a second convolution block;
The second extraction sub-module is used for inputting the characteristic image into a downsampling tower in the small-parameter artifact removal model to perform second characteristic extraction so as to obtain first high-dimensional characteristic information, wherein the downsampling tower comprises a plurality of groups of alternately arranged double-domain residual blocks and first double-domain convolution blocks;
The conversion submodule is used for inputting the first high-dimensional characteristic information into the second double-domain convolution block to sequentially perform Fourier transformation, complex to real conversion, convolution operation, real to complex conversion and inverse Fourier transformation so as to obtain second high-dimensional characteristic information;
And the restoration submodule is used for sequentially inputting the second high-dimensional characteristic information into the up-sampling tower and the second convolution block to perform characteristic restoration so as to obtain artifact-removed image data.
In other embodiments of the present invention, as shown in fig. 7, a computer is provided, which includes a memory 102, a processor 101, and a computer program stored in the memory 102 and executable on the processor 101, where the processor 101 implements the method for removing a nuclear magnetic resonance image artifact as described above when executing the computer program.
In particular, the processor 101 may include a Central Processing Unit (CPU), or an Application Specific Integrated Circuit (ASIC), or may be configured as one or more integrated circuits that implement embodiments of the present invention.
Memory 102 may include, among other things, mass storage for data or instructions. By way of example, and not limitation, memory 102 may comprise a hard disk drive (HDD), a floppy disk drive, a solid state drive (SSD), flash memory, an optical disk, a magneto-optical disk, a magnetic tape, or a universal serial bus (USB) drive, or a combination of two or more of these. Memory 102 may include removable or non-removable (or fixed) media, where appropriate. The memory 102 may be internal or external to the data processing apparatus, where appropriate. In a particular embodiment, the memory 102 is a non-volatile memory. In particular embodiments, memory 102 includes read-only memory (ROM) and random access memory (RAM). Where appropriate, the ROM may be a mask-programmed ROM, a programmable ROM (PROM), an erasable PROM (EPROM), an electrically erasable PROM (EEPROM), an electrically rewritable ROM (EAROM), or a FLASH memory, or a combination of two or more of these. The RAM may be a static random-access memory (SRAM) or a dynamic random-access memory (DRAM), where, as appropriate, the DRAM may be a fast page mode DRAM (FPMDRAM), an extended data out DRAM (EDODRAM), a synchronous DRAM (SDRAM), or the like.
Memory 102 may be used to store or cache various data files that need to be processed and/or communicated, as well as possible computer program instructions for execution by processor 101.
The processor 101 reads and executes the computer program instructions stored in the memory 102 to implement the above-described mri artifact removal method.
In some of these embodiments, the computer may also include a communication interface 103 and a bus 100. The processor 101, the memory 102, and the communication interface 103 are connected to each other by the bus 100 and perform communication with each other.
The communication interface 103 is used to implement communications between modules, devices, units, and/or units in embodiments of the invention. The communication interface 103 may also enable communication with other components such as: and the external equipment, the image/data acquisition equipment, the database, the external storage, the image/data processing workstation and the like are used for data communication.
Bus 100 includes hardware, software, or both, coupling the components of the computer device to each other. Bus 100 includes, but is not limited to, at least one of: a data bus, an address bus, a control bus, an expansion bus, a local bus. By way of example, and not limitation, bus 100 may comprise an Accelerated Graphics Port (AGP) or other graphics bus, an Extended Industry Standard Architecture (EISA) bus, a Front Side Bus (FSB), a HyperTransport (HT) interconnect, an Industry Standard Architecture (ISA) bus, an InfiniBand interconnect, a Low Pin Count (LPC) bus, a memory bus, a Micro Channel Architecture (MCA) bus, a Peripheral Component Interconnect (PCI) bus, a PCI-Express (PCI-X) bus, a Serial Advanced Technology Attachment (SATA) bus, a Video Electronics Standards Association Local Bus (VLB), or another suitable bus, or a combination of two or more of these. Bus 100 may include one or more buses, where appropriate. Although embodiments of the invention have been described and illustrated with respect to a particular bus, the invention contemplates any suitable bus or interconnect.
The computer can execute the nuclear magnetic resonance image artifact removal method based on the acquired nuclear magnetic resonance image artifact removal system, thereby realizing nuclear magnetic resonance image artifact removal.
In still other embodiments, in combination with the above-described nuclear magnetic resonance image artifact removal method, an embodiment of the present invention provides a storage medium on which a computer program is stored; when the computer program is executed by a processor, it implements the above-described nuclear magnetic resonance image artifact removal method.
Those of skill in the art will appreciate that the logic and/or steps represented in the flow diagrams or otherwise described herein, e.g., an ordered listing of executable instructions for implementing logical functions, can be embodied in any computer-readable medium for use by or in connection with an instruction execution system, apparatus, or device, such as a computer-based system, a processor-containing system, or another system that can fetch the instructions from the instruction execution system, apparatus, or device and execute them. For the purposes of this description, a "computer-readable medium" can be any means that can contain, store, communicate, propagate, or transport the program for use by or in connection with the instruction execution system, apparatus, or device.
More specific examples (a non-exhaustive list) of the computer-readable medium would include the following: an electrical connection (electronic device) having one or more wires, a portable computer diskette (magnetic device), a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber device, and a portable compact disc read-only memory (CD-ROM). In addition, the computer-readable medium may even be paper or another suitable medium on which the program is printed, as the program can be electronically captured, via, for instance, optical scanning of the paper or other medium, then compiled, interpreted or otherwise processed in a suitable manner if necessary, and then stored in a computer memory.
It is to be understood that portions of the present invention may be implemented in hardware, software, firmware, or a combination thereof. In the above-described embodiments, the various steps or methods may be implemented in software or firmware stored in a memory and executed by a suitable instruction execution system. For example, if implemented in hardware, as in another embodiment, they may be implemented using any one of, or a combination of, the following techniques well known in the art: discrete logic circuits having logic gates for implementing logic functions on data signals, application-specific integrated circuits having suitable combinational logic gates, programmable gate arrays (PGAs), field-programmable gate arrays (FPGAs), and the like.
The technical features of the above-described embodiments may be combined arbitrarily. For brevity, not all possible combinations of these technical features are described; however, as long as a combination of technical features contains no contradiction, it should be considered within the scope of this description.
The above examples illustrate only a few embodiments of the invention; they are described in detail but are not to be construed as limiting the scope of the invention. It should be noted that those skilled in the art can make several variations and modifications without departing from the spirit of the invention, and these all fall within the scope of the invention. Accordingly, the scope of protection of the present invention is to be determined by the appended claims.
Claims (8)
1. A method for removing artifacts in a nuclear magnetic resonance image, comprising:
Acquiring a historical artifact-free MRI data image, and carrying out normalization processing on the historical artifact-free MRI data image to obtain an artifact-free normalized image;
Performing artifact simulation on the artifact-free normalized image to obtain a simulated artifact image, and storing the artifact-free normalized image and the simulated artifact image into a training data set;
Constructing a first training model, and inputting the training data set into the first training model for training to obtain a large-parameter artifact removal model;
Constructing a second training model based on the large-parameter artifact removal model and parameters thereof, and inputting the training data set into the second training model for training to obtain a small-parameter artifact removal model;
Acquiring a test data set, and inputting the test data set into the small-parameter artifact removal model to perform artifact removal processing so as to obtain artifact-removed image data;
The step of constructing a first training model, and inputting the training data set into the first training model for training to obtain a large-parameter artifact removal model comprises the following steps:
Constructing an LLM model and a VQ-GAN model, inserting the LLM model between an encoder and a decoder in the VQ-GAN model to obtain a first training model, inputting the training data set into the VQ-GAN model, and encoding the training data set through the encoder in the VQ-GAN model to obtain high-dimensional perception features in a continuous space;
calculating the similarity among the data in the training data set through the medical large model PubMedCLIP to obtain a plurality of candidate words, and storing the plurality of candidate words into a codebook;
Performing discretization processing on the high-dimensional perception features by replacing the pre-trained fixed codebook in the LLM model with the codebook to obtain discretized features, inputting the discretized features into the decoder for restoration to obtain high-dimensional restoration features, and performing iterative training on the first training model based on the high-dimensional restoration features, the high-dimensional perception features and a pyramid semantic loss to obtain a large-parameter artifact removal model, wherein the pyramid semantic loss is computed from: the similarity calculated by the medical large model PubMedCLIP, the text embedding of each embedding layer, the function by which a token generates a text embedding, the codebook, the number of embedding layers, the similarity of each training datum, and the candidate words in the codebook;
The step of constructing a second training model based on the large-parameter artifact removal model and parameters thereof, and inputting the training data set into the second training model for training to obtain a small-parameter artifact removal model comprises the following steps:
constructing, through the large-parameter artifact removal model and parameters thereof, an MRI autoencoder capable of encoding an MRI image into continuous perception features and discrete text tokens, so as to obtain a second training model;
Performing loss calculation on the training data set through the frozen MRI autoencoder to obtain the dabs loss, which is computed from: the continuous-space data of the artifact-free normalized images in the training data set output by the frozen MRI autoencoder, the continuous-space data of the simulated artifact images in the training data set output by the frozen MRI autoencoder, and the corresponding discrete-space data obtained from each of these through the quantizer;
Carrying out a weighted sum of the dabs loss, the L1 loss and the perception loss to obtain a final loss function, and carrying out iterative training on the second training model based on the final loss function to obtain a small-parameter artifact removal model.
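The second-stage training described in claim 1 combines the dabs loss with the L1 loss and the perception loss in a weighted sum. The following PyTorch sketch shows one plausible way such a combination could be assembled; the module names (teacher_encoder, quantizer, student, perceptual_net), the L2 form of the dabs term, and the weights are illustrative assumptions, not the patented implementation.

```python
# Hedged sketch of the distillation-style second-stage loss; names and weights are assumptions.
import torch
import torch.nn.functional as F

def second_stage_loss(clean_img, artifact_img, student, teacher_encoder,
                      quantizer, perceptual_net,
                      w_dabs=1.0, w_l1=1.0, w_perc=0.1):
    # Frozen teacher: continuous-space features of clean and simulated-artifact images.
    with torch.no_grad():
        z_clean = teacher_encoder(clean_img)      # continuous space
        z_art = teacher_encoder(artifact_img)
        q_clean = quantizer(z_clean)              # discrete space via the quantizer
        q_art = quantizer(z_art)

    # "dabs"-style term: pull the artifact image's features toward the clean ones
    # in both the continuous and the discrete space (assumed L2 form).
    dabs = F.mse_loss(z_art, z_clean) + F.mse_loss(q_art, q_clean)

    # Student prediction with pixel-level and perceptual terms.
    pred = student(artifact_img)
    l1 = F.l1_loss(pred, clean_img)
    perc = F.mse_loss(perceptual_net(pred), perceptual_net(clean_img))

    return w_dabs * dabs + w_l1 * l1 + w_perc * perc
```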
2. The method of removing artifacts from a nuclear magnetic resonance image according to claim 1, wherein said step of performing artifact simulation on said artifact-free normalized image to obtain a simulated artifact image comprises:
performing Fourier transform on the artifact-free normalized image to obtain corresponding K space sequence data;
performing rotational translation on the K space sequence data to obtain first processed image data, wherein the first processed image data is computed from: the K space sequence data, a complex-valued function, the rotation center, a first weight, a second weight and a third weight, the rotation angles in three directions, the displacement amounts in three directions, the position of the spatial sequence data, and the three directions of the vector space;
arbitrarily selecting one piece of sequence data from the K space sequence data as reference data, selecting, according to the position of the reference data in the sequence, a plurality of pieces of sequence data on the two adjacent sides of the reference data as data to be spliced, and respectively intercepting partial images from the data to be spliced and the reference data and splicing them to obtain second processed image data;
Storing the first processed image data and the second processed image data into a data set to be processed, and performing inverse Fourier transform on the data in the data set to be processed to obtain the image set to be processed;
and performing impurity removal and Gaussian blur processing on the image set to be processed to obtain a simulated artifact image.
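Claim 2 simulates motion artifacts by transforming the image to K space, applying a rotational translation, splicing sequence data from neighbouring positions, and transforming back. A minimal NumPy/SciPy sketch of that flow follows; the 2D rigid-motion stand-in, the corrupted-line fraction, and all parameter values are assumptions, and the three-direction weighted formulation of the claim is simplified here.

```python
# Hedged sketch of k-space motion-artifact simulation; parameters are illustrative only.
import numpy as np
from scipy.ndimage import rotate, shift

def simulate_motion_artifact(image, angle_deg=3.0, shift_px=(2, 0),
                             corrupt_fraction=0.3, seed=0):
    rng = np.random.default_rng(seed)
    # Clean k-space of the original image.
    k_clean = np.fft.fftshift(np.fft.fft2(image))
    # k-space of a rotated/translated copy, emulating subject motion.
    moved = shift(rotate(image, angle_deg, reshape=False, order=1), shift_px, order=1)
    k_moved = np.fft.fftshift(np.fft.fft2(moved))
    # Splice: replace a random subset of phase-encode lines with the "moved" lines.
    k_mixed = k_clean.copy()
    lines = rng.choice(image.shape[0], int(corrupt_fraction * image.shape[0]), replace=False)
    k_mixed[lines, :] = k_moved[lines, :]
    # Inverse FFT; the magnitude image now carries ghosting-like artifacts.
    return np.abs(np.fft.ifft2(np.fft.ifftshift(k_mixed)))
```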
3. The method of removing artifacts from a nuclear magnetic resonance image according to claim 2, wherein said step of performing impurity removal and Gaussian blur processing on said image set to be processed to obtain a simulated artifact image comprises:
carrying out anomaly screening on the image set to be processed to obtain a screened image set;
placing a discrete approximation of a Gaussian function on each pixel point of each image in the screened image set to obtain discrete pixel points;
multiplying the discrete pixel points element by element with the pixel values of the pixel points within their neighborhood range to obtain a plurality of pixel products;
and adding the pixel products to obtain a blurred value for each discrete pixel point, and adding Gaussian blur to each pixel point of each image in the screened image set based on the blurred value to obtain a simulated artifact image.
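Claim 3 describes the blur as centring a discrete Gaussian approximation on each pixel, multiplying it element-wise with the neighbourhood, and summing the products to obtain the blurred value. A minimal NumPy sketch under those assumptions follows; the kernel size and sigma are illustrative choices.

```python
# Hedged sketch of per-pixel Gaussian blurring; kernel size and sigma are assumptions.
import numpy as np

def gaussian_kernel(size=5, sigma=1.0):
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    k = np.exp(-(xx**2 + yy**2) / (2 * sigma**2))
    return k / k.sum()

def gaussian_blur(image, size=5, sigma=1.0):
    k = gaussian_kernel(size, sigma)
    pad = size // 2
    padded = np.pad(image, pad, mode="reflect")
    out = np.empty_like(image, dtype=float)
    for i in range(image.shape[0]):
        for j in range(image.shape[1]):
            patch = padded[i:i + size, j:j + size]
            out[i, j] = np.sum(patch * k)   # element-wise products, then sum
    return out
```

In practice the same result is obtained with a single 2D convolution; the explicit loop mirrors the per-pixel wording of the claim.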
4. The method of claim 1, wherein the step of acquiring a test data set, inputting the test data set into the small-parameter artifact removal model for artifact removal processing, and obtaining artifact-removed image data comprises:
obtaining a test data set, inputting the test data set into a residual block in the small-parameter artifact removal model to perform first feature extraction so as to obtain a feature image, wherein the small-parameter artifact removal model sequentially comprises a first convolution block, a downsampling tower, a second double-domain convolution block, an upsampling tower and a second convolution block;
inputting the characteristic image into a downsampling tower in the small-parameter artifact removal model for secondary characteristic extraction to obtain first high-dimensional characteristic information, wherein the downsampling tower comprises a plurality of groups of alternately arranged double-domain residual blocks and first double-domain convolution blocks;
inputting the first high-dimensional characteristic information into the second double-domain convolution block to sequentially perform Fourier transformation, complex-to-real conversion, convolution operation, real-to-complex conversion and inverse Fourier transformation so as to obtain second high-dimensional characteristic information;
And sequentially inputting the second high-dimensional characteristic information into an up-sampling tower and a second convolution block to perform characteristic recovery so as to obtain artifact-removed image data.
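The second dual-domain convolution block in claim 4 applies, in order, a Fourier transform, complex-to-real conversion, a convolution operation, real-to-complex conversion, and an inverse Fourier transform. The PyTorch sketch below shows one way such a block could look; the channel layout and kernel size are assumptions rather than the patented architecture.

```python
# Hedged sketch of a dual-domain (frequency-domain) convolution block.
import torch
import torch.nn as nn

class DualDomainConvBlock(nn.Module):
    def __init__(self, channels):
        super().__init__()
        # The convolution operates on stacked real and imaginary parts.
        self.conv = nn.Conv2d(2 * channels, 2 * channels, kernel_size=3, padding=1)

    def forward(self, x):                                    # x: (B, C, H, W), real-valued
        k = torch.fft.fft2(x)                                # Fourier transform
        real_imag = torch.cat([k.real, k.imag], dim=1)       # complex -> real channels
        real_imag = self.conv(real_imag)                     # convolution in frequency domain
        c = real_imag.shape[1] // 2
        k = torch.complex(real_imag[:, :c], real_imag[:, c:])  # real -> complex
        return torch.fft.ifft2(k).real                       # inverse Fourier transform
```

Operating on k-space channels in this way lets the block capture global, image-wide structure that a purely spatial convolution of the same kernel size cannot.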
5. A nuclear magnetic resonance image artifact removal system employing the nuclear magnetic resonance image artifact removal method of claim 1, the system comprising:
The processing module is used for acquiring a historical artifact-free MRI data image, and carrying out normalization processing on the historical artifact-free MRI data image to obtain an artifact-free normalized image;
the simulation module is used for carrying out artifact simulation on the artifact-free normalized image to obtain a simulated artifact image, and storing the artifact-free normalized image and the simulated artifact image into a training data set;
the first training module is used for constructing a first training model, and inputting the training data set into the first training model for training so as to obtain a large-parameter artifact removal model;
The second training module is used for constructing a second training model based on the large-parameter artifact removal model and parameters thereof, and inputting the training data set into the second training model for training so as to obtain a small-parameter artifact removal model;
And the artifact removal module is used for acquiring a test data set, and inputting the test data set into the small-parameter artifact removal model for artifact removal processing so as to obtain artifact removal image data.
6. The system of claim 5, wherein the de-artifacting module comprises:
The first extraction submodule is used for acquiring a test data set, inputting the test data set into a residual block in the small-parameter artifact removal model to perform first feature extraction so as to obtain a feature image, and the small-parameter artifact removal model sequentially comprises a first convolution block, a downsampling tower, a second double-domain convolution block, an upsampling tower and a second convolution block;
The second extraction sub-module is used for inputting the characteristic image into a downsampling tower in the small-parameter artifact removal model to perform second characteristic extraction so as to obtain first high-dimensional characteristic information, wherein the downsampling tower comprises a plurality of groups of alternately arranged double-domain residual blocks and first double-domain convolution blocks;
The conversion submodule is used for inputting the first high-dimensional characteristic information into the second double-domain convolution block to sequentially perform Fourier transformation, complex to real conversion, convolution operation, real to complex conversion and inverse Fourier transformation so as to obtain second high-dimensional characteristic information;
And the restoration submodule is used for sequentially inputting the second high-dimensional characteristic information into the up-sampling tower and the second convolution block to perform characteristic restoration so as to obtain artifact-removed image data.
7. A computer comprising a memory, a processor and a computer program stored on the memory and executable on the processor, wherein the processor implements the nuclear magnetic resonance image artifact removal method according to any one of claims 1 to 4 when the computer program is executed.
8. A storage medium having stored thereon a computer program which, when executed by a processor, implements the method of removing nuclear magnetic resonance image artifacts according to any one of claims 1 to 4.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202410902714.5A CN118447123B (en) | 2024-07-08 | 2024-07-08 | Nuclear magnetic resonance image artifact removal method and system |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202410902714.5A CN118447123B (en) | 2024-07-08 | 2024-07-08 | Nuclear magnetic resonance image artifact removal method and system |
Publications (2)
Publication Number | Publication Date |
---|---|
CN118447123A CN118447123A (en) | 2024-08-06 |
CN118447123B true CN118447123B (en) | 2024-09-13 |
Family
ID=92320218
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202410902714.5A Active CN118447123B (en) | 2024-07-08 | 2024-07-08 | Nuclear magnetic resonance image artifact removal method and system |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN118447123B (en) |
Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110276736A (en) * | 2019-04-01 | 2019-09-24 | 厦门大学 | A kind of magnetic resonance image fusion method based on weight prediction network |
CN114241070A (en) * | 2021-12-01 | 2022-03-25 | 北京长木谷医疗科技有限公司 | Method and device for removing metal artifacts from CT image and training model |
Family Cites Families (13)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2009134820A2 (en) * | 2008-04-28 | 2009-11-05 | Cornell University | Tool for accurate quantification in molecular mri |
US9294784B2 (en) * | 2009-03-12 | 2016-03-22 | Thomson Licensing | Method and apparatus for region-based filter parameter selection for de-artifact filtering |
CN107507148B (en) * | 2017-08-30 | 2018-12-18 | 南方医科大学 | Method based on the convolutional neural networks removal down-sampled artifact of magnetic resonance image |
US11763502B2 (en) * | 2018-08-06 | 2023-09-19 | Vanderbilt University | Deep-learning-based method for metal reduction in CT images and applications of same |
CN109741409A (en) * | 2018-11-30 | 2019-05-10 | 厦门大学 | Echo-planar imaging eddy current artifacts without reference scan bearing calibration |
EP3745153A1 (en) * | 2019-05-28 | 2020-12-02 | Koninklijke Philips N.V. | A method for motion artifact detection |
CN115867817A (en) * | 2020-07-31 | 2023-03-28 | 马克思-普朗克科学促进协会 | Method and apparatus for acquiring and reconstructing diffusion weighted magnetic resonance image sequences covering a volume |
CN115131452B (en) * | 2022-04-19 | 2024-11-08 | 腾讯医疗健康(深圳)有限公司 | Image processing method and device for artifact removal |
US20230337987A1 (en) * | 2022-04-21 | 2023-10-26 | The General Hospital Corporation | Detecting motion artifacts from k-space data in segmented magnetic resonance imaging |
CN116309910A (en) * | 2023-03-12 | 2023-06-23 | 上海大学 | Method for removing Gibbs artifacts of magnetic resonance images |
CN116671933A (en) * | 2023-06-06 | 2023-09-01 | 闽江学院 | Electroencephalogram artifact removing method based on convolutional neural network |
CN117115011A (en) * | 2023-07-25 | 2023-11-24 | 武汉理工大学 | De-artifact method and system based on deep unsupervised learning |
CN117115031A (en) * | 2023-08-31 | 2023-11-24 | 常州博恩中鼎医疗科技有限公司 | CBCT metal artifact removal method and system based on unpaired learning |
Patent Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110276736A (en) * | 2019-04-01 | 2019-09-24 | 厦门大学 | A kind of magnetic resonance image fusion method based on weight prediction network |
CN114241070A (en) * | 2021-12-01 | 2022-03-25 | 北京长木谷医疗科技有限公司 | Method and device for removing metal artifacts from CT image and training model |
Also Published As
Publication number | Publication date |
---|---|
CN118447123A (en) | 2024-08-06 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
Bao et al. | Undersampled MR image reconstruction using an enhanced recursive residual network | |
Al-Masni et al. | Stacked U-Nets with self-assisted priors towards robust correction of rigid motion artifact in brain MRI | |
CN111932460A (en) | MR image super-resolution reconstruction method and device, computer equipment and storage medium | |
Park et al. | Autoencoder-inspired convolutional network-based super-resolution method in MRI | |
CN115375711A (en) | Image segmentation method of global context attention network based on multi-scale fusion | |
KR102561214B1 (en) | A method and apparatus for image segmentation using global attention | |
CN112862805A (en) | Automatic auditory neuroma image segmentation method and system | |
CN113344991A (en) | Unsupervised medical image registration method and system based on deep learning | |
CN117611453A (en) | Nuclear magnetic resonance image super-resolution recovery method and model construction method | |
Sander et al. | Autoencoding low-resolution MRI for semantically smooth interpolation of anisotropic MRI | |
CN114066908A (en) | Method and system for brain tumor image segmentation | |
CN112990266A (en) | Method, device, equipment and storage medium for processing multi-modal brain image data | |
Lim et al. | Motion artifact correction in fetal MRI based on a Generative Adversarial network method | |
CN118447123B (en) | Nuclear magnetic resonance image artifact removal method and system | |
Zhang et al. | 3d cross-scale feature transformer network for brain mr image super-resolution | |
CN111462004B (en) | Image enhancement method and device, computer equipment and storage medium | |
CN117710754A (en) | Multi-mode magnetic resonance image generation method, system, equipment and medium based on generation countermeasure network | |
CN116705297A (en) | Carotid artery detector based on multiple information processing | |
JP7369572B2 (en) | MRI device, image processing device, and image processing method | |
CN115578285B (en) | Mammary gland molybdenum target image detail enhancement method and system | |
Xu et al. | Multi-modal brain MRI images enhancement based on framelet and local weights super-resolution | |
Zanzaney et al. | Super Resolution in Medical Imaging | |
Sigillo et al. | Generalizing Medical Image Representations via Quaternion Wavelet Networks | |
CN112508881A (en) | Intracranial blood vessel image registration method | |
CN118469821B (en) | Medical image volume super-resolution method based on diffusion model |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||