
CN115439423A - CT image-based identification method, device, equipment and storage medium - Google Patents

CT image-based identification method, device, equipment and storage medium Download PDF

Info

Publication number
CN115439423A
Authority
CN
China
Prior art keywords
image
nodule
detected
identification model
type
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202211006713.XA
Other languages
Chinese (zh)
Other versions
CN115439423B (en)
Inventor
张佳琦
高飞
安南
丁佳
吕晨翀
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Zhejiang Yizhun Intelligent Technology Co ltd
Original Assignee
Beijing Yizhun Medical AI Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Yizhun Medical AI Co Ltd filed Critical Beijing Yizhun Medical AI Co Ltd
Priority to CN202211006713.XA priority Critical patent/CN115439423B/en
Publication of CN115439423A publication Critical patent/CN115439423A/en
Application granted granted Critical
Publication of CN115439423B publication Critical patent/CN115439423B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/0002 Inspection of images, e.g. flaw detection
    • G06T7/0012 Biomedical image inspection
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/08 Learning methods
    • G06N3/084 Backpropagation, e.g. using gradient descent
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00 Image enhancement or restoration
    • G06T5/70 Denoising; Smoothing
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/764 Arrangements for image or video recognition or understanding using pattern recognition or machine learning using classification, e.g. of video objects
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10072 Tomographic images
    • G06T2207/10081 Computed x-ray tomography [CT]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20081 Training; Learning
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20084 Artificial neural networks [ANN]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/30 Subject of image; Context of image processing
    • G06T2207/30004 Biomedical image processing
    • G06T2207/30061 Lung
    • G06T2207/30064 Lung nodule
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02T CLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
    • Y02T10/00 Road transport of goods or passengers
    • Y02T10/10 Internal combustion engine [ICE] based vehicles
    • Y02T10/40 Engine management systems

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • General Health & Medical Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Computing Systems (AREA)
  • Medical Informatics (AREA)
  • Software Systems (AREA)
  • Evolutionary Computation (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Artificial Intelligence (AREA)
  • Quality & Reliability (AREA)
  • Biophysics (AREA)
  • Radiology & Medical Imaging (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Multimedia (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Biomedical Technology (AREA)
  • Databases & Information Systems (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Molecular Biology (AREA)
  • General Engineering & Computer Science (AREA)
  • Mathematical Physics (AREA)
  • Image Analysis (AREA)

Abstract

The present disclosure provides a CT image-based identification method, apparatus, device, and storage medium. An initial image is preprocessed to serve as the image to be detected, and the image to be detected is input into a nodule identification model to identify the nodule type in it, where the nodule identification model includes a preset deformation convolution kernel. Noise is added in a targeted manner during training of the nodule identification model to ensure the model's robustness to diverse target morphologies during prediction, effectively improving the discriminative power of the nodule identification model and the classification accuracy.

Description

CT image-based identification method, device, equipment and storage medium
Technical Field
The present disclosure relates to the field of image processing, and in particular, to a CT image-based identification method, apparatus, device, and storage medium.
Background
Computed Tomography (CT) uses a precisely collimated X-ray beam, gamma rays, ultrasonic waves, or the like, together with a highly sensitive detector, to scan cross sections of a part of the human body one by one. It has the advantages of fast scan times and clear images, and can be used in the examination of many diseases.
Pulmonary nodules are a common pulmonary disease, and early detection of pulmonary nodules plays a crucial role in the treatment and survival of patients. Ground glass nodules are lesions still in the process of growing, and their image features are very faint; in the prior art, no identification and detection scheme is designed specifically for ground glass nodules, so recognition is poor and accuracy is low.
Disclosure of Invention
The present disclosure provides a method, an apparatus, a device and a storage medium for identification based on CT images, so as to at least solve the above technical problems in the prior art.
According to a first aspect of the present disclosure, there is provided a CT image-based identification method, including:
preprocessing an initial image to be used as an image to be detected;
and inputting the image to be detected into the nodule identification model to identify the nodule type in the image to be detected, wherein the nodule identification model comprises a preset deformation convolution kernel.
In one embodiment, the preset deformed convolution kernel includes:
setting preset displacement and preset probability of each pixel point in the deformed convolution kernel;
and when the nodule identification model is trained, randomly generating the positions of all pixel points in the deformation convolution kernel according to the preset displacement and the preset probability so as to increase the nodule form in the training image.
In one possible embodiment, training the nodule identification model further comprises:
setting a window width variable value and a window level variable value;
continuously adjusting the window width variable value and the window level variable value through a back propagation algorithm in the process of training the nodule identification model to obtain a window width variable constant value and a window level variable constant value;
and adaptively adjusting the window value of the image input into the nodule identification model through the window width variable constant value and the window level variable constant value.
In an embodiment, the preprocessing of the initial image to be used as the image to be detected further includes:
judging the type of a reconstruction algorithm of the initial image;
and if the reconstruction algorithm type of the current initial image is judged to be the lung algorithm type, performing Gaussian smoothing processing on the current initial image to generate a soft algorithm type image.
In an embodiment, the determining the type of the reconstruction algorithm of the initial image includes:
acquiring header field information of the initial image;
determining a lung algorithm type when the header field information contains a specific keyword, wherein the specific keyword comprises: lung, chest, thorax, B70f, and B71f.
In an embodiment, the nodule identification model is of a 3D RetinaNet or 3D FCOS type, and accordingly, the inputting the image to be detected into the nodule identification model to identify the nodule type in the image to be detected includes:
inputting the image to be detected into the feature extraction layer to obtain a first feature image, wherein the feature extraction layer comprises the deformation convolution kernel;
inputting the first feature image into a feature pyramid layer to obtain a second feature image;
and inputting the second characteristic image into a detection layer to identify the nodule type in the image to be detected.
In an implementation manner, the inputting of the image to be detected into the feature extraction layer to obtain a first feature image includes:
determining the average position of each pixel point in the deformation convolution kernel as the current deformation convolution kernel according to the preset displacement and the preset probability;
and performing convolution calculation on each pixel point in the image to be detected according to the current deformation convolution kernel so as to obtain the first characteristic image.
According to a second aspect of the present disclosure, there is provided a CT image-based recognition apparatus, including:
the image processing module is used for preprocessing the initial image to be used as an image to be detected;
and the type identification module is used for inputting the image to be detected into the nodule identification model so as to identify the nodule type in the image to be detected, wherein the nodule identification model comprises a preset deformation convolution kernel.
In an embodiment, the type identification module is further configured to:
setting preset displacement and preset probability of each pixel point in the deformed convolution kernel;
and when the nodule identification model is trained, randomly generating the positions of all pixel points in the deformation convolution kernel according to the preset displacement and the preset probability so as to increase the nodule form in the training image.
In one embodiment, the nodule identification model further comprises:
setting a window width variable value and a window level variable value;
continuously adjusting the window width variable value and the window level variable value through a back propagation algorithm in the process of training the nodule identification model to obtain a window width variable constant value and a window level variable constant value;
and adaptively adjusting the window value of the image input into the nodule identification model through the window width variable constant value and the window level variable constant value.
In one embodiment, the image processing module is further configured to:
judging the type of a reconstruction algorithm of the initial image; and if the reconstruction algorithm type of the current initial image is judged to be the lung algorithm type, performing Gaussian smoothing on the current initial image to generate a soft algorithm type image to be detected.
In an embodiment, the image processing module is further configured to:
acquiring header field information of the initial image;
determining as a lung algorithm type when the header field information contains a specific keyword, wherein the specific keyword comprises: lung, chest, thorax, B70f and B71f.
In an embodiment, the nodule identification model is of a 3D RetinaNet or 3D FCOS type, and accordingly, the type identification module is specifically configured to:
inputting the image to be detected into the feature extraction layer to obtain a first feature image, wherein the feature extraction layer comprises the deformation convolution kernel;
inputting the first feature image into a feature pyramid layer to obtain a second feature image;
and inputting the second characteristic image into a detection layer to identify the nodule type in the image to be detected.
In an implementation manner, the type identification module is specifically configured to:
determining the average position of each pixel point in the deformation convolution kernel as the current deformation convolution kernel according to the preset displacement and the preset probability;
and performing convolution calculation on each pixel point in the image to be detected according to the current deformation convolution kernel to obtain the first feature image.
According to a third aspect of the present disclosure, there is provided an electronic device comprising:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein,
the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the methods of the present disclosure.
According to a fourth aspect of the present disclosure, there is provided a non-transitory computer readable storage medium having stored thereon computer instructions for causing the computer to perform the method of the present disclosure.
The CT image-based identification method, apparatus, device, and storage medium of the present disclosure preprocess an initial image to serve as the image to be detected, and input the image to be detected into a nodule identification model to identify the nodule type in it, where the nodule identification model includes a preset deformation convolution kernel. Noise is added in a targeted manner during training of the nodule identification model to ensure the model's robustness to diverse target morphologies during prediction, effectively improving the discriminative power of the nodule identification model and the classification accuracy.
It should be understood that the statements in this section do not necessarily identify key or critical features of the embodiments of the present disclosure, nor do they limit the scope of the present disclosure. Other features of the present disclosure will become apparent from the following description.
Drawings
The above and other objects, features and advantages of exemplary embodiments of the present disclosure will become readily apparent from the following detailed description read in conjunction with the accompanying drawings. Several embodiments of the present disclosure are illustrated by way of example, and not by way of limitation, in the figures of the accompanying drawings and in which:
in the drawings, the same or corresponding reference numerals indicate the same or corresponding parts.
Fig. 1A is a schematic flow chart illustrating an implementation of a CT image-based recognition method according to an embodiment of the present disclosure;
FIG. 1B is a schematic diagram of a ground glass nodule displayed under different reconstruction algorithms according to an embodiment of the present disclosure;
fig. 2A is a schematic flow chart illustrating an implementation of a CT image-based recognition method according to a second embodiment of the present disclosure;
FIG. 2B is a schematic diagram illustrating a 3x3 deformed convolution kernel under different conditions according to a second embodiment of the disclosure;
FIG. 2C is a schematic diagram of the operation between the deformation convolution kernel and an image according to the second embodiment of the disclosure;
fig. 2D illustrates an overall framework structure diagram of a CT image-based identification method according to a second embodiment of the disclosure;
fig. 3 is a schematic structural diagram illustrating a recognition apparatus based on CT images according to an embodiment of the present disclosure;
fig. 4 shows a schematic structural diagram of an electronic device according to an embodiment of the present disclosure.
Detailed Description
In order to make the objects, features and advantages of the present disclosure more obvious and understandable, the technical solutions in the embodiments of the present disclosure will be clearly and completely described below with reference to the drawings in the embodiments of the present disclosure. All other embodiments, which can be derived by a person skilled in the art from the embodiments disclosed herein without making any creative effort, shall fall within the protection scope of the present disclosure.
At present, deep learning algorithms for ground glass nodule detection on chest CT images can be divided into two-dimensional and three-dimensional technical directions. In the two-dimensional approach, for example, each image in a set of CT scan images undergoes ground glass detection computation layer by layer, or several adjacent images are combined into a multi-channel image for ground glass lesion detection, with three images commonly combined into a three-channel image. In the three-dimensional approach, for example, a set of CT images is treated as a whole voxel map for ground glass detection.
The prior art treatment of ground glass nodules, whether two-dimensional or three-dimensional, is generally divided into two steps:
1) Image preprocessing. In the image preprocessing stage, prior art schemes normalize the image to a standard lung window image (-1350 Hu to 150 Hu), following the habits of imaging physicians.
2) Deep learning modeling. In the deep learning modeling stage, the detection schemes applied in the prior art are all general-purpose detection methods.
As described above, the prior art is not optimized for the characteristics of ground glass nodules, so ground glass nodules are often missed because of their morphological features. The embodiments of the present disclosure provide a CT image-based identification method designed specifically for the characteristics of ground glass nodule images, which can effectively improve the detection accuracy of ground glass nodules.
Example one
Fig. 1A is a flowchart of a CT image-based recognition method according to an embodiment of the present disclosure, where the method may be performed by a CT image-based recognition apparatus according to an embodiment of the present disclosure, and the apparatus may be implemented in software and/or hardware. The method specifically comprises the following steps:
and S110, preprocessing the initial image to be used as an image to be detected.
The initial image may be a three-dimensional voxel image in which all CT tomographic images are stacked. The CT tomographic image is an image captured by a CT tomographic apparatus, and each image shows a cross-sectional image of a certain layer of the body. The initial image in this embodiment is a three-dimensional voxel image formed by stacking such a plurality of two-dimensional tomographic images.
The image to be detected is the image input into the nodule identification model. The resolution of scanned CT images varies with each person's size, height, and build. Therefore, to make it easier for the nodule identification model to recognize the image to be detected, in this embodiment the image to be detected undergoes a scale-normalizing transformation before being input into the nodule identification model; for example, all images may be adjusted to the same voxel spacing (a sketch of this step follows). Similarly, to reduce the training difficulty of the nodule identification model, the training images used to train it can be processed in the same way as the images to be detected, with their sizes unified.
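As an illustration only (the patent does not specify an implementation), voxel spacing can be unified by resampling the volume. The sketch below uses SciPy, and the 1 mm isotropic target spacing is an assumed value:

```python
import numpy as np
from scipy.ndimage import zoom

def resample_to_spacing(volume: np.ndarray,
                        spacing: tuple,
                        target_spacing: tuple = (1.0, 1.0, 1.0)) -> np.ndarray:
    """Resample a stacked CT volume (z, y, x) to a common voxel spacing.

    spacing: the scan's original voxel spacing in mm (e.g. from the Dicom header).
    target_spacing: assumed common spacing; 1 mm isotropic is illustrative only.
    """
    factors = [s / t for s, t in zip(spacing, target_spacing)]
    # Trilinear interpolation (order=1) keeps Hu values smooth without ringing.
    return zoom(volume, factors, order=1)
```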
In the embodiment of the disclosure, before unifying the sizes of the images to be detected, the initial images need to be preprocessed to generate the images to be detected with the same reconstruction algorithm. Therefore, the present embodiment needs to determine the type of the reconstruction algorithm of the initial image; and if the reconstruction algorithm type of the current initial image is judged to be the lung algorithm type, performing Gaussian smoothing on the current initial image to generate a soft algorithm type image to be detected.
Fig. 1B is a schematic diagram of a ground glass nodule displayed under different reconstruction algorithm conditions according to an embodiment of the present disclosure, including (a) a ground glass nodule image from a soft tissue reconstruction algorithm and (b) a ground glass nodule image from a lung reconstruction algorithm; both images were obtained from the same scan of the same person using the two different reconstruction algorithms.
As can be seen from image (a) in fig. 1B, a faint blurred region is present in the middle of the marked frame; this is a ground glass nodule, but it is not clearly shown in image (b). Thus the same scan can yield two images with different sharpening styles under different reconstruction algorithms, with visible differences in what is displayed. Therefore, in this embodiment, before the initial image is input into the nodule identification model, it is preprocessed to unify the CT image style into a soft algorithm type image to be detected; this not only aids training of the nodule identification model but also helps the model identify ground glass nodules in the image. Likewise, the same operation is performed on the training images from which the nodule identification model learns.
In an embodiment of the present disclosure, determining the type of the reconstruction algorithm of the initial image includes: acquiring header field information of an initial image; when the header field information contains a specific keyword, determining the type of the lung algorithm, wherein the specific keyword comprises: lung, chest, thorax, B70f, and B71f.
The header field information of the initial image can be obtained from the Digital Imaging and Communications in Medicine (Dicom) file read for the CT scan. The Dicom file follows a standardized transmission protocol for medical images and contains both the CT images of the patient scan and the header field information of those images. The header field information records the spatial position information of the CT image, the correspondence between that spatial position information and actual positions, and information such as the reconstruction algorithm type of the CT image.
Specifically, since the keywords of lung reconstruction algorithm images vary across manufacturers and brands, this embodiment may determine the reconstruction algorithm of the initial CT image from the SeriesDescription header field in the Dicom file.
Specifically, this embodiment may determine whether the header field information of the initial image contains a specific keyword, so as to judge whether the image is of the lung algorithm type. The specific keywords include: lung, chest, thorax, B70f, and B71f. That is, when an initial image contains one of these keywords, it can be determined to be a lung algorithm reconstruction image, and all other images are treated as soft tissue reconstruction algorithm images. If an image is judged to be of the lung algorithm type, Gaussian smoothing is applied; the smoothed image is less sharp and closer to the style of a soft tissue image. For example, this embodiment may convert the lung algorithm reconstruction image using a 3x3 Gaussian kernel with sigma = 1. If the image is judged to be a soft tissue algorithm reconstruction image, it is kept as is. In addition, in this embodiment, after the reconstruction algorithm types of the input images are unified, their sizes are unified.
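A minimal sketch of this preprocessing step is given below, assuming pydicom for header access and SciPy for smoothing; applying sigma = 1 only within each slice is an illustrative choice, since the embodiment describes a 3x3 (in-plane) Gaussian kernel:

```python
import numpy as np
import pydicom
from scipy.ndimage import gaussian_filter

# Specific keywords that mark a lung reconstruction algorithm (from the embodiment).
LUNG_KEYWORDS = ("lung", "chest", "thorax", "b70f", "b71f")

def is_lung_algorithm(dicom_path: str) -> bool:
    """Decide the reconstruction type from the SeriesDescription header field."""
    ds = pydicom.dcmread(dicom_path, stop_before_pixels=True)
    description = str(getattr(ds, "SeriesDescription", "")).lower()
    return any(key in description for key in LUNG_KEYWORDS)

def unify_style(volume: np.ndarray, lung_type: bool) -> np.ndarray:
    """Smooth lung-algorithm volumes toward the soft-tissue style; others pass through."""
    if lung_type:
        # sigma = 1 per in-plane axis, as in the embodiment; no smoothing across slices.
        return gaussian_filter(volume, sigma=(0, 1, 1))
    return volume
```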
And S120, inputting the image to be detected into the nodule identification model so as to identify the nodule type in the image to be detected.
Wherein, the nodule identification model comprises a preset deformation convolution kernel. The deformation convolution kernel is positioned in a feature extraction layer of the nodule identification model and is used for performing convolution operation with an image to be detected so as to form a first feature image.
The nodule identification model in this embodiment may be any neural network learning model having convolutional layers in its backbone network. Since this embodiment applies the deformation operation through the backbone's convolutional layers, which serve as the feature extraction layer, the only requirement is that the backbone contain convolutional layers; the specific type of model is not limited.
For example, the nodule identification model in this embodiment may adopt a 3D RetinaNet or 3D FCOS framework model for deep learning modeling, replacing the original residual network with a random deformed convolution residual network while keeping the rest of the network structure in the framework model unchanged. The random deformed convolution residual network serves as the feature extraction layer, with the convolution kernels in the original residual network replaced by random deformation convolution kernels to extract the first feature image. Alternatively, the nodule identification model of this embodiment may employ another network model having convolutional layers, such as DenseNet.
The nodule type in the image to be detected can be any type of lung nodule, including calcified, solid, sub-solid, and ground glass nodules. It should be noted that the nodule identification model provided in this embodiment is specially trained for targeted identification of these nodule types, in particular the ground glass nodule, which is the hardest to identify. Specifically, in this embodiment, after the image to be detected is input into the nodule identification model, the model can identify whether a ground glass nodule is present in the image.
In an embodiment of the present disclosure, constructing the nodule identification model includes: setting a window width variable value and a window level variable value; continuously adjusting the window width variable value and the window level variable value through a back propagation algorithm in the process of training the nodule identification model to obtain a window width variable constant value and a window level variable constant value; and adaptively adjusting the window value of the image in the input nodule identification model through the window width variable constant value and the window level variable constant value.
The window width is the range of CT values displayed in a CT image: tissues and lesions within this range are displayed in graded gray scale, while tissue structures with CT values above or below the range appear as white or black shadows. The window width directly affects the contrast and sharpness of the image. The window level is the midpoint between the upper and lower limits of the window width; under the same window width, different window levels cover different ranges of CT values.
The window width variable value and the window level variable value are two parameters added to the nodule identification model in this embodiment. The prior art generally normalizes images to the standard lung window (-1350 Hu to 150 Hu) for nodule classification; however, taking the ground glass nodule as an example, its density range is (-700 Hu to 400 Hu), so the lung window parameters are not the optimal window for observing ground glass nodules. Therefore, in this embodiment a window width variable value a and a window level variable value b are added to the nodule identification model, giving it an adaptive window capability. It should be noted that because nodule identification models use different types of neural networks, the window width variable value a and the window level variable value b differ between models.
Specifically, in this embodiment an adaptive window module (AWN) is designed into the nodule identification model. The AWN module introduces two parameters (a, b), which are learned jointly during training of the nodule identification model and have initial values a = 0 and b = 0. Before training begins, an initialized window value needs to be set; since the model in this embodiment is aimed at pulmonary nodule recognition, it may be set to WW = 1500 Hu, WL = -600 Hu, and the adaptively adjusted window value is defined as (WW · e^a, WL + b), where e is the base of the natural logarithm.
Specifically, in this embodiment a = b = 0 at the start of training the nodule identification model. After lung window images are input, as model training iterates, the difference between the model's predicted output and the true value is continually compared, and unreasonable parameters in the model are adjusted through the back propagation algorithm, so that the values of a and b ultimately change and the optimal viewing window comes to differ from the lung window. In this embodiment, the a and b values obtained after training are fixed as the window width variable constant value and the window level variable constant value, respectively, so that the adaptive window value (WW · e^a, WL + b) becomes a fixed parameter specially adapted to the nodule identification model, thereby improving its identification accuracy.
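To make the adaptive window concrete, here is a minimal PyTorch sketch of such an AWN module under the stated initial window (WW = 1500 Hu, WL = -600 Hu); scaling the windowed image to [0, 1] is an assumption for illustration, not a detail fixed by the embodiment:

```python
import torch
import torch.nn as nn

class AdaptiveWindowModule(nn.Module):
    """Learnable window: effective width WW * e^a and level WL + b, with a = b = 0 initially."""

    def __init__(self, ww: float = 1500.0, wl: float = -600.0):
        super().__init__()
        self.ww, self.wl = ww, wl
        self.a = nn.Parameter(torch.zeros(1))  # adjusts the window width multiplicatively
        self.b = nn.Parameter(torch.zeros(1))  # adjusts the window level additively

    def forward(self, hu: torch.Tensor) -> torch.Tensor:
        ww = self.ww * torch.exp(self.a)  # WW * e^a
        wl = self.wl + self.b             # WL + b
        low, high = wl - ww / 2, wl + ww / 2
        # Map the current window to [0, 1]; a and b are updated by backpropagation
        # together with the rest of the model, then frozen after training.
        return ((hu - low) / (high - low)).clamp(0.0, 1.0)
```

After training, a and b are frozen as the window width and window level variable constant values, so inference uses a fixed, model-specific window.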
The CT image-based identification method preprocesses an initial image to serve as the image to be detected, and inputs the image to be detected into a nodule identification model to identify the nodule type in it, where the nodule identification model includes a preset deformation convolution kernel. Noise is added in a targeted manner during training of the nodule identification model to ensure the model's robustness to diverse target morphologies during prediction, effectively improving the discriminative power of the nodule identification model and the classification accuracy.
Example two
Fig. 2A is a flowchart of a CT image-based identification method provided in the second embodiment of the present disclosure, where the nodule identification model is of a 3D RetinaNet or 3D FCOS type. Accordingly, the step of inputting the image to be detected into the nodule identification model to identify the nodule type in the image to be detected includes: inputting the image to be detected into the feature extraction layer to obtain a first feature image; inputting the first feature image into the feature pyramid layer to obtain a second feature image; and inputting the second feature image into the detection layer to identify the nodule type in the image to be detected. The method specifically includes the following steps:
s210, preprocessing the initial image to be used as an image to be detected.
S220, inputting the image to be detected into the feature extraction layer to obtain a first feature image.
The feature extraction layer is the backbone network layer that the image enters first in the nodule identification model. It contains multiple deformation convolution kernels; each pixel point in the image undergoes convolution calculation with these kernels to obtain a new image, which serves as the first feature image.
The first feature image may be one or more sets of feature images at different resolutions, depending on the training task. Specifically, the nodule identification model provided in this embodiment is of the 3D RetinaNet or 3D FCOS type, so the feature extraction layer has a random deformed convolution residual network structure.
In this embodiment of the present disclosure, before the extracting, by the feature extraction layer, the first feature image through the deformed convolution kernel, the method further includes: setting preset displacement and preset probability of each pixel point in a deformation convolution kernel; when the nodule identification model is trained, the positions of all pixel points in the deformation convolution kernel are randomly generated according to the preset displacement and the preset probability so as to increase the nodule form in the training image.
The convolution kernel is a coefficient matrix used to convolve the image to be detected, such as a two-dimensional 3x3 or 5x5 convolution, or a three-dimensional 3x3x3 or 5x5x5 convolution; the convolution kernel in this embodiment is a three-dimensional 3x3x3 convolution.
The preset displacement is a displacement set for realizing that the conventional convolution kernel generates the deformation convolution kernel, and the displacement can act on each pixel point in the convolution kernel, so that each pixel point can move randomly in each direction in a preset range. The preset probability is a probability value set for a moving direction of the preset displacement. It should be noted that the preset displacement and the preset probability in the present embodiment may be any values set according to empirical values. For convenience of understanding, the present embodiment is described by taking the two-dimensional 3 × 3 convolution kernel of fig. 2B as an example.
Fig. 2B is a schematic diagram of a 3x3 deformation convolution kernel under different conditions according to the second embodiment of the present disclosure, including (c) a schematic diagram of the initial positions of the 3x3 deformation convolution kernel; (d) a schematic diagram of offsets randomly sampled with the preset probability acting on the initial kernel positions during training; and (e) a schematic diagram of the convolution kernel obtained at prediction time by computing the average position from the preset displacement and the preset probability.
For example, in order to generate a random effect, the deformed convolution kernel in this embodiment does not keep the original position of the pixel points in the kernel. Specifically, for example, as shown in (d) in fig. 2B, the light dots are pixels at original positions, and the dark dots are pixels at randomly shifted positions. For example, taking a light gray pixel point at the upper left corner as an example, random movement in each direction from top to bottom and from left to right can be performed according to a preset displacement, and the preset probability can be set to be that the pixel point moves leftwards with a probability of 25%, moves upwards with a probability of 25%, moves rightwards with a probability of 25%, and moves downwards with a probability of 25%. It should be noted that the preset probability value provided in this embodiment is merely an example for illustration, and a specific numerical value thereof is not limited.
Specifically, this embodiment sets a preset displacement and a preset probability; when the nodule identification model is trained, the position of each pixel point in the deformation convolution kernel randomly undergoes varied positional deformation according to the preset displacement and the preset probability. Convolving the input CT image with the deformed convolution kernel therefore generates varied first feature images, which precisely increases the diversity of nodule morphologies in the training images provided to the nodule identification model; adding this noise during training ensures the model's robustness to diverse target morphologies during prediction.
It should be noted that in this embodiment the preset displacement and the preset probability are set in advance, before training, to increase the difficulty of training the nodule identification model and thereby reduce the difficulty of identifying the nodule type during actual prediction. After training is finished, however, it is no longer necessary to generate varied deformation convolution kernels through the preset displacement and the preset probability; it is only necessary to calculate the average position of each pixel point in the deformation convolution kernel.
In this embodiment of the present disclosure, extracting the first feature image through the deformation convolution kernel in the feature extraction layer includes: determining the average position of each pixel point in the deformation convolution kernel as the current deformation convolution kernel according to the preset displacement and the preset probability; and performing convolution calculation on each pixel point in the image to be detected according to the current deformation convolution kernel to obtain the first feature image.
As shown in (e) of fig. 2B, the light dots are the original pixel positions, and the dark dots are the pixel positions obtained by averaging according to the preset displacement and the preset probability. In actual operation, a pixel point in the deformation convolution kernel will not land exactly at the center of a pixel; its moved position can be anywhere within the surrounding region. The average position of each pixel point is therefore calculated from the preset displacement and the preset probability, the pixel points of the deformation convolution kernel are moved to these computed average positions to form the current deformation convolution kernel, and the current deformation convolution kernel is then used to convolve the image to be detected to obtain the first feature image. The convolution calculation process is shown in fig. 2C.
Fig. 2C is a schematic diagram of the operation between the deformation convolution kernel and an image according to the second embodiment of the present disclosure, which includes an original image 201, a current deformation convolution kernel 202, and a first feature image 203. The original image 201 is an image to be detected or a training image. Specifically, in this embodiment, each pixel point in the original image 201 undergoes convolution calculation with the current deformation convolution kernel 202, finally generating the first feature image 203.
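The following PyTorch sketch illustrates the random deformation idea in two dimensions for readability (the embodiment itself uses a 3x3x3 kernel). The one-pixel displacement and the 25% probability per direction are the example values from the text, and decomposing the convolution into per-tap shifted sums is an illustrative implementation choice, not the patent's own code:

```python
import random
import torch
import torch.nn.functional as F

def shift2d(img: torch.Tensor, dy: int, dx: int, pad: int = 2) -> torch.Tensor:
    """Return a zero-padded view whose value at (y, x) is img[y - dy, x - dx]."""
    h, w = img.shape[-2:]
    p = F.pad(img, (pad, pad, pad, pad))
    return p[..., pad - dy:pad - dy + h, pad - dx:pad - dx + w]

class RandomDeformedConv2d(torch.nn.Module):
    """3x3 convolution whose taps are randomly jittered during training.

    During training each tap moves one pixel up, down, left, or right, each with
    probability 0.25 (the example values from the text). At prediction time the
    taps sit at their average positions; for these symmetric probabilities the
    average coincides with the original grid, so eval uses the unjittered kernel.
    """

    def __init__(self, in_ch: int, out_ch: int):
        super().__init__()
        self.weight = torch.nn.Parameter(torch.randn(out_ch, in_ch, 3, 3) * 0.05)
        self.taps = [(dy, dx) for dy in (-1, 0, 1) for dx in (-1, 0, 1)]
        self.moves = [(-1, 0), (1, 0), (0, -1), (0, 1)]  # up, down, left, right

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        out = 0
        for k, (dy, dx) in enumerate(self.taps):
            if self.training:  # random positional deformation of this tap
                jy, jx = random.choice(self.moves)
                dy, dx = dy + jy, dx + jx
            neighbour = shift2d(x, -dy, -dx)      # the pixel this tap reads from
            w = self.weight[:, :, k // 3, k % 3]  # (out_ch, in_ch) weights for tap k
            out = out + torch.einsum("oc,nchw->nohw", w, neighbour)
        return out
```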
And S230, inputting the first characteristic image into the characteristic pyramid layer to obtain a second characteristic image.
The feature pyramid layer is a feature extractor for improving accuracy and speed, and any network model in the prior art can be used in the feature pyramid layer in the embodiment. The second characteristic image is an image with high-order abstract characteristics after the first characteristic image is subjected to fusion processing.
Specifically, since the first feature image is a group of feature images with different resolutions, the feature pyramid layer needs to perform fusion processing on the feature images with different resolutions to obtain a high-order abstract feature image, that is, a second feature image, which can display all the nodule forms, densities and the relationship with the surrounding tissues.
And S240, inputting the second characteristic image into a detection layer to identify the nodule type in the image to be detected.
The detection layer may be any network of detection heads for detection purposes, as long as the type of nodule in the second feature image can be detected.
Fig. 2D shows a schematic diagram of the overall framework of a CT image-based identification method according to the second embodiment of the present disclosure. Specifically, in this embodiment the initial image is preprocessed to unify its reconstruction algorithm and size, and the result serves as the image to be detected, which is then input into the nodule identification model. After entering the nodule identification model, the image to be detected first undergoes adaptive window processing to obtain the optimal viewing window; the windowed image is then passed in turn through the feature extraction layer, the feature pyramid layer, and the detection layer to identify the nodule type in the image to be detected. A minimal sketch of this composition follows.
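The sketch below is an assumed composition for illustration; each submodule stands in for the corresponding layer described above, and none of the names come from the patent itself:

```python
import torch.nn as nn

class NoduleRecognizer(nn.Module):
    """Illustrative wiring of the described stages; submodules are placeholders."""

    def __init__(self, awn: nn.Module, backbone: nn.Module,
                 fpn: nn.Module, head: nn.Module):
        super().__init__()
        self.awn = awn            # adaptive window module (learnable a, b)
        self.backbone = backbone  # feature extraction layer with deformation convolution kernels
        self.fpn = fpn            # feature pyramid layer
        self.head = head          # detection layer (RetinaNet/FCOS-style head)

    def forward(self, image):
        x = self.awn(image)        # adapt the viewing window
        feats = self.backbone(x)   # first feature image(s)
        fused = self.fpn(feats)    # second feature image(s)
        return self.head(fused)    # nodule locations and types
```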
The nodule identification model provided by this embodiment outperforms classic frameworks in the detection rate and stability of ground glass nodules; it can improve the recall rate of ground glass nodules while also avoiding the missed detection of obvious lesions that would otherwise be missed for unknown reasons.
EXAMPLE III
Fig. 3 is a schematic structural diagram of a recognition device based on a CT image according to an embodiment of the present disclosure, where the device specifically includes:
the image processing module 310 is configured to pre-process an initial image to serve as an image to be detected;
the type identification module 320 is configured to input the image to be detected into a nodule identification model to identify a nodule type in the image to be detected, where the nodule identification model includes a preset deformation convolution kernel.
In an embodiment, the type identification module 320 is further configured to: setting preset displacement and preset probability of each pixel point in a deformation convolution kernel; when the nodule identification model is trained, the positions of all pixel points in the deformation convolution kernel are randomly generated according to the preset displacement and the preset probability so as to increase the nodule form in the training image.
In one embodiment, the nodule identification model further comprises: setting a window width variable value and a window level variable value; continuously adjusting the window width variable value and the window level variable value through a back propagation algorithm in the process of training the nodule identification model to obtain a window width variable constant value and a window level variable constant value; and adaptively adjusting the window value of the image in the input nodule identification model through the window width variable constant value and the window level variable constant value.
In one embodiment, the image processing module 310 is further configured to: judge the type of the reconstruction algorithm of the initial image; and if the reconstruction algorithm type of the current initial image is judged to be the lung algorithm type, perform Gaussian smoothing on the current initial image to generate a soft algorithm type image to be detected.
In one embodiment, the image processing module 310 is further configured to: acquire the header field information of the initial image; and when the header field information contains a specific keyword, determine the image to be of the lung algorithm type, wherein the specific keywords include: lung, chest, thorax, B70f, and B71f.
In one embodiment, the nodule identification model is of a 3D RetinaNet or 3D FCOS type, and accordingly, the type identification module 320 is specifically configured to: input the image to be detected into the feature extraction layer to obtain a first feature image; input the first feature image into the feature pyramid layer to obtain a second feature image; and input the second feature image into the detection layer to identify the nodule type in the image to be detected.
In one embodiment, the type identification module 320 is specifically configured to: determining the average position of each pixel point in the deformation convolution kernel as the current deformation convolution kernel according to the preset displacement and the preset probability; and carrying out convolution calculation on each pixel point in the image to be detected according to the current deformation convolution kernel so as to obtain a first characteristic image.
Example four
The present disclosure also provides an electronic device and a readable storage medium according to an embodiment of the present disclosure.
FIG. 4 shows a schematic block diagram of an example electronic device 400 that may be used to implement embodiments of the present disclosure. Electronic devices are intended to represent various forms of digital computers, such as laptops, desktops, workstations, personal digital assistants, servers, blade servers, mainframes, and other appropriate computers. The electronic device may also represent various forms of mobile devices, such as personal digital assistants, cellular phones, smart phones, wearable devices, and other similar computing devices. The components shown herein, their connections and relationships, and their functions, are meant to be examples only, and are not intended to limit implementations of the disclosure described and/or claimed herein.
As shown in fig. 4, the apparatus 400 includes a computing unit 401 that can perform various appropriate actions and processes according to a computer program stored in a Read Only Memory (ROM) 402 or a computer program loaded from a storage unit 408 into a Random Access Memory (RAM) 403. In the RAM 403, various programs and data required for the operation of the device 400 can also be stored. The computing unit 401, ROM 402, and RAM 403 are connected to each other via a bus 404. An input/output (I/O) interface 405 is also connected to bus 404.
A number of components in device 400 are connected to I/O interface 405, including: an input unit 406 such as a keyboard, a mouse, or the like; an output unit 407 such as various types of displays, speakers, and the like; a storage unit 408, such as a magnetic disk, optical disk, or the like; and a communication unit 409 such as a network card, modem, wireless communication transceiver, etc. The communication unit 409 allows the device 400 to exchange information/data with other devices via a computer network, such as the internet, and/or various telecommunication networks.
Computing unit 401 may be a variety of general and/or special purpose processing components with processing and computing capabilities. Some examples of the computing unit 401 include, but are not limited to, a Central Processing Unit (CPU), a Graphics Processing Unit (GPU), various dedicated Artificial Intelligence (AI) computing chips, various computing units running machine learning model algorithms, a Digital Signal Processor (DSP), and any suitable processor, controller, microcontroller, and so forth. The calculation unit 401 executes the respective methods and processes described above, such as a CT image-based recognition method. For example, in some embodiments, the CT image-based identification method may be implemented as a computer software program tangibly embodied in a machine-readable medium, such as storage unit 408. In some embodiments, part or all of the computer program may be loaded and/or installed onto the device 400 via the ROM 402 and/or the communication unit 409. When the computer program is loaded into the RAM 403 and executed by the computing unit 401, one or more steps of the above described CT image based identification method may be performed. Alternatively, in other embodiments, the computing unit 401 may be configured to perform the CT image based identification method by any other suitable means (e.g. by means of firmware).
Various implementations of the systems and techniques described here above may be implemented in digital electronic circuitry, integrated circuitry, Field Programmable Gate Arrays (FPGAs), Application Specific Integrated Circuits (ASICs), Application Specific Standard Products (ASSPs), Systems on Chip (SOCs), Complex Programmable Logic Devices (CPLDs), computer hardware, firmware, software, and/or combinations thereof. These various embodiments may include: implementation in one or more computer programs that are executable and/or interpretable on a programmable system including at least one programmable processor, which may be special or general purpose, receiving data and instructions from, and transmitting data and instructions to, a storage system, at least one input device, and at least one output device.
Program code for implementing the methods of the present disclosure may be written in any combination of one or more programming languages. These program code may be provided to a processor or controller of a general purpose computer, special purpose computer, or other programmable data processing apparatus, such that the program code, when executed by the processor or controller, causes the functions/acts specified in the flowchart and/or block diagram to be performed. The program code may execute entirely on the machine, partly on the machine, as a stand-alone software package partly on the machine and partly on a remote machine or entirely on the remote machine or server.
In the context of this disclosure, a machine-readable medium may be a tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. The machine-readable medium may be a machine-readable signal medium or a machine-readable storage medium. A machine-readable medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. More specific examples of a machine-readable storage medium would include an electrical connection based on one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
To provide for interaction with a user, the systems and techniques described here can be implemented on a computer having: a display device (e.g., a CRT (cathode ray tube) or LCD (liquid crystal display) monitor) for displaying information to a user; and a keyboard and a pointing device (e.g., a mouse or a trackball) by which a user can provide input to the computer. Other kinds of devices may also be used to provide for interaction with a user; for example, feedback provided to the user can be any form of sensory feedback (e.g., visual feedback, auditory feedback, or tactile feedback); and input from the user may be received in any form, including acoustic, speech, or tactile input.
The systems and techniques described here can be implemented in a computing system that includes a back-end component (e.g., as a data server), or that includes a middleware component (e.g., an application server), or that includes a front-end component (e.g., a user computer having a graphical user interface or a web browser through which a user can interact with an implementation of the systems and techniques described here), or any combination of such back-end, middleware, or front-end components. The components of the system can be interconnected by any form or medium of digital data communication (e.g., a communication network). Examples of communication networks include: Local Area Networks (LANs), Wide Area Networks (WANs), and the Internet.
The computer system may include clients and servers. A client and server are generally remote from each other and typically interact through a communication network. The relationship of client and server arises by virtue of computer programs running on the respective computers and having a client-server relationship to each other. The server may be a cloud server, a server of a distributed system, or a server combined with a blockchain.
It should be understood that the various forms of flow shown above may be used, with steps reordered, added, or deleted. For example, the steps described in the present disclosure may be executed in parallel, sequentially, or in different orders, as long as the desired results of the technical solutions disclosed in the present disclosure can be achieved; no limitation is imposed herein.
Furthermore, the terms "first" and "second" are used for descriptive purposes only and are not to be construed as indicating or implying relative importance or implicitly indicating the number of technical features indicated. Thus, a feature defined as "first" or "second" may explicitly or implicitly include at least one such feature. In the description of the present disclosure, "a plurality" means two or more unless specifically limited otherwise.
The above description is only for the specific embodiments of the present disclosure, but the scope of the present disclosure is not limited thereto, and any person skilled in the art can easily conceive of the changes or substitutions within the technical scope of the present disclosure, and all the changes or substitutions should be covered within the scope of the present disclosure. Therefore, the protection scope of the present disclosure shall be subject to the protection scope of the claims.

Claims (10)

1. A CT image-based recognition method, the method comprising:
preprocessing an initial image to be used as an image to be detected;
and inputting the image to be detected into the nodule identification model to identify the nodule type in the image to be detected, wherein the nodule identification model comprises a preset deformation convolution kernel.
2. The method of claim 1, wherein the preset deformable convolution kernel comprises:
setting a preset displacement and a preset probability of each pixel point in the deformed convolution kernel;
and when the nodule identification model is trained, randomly generating the positions of all pixel points in the deformation convolution kernel according to the preset displacement and the preset probability so as to increase the nodule form in the training image.
3. The method of claim 2, wherein training the nodule identification model further comprises:
setting a window width variable and a window level variable;
continuously adjusting the window width variable and the window level variable through a back-propagation algorithm while training the nodule identification model, to obtain a fixed window width value and a fixed window level value;
and adaptively adjusting the window of each image input into the nodule identification model by using the fixed window width value and the fixed window level value.
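A minimal sketch of this trainable windowing, assuming PyTorch and a differentiable (sigmoid) approximation of the usual hard CT window; the initial values are typical lung-window defaults and the class name is hypothetical:

import torch
import torch.nn as nn

class LearnableWindow(nn.Module):
    def __init__(self, level=-600.0, width=1500.0):
        super().__init__()
        # Window level and width in Hounsfield units, registered as
        # parameters so back-propagation adjusts them with the model.
        self.level = nn.Parameter(torch.tensor(level))
        self.width = nn.Parameter(torch.tensor(width))

    def forward(self, hu):
        # Smooth 0-to-1 ramp across [level - width/2, level + width/2];
        # a differentiable stand-in for hard clipping.
        return torch.sigmoid(4.0 * (hu - self.level) / self.width)

After training converges, the learned level and width are frozen and serve as the fixed window values of the claim.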
4. The method of claim 3, wherein the preprocessing the initial image to obtain the image to be detected comprises:
determining the reconstruction algorithm type of the initial image;
and, if the reconstruction algorithm type of the current initial image is determined to be the lung algorithm type, performing Gaussian smoothing on the current initial image to generate a soft-algorithm-type image serving as the image to be detected.
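One plausible implementation of this Gaussian smoothing step, assuming SciPy; the sigma value is an assumed tuning parameter, not specified by the patent:

import numpy as np
from scipy.ndimage import gaussian_filter

def lung_to_soft(volume_hu, sigma=1.0):
    # Smooth a sharp lung-kernel reconstruction so its noise texture
    # approaches that of a soft-kernel reconstruction; `sigma` is in voxels.
    return gaussian_filter(volume_hu.astype(np.float32), sigma=sigma)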
5. The method of claim 4, wherein the determining the reconstruction algorithm type of the initial image comprises:
acquiring header field information of the initial image;
and determining the lung algorithm type when the header field information contains a specific keyword, wherein the specific keyword comprises: lung, chest, thorax, B70f, and B71f.
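Assuming the header field information is the DICOM header, the keyword test might look as follows; ConvolutionKernel (0018,1210) and BodyPartExamined (0018,0015) are plausible fields to inspect, though the patent does not name specific tags:

import pydicom

KEYWORDS = ("LUNG", "CHEST", "THORAX", "B70F", "B71F")

def is_lung_algorithm(dicom_path):
    # Read only the header; pixel data is not needed for this check.
    ds = pydicom.dcmread(dicom_path, stop_before_pixels=True)
    fields = (str(getattr(ds, "ConvolutionKernel", "")),
              str(getattr(ds, "BodyPartExamined", "")))
    text = " ".join(fields).upper()
    return any(keyword in text for keyword in KEYWORDS)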
6. The method according to claim 5, wherein the nodule identification model is of a 3D RetinaNet or 3D FCOS type, and the inputting the image to be detected into the nodule identification model to identify the nodule type in the image to be detected comprises:
inputting the image to be detected into a feature extraction layer to obtain a first feature image, wherein the feature extraction layer comprises the deformable convolution kernel;
inputting the first feature image into a feature pyramid layer to obtain a second feature image;
and inputting the second feature image into a detection layer to identify the nodule type in the image to be detected.
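The three-stage pipeline of this claim, sketched as module composition; the backbone, pyramid, and head internals are assumed interfaces rather than the patent's concrete networks:

import torch.nn as nn

class NoduleDetector(nn.Module):
    def __init__(self, backbone, fpn, head):
        super().__init__()
        self.backbone = backbone  # feature extraction layer with the deformable kernels
        self.fpn = fpn            # feature pyramid layer
        self.head = head          # RetinaNet/FCOS-style detection layer

    def forward(self, volume):
        first_feature = self.backbone(volume)      # first feature image(s)
        second_feature = self.fpn(first_feature)   # second, multi-scale feature image(s)
        return self.head(second_feature)           # nodule types and locations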
7. The method of claim 6, wherein the inputting the image to be detected into the feature extraction layer to obtain a first feature image comprises:
determining the average position of each pixel point in the deformable convolution kernel according to the preset displacement and the preset probability, and taking the result as the current deformable convolution kernel;
and performing convolution calculation on each pixel point in the image to be detected with the current deformable convolution kernel to obtain the first feature image.
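This claim replaces the training-time random jitter with its average at inference. Under the assumed reading that each point's preset (signed) displacement fires with the preset probability, the expected offset is simply their product; the 2D torchvision call below is a stand-in for the patent's 3D setting, and the function name is hypothetical:

import torch
from torchvision.ops import deform_conv2d

def conv_with_average_offsets(x, weight, preset_disp, prob):
    # x: (N, C, H, W); weight: (Cout, C, k, k);
    # preset_disp: (k*k, 2) signed (dy, dx) displacements per kernel point.
    n, _, h, w = x.shape
    mean = (prob * preset_disp).reshape(-1)  # expected offset per point
    # The same fixed offset at every spatial location, in the layout
    # deform_conv2d expects: (N, 2*k*k, H, W).
    offset = mean.view(1, -1, 1, 1).repeat(n, 1, h, w)
    return deform_conv2d(x, offset, weight, padding=weight.shape[-1] // 2)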
8. A CT image-based identification apparatus, comprising:
an image processing module, configured to preprocess an initial image to obtain an image to be detected;
and a type identification module, configured to input the image to be detected into a nodule identification model to identify the nodule type in the image to be detected, wherein the nodule identification model comprises a preset deformable convolution kernel.
9. An electronic device, comprising:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein,
the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the method of any one of claims 1-7.
10. A non-transitory computer-readable storage medium having stored thereon computer instructions for causing a computer to perform the method of any one of claims 1-7.
CN202211006713.XA 2022-08-22 2022-08-22 CT image-based identification method, device, equipment and storage medium Active CN115439423B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211006713.XA CN115439423B (en) 2022-08-22 2022-08-22 CT image-based identification method, device, equipment and storage medium

Publications (2)

Publication Number Publication Date
CN115439423A true CN115439423A (en) 2022-12-06
CN115439423B CN115439423B (en) 2023-09-12

Family

ID=84243977

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211006713.XA Active CN115439423B (en) 2022-08-22 2022-08-22 CT image-based identification method, device, equipment and storage medium

Country Status (1)

Country Link
CN (1) CN115439423B (en)

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20210327583A1 (en) * 2018-09-04 2021-10-21 Aidence IP B.V Determination of a growth rate of an object in 3d data sets using deep learning
CN111275699A (en) * 2020-02-11 2020-06-12 腾讯医疗健康(深圳)有限公司 Medical image processing method, device, equipment and storage medium
CN111667459A (en) * 2020-04-30 2020-09-15 杭州深睿博联科技有限公司 Medical sign detection method, system, terminal and storage medium based on 3D variable convolution and time sequence feature fusion
CN113299369A (en) * 2021-05-14 2021-08-24 杭州电子科技大学 Window adjusting optimization method for medical image
CN113744183A (en) * 2021-07-27 2021-12-03 山东师范大学 Pulmonary nodule detection method and system
CN114782321A (en) * 2022-03-24 2022-07-22 北京医准智能科技有限公司 Chest CT image selection method, device, equipment and storage medium

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
WANG Xun et al., "An improved yolov3 model for detection location information of ovarian cancer from CT images", Intelligent Data Analysis, vol. 25, no. 6 *
WANG Shengsheng et al., "Medical Image Object Detection Algorithm for Privacy-Preserving Federated Learning", vol. 33, no. 10, pages 2-3 *
HUANG Liuting et al., "Deep-Learning-Based Automatic Identification of Mucus Plugs in CT Images of Asthma Patients", Journal of Beijing University of Posts and Telecommunications, vol. 45, no. 4, pages 58-63 *

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116245832A (en) * 2023-01-30 2023-06-09 北京医准智能科技有限公司 Image processing method, device, equipment and storage medium
CN116245832B (en) * 2023-01-30 2023-11-14 Zhejiang Yizhun Intelligent Technology Co., Ltd. Image processing method, device, equipment and storage medium

Also Published As

Publication number Publication date
CN115439423B (en) 2023-09-12

Similar Documents

Publication Publication Date Title
EP3449421B1 (en) Classification and 3d modelling of 3d dento-maxillofacial structures using deep learning methods
EP1690230B1 (en) Automatic multi-dimensional intravascular ultrasound image segmentation method
CN111105424A (en) Lymph node automatic delineation method and device
CN111047572A (en) Automatic spine positioning method in medical image based on Mask RCNN
CN110176010B (en) Image detection method, device, equipment and storage medium
CN113012173A (en) Heart segmentation model and pathology classification model training, heart segmentation and pathology classification method and device based on cardiac MRI
JP2013525009A (en) Detection and classification of microcalcifications in radiographic images
US9142030B2 (en) Systems, methods and computer readable storage media storing instructions for automatically segmenting images of a region of interest
CN111145160B (en) Method, device, server and medium for determining coronary artery branches where calcified regions are located
WO2022000192A1 (en) Ct image construction method, ct device, and storage medium
CN116309806A (en) CSAI-Grid RCNN-based thyroid ultrasound image region of interest positioning method
CN115439423B (en) CT image-based identification method, device, equipment and storage medium
CN113989407B (en) Training method and system for limb part recognition model in CT image
CN114926487A (en) Multi-modal image brain glioma target area segmentation method, system and equipment
JP7395668B2 (en) System and method for high speed mammography data handling
CN115969400A (en) Apparatus for measuring area of eyeball protrusion
JP2022179433A (en) Image processing device and image processing method
CN115375787A (en) Artifact correction method, computer device and readable storage medium
CN112927196B (en) Calcification score method and device
CN115984229B (en) Model training method, breast measurement device, electronic equipment and medium
US20240346633A1 (en) Information processing apparatus, information processing method, and non-transitory computer readable medium
WO2024108438A1 (en) Motion artifact correction method for velocity encoding magnetic resonance imaging
US20230334665A1 (en) Learning device, learning method, learning program, and medical use image processing device
Ilyasova et al. Development of the technique for automatic highlighting ranges of interest in lungs x-ray images
Mittal et al. Deep residual learning-based denoiser for medical X-ray images

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
CP03 Change of name, title or address

Address after: Room 3011, 2nd Floor, Building A, No. 1092 Jiangnan Road, Nanmingshan Street, Liandu District, Lishui City, Zhejiang Province, 323000

Patentee after: Zhejiang Yizhun Intelligent Technology Co.,Ltd.

Address before: No. 1202-1203, 12 / F, block a, Zhizhen building, No. 7, Zhichun Road, Haidian District, Beijing 100083

Patentee before: Beijing Yizhun Intelligent Technology Co.,Ltd.