CN118314067B - Tumor radio frequency accurate ablation system based on CT image - Google Patents
Tumor radio frequency accurate ablation system based on CT images
- Publication number: CN118314067B
- Application number: CN202410741729.8A
- Authority: CN (China)
- Legal status: Active (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Classifications
- A61B6/03—Computed tomography [CT]
- A61B6/50—Apparatus or devices for radiation diagnosis specially adapted for specific body parts; specially adapted for specific clinical applications
- A61B6/5211—Devices using data or image processing specially adapted for radiation diagnosis involving processing of medical diagnostic data
- A61B18/12—Surgical instruments, devices or methods for transferring non-mechanical forms of energy to or from the body by heating by passing a current through the tissue to be heated, e.g. high-frequency current
- A61B2018/00529—Liver
- A61B2018/00577—Ablation
- G06T5/70—Denoising; Smoothing
- G06T5/90—Dynamic range modification of images or parts thereof
- G06T7/12—Edge-based segmentation
- G06T7/136—Segmentation; Edge detection involving thresholding
- G06T2207/10081—Computed x-ray tomography [CT]
- G06T2207/30096—Tumor; Lesion
Abstract
The invention relates to the technical field of image processing, and in particular to a tumor radio frequency precise ablation system based on CT images. The system: acquires an abdomen region CT image and extracts a liver region image; obtains the intensity of gray level change in each window to be detected in the liver region image, and from that intensity obtains all target windows and their target degrees; obtains the necessary degree of enhancement of each target window from the gray level differences between the window and the adjacent windows around it; performs a linear gray level transformation on the central pixel point of each target window according to that necessary degree, yielding an enhanced image; and acquires the tumor region in the enhanced image for the corresponding treatment. By applying region-adaptive enhancement to the tumor region through the linear gray level transformation, the edge of the tumor region becomes clearer, the tumor region obtained by threshold segmentation becomes more complete, and subsequent identification of the tumor region becomes more accurate.
Description
Technical Field
The invention relates to the technical field of image processing, in particular to a tumor radio frequency precise ablation system based on CT images.
Background
Tumors are divided into benign and malignant. Before a tumor at the liver is diagnosed, the tumor region in the liver CT image must be segmented completely and accurately; however, most tumors at the liver are malignant, the boundaries of malignant tumors are unclear, and the segmentation result is therefore often inaccurate.
The traditional method obtains the tumor by threshold segmentation and then superimposes it on the original image to strengthen the tumor region. Because the tumor edge is unclear, the segmented tumor region is incomplete, and this incomplete region interferes with subsequent recognition and treatment of the tumor. It is therefore necessary to enhance the liver CT image so that the segmentation result is more accurate.
The traditional method of enhancing an image by linear gray scale transformation improves the contrast and gray scale offset of the image as a whole, but the brightness and gray distribution of different regions of a liver CT image differ, so different regions require different degrees of enhancement. The contrast and gray scale offset of each region should therefore be obtained adaptively from the characteristic information of that region, yielding region-specific parameters for the linear gray scale transformation.
Disclosure of Invention
The invention provides a tumor radio frequency precise ablation system based on CT images, which aims to solve the existing problems.
The tumor radio frequency precise ablation system based on CT images adopts the following technical scheme:
one embodiment of the present invention provides a tumor radio frequency precise ablation system based on CT images, the system comprising:
The liver region acquisition module is used for acquiring an abdomen region CT image and acquiring a liver region image;
The window analysis module is used for obtaining the intensity of gray level change in each window to be detected in the liver region image and obtaining the target degree of each target window according to the intensity of gray level change;
The necessary degree calculation module is used for obtaining the necessary degree of target window enhancement according to the target degree of each target window and the gray values of the target window and windows in the neighborhood around the target window;
The gray enhancement module is used for performing a linear gray level transformation on the central pixel point of each target window by utilizing the necessary degree of target window enhancement, to obtain an enhanced image;
And the boundary segmentation module is used for acquiring the tumor region in the enhanced image and carrying out corresponding treatment.
Preferably, the step of obtaining the intensity of the gray level change in each window to be detected in the liver region image includes the following specific steps:
Marking a window of preset size centered on each pixel point to be detected as a window to be detected; calculating the gradient vectors between the pixel point to be detected at the center of each window to be detected and the pixel points to be detected at the centers of the other windows to be detected in its surrounding neighborhood; arithmetically averaging the differences between all gradient vector directions and the horizontal direction to obtain the average gradient direction of the window to be detected, and arithmetically averaging the moduli of all gradient vectors to obtain its average gradient modulus; and from these calculating the similarity degree of the gradient distribution of each window to be detected;
Acquiring, among the windows to be detected in the surrounding neighborhood of each window to be detected, the number whose similarity degree of gradient distribution is greater than a preset gradient threshold; putting the similarity degrees of gradient distribution of the windows to be detected in the surrounding neighborhood into a set; and calculating the noise degree of the pixel point to be detected in each window to be detected;
Finally, acquiring the arithmetic mean of the gray values of all pixel points in each window to be detected, and calculating the intensity of the gray level change in each window to be detected in the liver region image.
Preferably, calculating the similarity degree of the gradient distribution of each window to be detected includes the following specific steps:
Calculating the gradient vectors between the pixel point to be detected at the center of each window to be detected and the pixel points to be detected at the centers of the other windows to be detected in its surrounding neighborhood; arithmetically averaging the differences between all gradient vector directions and the horizontal direction to obtain the average gradient direction of the window to be detected, and arithmetically averaging the moduli of all gradient vectors to obtain its average gradient modulus;
Dividing the average gradient direction of the window to be detected by 180 and its average gradient modulus by 255 to obtain two similarity factors, and multiplying the two factors to obtain the similarity degree of the gradient distribution of each window to be detected.
Preferably, calculating the noise degree of the pixel point to be detected in each window to be detected includes the following specific steps:
Acquiring the number of windows to be detected in the surrounding neighborhood of each window to be detected whose similarity degree of gradient distribution is greater than the gradient threshold, and recording it as the window number of that window to be detected;
Counting the maximum among the similarity degrees of gradient distribution of the windows to be detected in the surrounding neighborhood of each window to be detected, and recording it as the maximum degree of that window to be detected;
Multiplying the window number of each window to be detected by its maximum degree, and raising the natural base e to the negative of this product, to obtain the noise degree of the pixel point to be detected in each window to be detected.
Preferably, the intensity of the gray level change in each window to be detected in the liver region image is calculated as follows (the original symbols appear in the patent only as images; the notation below is introduced here):

C_i = Norm( |g_i − μ_i| · (1/n) · Σ_{j=1}^{n} (1 − Z_{i,j}) )

wherein C_i represents the intensity of the gray level change in the i-th window to be detected; n is the number of pixel points to be detected in each window to be detected; μ_i represents the arithmetic mean of the gray values of all pixels in the i-th window to be detected; g_i represents the gray value of the pixel point to be detected at the center of the i-th window to be detected; Z_{i,j} represents the noise degree of the j-th pixel point to be detected in the i-th window to be detected; and Norm(·) indicates that linear normalization is performed on the content in brackets.
Preferably, obtaining the target degree of each target window according to the intensity of gray level change includes the following specific steps:
Extracting the windows to be detected whose intensity of gray level change is greater than a preset intensity threshold and marking them as target windows, obtaining J target windows indexed j = 1, …, J; calculating the arithmetic mean of the gray values of all pixel points in each target window and recording it as the target gray mean; counting the number of target windows among all windows in the surrounding neighborhood of each target window;
Counting the gray value of each pixel point in the target window and in all windows in its surrounding neighborhood, and marking the pixel points whose gray value is greater than the target gray mean as the connected pixel points of the target window; calculating the convex hull area of the region formed by the connected pixel points within the target window, and the convex hull area of the region formed by all connected pixel points in the windows surrounding the target window; and finally calculating the target degree of each target window (notation introduced here, the original formula being an image):

T_j = Norm( n_j · S_j / S′_j )

wherein T_j represents the target degree of the j-th target window; n_j represents the number of target windows in the surrounding neighborhood of the j-th target window; S_j represents the convex hull area of the region formed by the connected pixel points of the j-th target window; S′_j represents the convex hull area of the region formed by all connected pixel points in the eight windows surrounding the j-th target window; and Norm(·) indicates that linear normalization is performed on the content in brackets.
Preferably, obtaining the necessary degree of enhancement of each target window according to its target degree and the gray values of the target window and of the windows in its surrounding neighborhood comprises the following specific steps:
Computing the arithmetic mean of the gray values of all pixel points in the target window and in each window of its surrounding neighborhood, and recording it as the window gray mean of each window;
Calculating the necessary degree of enhancement of the target window from its target degree and the gray values of the target window and the windows in its surrounding neighborhood (notation introduced here, the original formula being an image):

E_j = Norm( T_j · Σ_{k=1}^{n_j} T_k · |μ_j − μ_k| )

wherein E_j represents the necessary degree of enhancement of the j-th target window; T_j represents the target degree of the j-th target window; T_k represents the target degree of the k-th target window within its surrounding neighborhood; n_j represents the number of target windows in the surrounding neighborhood of the j-th target window; μ_k represents the window gray mean of the k-th window in that neighborhood; μ_j represents the window gray mean of the j-th target window; and Norm(·) indicates that linear normalization is performed on the content in brackets.
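The combination described above can be sketched in code. The exact formula in the patent survives only as an image, so the weighting below (the window's own target degree multiplied by the target-degree-weighted gray contrast against the neighboring target windows, before linear normalization) is an assumed reading of the quantities the text names; `necessity_degree` is a name introduced here:

```python
def necessity_degree(target_degree, neigh_target_degrees, neigh_gray_means, gray_mean):
    """Pre-normalization necessity of enhancement for one target window.
    Assumed combination: own target degree times the sum, over neighboring
    target windows, of their target degree weighted by the gray-mean contrast."""
    contrast = sum(t * abs(gray_mean - m)
                   for t, m in zip(neigh_target_degrees, neigh_gray_means))
    return target_degree * contrast

# One target window (target degree 0.8, gray mean 100) with two neighboring
# target windows of degrees 0.5 and 0.7 and gray means 120 and 90:
e = necessity_degree(0.8, [0.5, 0.7], [120.0, 90.0], 100.0)
```

Across the whole image, these raw values would then be linearly normalized, so only their relative order matters.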
Preferably, the necessary degree of enhancement of each target window is taken as the necessary degree of enhancement of its central pixel point, and the linear gray level transformation of the central pixel point of each target window using that necessary degree is (notation introduced here, the original formula being an image):

g′ = g + Norm(E_j) · (g_max − g)

wherein g′ represents the gray value of the pixel after the linear gray level transformation; g represents the gray value of the pixel before the transformation; E_j represents the necessary degree of enhancement of the central pixel point of the j-th target window; g_max represents the maximum gray value of the entire image; and Norm(·) indicates that linear normalization is performed on the content in brackets.
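A minimal sketch of such a transformation. Since the patent's own expression exists only as an image, the exact form here is an assumption: the pixel is pushed toward the image maximum in proportion to its (already normalized, in [0, 1]) necessity of enhancement, so necessity 0 leaves the pixel unchanged and necessity 1 maps it to the maximum:

```python
def enhance_pixel(g, necessity, g_max=255.0):
    """Linear gray enhancement of one center pixel (assumed form).
    necessity is expected to be a linearly normalized value in [0, 1]."""
    return g + necessity * (g_max - g)
```

This keeps the mapping linear in the original gray value while letting the slope and offset vary per region, which is the adaptive behavior the disclosure calls for.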
Preferably, acquiring the liver region image includes the following specific steps:
Obtaining an abdomen CT image by CT scanning, dividing the liver region from the background region by semantic segmentation, and linearly normalizing the CT values within the liver region to the gray value range [0, 255].
The technical scheme of the invention has the following beneficial effects: the degree of enhancement is analyzed from the target degree of each window and the contrast between the window and the adjacent windows around it, so that the target regions requiring enhancement are enhanced more accurately while other regions are enhanced to a lesser degree; through this adaptive enhancement, the effect on the target regions is better and more obvious. Meanwhile, the image is divided into a plurality of windows for the linear gray level transformation, different transformation slopes are quantized according to the gray level change within each window, and the contrast between the foreground and background regions is increased, so that the threshold-segmented tumor region is more complete.
Drawings
In order to more clearly illustrate the embodiments of the invention or the technical solutions in the prior art, the drawings that are required in the embodiments or the description of the prior art will be briefly described, it being obvious that the drawings in the following description are only some embodiments of the invention, and that other drawings may be obtained according to these drawings without inventive effort for a person skilled in the art.
Fig. 1 is a system structure diagram of a tumor radio frequency precise ablation system based on CT images.
Detailed Description
In order to further describe the technical means and effects adopted by the present invention to achieve its intended purpose, the specific implementation, structure, characteristics and effects of the tumor radio frequency precise ablation system based on CT images are described in detail below with reference to the accompanying drawings and the preferred embodiments. In the following description, different occurrences of "one embodiment" or "another embodiment" do not necessarily refer to the same embodiment. Furthermore, the particular features, structures, or characteristics of one or more embodiments may be combined in any suitable manner.
Unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this invention belongs.
The following specifically describes a specific scheme of the tumor radio frequency precise ablation system based on CT images provided by the invention with reference to the accompanying drawings.
Referring to fig. 1, a system structure diagram of a tumor radio frequency precise ablation system based on CT images according to an embodiment of the present invention is shown; the system comprises the following modules:
and the liver region acquisition module is used for acquiring the CT image of the abdomen region and acquiring the image of the liver region.
Since the abdomen CT image obtained by scanning contains not only the liver region but also many other organs, and these organ regions interfere with judging the tumor condition of the liver region, the liver region is segmented from the abdominal CT image by its position and shape according to existing medical knowledge.
Specifically, an abdomen CT image is obtained by CT scanning, and the liver region and the background region are divided by semantic segmentation. In this embodiment the liver is perceived by a semantic segmentation DNN with an Encoder-Decoder structure; the specific training content is:
Firstly, the data set is an abdomen CT image;
- Second, the labels fall into two categories: liver and liver background. This is classification at the pixel level, i.e. every pixel in the image must be given a corresponding label; the pixels belonging to the liver are marked 1, and the pixels belonging to the liver background are marked 0;
- Finally, the loss function used by the network is the cross entropy loss function. The trained semantic segmentation network model is applied to the obtained abdomen CT image, the liver label region is obtained from it, and that region is separated from the abdomen CT image to obtain an image containing only the liver region. The CT values of this image are then linearly normalized to the gray value range [0, 255]:

g = Norm(H)

wherein g represents the gray value of a pixel point, H represents the CT value of that pixel point, and Norm(·) indicates that linear normalization is performed on the values in brackets, mapping the CT-value range of the liver region onto [0, 255].
So far, a liver region image is obtained, and the gray values of all its pixel points lie within the interval [0, 255].
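The normalization step can be sketched as a simple min-max mapping. `ct_to_gray` is a name introduced here, and defaulting to the region's own minimum and maximum CT values (rather than fixed HU bounds) is an assumption:

```python
import numpy as np

def ct_to_gray(ct, lo=None, hi=None):
    """Linearly normalize CT values of the liver region to the gray range [0, 255].
    lo/hi default to the region's own min/max CT values."""
    ct = np.asarray(ct, dtype=float)
    lo = ct.min() if lo is None else lo
    hi = ct.max() if hi is None else hi
    return (ct - lo) / (hi - lo) * 255.0

gray = ct_to_gray([-1000.0, 0.0, 1000.0])
```

After this step every pixel of the liver region image carries an ordinary 8-bit-style gray value, which the window analysis below assumes.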
And the window analysis module is used for obtaining the intensity of gray level change in each window to be detected and obtaining the target degree of each target window.
The gray level at the tumor boundary varies sharply, so boundary information can be distinguished by the intensity of the gray level change. However, the regions of intense gray change in a liver CT image contain not only tumor boundaries but also gray changes caused by blood vessel regions and noise; the gray mean distributions of those regions are more discrete, so gray changes caused by the tumor boundary can be distinguished from the others according to how discrete the gray distribution is. Noise points are distributed discretely and irregularly. A window to be detected is therefore established on every pixel point of the liver region image, with the pixel at the center of the window defined as the pixel point to be detected. Because the pixel distributions of the windows around a normal pixel are consistent with its own, pixels with a similar distribution can be found in the eight neighboring windows of a normal pixel's window, whereas a noise point is isolated and such similar distributions are unlikely to be found. A gradient has two parameters, angle and amplitude, and two gradients are similar when both their directions and amplitudes are similar. The similarity of the gradient distribution between the pixel point to be detected in a window and the pixels at the corresponding positions in the eight neighboring windows is therefore calculated: the greater the number of windows with similar pixel gradient distributions and the greater the degree of similarity, the lower the likelihood that the pixel is a noise point.
Specifically, each pixel point in the liver region image is recorded as a pixel point to be detected, and a window of preset size is established with each pixel point to be detected as its center; in this embodiment a 3×3 window is used (each window thus containing the center pixel and 8 further pixel points), the window size not being specifically limited and being chosen according to the specific implementation. The window centered on the i-th pixel point to be detected is recorded as the i-th window to be detected. The pixel points along the outermost edge of the liver region image are not considered, to avoid windows that would extend outside the image. For each window to be detected, the gradient vectors between the pixel point to be detected at its center and the pixel points to be detected at the centers of the other windows to be detected in its surrounding neighborhood are calculated; in this embodiment the surrounding neighborhood is the eight-neighborhood, which is likewise not specifically limited and is determined according to the specific implementation. The average gradient direction of the window to be detected is obtained by arithmetically averaging the differences between all gradient vector directions and the horizontal direction, and the average gradient modulus by arithmetically averaging the moduli of all gradient vectors. The similarity degree of the gradient distribution of each window to be detected is then (notation introduced here, the original formula being an image):

X_i = (θ_i / 180) · (M_i / 255)

wherein X_i represents the similarity degree of the gradient distribution of the i-th window to be detected, θ_i represents the average gradient direction (in degrees) of the i-th window to be detected, and M_i represents the average gradient modulus within the i-th window to be detected.
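The two similarity factors the text describes — the average gradient direction divided by 180 and the average gradient modulus divided by 255, multiplied together — can be sketched as follows (`gradient_similarity` is a name introduced here):

```python
def gradient_similarity(mean_dir_deg, mean_mod):
    """Similarity degree of a window's gradient distribution:
    direction factor (degrees / 180) times modulus factor (gray units / 255)."""
    return (mean_dir_deg / 180.0) * (mean_mod / 255.0)
```

Both factors lie in [0, 1] for directions in [0, 180] degrees and moduli in [0, 255], so the product is directly comparable against the preset gradient threshold.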
A gradient threshold T is preset. This embodiment does not specifically limit its value, which depends on the specific implementation.
Further, the 8 windows to be detected in the eight-neighborhood of each window to be detected are obtained, together with the number N_i of them whose similarity degree of gradient distribution is greater than the gradient threshold T and the maximum value X_i^max among their similarity degrees, and the noise degree of the pixel point to be detected at the center of each window is calculated (notation introduced here, the original formula being an image):

Z_i = exp( − N_i · X_i^max )

wherein Z_i represents the noise degree of the pixel point to be detected at the center of the i-th window to be detected; N_i represents the number of windows to be detected in the eight-neighborhood of the i-th window whose similarity degree of gradient distribution is greater than the threshold; and X_i^max is the maximum among the similarity degrees of gradient distribution of the 8 windows to be detected in the eight-neighborhood of the i-th window to be detected.
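A minimal sketch of this noise measure, following the embodiment's description (e raised to the negative product of the similar-neighbor count and the maximum neighbor similarity; `noise_degree` is a name introduced here):

```python
import math

def noise_degree(n_similar, max_similarity):
    """Noise degree of the center pixel of a window to be detected.
    Isolated points (no similar neighbor windows) get a degree of 1;
    pixels with many strongly similar neighbors decay toward 0."""
    return math.exp(-n_similar * max_similarity)
```

An isolated noise point yields `noise_degree(0, anything) == 1.0`, matching the intuition that noise points are unlikely to have similarly distributed neighbors.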
Further, the arithmetic mean of the gray values of the pixel points in each window to be detected is obtained, and the intensity of the gray level change in each window to be detected is calculated (notation introduced here, the original formula being an image):

C_i = Norm( |g_i − μ_i| · (1/n) · Σ_{j=1}^{n} (1 − Z_{i,j}) )

wherein C_i represents the intensity of the gray level change in the i-th window to be detected; μ_i represents the arithmetic mean of the gray values of the 8 pixel points in the i-th window to be detected; g_i represents the gray value of the pixel point to be detected at the center of the i-th window; Z_{i,j} represents the noise degree of the j-th of the n pixel points to be detected in the i-th window; and Norm(·) indicates that linear normalization is performed on the content in brackets, so that the calculated result of C_i lies within the interval [0, 1].
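A sketch of this per-window computation for a 3×3 window, under the assumption (the patent's own formula is an image) that the center's deviation from the neighbor mean is attenuated by the average noise degree; image-wide linear normalization would follow:

```python
import numpy as np

def gray_change_intensity(window, noise):
    """Pre-normalization intensity of gray change for one 3x3 window.
    window: 3x3 gray values; noise: 3x3 per-pixel noise degrees in [0, 1]."""
    w = np.asarray(window, dtype=float)
    z = np.asarray(noise, dtype=float)
    center = w[1, 1]
    neighbors = np.delete(w.ravel(), 4)      # the 8 surrounding pixels
    z_neighbors = np.delete(z.ravel(), 4)
    mu = neighbors.mean()
    # Deviation of the center from the neighbor mean, down-weighted by noise.
    return abs(center - mu) * (1.0 - z_neighbors).mean()

w = np.full((3, 3), 50.0)
w[1, 1] = 100.0
intensity = gray_change_intensity(w, np.zeros((3, 3)))
```

A flat window yields zero intensity, and fully noisy neighborhoods suppress the measure, matching the stated intent of reducing the influence of noise points.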
It should be further noted that the more intense the gray level change within a window, the richer the information it contains, and the noise term reduces the influence of noise points on the measured intensity. When the tumor boundary information of the liver region is to be enhanced, however, the windows with intense gray change contain not only tumor boundaries but also vascular regions, and the two must be distinguished. The boundary of a blood vessel region is distributed in a forked shape, and the gray value of a blood vessel region is larger than that of normal tissue, so the distribution of the pixels with larger gray values in each target window is examined. Since boundary information cannot be obtained from an isolated window, the more target windows adjacent to a given target window, the higher its target degree and the more it needs to be optimized; that is, the more regular the shape formed by the larger-gray-value pixels in the window to be detected and those around it, the higher the target degree of the window.
An intensity threshold is preset; the value used in this embodiment is given only by way of example and is not specifically limited, as it may be set depending on the particular implementation.
Specifically, the windows to be detected whose intensity of gray change is greater than the preset intensity threshold are marked as target windows, obtaining J target windows, the j-th of which satisfies 1 ≤ j ≤ J. The arithmetic mean of the gray values of all pixel points in each target window is calculated and recorded as the target gray mean; the number of target windows among the 8 windows in the eight-neighborhood around each target window is counted; the gray value of each pixel point in those 8 windows is examined, and each pixel point whose gray value is greater than the target gray mean is marked as a connected pixel point of the target window; the convex hull area of the region formed by the connected pixel points within the target window is calculated, as is the convex hull area of the region formed by all connected pixel points in the 8 windows of the surrounding eight-neighborhood (convex hull calculation is a known prior-art means and is not specifically described in this embodiment). Finally, the formula for calculating the target degree of each extracted target window is as follows:
wherein T_j represents the target degree of the j-th target window, h_j represents the number of target windows among the 8 windows in the eight-neighborhood around the j-th target window, A_j represents the convex hull area of the region formed by the connected pixel points in the j-th target window, A_j,all represents the convex hull area of the region formed by all connected pixel points in the 8 windows of the eight-neighborhood around the j-th target window, and Norm(·) denotes linear normalization of the bracketed content so that the calculated result of T_j lies within the [0,1] interval.
Thus, the target degree of the target window is obtained.
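The convex hull area used in the target-degree calculation can be computed with standard prior-art means; a self-contained sketch (Andrew's monotone chain plus the shoelace formula, names illustrative):

```python
def convex_hull_area(points):
    """Convex hull area of 2-D pixel coordinates via Andrew's monotone
    chain, then the shoelace formula -- the 'known prior-art means'
    mentioned in the text (names illustrative)."""
    pts = sorted(set(map(tuple, points)))
    if len(pts) < 3:
        return 0.0
    def cross(o, a, b):
        # z-component of (a-o) x (b-o); > 0 means a left turn.
        return (a[0]-o[0])*(b[1]-o[1]) - (a[1]-o[1])*(b[0]-o[0])
    lower, upper = [], []
    for p in pts:
        while len(lower) >= 2 and cross(lower[-2], lower[-1], p) <= 0:
            lower.pop()
        lower.append(p)
    for p in reversed(pts):
        while len(upper) >= 2 and cross(upper[-2], upper[-1], p) <= 0:
            upper.pop()
        upper.append(p)
    hull = lower[:-1] + upper[:-1]   # counter-clockwise hull vertices
    area = 0.0
    for (x1, y1), (x2, y2) in zip(hull, hull[1:] + hull[:1]):
        area += x1 * y2 - x2 * y1    # shoelace accumulation
    return abs(area) / 2.0
```

Collinear or fewer than three distinct points yield zero area, which matches the intuition that such connected-pixel regions enclose no shape.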
The necessary degree calculating module is used for calculating the gray difference between each target window and the adjacent windows around it, and for quantifying the necessary degree of enhancement by utilizing the target degree.
A boundary in the liver region that already has relatively clear gray contrast needs less enhancement, so the required enhancement is inversely related to how distinct the region already is, and the enhancement degree of such window regions should be reduced. When the gray difference between a window and its surrounding adjacent windows is not obvious, the boundary information is blurred and the enhancement degree of that window should be increased. As the target degree of a window increases, the necessary degree of its enhancement increases; however, relying on the target degree alone may leave the gray difference between the region and its surroundings already large, so that the contrast is stretched excessively after enhancement. Conversely, judging the necessary degree only from the gray difference between a window and its neighbors may enhance noise in normal areas whose gray distribution resembles that of the region to be enhanced, affecting subsequent judgment. Therefore the two conditions are combined: when the target degree of an area is high and its gray distribution differs little from the surroundings, the necessity of enhancing that area increases. Since each window contains different information, the surrounding windows influence the target window to different extents: the larger the target degree of an adjacent window, the smaller its influence when the enhancement degree is calculated. To differentiate non-target windows from target windows, the degree of differentiation between windows, i.e. the necessary degree of window enhancement, is calculated.
Specifically, the arithmetic mean of the gray values of all pixel points in the target window and in each of the 8 windows of its surrounding eight-neighborhood is calculated and recorded as the window gray mean. The necessary degree of enhancement of the target window is then calculated from the target degree of the target window and the window gray means of the target window and its surrounding neighborhood as follows:
wherein E_j represents the necessary degree of enhancement of the j-th target window, T_j represents the target degree of the j-th target window, T_k represents the target degree of the k-th target window in the eight-neighborhood around the j-th target window, h_j represents the number of target windows among the 8 windows in that eight-neighborhood, Ḡ_k represents the window gray mean of the k-th target window in the eight-neighborhood, Ḡ_j represents the window gray mean of the j-th target window, and Norm(·) denotes linear normalization of the bracketed content so that the calculated result of E_j lies within the [0,1] interval. To avoid the denominator being 0 when h_j = 0, 1 is added to the denominator, and when there is no target window in the eight-neighborhood around the j-th target window, E_j is set directly to 1, because such a target window has a high degree of necessity of enhancement.
Thus, the necessary degree of target window enhancement is obtained.
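One illustrative way to combine the two conditions described above — the window's own target degree raising the necessity, while strong or gray-dissimilar target neighbors lower it — is sketched below. This is an assumption-laden stand-in, not the patent's exact formula; all names are illustrative:

```python
import numpy as np

def necessity(target_deg, neighbor_degs, gray_mean, neighbor_means):
    """Illustrative necessity-of-enhancement score (NOT the patent's exact
    formula): grows with the window's own target degree and shrinks when
    neighboring target windows are strong or differ sharply in window gray
    mean. 1 is added to the denominator so it is never zero, and with no
    target neighbors the necessity defaults to 1, matching the text."""
    nd = np.asarray(neighbor_degs, dtype=float)
    nm = np.asarray(neighbor_means, dtype=float)
    if nd.size == 0:           # no target windows in the eight-neighborhood
        return 1.0
    # Influence of the neighborhood: strong neighbors with very different
    # gray means reduce how much this window still needs enhancing.
    influence = float(np.mean(nd * np.abs(nm - gray_mean)))
    return float(target_deg / (1.0 + influence))
```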
And the gray enhancement module is used for carrying out linear gray conversion on the central pixel point of each target window by utilizing the necessary degree to acquire an enhanced image.
It should be noted that each window contains different information, so a different enhancement coefficient is generated according to the necessary degree of enhancement of each window: the higher the necessary degree, the larger the contrast transformation coefficient and the greater the degree of stretching. In linear enhancement, a contrast transformation coefficient greater than 1 represents stretching and a coefficient less than 1 represents compression, so the necessary degree of enhancement plus 1 can serve as the coefficient. When the slope is too large, however, the result may go out of range, so the gray values mapped onto the pixel points to be processed must be kept distributed within the gray value range of [0,255].
For a pixel point to be processed in the center of a target window, the existing formula of linear gray scale transformation is as follows:
wherein g represents the initial gray value of the pixel point before the linear gray transformation, g′ represents the gray value of the pixel point after the linear gray transformation, k is the contrast transformation coefficient, and b is the brightness transformation coefficient; that is, g′ = k·g + b.
Specifically, the necessary degree of enhancement of each target window is mapped to its center pixel point, i.e. taken as the necessary degree of enhancement of that center pixel point; adding 1 to this necessary degree gives an adaptive proportion coefficient, which replaces the contrast transformation coefficient in the formula. The maximum gray value of the whole image is obtained, and subtracting it from 255 gives an adaptive offset, which replaces the brightness transformation coefficient in the formula. The formula is then linearly normalized and scaled into the range [0,255], yielding the adaptive gray change formula for each pixel point to be processed:
wherein g′ represents the gray value of the pixel point after the linear gray transformation, g represents the initial gray value of the pixel point before the transformation, E_j represents the necessary degree of enhancement of the center pixel point of the j-th target window, g_max represents the maximum gray value of the entire image, and Norm(·) denotes linear normalization of the bracketed content so that the calculated result of g′ lies within the [0,255] interval; that is, g′ = Norm((E_j + 1)·g + (255 − g_max)).
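The adaptive linear gray transform above — slope of necessity + 1, offset of 255 − g_max, followed by a rescale into [0,255] — can be sketched as follows (names illustrative):

```python
import numpy as np

def enhance_centers(grays, necessities, img_max):
    """Adaptive linear gray transform for target-window center pixels:
    slope = necessity + 1 (always a stretch), offset = 255 - img_max,
    then linear rescale into [0, 255] to prevent out-of-range values
    (names illustrative)."""
    g = np.asarray(grays, dtype=float)
    e = np.asarray(necessities, dtype=float)
    out = (e + 1.0) * g + (255.0 - float(img_max))
    lo, hi = out.min(), out.max()
    if hi > lo:                      # Norm(.): min-max rescale
        out = (out - lo) / (hi - lo) * 255.0
    return out
```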
Mapping the enhanced gray values back onto the CT image to obtain the enhanced liver CT image is a known technique and is not repeated in this embodiment.
Thus, an enhanced liver CT image is obtained.
And the boundary segmentation module is used for acquiring the tumor region in the enhanced image and carrying out corresponding treatment.
It should be noted that, the enhanced liver CT image has a significant gray scale difference between the tumor region and the normal tissue region. The tumor region is segmented using an appropriate threshold.
Specifically, the optimal threshold segmentation gray value is obtained by applying the Otsu method to the enhanced liver CT image, and threshold segmentation is performed on the enhanced liver CT image with this value to obtain the tumor region. The doctor can then plan a reasonable puncture route according to the position, shape, size and other characteristics of the tumor region and perform radio frequency ablation treatment on the tumor.
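The Otsu method referenced here selects the gray level that maximizes the between-class variance of the histogram; a minimal NumPy sketch (not the embodiment's implementation):

```python
import numpy as np

def otsu_threshold(img):
    """Otsu's method: the gray level maximizing between-class variance of
    the histogram (the 'optimal threshold segmentation gray value')."""
    hist = np.bincount(img.ravel(), minlength=256).astype(float)
    p = hist / hist.sum()                      # gray-level probabilities
    mu_total = float(np.dot(np.arange(256), p))
    best_t, best_var = 0, 0.0
    cum_w = cum_mu = 0.0
    for t in range(256):
        cum_w += p[t]                          # weight of the "dark" class
        cum_mu += t * p[t]                     # its cumulative mean mass
        if cum_w <= 1e-12 or cum_w >= 1.0 - 1e-12:
            continue
        var = (mu_total * cum_w - cum_mu) ** 2 / (cum_w * (1.0 - cum_w))
        if var > best_var:
            best_var, best_t = var, t
    return best_t
```

Pixels above the returned threshold would then form the candidate tumor region in the enhanced image.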
The foregoing description of the preferred embodiments of the invention is not intended to be limiting, but rather is intended to cover all modifications, equivalents, alternatives, and improvements that fall within the spirit and scope of the invention.
Claims (2)
1. Tumor radio frequency accurate ablation system based on CT image, characterized in that, this system includes:
The liver region acquisition module is used for acquiring an abdomen region CT image and acquiring a liver region image;
The window analysis module is used for obtaining the intensity of gray level change in each window to be detected in the liver region image and obtaining the target degree of each target window according to the intensity of gray level change;
The necessary degree calculation module is used for obtaining the necessary degree of target window enhancement according to the target degree of each target window and the gray values of the target window and windows in the neighborhood around the target window;
The gray enhancement module is used for carrying out linear gray conversion on the central pixel point of each target window by utilizing the necessary degree of target window enhancement to obtain an enhanced image;
the boundary segmentation module is used for acquiring a tumor region in the enhanced image and performing corresponding treatment;
The method for obtaining the intensity of gray level change in each window to be detected in the liver region image comprises the following specific steps:
marking a window of a preset size centered on each pixel point to be detected as a window to be detected; calculating a plurality of gradient vectors between the pixel point to be detected at the center of each window to be detected and the pixel points to be detected at the centers of the other windows to be detected in its surrounding neighborhood; performing an arithmetic average over all the gradient vectors to obtain the average gradient direction and the average gradient modulus of each window to be detected; and calculating the similarity degree of the gradient distribution of each window to be detected;
Acquiring the number of windows to be detected, in which the similarity degree of gradient distribution is greater than a preset gradient threshold value, in a plurality of windows to be detected in the surrounding vicinity of each window to be detected, putting the similarity degree of gradient distribution of the windows to be detected in the surrounding vicinity into a set, and calculating the noise degree of pixels to be detected in each window to be detected;
Finally, acquiring the arithmetic mean value of the gray values of all pixel points in each window to be detected, and calculating the intensity of gray change in each window to be detected in the liver region image;
The calculating of the similarity degree of the gradient distribution of each window to be detected comprises the following specific steps:
calculating the plurality of gradient vectors between the pixel point to be detected at the center of each window to be detected and the pixel points to be detected at the centers of the other windows to be detected in its neighborhood; performing an arithmetic average of the angles between all gradient vector directions and the horizontal direction to obtain the average gradient direction of the window to be detected, and performing an arithmetic average of the moduli of all gradient vectors to obtain the average gradient modulus of the window to be detected;
dividing the average gradient direction of the window to be detected by 180 and dividing the average gradient modulus of the window to be detected by 255 to obtain two similarity factors, and multiplying the two similarity factors to obtain the similarity degree of the gradient distribution of each window to be detected;
the calculating the noise degree of the pixel point to be detected in each window to be detected comprises the following specific steps:
acquiring the number of windows to be detected whose gradient distribution similarity degree is greater than the gradient threshold value in the neighborhood around each window to be detected, and recording it as the window number of each window to be detected;
Counting to obtain the maximum value in the similarity degree of gradient distribution of a plurality of windows to be detected in the vicinity of each window to be detected, and recording the maximum value as the maximum degree of each window to be detected;
multiplying the window number of each window to be detected by its maximum degree, and taking the natural exponential of the negative of the result to obtain the noise degree of the pixel point to be detected in each window to be detected;
the specific calculation formula for calculating the intensity of gray level change in each window to be detected in the liver region image is as follows:
wherein Q_i represents the intensity of the gray change in the i-th window to be detected, m is the number of pixel points to be detected in each window to be detected, ḡ_i represents the arithmetic mean of the gray values of all pixel points in the i-th window to be detected, g_i represents the gray value of the pixel point to be detected at the center of the i-th window to be detected, Z_i,j represents the noise degree of the j-th pixel point to be detected in the i-th window to be detected, and Norm denotes linear normalization of the bracketed content;
The method for obtaining the target degree of each target window according to the intensity of gray level change comprises the following specific steps:
extracting the windows to be detected whose intensity of gray change is greater than a preset intensity threshold value and marking them as target windows, obtaining J target windows, the j-th of which satisfies 1 ≤ j ≤ J; calculating the arithmetic mean of the gray values of all pixel points in each target window and recording it as the target gray mean; counting the number of target windows among all windows in the neighborhood around each target window;
counting the gray value of each pixel point in the target window and in all windows in its surrounding neighborhood, and marking each pixel point whose gray value is greater than the target gray mean as a connected pixel point of the target window; calculating the convex hull area of the region formed by the connected pixel points in the target window, and the convex hull area of the region formed by all connected pixel points in the target window and its surrounding neighborhood; and finally calculating the target degree of each target window, the specific calculation formula being as follows:
wherein T_j represents the target degree of the j-th target window, h_j represents the number of target windows in the neighborhood around the j-th target window, A_j represents the convex hull area of the region formed by the connected pixel points in the j-th target window, A_j,all represents the convex hull area of the region formed by all connected pixel points in all windows of the eight-neighborhood around the j-th target window, and Norm denotes linear normalization of the bracketed content;
The method for obtaining the necessary degree of target window enhancement according to the target degree of each target window and the gray values of the target window and windows in the neighborhood around the target window comprises the following specific steps:
calculating the arithmetic mean of the gray values of all pixel points in the target window and in each window of its surrounding neighborhood, and recording it as the window gray mean;
The necessary degree of target window enhancement is calculated through the target degree of the target window and gray values of the target window and windows in the neighborhood around the target window, and a specific calculation formula is as follows:
wherein E_j represents the necessary degree of enhancement of the j-th target window, T_j represents the target degree of the j-th target window, T_k represents the target degree of the k-th window in the neighborhood around the j-th target window, h_j represents the number of target windows in the neighborhood around the j-th target window, Ḡ_k represents the window gray mean of the k-th window in the neighborhood around the j-th target window, Ḡ_j represents the window gray mean of the j-th target window, and Norm denotes linear normalization of the bracketed content;
Taking the enhancement necessity degree of each target window as the enhancement necessity degree of the central pixel point of each target window, wherein the specific calculation formula for carrying out linear gray scale transformation on the central pixel point of each target window by utilizing the enhancement necessity degree of the target window is as follows:
wherein g′ represents the gray value of the pixel point after the linear gray transformation, g represents the gray value of the pixel point before the linear gray transformation, E_j represents the necessary degree of enhancement of the center pixel point of the j-th target window, g_max represents the maximum gray value of the entire image, and Norm denotes linear normalization of the bracketed content.
2. The CT image-based tumor radio frequency precise ablation system according to claim 1, wherein the acquiring the liver region image comprises the specific steps of:
obtaining an abdomen CT image by CT scanning, dividing the liver region from the background region by semantic segmentation, and normalizing the CT values in the liver region to the range of [0,255].
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202410741729.8A CN118314067B (en) | 2024-06-11 | 2024-06-11 | Tumor radio frequency accurate ablation system based on CT image |
Publications (2)
Publication Number | Publication Date |
---|---|
CN118314067A CN118314067A (en) | 2024-07-09 |
CN118314067B true CN118314067B (en) | 2024-08-16 |
Family
ID=91728713
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202410741729.8A Active CN118314067B (en) | 2024-06-11 | 2024-06-11 | Tumor radio frequency accurate ablation system based on CT image |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN118314067B (en) |
Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN113780421A (en) * | 2021-06-07 | 2021-12-10 | 广州天鹏计算机科技有限公司 | Brain PET image identification method based on artificial intelligence |
CN115965624A (en) * | 2023-03-16 | 2023-04-14 | 山东宇驰新材料科技有限公司 | Detection method for anti-wear hydraulic oil pollution particles |
Family Cites Families (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110782474B (en) * | 2019-11-04 | 2022-11-15 | 中国人民解放军总医院 | Deep learning-based method for predicting morphological change of liver tumor after ablation |
CN117314801B (en) * | 2023-09-27 | 2024-05-31 | 南京邮电大学 | Fuzzy image optimization enhancement method based on artificial intelligence |
CN116993628B (en) * | 2023-09-27 | 2023-12-08 | 四川大学华西医院 | CT image enhancement system for tumor radio frequency ablation guidance |
CN117474823B (en) * | 2023-12-28 | 2024-03-08 | 大连清东科技有限公司 | CT data processing system for pediatric infectious inflammation detection assistance |
CN117853386B (en) * | 2024-03-08 | 2024-05-28 | 陕西省人民医院(陕西省临床医学研究院) | Tumor image enhancement method |
- 2024-06-11: CN application CN202410741729.8A filed; granted as CN118314067B (status: active)
Also Published As
Publication number | Publication date |
---|---|
CN118314067A (en) | 2024-07-09 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN110930416B (en) | MRI image prostate segmentation method based on U-shaped network | |
CN109635846B (en) | Multi-type medical image judging method and system | |
CN104794502A (en) | Image processing and mode recognition technology-based rice blast spore microscopic image recognition method | |
CN116503392B (en) | Follicular region segmentation method for ovarian tissue analysis | |
CN110136161A (en) | Image characteristics extraction analysis method, system and device | |
CN116993628B (en) | CT image enhancement system for tumor radio frequency ablation guidance | |
CN116385438B (en) | Nuclear magnetic resonance tumor region extraction method | |
CN117764864B (en) | Nuclear magnetic resonance tumor visual detection method based on image denoising | |
CN109255775A (en) | A kind of gastrointestinal epithelial crypts structure based on optical fiber microendoscopic image quantifies analysis method and system automatically | |
CN112052854B (en) | Medical image reversible information hiding method for realizing self-adaptive contrast enhancement | |
CN117237591A (en) | Intelligent removal method for heart ultrasonic image artifacts | |
CN115578660B (en) | Land block segmentation method based on remote sensing image | |
CN112614093A (en) | Breast pathology image classification method based on multi-scale space attention network | |
CN117558068A (en) | Intelligent device gesture recognition method based on multi-source data fusion | |
CN116824168B (en) | Ear CT feature extraction method based on image processing | |
CN118314067B (en) | Tumor radio frequency accurate ablation system based on CT image | |
CN117218200A (en) | Bone tumor focus positioning method and device based on accurate recognition | |
Mustaghfirin et al. | The comparison of iris detection using histogram equalization and adaptive histogram equalization methods | |
CN112560808B (en) | In-vivo vein identification method and device based on gray information | |
CN113940704A (en) | Thyroid-based muscle and fascia detection device | |
CN118247277B (en) | Self-adaptive enhancement method for lung CT image | |
CN112532938A (en) | Video monitoring system based on big data technology | |
CN117952859B (en) | Pressure damage image optimization method and system based on thermal imaging technology | |
CN118134919B (en) | Rapid extraction method of hand bones for bone age identification | |
CN118229540B (en) | Ultrasonic image intelligent processing method of head-mounted display equipment |
Legal Events

Date | Code | Title | Description
---|---|---|---
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||