CN118469950A - Lung cancer intelligent detection system based on image processing and machine learning - Google Patents
Lung cancer intelligent detection system based on image processing and machine learning
- Publication number
- CN118469950A CN118469950A CN202410600770.3A CN202410600770A CN118469950A CN 118469950 A CN118469950 A CN 118469950A CN 202410600770 A CN202410600770 A CN 202410600770A CN 118469950 A CN118469950 A CN 118469950A
- Authority
- CN
- China
- Prior art keywords
- lung
- lesion
- image
- feature
- machine learning
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/0002—Inspection of images, e.g. flaw detection
- G06T7/0012—Biomedical image inspection
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/40—Extraction of image or video features
- G06V10/44—Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/40—Extraction of image or video features
- G06V10/50—Extraction of image or video features by performing operations within image blocks; by using histograms, e.g. histogram of oriented gradients [HoG]; by summing image-intensity values; Projection analysis
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/40—Extraction of image or video features
- G06V10/54—Extraction of image or video features relating to texture
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/70—Arrangements for image or video recognition or understanding using pattern recognition or machine learning
- G06V10/74—Image or video pattern matching; Proximity measures in feature spaces
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/70—Arrangements for image or video recognition or understanding using pattern recognition or machine learning
- G06V10/764—Arrangements for image or video recognition or understanding using pattern recognition or machine learning using classification, e.g. of video objects
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/70—Arrangements for image or video recognition or understanding using pattern recognition or machine learning
- G06V10/77—Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
- G06V10/80—Fusion, i.e. combining data from various sources at the sensor level, preprocessing level, feature extraction level or classification level
- G06V10/806—Fusion, i.e. combining data from various sources at the sensor level, preprocessing level, feature extraction level or classification level of extracted features
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10072—Tomographic images
- G06T2207/10081—Computed x-ray tomography [CT]
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10072—Tomographic images
- G06T2207/10084—Hybrid tomography; Concurrent acquisition with multiple different tomographic modalities
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10072—Tomographic images
- G06T2207/10088—Magnetic resonance imaging [MRI]
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10072—Tomographic images
- G06T2207/10104—Positron emission tomography [PET]
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10116—X-ray image
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20021—Dividing image into blocks, subimages or windows
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/30—Subject of image; Context of image processing
- G06T2207/30004—Biomedical image processing
- G06T2207/30061—Lung
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V2201/00—Indexing scheme relating to image or video recognition or understanding
- G06V2201/03—Recognition of patterns in medical or anatomical images
- G06V2201/031—Recognition of patterns in medical or anatomical images of internal organs
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Computer Vision & Pattern Recognition (AREA)
- General Physics & Mathematics (AREA)
- Physics & Mathematics (AREA)
- Multimedia (AREA)
- Medical Informatics (AREA)
- General Health & Medical Sciences (AREA)
- Health & Medical Sciences (AREA)
- Evolutionary Computation (AREA)
- Software Systems (AREA)
- Databases & Information Systems (AREA)
- Computing Systems (AREA)
- Artificial Intelligence (AREA)
- Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
- Radiology & Medical Imaging (AREA)
- Quality & Reliability (AREA)
- Image Analysis (AREA)
- Image Processing (AREA)
Abstract
The invention relates to the technical field of medical image processing, and in particular to an intelligent lung cancer detection system based on image processing and machine learning. An image processing module acquires lung image data of a patient and divides it into blocks; a main machine learning model performs feature recognition on lesion features in the segmented images; an auxiliary machine learning model performs feature recognition on lesion features and texture information in the segmented images; the image processing module further performs block fusion of the segmented images by combining the texture information in the segmented images, and extracts lesion regions from the lung image data after feature fusion; a result output module compares the extracted lesion regions with standard features in a lung disease database and outputs lung cancer classification and diagnosis results. The invention captures tiny lesions and abnormal regions in lung images more accurately; each block can be treated as a local region, which facilitates accurate identification of lesions of different locations and sizes.
Description
Technical Field
The invention relates to the technical field of medical image processing, in particular to an intelligent lung cancer detection system based on image processing and machine learning.
Background
Lung cancer is a common cancer whose early manifestation is the appearance of nodules in lung CT images. These nodules are of many kinds, with diverse structures, tiny volumes and varied locations, and a few adhere to the surrounding lung tissue. It is difficult for a doctor to find a lung nodule smaller than 5 mm by visual inspection. With the development of new technical means such as genetic algorithms, fuzzy control, machine learning and big data analysis, using artificial intelligence to assist medical diagnosis has become a trend in early lung cancer screening, which not only supports early treatment but also improves patient survival. At the same time, it can reduce the workload of doctors and improve the utilization efficiency of medical resources.
Existing equipment for lung examination includes X-ray, CT, MRI and PET-CT; each modality can detect lung lesions at different stages and to different degrees, and their detection accuracy and cost differ considerably. Existing lung cancer detection methods based on images and machine learning generally perform image recognition on the whole lung image captured by X-ray, CT, MRI, PET-CT or other equipment to determine the lung lesion region, thereby assisting medical staff in diagnosing the patient's lung disease.
Such detection is affected by differences in patients' lung lesions: the error in detecting small lesion features, or lesion features that are discontinuous or imprecise at image boundaries, is large, and in particular fine structures in edge regions may be ignored, which limits the accuracy of lung cancer detection and recognition.
Disclosure of Invention
In order to solve the above problems, the invention provides an intelligent lung cancer detection system based on image processing and machine learning, which improves the recognition accuracy of lesion features that are small, discontinuous or imprecise at image boundaries, thereby improving lung cancer detection accuracy.
In order to achieve the above object, the technical scheme of the present invention is as follows: the lung cancer intelligent detection system based on image processing and machine learning comprises an image processing module, a trained main machine learning model, a trained auxiliary machine learning model and a result output module;
The image processing module is used for acquiring lung image data of a patient, and performing block processing on the lung image data to acquire a plurality of block images;
the main machine learning model is used for carrying out feature recognition on lesion features in the segmented image;
the auxiliary machine learning model is used for carrying out feature recognition on lesion features and texture information in the segmented image;
The image processing module is also used for carrying out block fusion on the segmented images by combining with texture information in the segmented images, carrying out feature fusion on lesion features in the lung image data after the block fusion, and extracting a lesion region based on the lung image data after the feature fusion;
The result output module is used for comparing the extracted lesion area with standard features in a lung disease database and outputting lung cancer classification and diagnosis results.
Further, the image processing module is used for acquiring one or more of lung X-ray image data, lung CT image data, lung MRI image data or PET-CT image data of the patient; and performing format conversion and size conversion on the acquired one or more lung image data to generate one or more lung image data with a first format and a first size, and performing independent blocking processing on each piece of lung image data with the first format and the first size according to a preset size to acquire a plurality of blocking images.
Further, the image processing module is further configured to perform preprocessing on the segmented image, where the preprocessing at least includes: denoising processing, contrast enhancement processing, and edge detection processing.
Further, the main machine learning model is used for carrying out edge recognition on the lesion feature in the segmented image to obtain a first lesion feature edge, and is used for carrying out center recognition on the lesion feature in the segmented image based on the first lesion feature edge to obtain a first lesion feature center, and combining the first lesion feature edge and the first lesion feature center to integrate and output the lesion feature of the segmented image.
Further, the auxiliary machine learning model is used for receiving the segmented image, calculating the gray value relation between pixel pairs in the segmented image, calculating each statistical feature according to the gray level co-occurrence matrix to generate texture features, and representing the texture features as feature descriptors, wherein the feature descriptors correspond to texture information of the segmented image to which the feature descriptors belong.
Further, the auxiliary machine learning model is used for receiving the segmented image, calculating the gradient direction and the gradient size of pixels in the segmented image, forming a direction gradient histogram by the gradient direction and the gradient size of the pixels in the segmented image, and enabling the direction gradient histogram to correspond to texture information of the segmented image.
Further, the image processing module is used for presetting coding identifiers for the regions corresponding to the segmented images before the lung image data is divided into blocks, acquiring the first lesion feature edge and first lesion feature center of each segmented image, and generating first fusion information based on the first lesion feature edge and first lesion feature center of each segmented image; acquiring texture information of each segmented image edge, and generating second fusion information based on the texture information of each segmented image edge; and respectively carrying out block fusion of the segmented images based on the first fusion information and the second fusion information;
If the overlap rates between the preset coding identifiers and the coding identifiers of the lung image data fused on the basis of the first fusion information and of the second fusion information are both greater than a threshold, the fusion information with the higher overlap rate is selected for block fusion;
and if the overlap rate between the preset coding identifiers and the coding identifier of the lung image data fused on the basis of either the first fusion information or the second fusion information is smaller than the threshold, a rechecking feature recognition signal is generated.
Further, the main machine learning model and the auxiliary machine learning model are both used for receiving the rechecking characteristic identification signal; the main machine learning model is used for carrying out feature recognition on lesion features in each segmented image after receiving the rechecking feature recognition signal; the auxiliary machine learning model is used for carrying out feature recognition on lesion features and texture information in each segmented image after receiving the rechecking feature recognition signals.
Further, the image processing module is used for carrying out feature fusion on the basis of the first lesion feature edge and the first lesion feature center in each segmented image in the lung image data after the block fusion, generating a second lesion feature edge and a second lesion feature center, and extracting a lesion region on the basis of the second lesion feature edge and the second lesion feature center.
Further, the system comprises a patient information extraction module, wherein the patient information extraction module is used for extracting lung disease related data other than lung image data, and the result output module is used for classifying and diagnosing lung cancer by combining the comparison result between the lesion region and the standard features in the lung disease database with the lung disease related data other than lung image data.
The adoption of the scheme has the following beneficial effects:
1. In the invention, in the feature recognition stage, the lung image data of the patient is divided into a plurality of segmented images, and processing these segmented images makes the system focus more on local features in the image, which helps capture tiny lesions and abnormal regions in the lung images more accurately; each block can be treated as a local region, which facilitates accurate identification of lesions of different locations and sizes.
Independent feature recognition and texture information recognition are performed on each segmented image, and block fusion of the segmented images is performed based on the texture information. On the one hand, the block fusion stage doubly verifies the quality of feature recognition and texture information recognition against the preset coding identifiers; on the other hand, accurate fusion is completed after the lung image data has been recognized and processed. Compared with the prior art, the accuracy of identifying lesion features that are small, discontinuous or imprecise at image boundaries is improved, the accuracy of lung cancer detection is further improved, and medical staff are assisted more effectively in lung cancer diagnosis.
2. According to the invention, one or more of lung X-ray image data, lung CT image data, lung MRI image data or PET-CT image data can be processed synchronously, and lung cancer detection is carried out based on multiple kinds of lung image data. Compared with the prior art, unifying the data formats and sizes of different devices broadens the application range of the detection system.
3. In this scheme, block fusion based on texture information can integrate the independently processed segmented images into more complete lung image data. By comprehensively considering the texture characteristics of different blocks, block fusion can eliminate discontinuities between blocks and reduce noise and artifacts in the image, thereby improving the quality and reliability of the lung image.
Secondly, when lung cancer is detected based on the fused lung image data, the feature information and texture information of each block are integrated, the state of the lung tissue can be analyzed more comprehensively, and the accuracy and reliability of lung cancer detection are improved. Meanwhile, the fused image may be clearer and more coherent, so that abnormal regions are easier to detect and identify.
Additional aspects and advantages of the invention will be set forth in part in the description which follows, and in part will be obvious from the description, or may be learned by practice of the invention.
Drawings
FIG. 1 is a schematic diagram of an embodiment 1 of an intelligent lung cancer detection system based on image processing and machine learning;
FIG. 2 is a schematic flow chart of an embodiment 1 of an intelligent lung cancer detection system based on image processing and machine learning;
FIG. 3 is a schematic structural diagram of an embodiment 2 of an intelligent lung cancer detection system based on image processing and machine learning.
Detailed Description
The technical solutions in the embodiments of the present invention are described below clearly and completely with reference to the accompanying drawings, in which some, but not all, embodiments of the invention are shown. All other embodiments obtained by those of ordinary skill in the art based on the embodiments of the invention without making any inventive effort shall fall within the scope of the invention.
The following is a further detailed description of the embodiments:
Example 1:
As shown in fig. 1 and fig. 2, the intelligent lung cancer detection system based on image processing and machine learning comprises an image processing module, a trained main machine learning model, a trained auxiliary machine learning model and a result output module, wherein the image processing module is in interactive communication with the trained main machine learning model and the trained auxiliary machine learning model, and the result output module is in interactive communication with the image processing module. A specific system architecture is shown with reference to fig. 1.
Specifically, the main machine learning model adopted in this embodiment is a deep learning model, namely a convolutional neural network (CNN). Besides the convolutional neural network adopted in this embodiment, a recurrent neural network (RNN) or a traditional machine learning model may also be used as the main machine learning model for subsequent lesion feature recognition. The training process of the convolutional neural network adopted in this embodiment is as follows:
S1, data preparation: collect and prepare a lung image dataset containing normal samples and disease samples, and ensure data quality and labeling accuracy.
S2, feature extraction: because this embodiment trains a convolutional neural network, the raw lung image dataset can be used directly for training. If a traditional machine learning model such as a support vector machine, random forest or k-nearest-neighbor algorithm is adopted, the lung image data must first be preprocessed and features extracted; common features include texture features, morphological features and the like.
S3, model training: train the selected model using the training set, updating the model parameters by the back-propagation algorithm so that the model better fits the dataset.
S4, model evaluation: evaluate the trained model on the validation set, assessing performance indicators such as accuracy, precision and recall.
S5, model tuning and verification: tune the model according to the evaluation results; verify the final model on the test set to evaluate its performance on unseen data and ensure its generalization ability and robustness.
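For illustration only, a minimal Python/PyTorch sketch of steps S1 to S5 is given below. The LungTileDataset name, the ResNet-18 backbone, the two-class output head and all hyper-parameters are assumptions of this sketch, not values prescribed by the embodiment.

```python
# Minimal sketch of the CNN training loop (S1-S5), assuming PyTorch and an
# illustrative dataset that yields (3-channel 224x224 tile tensor, label) pairs.
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, random_split
from torchvision import models

def train_lesion_cnn(dataset, epochs=10, lr=1e-4,
                     device="cuda" if torch.cuda.is_available() else "cpu"):
    # S1: split the prepared, labeled tile dataset into training and validation sets
    n_val = int(0.2 * len(dataset))
    train_set, val_set = random_split(dataset, [len(dataset) - n_val, n_val])
    train_loader = DataLoader(train_set, batch_size=32, shuffle=True)
    val_loader = DataLoader(val_set, batch_size=32)

    # S2: a CNN consumes raw tiles directly, so no hand-crafted features are needed
    model = models.resnet18(weights=None, num_classes=2).to(device)  # normal / lesion
    criterion = nn.CrossEntropyLoss()
    optimizer = torch.optim.Adam(model.parameters(), lr=lr)

    for epoch in range(epochs):
        # S3: update parameters by back-propagation
        model.train()
        for tiles, labels in train_loader:
            tiles, labels = tiles.to(device), labels.to(device)
            optimizer.zero_grad()
            loss = criterion(model(tiles), labels)
            loss.backward()
            optimizer.step()

        # S4: evaluate accuracy on the validation set
        model.eval()
        correct = total = 0
        with torch.no_grad():
            for tiles, labels in val_loader:
                preds = model(tiles.to(device)).argmax(dim=1).cpu()
                correct += (preds == labels).sum().item()
                total += labels.numel()
        print(f"epoch {epoch}: val accuracy {correct / total:.3f}")

    # S5: tuning and final verification on a held-out test set would follow here
    return model
```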
Specifically, this embodiment provides two different auxiliary machine learning models, based respectively on the gray-level co-occurrence matrix (GLCM) and the histogram of oriented gradients (HOG). The GLCM uses spatial relationships between pixel gray levels in the image to describe texture information; the texture information of the image can be represented by calculating various statistical features (e.g., energy, contrast, homogeneity) from the GLCM. HOG captures edge and texture information in an image by computing the gradient of each pixel. Either of the two feature extraction methods may be selected as the auxiliary machine learning model for subsequent texture information feature recognition, according to the actual situation. The training process for both is as follows:
S11, data preparation: collect and prepare a labeled lung image dataset, ensuring that it contains multiple different types of lung texture information.
S12, feature extraction: extract texture features from regions of the lung images with GLCM or HOG.
S13, model training: train the selected model with the labeled lung image dataset, taking the extracted texture features as input and the corresponding class labels as output.
S14, model evaluation: evaluate the trained model using methods such as cross-validation, assessing performance indicators such as accuracy, precision and recall.
S15, model tuning and verification: tune the model according to the evaluation results, verify the final trained model on an independent test set, evaluate its performance on unseen data, and ensure its generalization ability and robustness.
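The texture descriptors used by the two auxiliary models can be sketched with scikit-image as follows; the choice of distances, angles, HOG cell size and the 8-bit grayscale input are illustrative assumptions and not specified by the embodiment.

```python
# Sketch of GLCM and HOG texture descriptors for one segmented tile,
# assuming scikit-image is available and the tile is an 8-bit grayscale array.
import numpy as np
from skimage.feature import graycomatrix, graycoprops, hog

def glcm_descriptor(tile, distances=(1,), angles=(0, np.pi / 4, np.pi / 2, 3 * np.pi / 4)):
    # Gray-value relations between pixel pairs -> co-occurrence matrix
    glcm = graycomatrix(tile, distances=distances, angles=angles,
                        levels=256, symmetric=True, normed=True)
    # Statistical features: angular second moment (ASM), contrast, correlation
    feats = [graycoprops(glcm, prop).mean() for prop in ("ASM", "contrast", "correlation")]
    # Entropy is not provided by graycoprops, so compute it from the normalized matrix
    p = glcm[glcm > 0]
    feats.append(float(-(p * np.log2(p)).sum()))
    return np.asarray(feats)  # feature descriptor for this tile

def hog_descriptor(tile):
    # Gradient direction/magnitude histograms over cells of the tile
    return hog(tile, orientations=9, pixels_per_cell=(16, 16),
               cells_per_block=(2, 2), feature_vector=True)
```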
The image processing module is used for acquiring lung image data of a patient, including one or more of lung X-ray image data, lung CT image data, lung MRI image data or PET-CT image data. Because different formats and sizes limit the adaptability of the lung image data, the image processing module of this embodiment performs format conversion and size conversion on the acquired lung image data and generates one or more pieces of lung image data in a first format and a first size. Specifically, the first format and the first size can be set according to actual requirements; in this embodiment the first format is set to the NIfTI format, and the first size is set to 224×224.
The image processing module divides the lung image data in the set first format (NIfTI) and first size (224×224) into blocks to obtain a plurality of segmented images, where the first size is achieved by scaling, cropping, normalization and the like. Furthermore, the image processing module preprocesses the segmented images; the preprocessing at least includes denoising, contrast enhancement and edge detection, so as to obtain lung image data with a uniform format and low noise.
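A minimal sketch of the size normalization, preprocessing and blocking described above is given below; the 56×56 tile size, the filter choices and the library calls (scikit-image, SciPy) are illustrative assumptions rather than the claimed procedure.

```python
# Sketch: resize a 2-D lung slice to 224x224, preprocess it, and split it into
# fixed-size tiles keyed by grid position (a simple preset coding identifier).
import numpy as np
from scipy.ndimage import median_filter
from skimage.transform import resize
from skimage.exposure import equalize_adapthist
from skimage.feature import canny

def preprocess_and_tile(slice_2d, target=(224, 224), tile=56):
    img = resize(slice_2d.astype(float), target, anti_aliasing=True)  # first size
    img = median_filter(img, size=3)                                  # denoising
    img = (img - img.min()) / (img.ptp() + 1e-8)                      # scale to [0, 1]
    img = equalize_adapthist(img)                                     # contrast enhancement
    edges = canny(img)                                                # edge detection map
    tiles = {}
    for r in range(0, target[0], tile):
        for c in range(0, target[1], tile):
            # the grid position records where each segmented image came from
            tiles[(r // tile, c // tile)] = (img[r:r + tile, c:c + tile],
                                             edges[r:r + tile, c:c + tile])
    return tiles
```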
The main machine learning model performs feature recognition on the lesion features in the segmented images. Specifically, it performs edge recognition on the lesion features in a segmented image to obtain a first lesion feature edge, performs center recognition on the lesion features in the segmented image based on the first lesion feature edge to obtain a first lesion feature center, and combines the first lesion feature edge and the first lesion feature center to integrate and output the lesion features of the segmented image. The lesion features specifically include information on the location, size and type of abnormal lesions such as masses and nodules in the lung images.
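One way to read "first lesion feature edge" and "first lesion feature center" in code: from a per-tile lesion probability map (for example, the CNN output), take a contour as the edge and the centroid of the thresholded region as the center. The 0.5 threshold and the probability-map input are assumptions for illustration only.

```python
# Sketch: derive a lesion edge (contour) and lesion center (centroid) from a
# per-tile lesion probability map, assuming scikit-image and SciPy are available.
import numpy as np
from skimage.measure import find_contours
from scipy.ndimage import center_of_mass

def lesion_edge_and_center(prob_map, threshold=0.5):
    mask = prob_map > threshold
    if not mask.any():
        return None, None                               # no lesion detected in this tile
    contours = find_contours(prob_map, threshold)       # first lesion feature edge(s)
    center = center_of_mass(mask)                       # first lesion feature center (row, col)
    return contours, center
```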
The auxiliary machine learning model is used for carrying out feature recognition on lesion features and texture information in the segmented image; specifically, two different auxiliary machine learning models of the embodiment adopt different modes to perform feature recognition of texture information.
The GLCM-based auxiliary machine learning model receives a segmented image, calculates the gray-value relationships between pixel pairs in the segmented image, calculates the statistical features from the gray-level co-occurrence matrix to generate texture features, and represents the texture features as a feature descriptor that corresponds to the texture information of the segmented image to which it belongs. The HOG-based auxiliary machine learning model receives a segmented image, calculates the gradient direction and gradient magnitude of the pixels in the segmented image, and forms a histogram of oriented gradients that corresponds to the texture information of the segmented image. The texture information specifically includes the angular second moment, entropy, contrast and correlation, of which correlation carries the largest weight for the subsequent block fusion of the segmented images: correlation measures the degree of similarity of the image gray levels in the row or column direction, its value reflects the local gray-level correlation, and the larger the value, the stronger the correlation; block fusion is performed on this basis.
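Since GLCM correlation carries the largest weight in block fusion, a sketch of comparing the correlation of the facing border strips of two neighbouring tiles is given below; the strip width and the use of graycomatrix/graycoprops are assumptions of this sketch, not the patented procedure itself.

```python
# Sketch: compare GLCM correlation of the facing border strips of two tiles to
# judge whether they are likely neighbours, assuming 8-bit grayscale tiles.
import numpy as np
from skimage.feature import graycomatrix, graycoprops

def border_correlation(strip):
    glcm = graycomatrix(strip, distances=[1], angles=[0], levels=256,
                        symmetric=True, normed=True)
    return float(graycoprops(glcm, "correlation")[0, 0])

def edge_similarity(left_tile, right_tile, width=4):
    # right border of the left tile vs. left border of the right tile
    a = border_correlation(left_tile[:, -width:])
    b = border_correlation(right_tile[:, :width])
    return 1.0 - abs(a - b)   # closer correlation values -> higher similarity
```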
The image processing module also performs block fusion of the segmented images by combining the texture information in the segmented images, performs feature fusion of the lesion features in the block-fused lung image data, and extracts lesion regions based on the feature-fused lung image data. The detection result for the lesion features is evaluated through the synergy of block fusion and feature fusion: the more accurate the detection result, the higher the correctness of the block fusion and the more accurate the recognition of the edge regions of the segmented images; conversely, the worse the detection result, the lower the correctness of the block fusion and the worse the recognition of the edge regions of the segmented images. The image processing module presets coding identifiers for the regions corresponding to the segmented images before the lung image data is divided into blocks, obtains the first lesion feature edge and first lesion feature center of each segmented image, and generates first fusion information based on them; it obtains the texture information of each segmented image edge and generates second fusion information based on it; block fusion of the segmented images is then performed based on the first fusion information and on the second fusion information respectively. After block fusion, at least the following two operations are performed according to the block fusion result:
(1) If the overlap rates between the preset coding identifiers and the coding identifiers of the lung image data fused on the basis of the first fusion information and of the second fusion information are both greater than a threshold, the fusion information with the higher overlap rate is selected for block fusion.
(2) If the overlap rate between the preset coding identifiers and the coding identifier of the lung image data fused on the basis of either the first fusion information or the second fusion information is smaller than the threshold, a rechecking feature recognition signal is generated.
On the basis of the two operations above, the main machine learning model and the auxiliary machine learning model both receive the rechecking feature recognition signal; after receiving it, the main machine learning model performs feature recognition on the lesion features in each segmented image again, and the auxiliary machine learning model performs feature recognition on the lesion features and texture information in each segmented image again. This ensures the accuracy of the detection result: when the result does not meet expectations, the system automatically re-detects until it does. Of course, to avoid the system failing to meet expectations for a long time, the user can limit the number of re-detections by setting a threshold on the detection count and take the valid value obtained within that limit to assist medical staff in lung cancer detection.
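The overlap-rate check between the preset coding identifiers and the identifiers recovered after fusion can be sketched as follows; the data layout (dictionaries keyed by grid position) and the 0.9 threshold are illustrative assumptions.

```python
# Sketch of the double check in the block-fusion stage: compare the tile
# arrangement recovered from each kind of fusion information with the preset
# coding identifiers, keep the better one, or ask for a re-check.
def overlap_rate(recovered, preset):
    # fraction of tiles whose identifier after fusion matches the preset identifier
    hits = sum(1 for key, ident in recovered.items() if preset.get(key) == ident)
    return hits / len(preset)

def fuse_blocks(order_from_lesions, order_from_texture, preset_ids, threshold=0.9):
    r1 = overlap_rate(order_from_lesions, preset_ids)   # first fusion information
    r2 = overlap_rate(order_from_texture, preset_ids)   # second fusion information
    if r1 > threshold and r2 > threshold:
        # (1) both pass: keep the fusion with the higher overlap rate
        return ("lesion_fusion" if r1 >= r2 else "texture_fusion"), max(r1, r2)
    # (2) either one fails: emit the rechecking feature recognition signal
    return "recheck_feature_recognition", min(r1, r2)
```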
The result output module compares the extracted lesion regions with the standard features in a lung disease database and outputs lung cancer classification and diagnosis results. Specifically, feature fusion is performed based on the first lesion feature edge and first lesion feature center of each segmented image in the block-fused lung image data to generate a second lesion feature edge and a second lesion feature center, and the lesion region is extracted based on the second lesion feature edge and the second lesion feature center. The output lung cancer classification and diagnosis results may include: lesion detection results (the location, size, type and other information of abnormal lesions such as masses and nodules), lesion classification results (benign or malignant tumor, infectious lesion, etc.), tumor attributes (shape, density, edge features, etc.) and probabilities or confidence levels for each category.
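A sketch of the final comparison step follows: a descriptor of the fused lesion region is matched against database entries and the closest class is reported with a confidence score. The cosine-similarity measure and the database layout are assumptions for illustration, not the claimed comparison method.

```python
# Sketch: compare a fused lesion descriptor with standard features stored in a
# lung disease database and report the best-matching class with a confidence.
import numpy as np

def classify_lesion(lesion_descriptor, database):
    # database: assumed mapping of class name -> standard feature vector
    def cosine(a, b):
        return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-8))
    scores = {name: cosine(lesion_descriptor, ref) for name, ref in database.items()}
    best = max(scores, key=scores.get)
    return best, scores[best], scores   # class, confidence, per-class scores
```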
Example 2:
As shown in fig. 3, the difference from embodiment 1 is that a patient information extraction module is further included and is in interactive communication with the result output module. The patient information extraction module extracts lung disease related data other than lung image data, specifically including: the patient's medical history, including past disease history, family history, smoking history, occupational history, etc.; physiological parameters of the patient, such as height, weight, blood pressure, heart rate and respiratory rate; and laboratory test results, such as blood tests, urine tests and sputum tests.
The result output module classifies and diagnoses lung cancer by combining the comparison result between the lesion region and the standard features in the lung disease database with the lung disease related data other than lung image data.
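How the image-based result might be combined with the additional clinical data of embodiment 2 can be sketched as follows; the feature names and the logistic-regression combiner are illustrative assumptions rather than the claimed method.

```python
# Sketch: combine image-based lesion scores with non-image clinical data
# (history, physiological parameters, lab results) in a simple combiner model.
import numpy as np
from sklearn.linear_model import LogisticRegression

def build_combined_vector(lesion_scores, clinical):
    # clinical: assumed dict such as {"age": 63, "smoking_years": 20, "bmi": 27.1, ...}
    return np.concatenate([np.asarray(lesion_scores, dtype=float),
                           np.asarray(list(clinical.values()), dtype=float)])

def fit_combiner(image_score_rows, clinical_rows, labels):
    X = np.stack([build_combined_vector(s, c)
                  for s, c in zip(image_score_rows, clinical_rows)])
    return LogisticRegression(max_iter=1000).fit(X, labels)
```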
It is apparent that the above examples are given merely for clarity of illustration and are not a limitation of the embodiments. Other variations or modifications in different forms can be made by those of ordinary skill in the art on the basis of the above description; it is neither necessary nor possible to exhaust all embodiments here. Obvious variations or modifications derived therefrom remain within the scope of the invention.
Claims (10)
1. The lung cancer intelligent detection system based on image processing and machine learning is characterized by comprising an image processing module, a trained main machine learning model, a trained auxiliary machine learning model and a result output module;
The image processing module is used for acquiring lung image data of a patient, and performing block processing on the lung image data to acquire a plurality of block images;
the main machine learning model is used for carrying out feature recognition on lesion features in the segmented image;
the auxiliary machine learning model is used for carrying out feature recognition on lesion features and texture information in the segmented image;
The image processing module is also used for carrying out block fusion on the segmented images by combining with texture information in the segmented images, carrying out feature fusion on lesion features in the lung image data after the block fusion, and extracting a lesion region based on the lung image data after the feature fusion;
The result output module is used for comparing the extracted lesion area with standard features in a lung disease database and outputting lung cancer classification and diagnosis results.
2. The intelligent lung cancer detection system based on image processing and machine learning according to claim 1, wherein the image processing module is configured to acquire one or more of lung X-ray image data, lung CT image data, lung MRI image data, or PET-CT image data of a patient; and performing format conversion and size conversion on the acquired one or more lung image data to generate one or more lung image data with a first format and a first size, and performing independent blocking processing on each piece of lung image data with the first format and the first size according to a preset size to acquire a plurality of blocking images.
3. The intelligent lung cancer detection system based on image processing and machine learning according to claim 2, wherein the image processing module is further configured to perform preprocessing on the segmented image, the preprocessing at least including: denoising processing, contrast enhancement processing, and edge detection processing.
4. The intelligent lung cancer detection system based on image processing and machine learning according to claim 1, wherein the main machine learning model is used for carrying out edge recognition on lesion features in the segmented image to obtain a first lesion feature edge, carrying out center recognition on the lesion features in the segmented image based on the first lesion feature edge to obtain a first lesion feature center, and integrating and outputting the lesion features of the segmented image by combining the first lesion feature edge and the first lesion feature center.
5. The intelligent lung cancer detection system based on image processing and machine learning according to claim 4, wherein the auxiliary machine learning model is used for receiving the segmented image, calculating a gray value relation between pixel pairs in the segmented image, calculating each statistical feature according to a gray co-occurrence matrix to generate a texture feature, and representing the texture feature as a feature descriptor, wherein the feature descriptor corresponds to texture information of the segmented image to which the feature descriptor belongs.
6. The intelligent lung cancer detection system based on image processing and machine learning according to claim 4, wherein the auxiliary machine learning model is used for receiving the segmented image, calculating the gradient direction and the gradient magnitude of pixels in the segmented image, and forming a direction gradient histogram of the gradient direction and the gradient magnitude of the pixels in the segmented image, wherein the direction gradient histogram corresponds to texture information of the segmented image to which the direction gradient histogram belongs.
7. The intelligent lung cancer detection system based on image processing and machine learning according to claim 5 or 6, wherein the image processing module is further configured to preset a coding identifier according to an area corresponding to the segmented image before performing the segmentation processing on the lung image data, obtain a first lesion feature edge and a first lesion feature center of each segmented image, and generate first fusion information based on the first lesion feature edge and the first lesion feature center of each segmented image; obtaining texture information of each segmented image edge, and generating second fusion information based on the texture information of each segmented image edge; respectively carrying out block fusion on the block images based on the first fusion information and the second fusion information;
If the overlap rates between the preset coding identifiers and the coding identifiers of the lung image data fused on the basis of the first fusion information and of the second fusion information are both greater than a threshold, the fusion information with the higher overlap rate is selected for block fusion;
and if the overlap rate between the preset coding identifiers and the coding identifier of the lung image data fused on the basis of either the first fusion information or the second fusion information is smaller than the threshold, a rechecking feature recognition signal is generated.
8. The intelligent lung cancer detection system based on image processing and machine learning of claim 7, wherein the main machine learning model and the auxiliary machine learning model are each further configured to receive the rechecking feature recognition signal; the main machine learning model is used for carrying out feature recognition on lesion features in each segmented image after receiving the rechecking feature recognition signal; the auxiliary machine learning model is used for carrying out feature recognition on lesion features and texture information in each segmented image after receiving the rechecking feature recognition signal.
9. The intelligent lung cancer detection system based on image processing and machine learning according to claim 1, wherein the image processing module is configured to perform feature fusion based on a first lesion feature edge and a first lesion feature center in each segmented image in the block fused lung image data, generate a second lesion feature edge and a second lesion feature center, and extract a lesion region based on the second lesion feature edge and the second lesion feature center.
10. The intelligent lung cancer detection system based on image processing and machine learning according to claim 1, further comprising a patient information extraction module, wherein the patient information extraction module is used for extracting lung disease related data other than lung image data, and the result output module is used for classifying and diagnosing lung cancer by combining the comparison result between the lesion region and the standard features in the lung disease database with the lung disease related data other than lung image data.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202410600770.3A CN118469950A (en) | 2024-05-15 | 2024-05-15 | Lung cancer intelligent detection system based on image processing and machine learning |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202410600770.3A CN118469950A (en) | 2024-05-15 | 2024-05-15 | Lung cancer intelligent detection system based on image processing and machine learning |
Publications (1)
Publication Number | Publication Date |
---|---|
CN118469950A true CN118469950A (en) | 2024-08-09 |
Family
ID=92155336
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202410600770.3A Pending CN118469950A (en) | 2024-05-15 | 2024-05-15 | Lung cancer intelligent detection system based on image processing and machine learning |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN118469950A (en) |
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN118762023A (en) * | 2024-09-06 | 2024-10-11 | Yunnan Normal University | A lung cancer image processing method, system and storage medium based on artificial intelligence |
CN118762023B (en) * | 2024-09-06 | 2024-11-22 | Yunnan Normal University | Lung cancer image processing method, system and storage medium based on artificial intelligence |
Similar Documents
Publication | Title
---|---
CN108364006B | Medical image classification device based on multi-mode deep learning and construction method thereof
Yousef et al. | A holistic overview of deep learning approach in medical imaging
US11423540B2 | Segmentation of anatomical regions and lesions
US11488306B2 | Immediate workup
CN108464840B | Automatic detection method and system for breast lumps
Li et al. | Dilated-inception net: multi-scale feature aggregation for cardiac right ventricle segmentation
CN118262875A | Medical image diagnosis and contrast film reading method
CN101103924A | Breast cancer computer-aided diagnosis method and system based on mammography
CN114494157B | Automatic evaluation method for image quality of fetal four-chamber ultrasound sections
CN108549912A | A kind of medical image pulmonary nodule detection method based on machine learning
CN109934824A | Cervical spinal cord high signal detection method and system
CN118657756A | Intelligent decision-making support system and method for nursing care of patients with brain tumors
CN115345856A | Breast cancer chemotherapy curative effect prediction model based on image dynamic enhancement mode
CN118737442A | Postpartum breast health detection method integrating multimodal data
CN118469950A | Lung cancer intelligent detection system based on image processing and machine learning
CN102053963A | Retrieval method of chest X-ray image
CN112116559A | Digital pathological image intelligent analysis method based on deep learning
CN115409812A | CT image automatic classification method based on fusion time attention mechanism
CN114519705A | Ultrasonic standard data processing method and system for medical selection and identification
Adjei et al. | Brain tumor segmentation using SLIC superpixels and optimized thresholding algorithm
CN107590806A | A kind of detection method and system based on brain medical imaging
CN116468923A | Image strengthening method and device based on weighted resampling clustering instability
CN115375632A | Lung nodule intelligent detection system and method based on CenterNet model
US20250173872A1 | Information processing apparatus, information processing method, and storage medium
CN117893792B | Bladder tumor classification method based on MR signals and related device
Legal Events
Code | Title
---|---
PB01 | Publication
SE01 | Entry into force of request for substantive examination