CN112529849A - Automatic counting method and device for CT ribs - Google Patents
Automatic counting method and device for CT ribs
- Publication number: CN112529849A (application CN202011356450.6A)
- Authority: CN (China)
- Prior art keywords: rib; point cloud; point; layer; neural network
- Legal status: Granted (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Classifications
- G06T7/0012 — Biomedical image inspection
- G06M1/27 — Representing the result of count in the form of electric signals
- G06N3/045 — Combinations of networks
- G06N3/08 — Learning methods
- G06T7/12 — Edge-based segmentation
- G06T7/13 — Edge detection
- G06T2207/10081 — Computed x-ray tomography [CT]
- G06T2207/20081 — Training; Learning
- G06T2207/20084 — Artificial neural networks [ANN]
- G06T2207/30008 — Bone
- G06T2207/30242 — Counting objects in image
Abstract
The invention provides a method and a device for automatically counting CT ribs. The method comprises the following steps: segmenting the ribs in a CT scan to obtain a rib mask corresponding to the CT; traversing each slice of the mask, treating each slice as a binary image, and extracting the rib contours; converting each rib contour of each slice into a point of a point cloud; predicting rib numbers with a point-cloud graph neural network to obtain a rib number for each point; and inversely mapping the point-cloud rib numbers back to the rib contours to obtain the rib number of each contour in each slice, thereby completing the rib count.
Description
Technical Field
The invention relates to the field of computing, and in particular to a method and a device for automatically counting CT ribs.
Background
Diagnosing, describing and reporting fractures (and other diseases) in CT is one of the important tasks of radiologists. When a fracture (disease) is found, the lesion must be described by its anatomical position, for follow-up analysis or for reference by other departments. With the spread of thin-slice CT, doctors can find subtle fractures (diseases), but the increased number of slices makes localizing a lesion difficult, especially for ribs. A person usually has 12 pairs of ribs, each with its own number, counted from top to bottom as the 1st through 12th ribs. Since there is no reliable reference point, the physician must trace each lesion from the starting slice of the CT down to the lesion slice, and must repeat this for every lesion. The process is error-prone and seriously reduces reading efficiency, so an automatic rib counting method is of great importance to physician efficiency and to the quality of diagnosis and treatment.
There are currently two main approaches to automatic rib counting. The first is rule-based: ribs are segmented with a threshold or a deep-learning method to extract rib regions, the result is cleaned up with morphological regularization, connected components are computed, and each component is assigned a rib label from top to bottom by position. However, this approach ignores the morphology of the ribs and gives a wrong count when the CT scan does not cover the 1st pair of ribs; in the case of severe fractures or segmentation failures, connected components no longer correspond one-to-one to ribs, so it is difficult to design a reasonable rule that assigns a rib number to each region.
The second is voxel-segmentation-based: rib counting is treated as a segmentation problem, and each rib is predicted as an independent class by a deep-learning-based 2D or 3D segmentation model. By labeling a large amount of rib-counting data, the method learns from data and avoids hand-crafted rules. However, due to GPU-memory and computation constraints, the model can only take part of the CT volume as input, so its segmentation is inaccurate for lack of sufficient context. The segmentation network also has a large number of parameters and needs many CT scans with counting labels; considering the many types and locations of fractures and the large individual differences between people, it is difficult in practice to collect training data diverse enough to guarantee a stable model. Finally, the segmentation network operates on the raw data, so its computational cost is extremely high and actual deployment incurs a huge resource overhead.
Disclosure of Invention
The present invention is directed to a method and apparatus for automatic counting of CT ribs that overcomes, or at least partially solves, the above-mentioned problems.
To achieve this purpose, the technical solution of the invention is realized as follows:
one aspect of the present invention provides a method for automatically counting CT ribs, comprising: segmenting the rib bone in the CT to obtain a rib mask corresponding to the CT; traversing each layer of the mask, taking each layer of the mask as a binary image, and extracting a rib outline; converting each rib outline of each layer into point cloud; predicting the number of the rib by using a point cloud pattern neural network to obtain the number of the point cloud rib; and performing inverse mapping on the point cloud rib numbers, mapping the point cloud rib numbers back to the rib outlines to obtain the rib numbers to which each rib outline of each layer belongs, and finishing rib counting.
Wherein converting the contour of each rib of each slice into a point cloud comprises: converting each contour c into a point using the formula p = [f(c), z], where p is the three-dimensional coordinate of the point, z is the slice index, c is the contour, and f is a mapping from the contour to a key point.
Wherein the mapping from the contour to the key point includes, but is not limited to: the center of gravity, the centroid, and the center point of the circumscribed rectangular box.
Wherein predicting rib numbers with the point-cloud graph neural network to obtain the point-cloud rib numbers comprises: for a point cloud containing N points, taking the point coordinates as input; each point passes through a multilayer-perceptron neural network model to obtain a feature-space representation of the point; the feature vectors of the N points are then pooled into a global feature representation of the point cloud; the global feature is concatenated with each local feature, and the code prediction for each point is obtained through several further multilayer-perceptron models.
Wherein the method further comprises: training the point-cloud graph neural network, which comprises training with labeled data; during training, a gradient-descent method computes the loss from the predicted and ground-truth results and optimizes the model parameters.
Wherein part of the labeled data is obtained by editing the coordinates of real point-cloud data; editing the point cloud includes, but is not limited to: simulating fractures, flipping, simulating the scanning position, simulating cervical ribs, and simulating lumbar ribs.
Another aspect of the present invention provides an automatic counting device for CT ribs, comprising: a segmentation module, configured to segment the ribs in a CT scan to obtain the rib mask corresponding to the CT; an extraction module, configured to traverse each slice of the mask, treat each slice as a binary image, and extract the rib contours; a conversion module, configured to convert each rib contour of each slice into a point of a point cloud; a prediction module, configured to predict rib numbers with a point-cloud graph neural network to obtain the point-cloud rib numbers; and an inverse mapping module, configured to inversely map the point-cloud rib numbers back to the rib contours to obtain the rib number of each contour in each slice, completing the rib count.
The conversion module converts the contour of each rib of each slice into a point cloud as follows: the conversion module is specifically configured to convert each contour c into a point using the formula p = [f(c), z], where p is the three-dimensional coordinate of the point, z is the slice index, c is the contour, and f is a mapping from the contour to a key point.
Wherein the mapping from the contour to the key point includes, but is not limited to: the center of gravity, the centroid, and the center point of the circumscribed rectangular box.
The prediction module predicts the rib numbers with the point-cloud graph neural network as follows: the prediction module is specifically configured to take the coordinates of the N points of the point cloud as input; each point passes through a multilayer-perceptron neural network model to obtain a feature-space representation of the point; the feature vectors of the N points are then pooled into a global feature representation of the point cloud; the global feature is concatenated with each local feature, and the code prediction for each point is obtained through several further multilayer-perceptron models.
Wherein the device further comprises: a training module, configured to train the point-cloud graph neural network as follows: the training module is specifically configured to train with labeled data; during training, a gradient-descent method computes the loss from the predicted and ground-truth results and optimizes the model parameters.
Wherein part of the labeled data is obtained by editing the coordinates of real point-cloud data; editing the point cloud includes, but is not limited to: simulating fractures, flipping, simulating the scanning position, simulating cervical ribs, and simulating lumbar ribs.
Therefore, in the method and device for automatically counting CT ribs provided by the invention, a segmentation model first segments the rib contours; key points are then extracted from the contours and converted into a point cloud, and counting is performed with a point-cloud segmentation method based on a learned counting model. The learning-based method learns from labeled data and needs no hand-crafted rules, which reduces development difficulty. Only a small number of points must be processed, so efficiency is high. Because the rib contours are converted into a point cloud, fractures and some congenital malformations can be simulated by point-wise operations, reducing the dependence on training-data volume. Finally, once converted to a point cloud, all contours can be fed to the neural network at the same time, so the relationships between different ribs are easier to model and higher accuracy is achieved.
Drawings
To illustrate the technical solutions of the embodiments of the invention more clearly, the drawings used in their description are briefly introduced below. The drawings described below show only some embodiments of the invention; those skilled in the art can derive other drawings from them without creative effort.
FIG. 1 is a flowchart of an automatic counting method for CT ribs according to an embodiment of the present invention;
FIG. 2 is a schematic view of the rib counting model based on a point-cloud graph neural network in the CT rib automatic counting method according to the embodiment of the present invention;
fig. 3 is a schematic structural diagram of an automatic CT rib counting device according to an embodiment of the present invention.
Detailed Description
Exemplary embodiments of the present disclosure will be described in more detail below with reference to the accompanying drawings. While exemplary embodiments of the present disclosure are shown in the drawings, it should be understood that the present disclosure may be embodied in various forms and should not be limited to the embodiments set forth herein. Rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the scope of the disclosure to those skilled in the art.
The invention provides a new rib-counting method that is learnable, has low dependence on the amount of labeled data, and is accurate and efficient. Its input is a CT scan, and its output is the contour of each rib in the CT together with the corresponding rib number. Specifically, the invention is realized by the following method:
fig. 1 shows a flowchart of an automatic CT rib counting method according to an embodiment of the present invention, and referring to fig. 1, the automatic CT rib counting method according to the embodiment of the present invention includes:
s1, the rib bone in CT is segmented to obtain a mask of the rib corresponding to CT.
Specifically, the ribs in the CT are first segmented with a conventional or deep-learning-based method to obtain the rib mask M corresponding to the CT, where 1 denotes rib voxels and 0 denotes everything else.
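Assuming an input volume in Hounsfield units, the segmentation step can be sketched with a simple threshold; the patent permits either a conventional or a deep-learning segmenter, and the threshold value and function name below are illustrative assumptions, not part of the invention:

```python
import numpy as np

def rib_mask_from_ct(volume_hu, bone_threshold=200):
    # Minimal thresholding stand-in for the segmentation step:
    # voxels at or above the (assumed) bone threshold become 1, the rest 0.
    return (volume_hu >= bone_threshold).astype(np.uint8)

# Tiny synthetic volume: air everywhere except one bone-density voxel.
vol = np.full((2, 4, 4), -1000.0)
vol[1, 2, 2] = 400.0
m = rib_mask_from_ct(vol)
```

In practice this slot would be filled by a trained segmentation network; the rest of the pipeline only needs the binary mask.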
S2, traverse each slice of the mask, treat each slice as a binary image, and extract the rib contours.
This step extracts the rib contours. Specifically, each slice index z of the mask is traversed, and each slice M_{z,:,:} is treated as a binary image from which the outer rib contours C_z = {c_1, ..., c_{M_z}} are extracted, where M_z is the number of independent contours in the slice and each contour c_i is the set of point coordinates of a series of points: c_i = {(x_1, y_1), ..., (x_k, y_k)}.
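A minimal sketch of the per-slice extraction; a real implementation would typically use a contour tracer such as OpenCV's findContours, so the boundary-pixel test below is a simplified stand-in, not the patent's method:

```python
def slice_contour_points(slice_mask):
    # A pixel is treated as a contour pixel if it is foreground and has
    # at least one background 4-neighbour (or lies on the image border).
    h, w = len(slice_mask), len(slice_mask[0])
    pts = []
    for y in range(h):
        for x in range(w):
            if not slice_mask[y][x]:
                continue
            nb = [(y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)]
            if any(not (0 <= j < h and 0 <= i < w) or not slice_mask[j][i]
                   for j, i in nb):
                pts.append((x, y))  # (x, y) point coordinates of the contour
    return pts

# 3x4 binary slice with a 2x2 foreground block: all four pixels are boundary.
m = [[0, 0, 0, 0],
     [0, 1, 1, 0],
     [0, 1, 1, 0]]
pts = slice_contour_points(m)
```

For thin structures like rib cross-sections, almost every foreground pixel is a boundary pixel, so this approximation is close to the true outer contour.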
and S3, converting each rib outline of each layer into a point cloud.
As an optional implementation of the embodiment of the present invention, converting the contour of each rib of each slice into a point cloud comprises: converting each contour c into a point using the formula p = [f(c), z], where p is the three-dimensional coordinate of the point, z is the slice index, c is the contour, and f is a mapping from the contour to a key point. The mapping from the contour to the key point includes, but is not limited to: the center of gravity, the centroid, and the center point of the circumscribed rectangular box.
This step converts the rib contours into a point cloud. Specifically, each rib contour c of each slice is mapped to a three-dimensional coordinate p computed from the slice index z and the contour, namely the concatenation of the contour key point and the slice coordinate z: p = [f(c), z].
There are many ways to compute the mapping f from a contour to a key point, such as the center of gravity, the centroid, or the center point of the circumscribed rectangular box. The invention preferably takes the center of the circumscribed rectangular box as the key-point mapping: f(c) = [(x_min + x_max) / 2, (y_min + y_max) / 2], where x_min, x_max, y_min and y_max are the minimum and maximum x and y coordinates over the points of the contour.
thus each contour can be represented by a corresponding point, and all rib contour points in the CT constitute a point cloud.
S4, predict the rib numbers with the point-cloud graph neural network to obtain the point-cloud rib numbers.
In this step a point-cloud graph neural network predicts the rib numbers. First, rib numbers must be mapped one-to-one to codes. There are many possible encodings: for example, the 24 ribs can be encoded as 24-dimensional one-hot codes, where the position of the element with value 1 gives the rib number, the left 1st-12th ribs using numbers 1-12 and the right 1st-12th ribs using numbers 13-24; a 13-dimensional code can also be used, where the first 12 dimensions give the rib number and the last dimension gives the side. The point-cloud rib-number prediction model of the invention is independent of the specific encoding, and any encoding falls within the protection scope of the invention.
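The 24-dimensional one-hot scheme described above can be sketched as follows (the left-ribs-first index layout is an assumption consistent with the 1-12 / 13-24 numbering):

```python
def encode_rib(number, side):
    # 24-dim one-hot code: left ribs 1-12 -> indices 0-11,
    # right ribs 1-12 -> indices 12-23 (assumed layout).
    assert 1 <= number <= 12 and side in ("left", "right")
    idx = (number - 1) if side == "left" else (number + 11)
    code = [0] * 24
    code[idx] = 1
    return code

def decode_rib(code):
    # Inverse of encode_rib: recover (number, side) from the hot index.
    idx = code.index(1)
    return (idx + 1, "left") if idx < 12 else (idx - 11, "right")
```

The 13-dimensional alternative would replace only these two functions; as the text notes, the prediction model itself is agnostic to the choice.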
As an optional implementation of the embodiment of the present invention, predicting the rib numbers with the point-cloud graph neural network comprises: for a point cloud containing N points, taking the point coordinates as input; each point passes through a multilayer-perceptron neural network model to obtain a feature-space representation of the point; the feature vectors of the N points are then pooled into a global feature representation of the point cloud; the global feature is concatenated with each local feature, and the code prediction for each point is obtained through several further multilayer-perceptron models.
For a point cloud containing N points, the invention uses a point-cloud-based graph neural network to predict the rib codes. The encoding prediction process is described below using the specific network of fig. 2 as an example; since neural networks have many implementation forms, changing the computing units or adding and removing layers still falls within the protection scope of the invention.
Referring to fig. 2, the network takes the point coordinates as input; each point passes through a multilayer-perceptron neural network model (which may be stacked several times) to obtain a feature-space representation of the point; the feature vectors of the N points are then pooled into a global feature representation of the point cloud; the global feature is concatenated with each local feature, and the code prediction for each point is obtained through several further multilayer-perceptron models. Decoding the code gives the rib number of each point.
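A shape-level sketch of this flow, with random weights and illustrative dimensions standing in for the trained network (feat_dim and the single-layer MLPs are assumptions for brevity, not the patent's architecture):

```python
import numpy as np

def pointnet_like_forward(points, rng=np.random.default_rng(0),
                          feat_dim=8, n_classes=24):
    # points: (N, 3) array of point coordinates.
    n = points.shape[0]
    w1 = rng.standard_normal((3, feat_dim))
    # Shared per-point MLP (one ReLU layer here) -> local features (N, feat_dim).
    local = np.maximum(points @ w1, 0.0)
    # Pooling over the N points -> one global feature vector (feat_dim,).
    glob = local.max(axis=0)
    # Concatenate global feature with each local feature -> (N, 2*feat_dim).
    both = np.concatenate([local, np.tile(glob, (n, 1))], axis=1)
    w2 = rng.standard_normal((2 * feat_dim, n_classes))
    # Per-point code prediction, one score vector per point -> (N, 24).
    return both @ w2

scores = pointnet_like_forward(np.zeros((5, 3)))
```

The essential property shown is that every point's prediction sees both its own local feature and the pooled context of the whole cloud, which is what lets the network reason about relationships between ribs.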
As an optional implementation of the embodiment of the present invention, the method for automatically counting CT ribs further comprises training the point-cloud graph neural network: training with labeled data, where a gradient-descent method computes the loss from the predicted and ground-truth results and optimizes the model parameters. Part of the labeled data is obtained by editing the coordinates of real point-cloud data; editing the point cloud includes, but is not limited to: simulating fractures, flipping, simulating the scanning position, simulating cervical ribs, and simulating lumbar ribs.
Specifically, the point-cloud graph neural network is trained with labeled data, and the trained parameters are then used for rib-code prediction. Training uses gradient descent: the loss is computed from the model's predictions and the ground-truth results, and the model parameters are optimized.
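The loss-then-update cycle can be illustrated with a toy gradient-descent loop; a one-dimensional logistic classifier on the z coordinate stands in for the full graph network here, purely to show the mechanics (all names and values are illustrative):

```python
import math

def sgd_train(zs, labels, steps=300, lr=0.5):
    # Full-batch gradient descent on a 1-D logistic model p = sigmoid(w*z + b):
    # compute the cross-entropy gradient from predictions vs. ground truth,
    # then step the parameters against it.
    w, b = 0.0, 0.0
    n = len(zs)
    for _ in range(steps):
        gw = gb = 0.0
        for z, y in zip(zs, labels):
            p = 1.0 / (1.0 + math.exp(-(w * z + b)))  # predicted probability
            gw += (p - y) * z                          # dLoss/dw
            gb += (p - y)                              # dLoss/db
        w -= lr * gw / n                               # gradient-descent update
        b -= lr * gb / n
    return w, b

# Separable toy data: low z -> class 0, high z -> class 1.
w, b = sgd_train([0.0, 1.0, 2.0, 3.0], [0, 0, 1, 1])
```

The real training replaces the scalar model with the point-cloud network and the toy loss with a per-point classification loss over the rib codes, but the loop structure is the same.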
To improve the generalization ability of the point-cloud graph neural network and save labeling cost, the invention proposes editing the point-cloud coordinates of actually extracted data to simulate abnormal conditions and expand the training data, as follows:
1. Simulating fractures: randomly select adjacent points on part of a rib and change their positions by random rotation and displacement;
2. Flipping, implemented as: a. compute the mean of the x coordinates of all points; b. subtract the mean from each point's x coordinate; c. negate each point's x coordinate; d. add the mean back to each point's x coordinate; e. swap the left and right sides of the category codes.
3. Simulating the scanning position: randomly rotate the whole point cloud about the three coordinate axes;
4. Simulating cervical ribs: copy the points of part of the first rib and place the copy above the first rib by subtracting a random value from the z coordinate of the copied points;
5. Simulating lumbar ribs: copy the points of part of the 12th rib and place the copy below the 12th rib by adding a random value to the z coordinate of the copied points.
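The flip augmentation (steps a-e of item 2) can be sketched as follows; reflecting each x about the mean x is equivalent to the subtract-negate-add sequence, and the side swap assumes the 24-dim one-hot layout with left ribs at indices 0-11:

```python
def flip_lr(points, codes):
    # points: list of (x, y, z); codes: list of 24-dim one-hot rib codes.
    mean_x = sum(p[0] for p in points) / len(points)          # a. mean of x
    # b-d. subtract mean, negate, add mean back == reflect about mean_x.
    flipped = [(2.0 * mean_x - x, y, z) for x, y, z in points]
    # e. swap the left (0-11) and right (12-23) halves of each code.
    swapped = [code[12:] + code[:12] for code in codes]
    return flipped, swapped
```

Because the augmentation acts only on coordinates and labels, it can be applied on the fly during training without touching the original CT data.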
S5, inversely map the point-cloud rib numbers back to the rib contours to obtain the rib number of each contour in each slice, completing the rib count.
This step maps the rib numbers of the point cloud back to the rib contours. Specifically, each point in the point cloud corresponds one-to-one to a slice and a contour, so assigning each point's rib number to its contour yields the rib number of every contour in every slice, completing the rib count.
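The inverse mapping is pure bookkeeping; assuming a list recording which (slice, contour) pair produced each point of the cloud (a hypothetical representation), it can be sketched as:

```python
def assign_numbers(contour_index, predictions):
    # contour_index[i] = (z, c): point i of the cloud came from contour c
    # of slice z, so that contour receives the point's predicted rib number.
    result = {}
    for i, (z, c) in enumerate(contour_index):
        result[(z, c)] = predictions[i]
    return result
```

The one-to-one correspondence established in step S3 is what makes this step a trivial dictionary fill rather than another inference problem.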
Therefore, the CT rib automatic counting method of the embodiment first segments the ribs, extracts the rib contours from the segmentation, converts the contours into a point cloud, and performs inference and prediction with a graph neural network. Because the learning-based method learns automatically from labeled data, no post-processing rules need to be designed by hand, which improves the stability of rib counting, raises development efficiency, and reduces maintenance cost.
In addition, because the graph neural network learns from rib segmentation results, it can still count correctly when the segmentation is abnormal, making the system more stable in actual operation.
Because the point cloud consists only of coordinates, coordinate editing can simulate cases that are rare in practice, such as fractures, abnormal scanning positions and congenital malformations, improving the generalization ability of the model and reducing the cost of data collection and labeling. The counting model uses only operations such as multilayer perceptrons, and the point cloud contains few points (about 6000 on average), so it runs extremely efficiently and reduces deployment cost.
Fig. 3 shows the structure of the automatic CT rib counting device according to an embodiment of the invention, which applies the method above. Only the structure is briefly described below; for details, refer to the description of the method. Referring to fig. 3, the device comprises:
the segmentation module is used for segmenting the rib in the CT to obtain a mask of the rib corresponding to the CT;
the extraction module is used for traversing each layer of the mask, taking each layer of the mask as a binary image and extracting the rib outline;
the conversion module is used for converting each rib outline of each layer into a point cloud;
the prediction module is used for predicting the number of the rib by adopting a point cloud graph neural network to obtain the number of the point cloud rib;
and the inverse mapping module is used for performing inverse mapping on the point cloud rib numbers, mapping the point cloud rib numbers back to the rib outlines, obtaining the rib numbers to which each rib outline of each layer belongs, and finishing rib counting.
As an optional implementation of the embodiment of the present invention, the conversion module converts the contour of each rib of each slice into a point cloud as follows: the conversion module is specifically configured to convert each contour c into a point using the formula p = [f(c), z], where p is the three-dimensional coordinate of the point, z is the slice index, c is the contour, and f is a mapping from the contour to a key point.
As an alternative to the embodiments of the present invention, the mapping from contours to keypoints includes, but is not limited to: the center of gravity, the centroid and the center point of the circumscribed rectangle frame.
As an optional implementation manner of the embodiment of the present invention, the prediction module predicts rib numbers with the point cloud graph neural network in the following manner to obtain the point cloud rib numbers: the prediction module is specifically configured to, for a point cloud containing N points, take the point coordinates as input; pass each point through a multilayer perceptron neural network model to obtain its feature-space representation; pool the feature vectors of the N points to form a global feature representation of the point cloud; concatenate the global feature of the point cloud with each local feature; and obtain the code prediction of each point through several multilayer perceptron models.
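The described flow (shared per-point MLP, pooling into a global feature, concatenation, per-point classification head) matches the well-known PointNet-style segmentation pattern. A toy numpy forward pass illustrating only the data flow; the random weights are stand-ins for a trained model, and the dimensions are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

def mlp(x, w, b):
    # shared per-point multilayer-perceptron layer with ReLU
    return np.maximum(x @ w + b, 0.0)

N, D, H, K = 6000, 3, 64, 13   # points, coord dims, hidden width, classes
points = rng.normal(size=(N, D))

# per-point feature extraction (same weights applied to every point)
w1, b1 = rng.normal(size=(D, H)) * 0.1, np.zeros(H)
local = mlp(points, w1, b1)                       # (N, H) local features

# max pooling over all points -> one global feature for the whole cloud
global_feat = local.max(axis=0)                   # (H,)

# concatenate the global feature to every local feature, then classify
combined = np.concatenate([local, np.tile(global_feat, (N, 1))], axis=1)
w2, b2 = rng.normal(size=(2 * H, K)) * 0.1, np.zeros(K)
logits = combined @ w2 + b2                       # (N, K) per-point scores
pred = logits.argmax(axis=1)                      # predicted number per point
```

Because the pooling is permutation-invariant, the prediction does not depend on the order in which the contour points were generated.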
As an optional implementation manner of the embodiment of the present invention, the automatic CT rib counting apparatus provided in the embodiment of the present invention further includes: a training module, used for training the point cloud graph neural network. The training module trains the point cloud graph neural network in the following manner: the training module is specifically configured to train with the annotated data, wherein the training process uses a gradient descent method to calculate the loss from the predicted result and the real result and to optimize the model parameters.
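One gradient-descent update with a cross-entropy loss on per-point logits can be sketched as follows; for brevity a linear classifier stands in for the full network body, and all names and hyperparameters are illustrative assumptions rather than the patent's implementation:

```python
import numpy as np

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)   # numerical stability
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def sgd_step(W, b, feats, labels, lr=0.1):
    """One gradient-descent step on per-point class logits with
    cross-entropy loss; W and b are updated in place."""
    probs = softmax(feats @ W + b)                       # (N, K)
    N = feats.shape[0]
    loss = -np.log(probs[np.arange(N), labels] + 1e-12).mean()
    onehot = np.eye(W.shape[1])[labels]
    grad = (probs - onehot) / N                          # dLoss/dLogits
    W -= lr * feats.T @ grad
    b -= lr * grad.sum(axis=0)
    return loss
```

Iterating this step on annotated point clouds drives the loss between the predicted and real rib numbers downward, which is the optimization the training module performs.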
As an optional implementation manner of the embodiment of the present invention, part of the annotation data is obtained by editing the coordinates of real point cloud data; editing the point cloud includes, but is not limited to: simulating fractures, flipping, simulating abnormal scan positions, simulating cervical ribs and simulating lumbar ribs.
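A few of the listed coordinate edits can be sketched directly in numpy. The patent does not specify the exact editing rules, so the parameters below (drop fraction, tilt angle) and the choice of a z-axis rotation for "abnormal scan position" are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(42)

def flip_lr(points):
    """Mirror the cloud left-right (negate x) to double the data."""
    out = points.copy()
    out[:, 0] = -out[:, 0]
    return out

def simulate_fracture(points, drop_frac=0.05):
    """Delete a small contiguous-in-index run of points, crudely
    imitating a rib interrupted by a fracture."""
    n = len(points)
    k = max(1, int(n * drop_frac))
    start = rng.integers(0, n - k)
    return np.delete(points, slice(start, start + k), axis=0)

def simulate_tilt(points, max_deg=10.0):
    """Rotate about the z axis to imitate an abnormal scan position."""
    a = np.deg2rad(rng.uniform(-max_deg, max_deg))
    R = np.array([[np.cos(a), -np.sin(a), 0.0],
                  [np.sin(a),  np.cos(a), 0.0],
                  [0.0,        0.0,       1.0]])
    return points @ R.T
```

Because the edits act on bare coordinates, each transformed cloud inherits its labels for free, which is what keeps the annotation cost low.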
Therefore, the automatic CT rib counting device provided by the embodiment of the present invention first segments the ribs, extracts the rib contours from the segmentation result, converts the contours into a point cloud, and uses a graph neural network for inference and prediction. This learning-based method learns automatically from the annotated data, so post-processing rules need not be designed manually, which improves the stability of rib counting, increases development efficiency, and reduces maintenance cost.
In addition, because the graph neural network learns from rib segmentation results, it can still count correctly when the segmentation is abnormal, making the system more stable in actual operation.
Because the point cloud consists only of coordinates, coordinate editing can simulate conditions that are rare in practice, such as fractures, abnormal scan positions and congenital deformities, which improves the generalization ability of the model and reduces the cost of data collection and annotation. The counting model uses only lightweight operations such as multilayer perceptrons, and the point cloud contains few points (about 6000 on average), so the counting model runs with very high efficiency and low deployment cost.
As will be appreciated by one skilled in the art, embodiments of the present application may be provided as a method, system, or computer program product. Accordingly, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present application may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The present application is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the application. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
In a typical configuration, a computing device includes one or more processors (CPUs), input/output interfaces, network interfaces, and memory.
The memory may include forms of volatile memory in a computer readable medium, such as Random Access Memory (RAM), and/or non-volatile memory, such as Read Only Memory (ROM) or flash memory (flash RAM). The memory is an example of a computer-readable medium.
Computer-readable media, including both permanent and non-permanent, removable and non-removable media, may implement information storage by any method or technology. The information may be computer readable instructions, data structures, modules of a program, or other data. Examples of computer storage media include, but are not limited to, phase change memory (PRAM), Static Random Access Memory (SRAM), Dynamic Random Access Memory (DRAM), other types of Random Access Memory (RAM), Read Only Memory (ROM), Electrically Erasable Programmable Read Only Memory (EEPROM), flash memory or other memory technology, compact disc read only memory (CD-ROM), Digital Versatile Discs (DVD) or other optical storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other non-transmission medium that can be used to store information that can be accessed by a computing device. As defined herein, a computer readable medium does not include a transitory computer readable medium such as a modulated data signal and a carrier wave.
The above are merely examples of the present application and are not intended to limit the present application. Various modifications and changes may occur to those skilled in the art. Any modification, equivalent replacement, improvement, etc. made within the spirit and principle of the present application should be included in the scope of the claims of the present application.
Claims (12)
1. An automatic counting method for CT ribs is characterized by comprising the following steps:
segmenting a rib in CT to obtain a rib mask corresponding to the CT;
traversing each layer of the mask, taking each layer of the mask as a binary image, and extracting a rib outline;
converting each rib outline of each layer into point cloud;
predicting rib numbers by using a point cloud graph neural network to obtain point cloud rib numbers;
and performing inverse mapping on the point cloud rib numbers, mapping the point cloud rib numbers back to the rib outlines to obtain the rib numbers to which each rib outline of each layer belongs, and finishing rib counting.
2. The method of claim 1, wherein the converting the outline of each rib of each slice to a point cloud comprises:
converting the outline of each rib of each layer into a point cloud by using the formula p = (f(c), z), wherein p is a three-dimensional coordinate, z is the layer plane, c is the contour, and f is a mapping from the contour to a key point.
3. The method of claim 2, wherein the mapping from contours to keypoints comprises, but is not limited to: the center of gravity, the centroid and the center point of the circumscribed rectangle frame.
4. The method of claim 1, wherein predicting the rib number using the point cloud neural network comprises:
and for the point cloud containing N points, taking the point coordinates as input; passing each point through a multilayer perceptron neural network model to obtain its feature-space representation; pooling the feature vectors of the N points to form a global feature representation of the point cloud; concatenating the global feature of the point cloud with each local feature; and obtaining the code prediction of each point through several multilayer perceptron models.
5. The method of claim 1 or 4, further comprising: training the point cloud graph neural network;
the training the point cloud graph neural network comprises: and training by using the labeled data, wherein a gradient descent method is adopted in the training process to calculate loss according to a predicted result and a real result, and model parameters are optimized.
6. The method according to claim 5, wherein part of the data in the annotation data is obtained by editing real point cloud data coordinates; wherein the editing the point cloud includes but is not limited to: simulating fracture, overturning, simulating shooting position, simulating cervical rib and simulating lumbar rib.
7. An automatic counting device for CT ribs is characterized by comprising:
the segmentation module is used for segmenting the ribs in a CT image to obtain a rib mask corresponding to the CT image;
the extraction module is used for traversing each layer of the mask, taking each layer of the mask as a binary image and extracting the rib outline;
the conversion module is used for converting each rib outline of each layer into a point cloud;
the prediction module is used for predicting the number of the rib by adopting a point cloud graph neural network to obtain the number of the point cloud rib;
and the inverse mapping module is used for performing inverse mapping on the point cloud rib number and mapping the point cloud rib number back to the rib outline to obtain the rib number to which each rib outline of each layer belongs so as to finish rib counting.
8. The apparatus of claim 7, wherein the conversion module converts the contour of each rib of each layer into a point cloud by using the formula p = (f(c), z), wherein p is a three-dimensional coordinate, z is the layer plane, c is the contour, and f is a mapping from the contour to a key point.
9. The apparatus of claim 8, wherein the mapping from contours to keypoints comprises, but is not limited to: the center of gravity, the centroid and the center point of the circumscribed rectangle frame.
10. The apparatus of claim 7, wherein the prediction module predicts the rib number using a point cloud neural network to obtain a point cloud rib number by:
the prediction module is specifically configured to, for a point cloud containing N points, take the point coordinates as input; pass each point through a multilayer perceptron neural network model to obtain its feature-space representation; pool the feature vectors of the N points to form a global feature representation of the point cloud; concatenate the global feature of the point cloud with each local feature; and obtain the code prediction of each point through several multilayer perceptron models.
11. The apparatus of claim 7 or 10, further comprising: a training module, configured to train the point cloud graph neural network;
the training module trains the point cloud graph neural network by: the training module is specifically configured to train with the annotated data, wherein the training process uses a gradient descent method to calculate the loss from the predicted result and the real result and to optimize the model parameters.
12. The apparatus according to claim 11, wherein part of the data in the annotation data is obtained by editing real point cloud data coordinates; wherein the editing the point cloud includes but is not limited to: simulating fracture, overturning, simulating shooting position, simulating cervical rib and simulating lumbar rib.
Priority Applications (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202011356450.6A CN112529849B (en) | 2020-11-27 | 2020-11-27 | CT rib automatic counting method and device |
PCT/CN2021/131649 WO2022111383A1 (en) | 2020-11-27 | 2021-11-19 | Ct-based rib automatic counting method and device |
Publications (2)
Publication Number | Publication Date |
---|---|
CN112529849A true CN112529849A (en) | 2021-03-19 |
CN112529849B CN112529849B (en) | 2024-01-19 |
Family
ID=74994054
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202011356450.6A Active CN112529849B (en) | 2020-11-27 | 2020-11-27 | CT rib automatic counting method and device |
Country Status (2)
Country | Link |
---|---|
CN (1) | CN112529849B (en) |
WO (1) | WO2022111383A1 (en) |
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN114049358A (en) * | 2021-11-17 | 2022-02-15 | 苏州体素信息科技有限公司 | Method and system for rib case segmentation, counting and positioning |
WO2022111383A1 (en) * | 2020-11-27 | 2022-06-02 | 北京深睿博联科技有限责任公司 | Ct-based rib automatic counting method and device |
Citations (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101452577A (en) * | 2008-11-26 | 2009-06-10 | 沈阳东软医疗系统有限公司 | Rib auto-demarcating method and device |
US20190066294A1 (en) * | 2017-08-31 | 2019-02-28 | Shenzhen United Imaging Healthcare Co., Ltd. | System and method for image segmentation |
CN110555860A (en) * | 2018-06-04 | 2019-12-10 | 青岛海信医疗设备股份有限公司 | Method, electronic device and storage medium for marking rib region in medical image |
CN110866905A (en) * | 2019-11-12 | 2020-03-06 | 苏州大学 | Rib identification and marking method |
CN110992376A (en) * | 2019-11-28 | 2020-04-10 | 北京推想科技有限公司 | CT image-based rib segmentation method, device, medium and electronic equipment |
CN111091605A (en) * | 2020-03-19 | 2020-05-01 | 南京安科医疗科技有限公司 | Rib visualization method, identification method and computer-readable storage medium |
CN111915620A (en) * | 2020-06-19 | 2020-11-10 | 杭州深睿博联科技有限公司 | CT rib segmentation method and device |
Family Cites Families (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US10867436B2 (en) * | 2019-04-18 | 2020-12-15 | Zebra Medical Vision Ltd. | Systems and methods for reconstruction of 3D anatomical images from 2D anatomical images |
CN112529849B (en) * | 2020-11-27 | 2024-01-19 | 北京深睿博联科技有限责任公司 | CT rib automatic counting method and device |
Non-Patent Citations (2)
Title |
---|
SOWMYA RAMAKRISHNAN ET AL.: "Automatic Three-Dimensional Rib Centerline Extraction from CT Scans For Enhanced Visualization and Anatomical Context", 《MEDICAL IMAGING 2011: IMAGE PROCESSING》, vol. 7962, pages 1 - 79622 * |
SHI ZHIRUI, et al.: "Rib localization and tracking based on CT images", 《南阳理工学院学报》 (Journal of Nanyang Institute of Technology), vol. 10, no. 4, pages 56 - 61 *
Also Published As
Publication number | Publication date |
---|---|
WO2022111383A1 (en) | 2022-06-02 |
CN112529849B (en) | 2024-01-19 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US10748036B2 (en) | Training a neural network to predict superpixels using segmentation-aware affinity loss | |
Zhang et al. | Cross-view cross-scene multi-view crowd counting | |
CN111369576B (en) | Training method of image segmentation model, image segmentation method, device and equipment | |
CN111369581B (en) | Image processing method, device, equipment and storage medium | |
Li et al. | Supervae: Superpixelwise variational autoencoder for salient object detection | |
CN110503654A (en) | A kind of medical image cutting method, system and electronic equipment based on generation confrontation network | |
WO2022001623A1 (en) | Image processing method and apparatus based on artificial intelligence, and device and storage medium | |
CN111402216B (en) | Three-dimensional broken bone segmentation method and device based on deep learning | |
CN112529849B (en) | CT rib automatic counting method and device | |
CN111915620B (en) | CT rib segmentation method and device | |
Zhang et al. | Exploring semantic information extraction from different data forms in 3D point cloud semantic segmentation | |
CN111681204A (en) | CT rib fracture focus relation modeling method and device based on graph neural network | |
CN108597589B (en) | Model generation method, target detection method and medical imaging system | |
CN111341438B (en) | Image processing method, device, electronic equipment and medium | |
CN113408595B (en) | Pathological image processing method and device, electronic equipment and readable storage medium | |
CN117173463A (en) | Bone joint model reconstruction method and device based on multi-classification sparse point cloud | |
CN112017190B (en) | Global network construction and training method and device for vessel segmentation completion | |
Li | A crowd density detection algorithm for tourist attractions based on monitoring video dynamic information analysis | |
CN113989269B (en) | Traditional Chinese medicine tongue image tooth trace automatic detection method based on convolutional neural network multi-scale feature fusion | |
CN116051813A (en) | Full-automatic intelligent lumbar vertebra positioning and identifying method and application | |
CN114049358A (en) | Method and system for rib case segmentation, counting and positioning | |
CN114118127A (en) | Visual scene mark detection and identification method and device | |
Akwensi et al. | APC2Mesh: Bridging the gap from occluded building façades to full 3D models | |
CN118212490B (en) | Training method, device, equipment and storage medium for image segmentation model | |
EP4198884A1 (en) | Method and system for processing an image |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||