WO2005101314A2 - Method and apparatus for processing images in a bowel subtraction system - Google Patents
- Publication number
- WO2005101314A2 (PCT/US2005/012325)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- boundary
- image
- value
- colon
- pixels
- Prior art date
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/0002—Inspection of images, e.g. flaw detection
- G06T7/0012—Biomedical image inspection
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/10—Segmentation; Edge detection
- G06T7/11—Region-based segmentation
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/10—Segmentation; Edge detection
- G06T7/155—Segmentation; Edge detection involving morphological operators
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/60—Analysis of geometric attributes
- G06T7/68—Analysis of geometric attributes of symmetry
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10068—Endoscopic image
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20112—Image segmentation details
- G06T2207/20156—Automatic seed setting
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/30—Subject of image; Context of image processing
- G06T2207/30004—Biomedical image processing
- G06T2207/30028—Colon; Small intestine
- G06T2207/30032—Colon polyp
Definitions
- the present invention relates generally to colonoscopy techniques and more particularly to a system and method for processing an image of a bowel and for detecting polyps in the image.
- a colonoscopy refers to a medical procedure for examining a colon to detect abnormalities such as polyps, tumors or inflammatory processes in the anatomy of the colon.
- the colonoscopy is a procedure which consists of a direct endoscopic examination of the colon with a flexible tubular structure known as a colonoscope which typically has imaging (e.g. fiber optic) or video recording capabilities at one end thereof.
- the colonoscope is inserted through the patient's anus and directed along the length of the colon, thereby permitting direct endoscopic visualization of colon polyps and tumors and in some cases, providing a capability for endoscopic biopsy and polyp removal.
- while colonoscopy provides a precise means of colon examination, it is time-consuming, expensive to perform, and requires great care and skill by the examiner. The procedure also requires thorough patient preparation including ingestion of purgatives and enemas, and usually moderate anesthesia. Also, since colonoscopy is an invasive procedure, there is a significant risk of injury to the colon and the possibility of colon perforation and peritonitis, which can be fatal.
- a virtual colonoscopy makes use of images generated by computed tomography (CT) imaging systems (also referred to as computer assisted tomography (CAT) imaging systems).
- a computer is used to produce an image of cross-sections of regions of the human body by measuring attenuation of X-rays through a cross-section of the body.
- a CT imaging system generates two-dimensional images of the inside of an intestine. A series of such two-dimensional images can be combined to provide a three-dimensional image of the colon.
- colons tend to have folded regions (or more simply, "folds").
- the folds are sometimes difficult to distinguish from the bowel contents and thus are sometimes inadvertently labeled or "tagged” as bowel contents.
- the fold region is also subtracted. This results in the processed image (i.e. the image of the colon from which contents have been digitally removed) having gaps or other artifacts due to the unintentional subtraction of a fold. Such gaps or artifacts are distracting to a person (e.g. a doctor or other medical practitioner) examining the image.
- a fold processing system includes a fold processor which detects folds in a bowel and identifies the fold as a portion of the bowel in an image.
- the fold processing system identifies a boundary in the digital image (e.g. an air-water boundary) and uses symmetry to determine whether a fold exists in the image. If a fold is found, the fold is identified (or labeled or tagged) as being a portion of the bowel rather than bowel contents. Thus, when the bowel contents are digitally subtracted from the image, the fold region is left in the image.
- a system for processing a fold region in a digital image of a colon includes a boundary processor which receives a first digital image and identifies in the image a boundary between a first substance having a first density and a second substance having a second, different density, and a symmetry processor which processes one or more portions of the image about the boundary to determine whether symmetry exists about the boundary and identifies regions in the image having symmetry about the boundary.
- a system and technique for identifying a colon centerline in an image of an uncleansed colon includes identifying a first point (or seed point) known to be within the colon. The regions around the seed point are then labeled (e.g. identified as containing either high density or low density material). Once the image regions are labeled, the colon region is identified by finding the seed point (i.e. the point known to be in the colon) and determining what label is assigned to the seed point. Regions around the seed point with the same label are then identified as being within the colon. The next colon image is processed by labeling regions in the image (e.g. as containing either high density or low density material) and using the colon location information from the current image.
- the seed point may be manually identified or automatically identified. Automatic identification may be accomplished by first processing an image corresponding to an inferior aspect of the colon and using information concerning anatomical structure which should appear in the image.
- a system and technique for detecting objects (including polyps) in a colon image dataset which has been cleaned electronically includes separating the colon surface from the rest of the image dataset.
- the portion of the image data set which corresponds to the separated colon surface is then processed to generate a planar map of the colon in which the value of each pixel in the planar map corresponds to its radial distance from a central axis in the planar map.
- a segregation of features in the planar map of the colon (a process referred to as segmentation) is then performed.
- Objects, including polyps, identified by the segmentation process can then be described and classified.
- Statistical or correlation techniques can be used to identify and/or classify objects (including polyps) within the image.
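The radial mapping described above can be sketched per cross-section as follows; the function name, the use of pre-extracted (x, y) surface points, and the 2-D simplification are illustrative assumptions, not the patent's method.

```python
import math

def planar_row(surface_pts, center):
    """One row of a planar colon map: for each surface point on a single
    cross-section, the radial distance from the centerline point. Stacking
    one such row per cross-section yields a map in which polyps appear as
    local bumps in radial distance."""
    cx, cy = center
    return [math.hypot(x - cx, y - cy) for (x, y) in surface_pts]

# Two surface points at distances 3 and 4 from a centerline point at the origin.
row = planar_row([(3, 0), (0, 4)], (0, 0))
```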
- FIG. 1 is a block diagram of a system for digital bowel subtraction and automatic polyp detection
- FIGs. 2 and 2A are a series of diagrams illustrating a technique for detecting and processing a bowel fold
- FIG. 3 is a flow diagram showing a process for processing a bowel fold region in an image
- FIGs. 4 and 4A are a series of flow diagrams showing a process for finding a centerline in a colon
- FIG. 5 is a diagram of a colon having a centerline
- Fig. 6 is a cross sectional view of a colon taken across lines 6-6 on Fig. 5;
- Figs. 7 and 7A are cross sectional views of a colon taken across lines 6-6 on Fig. 5;
- Fig. 8 is a pair of images aligned to extend the colon identification from a first image to a second image
- Fig. 9 is a diagram of a colon showing the direction of processing used to define colon regions in sequential images of the colon;
- Fig. 10 is a diagram of a colon having a centerline
- Fig. 10A is a colon map generated after a colon centerline has been identified and which can be used for polyp detection.
- Figs. 11-11C are a series of diagrams which illustrate the mapping between colon centerline and 3D datasets.
- a computed tomography (CT) system generates signals which can be stored as a matrix of digital values in a storage device of a computer or other digital processing device. As described herein, the CT image is divided into a two-dimensional array of pixels, each represented by a digital word.
- the two-dimensional array of pixels can be combined to form a three-dimensional array of pixels.
- the value of each digital word corresponds to the intensity of the image at that pixel.
- the array of digital data values is generally referred to as a "digital image" or more simply an "image" and may be stored in a digital data storage device, such as a memory for example, as an array of numbers representing the spatial distribution of density values in a scene.
- original image refers to an image provided from the representational matrices that are output from a CT or other type of scanner machine.
- Each of the numbers in the array can be expressed as a digital word typically referred to as a "picture element" or a "pixel" or as "image data."
- the image may be divided into a two dimensional array of pixels with each of the pixels represented by a digital word.
- a pixel represents a single instantaneous value which is located at specific spatial coordinates in the image.
- it should be appreciated that the digital word is comprised of a certain number of bits and that the techniques of the present invention can be used on digital words having any number of bits.
- the digital word may be provided as an eight-bit binary value, a twelve-bit binary value, a sixteen-bit binary value, a thirty-two-bit binary value, a sixty-four-bit binary value or as a binary value having any other number of bits (e.g. one hundred twenty-eight or more bits). More or fewer than each of the above-specified numbers of bits may also be used.
- the techniques described herein may be applied equally well to either gray scale images or color images.
- each digital word corresponds to the intensity of the pixel and thus the image at that particular pixel location.
- each pixel being represented by a predetermined number of bits (e.g. eight bits) which represent the color red (R bits), a predetermined number of bits (e.g. eight bits) which represent the color green (G bits) and a predetermined number of bits (e.g. eight bits) which represent the color blue (B-bits) using the so-called RGB color scheme in which a color and luminance value for each pixel can be computed from the RGB values.
- the techniques described herein are applicable to a plurality of color schemes including but not limited to the above-mentioned RGB, HSB and CMYK schemes as well as the Luminosity and color axes a & b (Lab) scheme, the YUV color difference color coordinate system, the Karhunen-Loeve color coordinate system, the retinal cone color coordinate system and the X, Y, Z scheme.
- An "image region” or more simply a “region” is a portion of an image. For example, if an image is provided as a 32 X 32 pixel array, a region may correspond to a 4 X 4 portion of the 32 X 32 pixel array.
- the local window is thought of as "sliding" across the image because the local window is placed above one pixel, then moves and is placed above another pixel, and then another, and so on. Sometimes the "sliding" is made in a raster pattern. It should be noted, though, that other patterns can also be used.
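A minimal sketch of such a raster-pattern sliding window follows; the function name and the use of nested Python lists as the image are illustrative assumptions, not part of the patent.

```python
def raster_windows(image, win):
    """Yield (row, col, window) for every position at which a win x win
    local window fits inside the image, visiting positions in raster
    (row-major) order."""
    rows, cols = len(image), len(image[0])
    for r in range(rows - win + 1):          # top-to-bottom
        for c in range(cols - win + 1):      # left-to-right within each row
            window = [row_vals[c:c + win] for row_vals in image[r:r + win]]
            yield r, c, window

# A 3 x 4 image visited by a 2 x 2 window: 2 x 3 = 6 positions.
img = [[1, 2, 3, 4],
       [5, 6, 7, 8],
       [9, 10, 11, 12]]
positions = list(raster_windows(img, 2))
```

A spiral or column-major visiting order would only change the two loops; the per-position processing is unaffected.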
- although the detection techniques described herein are described in the context of detecting polyps in a colon, those of ordinary skill in the art should appreciate that the detection techniques can also be used to search for and detect structures other than polyps and that the techniques may find application in regions of the body other than the bowel or colon.
- One approach to subtracting contents from the colon is to first identify the intersection of a morphologic dilation of an air region, a dilated fecal matter region and a dilated edge region (which can be found by using a gradient finding function and morphologic dilation). The system then approximates the intersection of these three regions to be the residue that preferably would be removed. While this technique provides satisfactory results in terms of digitally subtracting the contents of the bowel from an image, it can lead to a problem of over subtraction, because sometimes folds that should remain in the image pass through the residue area that is removed thus causing the folds to also be removed.
- volume averaging can cause the soft tissue region (which should be expressed as image regions having low pixel values) to be assigned pixel values which approximate the pixel values which represent bowel contents (i.e. image regions having high pixel values).
- one approach which allows subtraction of the bowel contents while allowing the fold regions to remain in the image is referred to as the fold symmetry processing approach.
- in this approach, the symmetry of the objects in the image is used to identify fold regions in the image.
- in the symmetry approach, it is recognized that if a point lies in the intersection of the residue (also referred to as bowel contents) and a fold, then it should be surrounded by soft-tissue-like pixels on all sides. Thus, the variance between this pixel and the pixels around it should be relatively low.
- the variance of the pixel and the surrounding pixels should be relatively high.
- the process involves searching for pixels having a relatively low variance. These pixels are then put back into the image (rather than subtracted).
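The low-variance search can be sketched as below; the 3 x 3 neighbourhood and the threshold value are illustrative assumptions (the patent does not fix a window size here).

```python
def local_variance(image, r, c):
    """Variance of the 3 x 3 neighbourhood centred on interior pixel (r, c)."""
    vals = [image[i][j] for i in range(r - 1, r + 2) for j in range(c - 1, c + 2)]
    mean = sum(vals) / len(vals)
    return sum((v - mean) ** 2 for v in vals) / len(vals)

def low_variance_pixels(image, threshold):
    """Interior pixels whose local variance falls below threshold:
    candidates for restoration as fold (soft tissue) rather than residue."""
    hits = []
    for r in range(1, len(image) - 1):
        for c in range(1, len(image[0]) - 1):
            if local_variance(image, r, c) < threshold:
                hits.append((r, c))
    return hits

# Soft-tissue-like columns (value 50) beside a high-density residue column
# (200): only pixels surrounded entirely by similar values have low variance.
img = [[50, 50, 50, 200],
       [50, 50, 50, 200],
       [50, 50, 50, 200],
       [50, 50, 50, 200]]
hits = low_variance_pixels(img, 100)
```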
- a system for performing virtual colonoscopy 10 includes a computed tomography (CT) imaging system 12 having a database 14 coupled thereto.
- the CT system 12 produces two-dimensional images of cross-sections of regions of the human body by measuring attenuation of X-rays through a cross-section of the body.
- the images are stored as digital images or image data in the image database 14.
- a series of such two-dimensional images can be combined using known techniques to provide a three-dimensional image of a colon.
- a user interface 16 allows a user to operate the CT system and also allows the user to access and view the images stored in the image database.
- a digital bowel subtraction processor (DBSP) 18 is coupled to the image database 14 and the user interface 16.
- the DBSP receives image data from the image database and processes the image data to digitally remove the contents of the bowel from the digital image.
- the DBSP can then store the image back into the image database 14.
- the particular manner in which the DBSP processes the images to subtract or remove the bowel contents from the image will be described in detail below in conjunction with Figs. 2-6. Suffice it here to say that since the DBSP digitally subtracts or otherwise removes the contents of the bowel from the image provided to the DBSP, the patient undergoing the virtual colonoscopy need not purge the bowel in the conventional manner which is known to be unpleasant to the patient.
- the DBSP 18 may operate in one of at least two modes.
- the first mode is referred to as a raster mode in which the DBSP utilizes a map or window which is moved in a predetermined pattern across an image.
- the pattern corresponds to a raster pattern.
- a threshold process is used in which the window scans the entire image while threshold values are applied to pixels within the image in a predetermined sequence. The threshold process assesses whether absolute threshold values have been crossed and the rate at which they have been crossed.
- the raster scan threshold process is used to identify pixels having values which represent low density regions (e.g. air) sometimes referred to as "air pixels" which are proximate (including adjacent to) pixels having values which represent matter or substance (e.g. bowel contents).
- the processor examines each of the pixels to locate native un-enhanced soft tissue. As a boundary between soft tissue (e.g. bowel wall) and bowel contents is established, pixels are reset to predetermined values depending upon the side of the boundary on which they appear.
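One way the raster-scan threshold pass might be sketched is shown below; the threshold constants and the 4-adjacency test are illustrative assumptions, not values from the patent.

```python
AIR_T, CONTENTS_T = 30, 150   # illustrative threshold values, not from the patent

def boundary_air_pixels(image):
    """Raster-scan the image and collect each low-density ("air") pixel
    that is 4-adjacent to a high-density ("contents") pixel, approximating
    the boundary between air and bowel contents."""
    rows, cols = len(image), len(image[0])
    found = set()
    for r in range(rows):
        for c in range(cols):
            if image[r][c] >= AIR_T:
                continue                      # not an air pixel
            for dr, dc in ((-1, 0), (1, 0), (0, -1), (0, 1)):
                nr, nc = r + dr, c + dc
                if 0 <= nr < rows and 0 <= nc < cols and image[nr][nc] > CONTENTS_T:
                    found.add((r, c))
    return found

# Air (10) above contents (200): only the bottom row of air pixels is flagged.
img = [[10, 10, 10],
       [10, 10, 10],
       [200, 200, 200]]
edge = boundary_air_pixels(img)
```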
- the second mode of operation for the DBSP 18 is the so-called gradient processor mode.
- a soft tissue threshold (ST) value, an air threshold (AT) value and a bowel threshold (BT) value are selected.
- a first mask is applied to the image and all pixels having values greater than the bowel threshold value are marked.
- a gradient is applied to the pixels in the images to identify pixels in the image which should have air values and bowel values.
- the gradient function identifies regions having rapidly changing pixel values. From experience, one can select bowel/air and soft tissue/air transition regions in an image by appropriate selection of the gradient threshold.
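A minimal central-difference sketch of such a gradient test follows; the difference scheme and the threshold value are illustrative assumptions standing in for the patent's gradient function.

```python
def gradient_magnitude(image, r, c):
    """Central-difference gradient magnitude at interior pixel (r, c)."""
    gx = (image[r][c + 1] - image[r][c - 1]) / 2.0
    gy = (image[r + 1][c] - image[r - 1][c]) / 2.0
    return (gx * gx + gy * gy) ** 0.5

def transition_pixels(image, g_thresh):
    """Interior pixels whose gradient magnitude exceeds g_thresh:
    candidate bowel/air or soft-tissue/air transition regions."""
    hits = []
    for r in range(1, len(image) - 1):
        for c in range(1, len(image[0]) - 1):
            if gradient_magnitude(image, r, c) > g_thresh:
                hits.append((r, c))
    return hits

# A step from 0 to 100 between rows 1 and 2 produces high gradients there.
img = [[0, 0, 0],
       [0, 0, 0],
       [100, 100, 100],
       [100, 100, 100]]
hits = transition_pixels(img, 10)
```

Raising `g_thresh` selects only the sharpest transitions, which is the empirical tuning the text describes.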
- the gradient process uses a second mask to capture a first shoulder region in a transition region after each of the pixels having values greater than the BT value have been marked.
- a mucosa insertion processor 19a is used to further process the sharp boundary to lessen the impact of or remove the visually distracting regions.
- the sharp edges are located by applying a gradient operator to the image from which the bowel contents have been extracted.
- the gradient operator may be similar to the gradient operator used to find the boundary regions in the gradient subtracter approach described herein.
- the gradient threshold used in this case typically differs from that used to establish a boundary between bowel contents and a bowel wall.
- the particular gradient threshold to use can be empirically determined. Such empirical selection may be accomplished, for example, by visually inspecting the results of gradient selection on a set of images detected under similar scanning and bowel preparation techniques and adjusting gradient thresholds manually to obtain the appropriate gradient (tissue transition selector) result.
- the sharp edges end up having the highest gradients in the subtracted image.
- a filter is then applied to these boundary (edge) pixels in order to "smooth" the edge.
- the filter is provided having a constrained Gaussian filter characteristic. The constraint is that the smoothing is allowed to take place only over a predetermined width along the boundary.
- the predetermined width should be selected such that the smoothing process does not obscure any polyp or other bowel structures of possible interest.
- the predetermined width corresponds to a width of less than ten pixels.
- the predetermined width corresponds to a width in the range of two to five pixels and in a most preferred embodiment, the width corresponds to a width of three pixels. The result looks substantially similar and in some cases indistinguishable from the natural mucosa seen in untouched bowel wall, and permits an endoluminal evaluation of the subtracted images.
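The constrained smoothing might be sketched in one dimension as follows; the simple 1-2-1 kernel and the profile values are illustrative assumptions standing in for a true constrained Gaussian filter.

```python
def constrained_smooth(profile, edge_idx, half_width=1):
    """Smooth a 1-D intensity profile with a small 1-2-1 kernel, but only
    within +/- half_width samples of the edge index, so structures outside
    that narrow band (e.g. polyps) are left untouched."""
    out = list(profile)
    lo = max(1, edge_idx - half_width)
    hi = min(len(profile) - 2, edge_idx + half_width)
    for i in range(lo, hi + 1):
        out[i] = (profile[i - 1] + 2 * profile[i] + profile[i + 1]) / 4.0
    return out

# A sharp 0 -> 100 edge at index 2, smoothed only over a 3-sample band;
# samples outside the band keep their original values.
smoothed = constrained_smooth([0, 0, 100, 100, 100], edge_idx=2)
```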
- the digital bowel subtraction processor 18 also includes a fold processor 19b. It has been recognized that during the process to subtract from the image regions which have been identified or “tagged” as bowel contents (hereinafter "tagged regions"), it is possible to subtract regions of the colon which correspond to a fold. This occurs because during the subtraction process, the system may inadvertently subtract soft tissue elements (represented by low pixel values) that are bounded by high density tagged material (represented by high pixel values).
- the fold processor 19b helps prevent fold regions from being removed from the image.
- the fold processor includes a boundary processor and a symmetry processor which identify a boundary and utilizes a symmetry characteristic of the fold region of the image to identify the fold and thus prevent the fold from being subtracted from the image.
- the system also includes an automated polyp detection processor (APDP) 20.
- the APDP 20 receives image data from the image database and processes the image data to detect and/or identify polyps, tumors, inflammatory processes, or other irregularities in the anatomy of the colon.
- the APDP 20 can thus pre-screen each image in the database 14 such that an examiner (e.g. a doctor) need not examine every image but rather can focus attention on a subset of the images possibly having polyps or other irregularities.
- since the CT system 12 generates a relatively large number of images for each patient undergoing the virtual colonoscopy, the examiner is allowed more time to focus on those images in which the examiner is most likely to detect a polyp or other irregularity in the colon.
- the particular manner in which the APDP 20 processes the images to detect and/or identify polyps in the images will be described in detail below in conjunction with Figs. 7-9. Suffice it here to say that the APDP 20 can be used to process two-dimensional or three-dimensional images of the colon. It should also be noted that the APDP 20 can process images which have been generated using either conventional virtual colonoscopy techniques (e.g. techniques in which the patient purges the bowel prior to the CT scan) or images in which the bowel contents have been digitally subtracted (e.g. images which have been generated by the DBSP 18).
- polyp detection system 20 can provide results generated thereby to an indicator system which can be used to annotate (e.g. by addition of a marker, icon or other means) or otherwise identify regions of interest in an image (e.g. by drawing a line around the region in the image, or changing the color of the region in the image) which has been processed by the detection system 20.
- an image of a bowel cross section 100 includes a bowel wall 101 which defines a perimeter of the bowel.
- the bowel includes a fold 102.
- Portions of the bowel image 104a, 104b correspond to an air region of the bowel 100 and portions of the bowel image 106a, 106b correspond to those portions of the bowel having contents therein.
- the air regions 104a, 104b appear as low density material (and thus can be represented by pixels having a relatively low value, for example) while the bowel contents are represented as a high density material (and thus can be represented, for example, by pixels having a value which is relatively high compared with the value of the pixels representing the low density regions).
- a boundary 108 exists between the low density regions 104a, 104b and the high density regions 106a, 106b.
- the fold region 102 is relatively long and narrow and is immersed in the high density material 106. Due to the averaging of pixel values which occurs during the imaging process, the fold region 102 (or some portions of the fold region 102) can be artificially designated as a region of high density material. Thus, if left with such a designation, the fold region would be subtracted as part of the high density region 106 which represents bowel contents.
- a point that lies along the boundary 108 in the residue 106 should have both air (low pixel values) and fecal matter (high pixel values) around it.
- there is no symmetry about the boundary line 108 in that the pixel values in the region 106 below the boundary line 108 have a value which is substantially different (e.g. relatively high) compared with the pixel values in the region 104a or 104b above the boundary line 108.
- a selected point which is not in the fold region 102 should have a relatively high variance of pixel values around it.
- the pixels having a low variance can be identified by forming a window 110a and correlating pixel values in the window 110a with a kernel.
- the boundary 108 has a width and thus the window 110a must be provided having a size which is large enough to span the width of the boundary 108.
- the boundary 108 is typically a minimum of 3 to 5 pixels wide.
- the size of the window 110 may be empirically determined. It should be appreciated of course that the window should fit within the expected width of a fold (which is typically in the range of about five to ten pixels but which could be larger than ten or smaller than five in some cases).
- the window 110a is generated and slides or moves across the image along the boundary 108.
- when the window 110a is located so that it includes a portion of both regions 104a and 106a, the pixels below the boundary 108 have a relatively high value compared with the value of the pixels above the boundary 108. Thus, the correlation of the pixels above and below the boundary 108 results in a relatively high correlation value which indicates that there is no symmetry about the boundary 108.
- the correlation of the pixels above and below the boundary results in a relatively low correlation value since the pixels above and below the boundary both correspond to low density pixels.
- the correlation of the pixels above and below the boundary again results in a relatively high correlation value since the pixel values above the boundary 108 correspond to low density material pixel values while the pixel values below the boundary 108 correspond to high density material pixel values.
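A minimal version of this symmetry test can be sketched by comparing mean pixel values just above and just below the boundary; the function and its half-window size are illustrative assumptions standing in for the full kernel correlation.

```python
def asymmetry(image, boundary_row, c, half=1):
    """Absolute difference between the mean pixel value just above and
    just below boundary_row at column c. A small value suggests symmetry
    about the boundary (a fold spanning it); a large value suggests air
    above and high-density contents below."""
    above = [image[boundary_row - i][c] for i in range(1, half + 1)]
    below = [image[boundary_row + i][c] for i in range(1, half + 1)]
    return abs(sum(above) / half - sum(below) / half)

# Column 1 is a fold (soft tissue, 50) crossing the boundary at row 2;
# columns 0 and 2 have air (10) above and contents (200) below.
img = [[10, 50, 10],
       [10, 50, 10],
       [10, 50, 10],
       [200, 50, 200]]
fold_score = asymmetry(img, 2, 1)      # symmetric about the boundary: low
residue_score = asymmetry(img, 2, 0)   # asymmetric: high
```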
- the returned pixels can be dilated. This results in more of each desired fold being present in the image while at the same time providing a relatively thin structure having a normal or natural appearance in the final subtracted image.
- FIG. 3 is a flow diagram showing the processing performed by a processing apparatus which may, for example, be provided as part of a virtual colonoscopy system 10 such as that described above in conjunction with FIG. 1 to perform digital bowel subtraction including detection of fold regions and automated polyp detection.
- the rectangular elements (typified by element 120 in FIG. 3), are herein denoted “processing blocks" and represent computer software instructions or groups of instructions.
- the processing blocks can represent functions performed by functionally equivalent circuits such as a digital signal processor circuit or an application specific integrated circuit (ASIC).
- the flow diagrams do not depict the syntax of any particular programming language. Rather, the flow diagrams illustrate the functional information one of ordinary skill in the art requires to fabricate circuits or to generate computer software to perform the processing required of the particular apparatus. It should be noted that many routine program elements, such as initialization of loops and variables and the use of temporary variables are not shown. It will be appreciated by those of ordinary skill in the art that unless otherwise indicated herein, the particular sequence of steps described is illustrative only and can be varied without departing from the spirit of the invention.
- referring to FIG. 3, a process for detecting and processing fold regions in an image of a bowel begins by identifying an air boundary as shown in block 100.
- although an air-water boundary is described, it should be appreciated that reference to any specific boundary (e.g. a boundary between air and water) is intended to be exemplary and is made for reasons of promoting clarity in the description and is not intended to be and should not be construed as limiting. The boundary may be between air and some other matter or substance and not necessarily water, or between any two substances having different densities.
- a correlation matrix is applied to the air-water boundary of an original image.
- the correlation matrix allows identification of those regions of the image having symmetry about the air-water boundary. Such regions correspond to low variance regions of the image and these regions are identified as shown in block 104. Once the low variance regions of the image are identified, a regional morphologic dilation of these regions is performed as shown in block 106. By recognizing that the low variance regions correspond to fold regions, the fold region is in essence being dilated. This dilation process results in more of each desired fold being present in the image while at the same time providing an image having a fold structure with a normal or natural appearance when viewed by a person examining the image.
- a subtraction mask is used to remove the bowel contents from the image.
- regions covered by the subtraction mask are removed.
- the dilation operation performed in block 106 results in the removal of pixels (representing the fold region) from the subtraction mask.
- the fold regions are left in the image after the bowel contents are subtracted from the image.
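The mask-based subtraction with fold dilation described above can be sketched together as follows; the single-step 4-neighbour dilation and the air value of zero are illustrative assumptions.

```python
def dilate(points, rows, cols):
    """One step of 4-neighbour morphological dilation over a pixel set."""
    grown = set(points)
    for (r, c) in points:
        for dr, dc in ((-1, 0), (1, 0), (0, -1), (0, 1)):
            if 0 <= r + dr < rows and 0 <= c + dc < cols:
                grown.add((r + dr, c + dc))
    return grown

def subtract_contents(image, mask, fold_pixels, air_value=0):
    """Zero out pixels covered by the subtraction mask, after first
    removing the (dilated) fold pixels from the mask so that fold
    regions survive the subtraction."""
    rows, cols = len(image), len(image[0])
    keep = dilate(fold_pixels, rows, cols)
    out = [row[:] for row in image]
    for (r, c) in mask - keep:
        out[r][c] = air_value
    return out

# All nine pixels are tagged as contents, but pixel (1, 1) was identified
# as fold; its dilation spares the centre cross, so only corners are removed.
img = [[200, 200, 200],
       [200, 200, 200],
       [200, 200, 200]]
mask = {(r, c) for r in range(3) for c in range(3)}
cleaned = subtract_contents(img, mask, {(1, 1)})
```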
- a process for identifying a colon centerline when the colon is not cleansed begins with block 130 in which a first image is selected.
- the first selected image is also sometimes referred to herein as the index image.
- Processing then flows to block 132 in which a first point (referred to herein as a seed point) known to be within the colon is identified.
- the seed point may be identified manually (e.g. by a user) or in some embodiments a centerline processor may automatically determine the seed point. Automatic detection of the seed point may be accomplished for example by having the system first process an image corresponding to the most inferior aspect of the colon (i.e. the image which contains the rectum/anus portion of the colon).
- the system can automatically select a seed point. Once the seed point is identified in the index image, the entire colon can then be identified in the image (including bowel contents, air and some soft tissue).
- processing block 134 a simple subtraction is then performed.
- the subtraction can be accomplished using a threshold and dilation technique.
- the regions are then labeled (e.g. identified as containing either high density or low density material) as shown in block 138.
- the colon region is identified by finding the seed point (i.e. the point known to be in the colon) and determining what label is assigned to the seed point.
- the first image is now processed.
- processing then flows to decision block 141 in which a determination is made as to whether there are any more images to process. If a decision is made that all images have been processed, then centerline processing ends. If on the other hand, a decision is made that all images have not yet been processed, then processing flows to processing block 142 in which the next image is selected. Also, the image which was last processed is identified as the "current image.”
- Processing then proceeds to block 144 in which a simple subtraction is then performed on the next image (i.e. the image currently being processed).
- the subtraction can be accomplished using a threshold and dilation technique.
- the image is subject to a threshold operation to assign air and non-air values (or to simply assign values which indicate regions of different densities) and then the regions are labeled (e.g. as either high density or low density material) as shown in blocks 146 and 148.
- the colon region is identified in the next image by using the colon location information in the current image. In this manner, the center of the colon in each subsequent image can be found thereby allowing a colon centerline to be established.
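The threshold-label-seed sequence described above (blocks 136-140, propagated slice to slice) can be sketched as follows. This is a minimal illustration, not the patented implementation; the `AIR_THRESHOLD` value, array sizes, and seed coordinates are assumptions chosen for the toy example.

```python
import numpy as np
from scipy import ndimage

AIR_THRESHOLD = -800  # hypothetical HU cutoff separating air from tissue

def colon_mask_from_seed(image, seed):
    """Threshold a slice into air/non-air, label connected regions,
    and keep only the region containing the seed point."""
    air = image < AIR_THRESHOLD          # threshold step
    labels, _ = ndimage.label(air)       # label regions
    return labels == labels[seed]        # pick the seed's region

# Toy 2-slice "scan": soft-tissue background with one air-filled lumen.
slice0 = np.full((8, 8), 40)
slice0[2:6, 2:6] = -1000                 # colon lumen (air)
seed = (3, 3)                            # seed point known to be in the colon
mask0 = colon_mask_from_seed(slice0, seed)

# For the next slice, any air point inside mask0 serves as the new seed,
# since adjacent slices are closely spaced along the colon.
slice1 = np.full((8, 8), 40)
slice1[3:7, 3:7] = -1000
next_seed = tuple(np.argwhere(mask0 & (slice1 < AIR_THRESHOLD))[0])
mask1 = colon_mask_from_seed(slice1, next_seed)
```

Each slice's colon region thus hands its location forward as the seed for the next slice, which is the essence of the centerline-tracking loop.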
- a colon 160 having bowel contents 162 therein is processed in the manner described above in conjunction with Figs. 4 and 4A to provide a centerline 164.
- the seed point is manually provided (e.g. by a user)
- any of the points 168a - 168d could be used as the seed point.
- the process begins with the inferior-most image 170 (also referred to as the index image) and a point proximate the center of the image 170 corresponding to the rectum/anus 171 is selected as the seed point (i.e. a point in the image known to be part of the colon).
- with image 170 selected as the first image, the process proceeds sequentially through each image moving in the direction of image 172.
- image 172 is shown to include a first object 174 and a second object 176. It should be appreciated that both objects 174, 176 include high density regions 178, 180. Since the bowel centerline is being found without cleansing the bowel, it is possible that either object 174, 176 corresponds to the bowel and the other object corresponds to some other structure such as bone, for example. Structure 176 also includes a region 182 having a density lower than other objects or regions in the image (e.g. an air region 182) and a boundary 184.
- the air region 182 has a boundary 190 and to help distinguish the colon from other material, the air region boundary 190 is dilated as indicated by boundary 190a.
- the content region 180 has a boundary 192 and to help distinguish the colon from other material, the content region boundary 192 is dilated as indicated by boundary 192a.
- the region 178 also has a boundary 194 which is dilated as indicated by boundary 194a.
- as shown in FIG. 7A, the dilation of the air and high density regions results in the generation of an overlap region 196.
- a union of each point in each set is then made to identify the entire colon. It should be appreciated that the points must be contiguous to be included as points in the union of sets. In this manner, structure 178 is distinguished as a structure which is separate from structure 176.
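The dilate-then-union step can be illustrated with binary morphology. This is a sketch under assumed geometry (region sizes and the dilation radius are illustrative, not taken from the patent): two nearby regions, one low-density (air) and one high-density (contents), are dilated until they overlap, and their contiguous union is taken as a single colon structure.

```python
import numpy as np
from scipy import ndimage

grid = np.zeros((12, 12), dtype=bool)
air = grid.copy()
air[4:8, 2:5] = True            # low-density (air) region
contents = grid.copy()
contents[4:8, 7:10] = True      # high-density (contents) region

# Dilating each region's boundary outward ...
air_d = ndimage.binary_dilation(air, iterations=2)
contents_d = ndimage.binary_dilation(contents, iterations=2)

# ... produces an overlap where the dilated sets meet,
overlap = air_d & contents_d
# and their union, being contiguous, labels as one connected structure.
colon = air_d | contents_d
num_components = ndimage.label(colon)[1]
```

Regions that are not close enough to overlap after dilation would remain separate labeled components, which is how a nearby bone or other structure stays distinguished from the colon.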
- a pair of images 200, 202 in which structures 176', 178' have been identified are processed to find a centerline.
- the current image 200 corresponds to an image in which a point 204 in the colon has already been identified.
- the point 204 could have been identified either manually or automatically as described above.
- the alignment should be considered conventional in that the images are obtained in the same scanning procedure and will have assigned to them a fixed position within the frame of reference established by the CT scanner.
- the point 204 lies within structure 176". Since point 204 is known to be within the colon and the distance D which separates the two images 200, 202 along the colon is known to be small, structure 176" in image 202 is identified as a colon region.
- the images are processed sequentially in a given direction as shown in Fig. 9. It should be appreciated that in some instances, e.g. when a turn in the colon is reached, it is necessary to reverse the direction in which the images are processed. This is important because sometimes the colon makes 180 degree turns (also referred to as bends, or flexures) and in order to correctly map the anatomy with reference to the colon, the system must change direction to follow the anatomic (as opposed to spatial) direction of the colon. Upon reversing, the system needs to continue in the 'reversed' direction at least the distance equivalent to the maximum diameter of the colon before determining that it has reached its terminus.
- referring to Figs. 10 and 10A, by finding a centerline 222 of a colon 220, the colon can be "unfolded" as shown by reference number 220a in Fig. 10A.
- polyp detection techniques can be applied to the unfolded colon 220a.
- the centerline has two primary uses: 1) improving subtraction (by allowing the system to clearly follow fold anatomy in three dimensions); and 2) improving polyp detection. The latter is improved because the detection system can operate within a frame of reference which is related to the colon anatomy. If all of the colon anatomy is laid out on a 'plane', as possible with a centerline, then potential lesions can be normalized with respect to scale, and simultaneously, orientation (rotation) of target lesions can be minimized.
- polyps (e.g. polyps 226a, 226b) can be detected in the unfolded colon 220a.
- the polyp locations are identified in the three dimensional image of the colon. An approach to polyp detection using a map of an unfolded colon is next described.
- the first step in detection is to separate the colon surface from the rest of the image dataset.
- a useful method to map the colon surface is to calculate a colon centerline (i.e. a three-dimensional curve that runs the length of the colon along the center of its lumen).
- the centerline is a useful construct for evaluation of a wide range of image processing problems, and can be calculated by use of a morphologic thinning algorithm and the so-called medial axis transform (MAT).
- in a morphologic thinning algorithm, the air within the colon lumen is taken as the object of interest and this column of air is iteratively eroded until a single line segment remains along the central axis of the colon.
- the medial axis transform is a complementary algorithm wherein the distance between a set of regularly spaced points within the column and the outer boundary of the air column is tabulated. Points associated with greater distance to the boundary are assigned higher values, and the medial axis is taken as the set of points with the maximum distance to the outer boundary. While the MAT is more computationally expensive, it is generally a robust approach.
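The medial axis transform idea above can be sketched with a Euclidean distance transform: each interior point gets the distance to the nearest boundary, and the points of maximal distance trace the central axis. The toy tube below is an illustrative assumption, not patent data.

```python
import numpy as np
from scipy import ndimage

# A toy binary "air column": a straight tube, 5 pixels wide.
lumen = np.zeros((7, 20), dtype=bool)
lumen[1:6, :] = True

# Tabulate the distance from every interior point to the outer boundary.
dist = ndimage.distance_transform_edt(lumen)

# Points of locally maximal distance form the medial axis; for this
# straight tube that is simply the middle row (row 3) in every column.
axis_rows = np.argmax(dist, axis=0)
```

For a real colon lumen the same transform is applied in three dimensions, and ridge-following along the distance maxima yields the centerline curve.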
- a radial distance signature is calculated along the length of the centerline.
- the radial signature is a graph representing the distance from center of an object to its boundary as a radial line segment is swept through 360°.
- the distance from the centerline to the colon mucosal surface is measured at each fixed angular interval around the centerline, and this process is repeated along the length of the centerline curve.
- the result of this procedure is a planar map of the colon where the value of each pixel represents its radial distance from the central axis.
- the process is akin to straightening the colon, and slicing it open longitudinally, as for a pathology specimen. This approach has been employed to map the colon in both phantom and clinical cases for evaluation for CTC.
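A radial signature of the kind described can be sketched by casting rays from a center point at a fixed angular step and recording where each ray exits the object. The disc phantom, ray step, and 7-degree interval below are illustrative assumptions for a 2-D slice; the patent applies the same measurement around the 3-D centerline.

```python
import numpy as np

def radial_signature(mask, center, step_deg=7):
    """Distance from `center` to the boundary of `mask`, sampled at a
    fixed angular interval as a radial line is swept through 360 degrees."""
    h, w = mask.shape
    sig = []
    for ang in np.arange(0, 360, step_deg):
        dy, dx = np.sin(np.radians(ang)), np.cos(np.radians(ang))
        r = 0.0
        while True:
            y = int(round(center[0] + r * dy))
            x = int(round(center[1] + r * dx))
            if not (0 <= y < h and 0 <= x < w) or not mask[y, x]:
                break                      # ray has crossed the boundary
            r += 0.5
        sig.append(r)
    return np.array(sig)

# A filled disc of radius 10: every ray should exit near r = 10.
yy, xx = np.mgrid[0:41, 0:41]
disc = (yy - 20) ** 2 + (xx - 20) ** 2 <= 10 ** 2
sig = radial_signature(disc, (20, 20))
```

Repeating this measurement at each position along the centerline, and stacking the resulting signatures, yields the planar map in which each pixel value is a radial distance from the central axis.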
- this method can be viewed as akin to mathematically flooding the contour of the colon map with water, and evaluating the contour lines of features that remain above the water surface.
- the water is allowed to rise until all objects are inundated, and the algorithm tabulates the position of the contour lines just before the waters from different regions of the map are allowed to admix.
- the result of this process is a set of continuous boundaries surrounding the separate objects on the map.
- watershed segmentation is usually applied to the gradient transform of an image.
- the gradient transform is a rendering of the image where edges of shapes are accentuated.
- the edge accentuation is calculated by convolving the image map with an operator matrix, such as the Sobel operator.
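The Sobel convolution mentioned above can be sketched directly. This is a generic gradient-magnitude computation on an assumed step-edge test image, not the patented pipeline; in the patent the input would be the radial-height map rather than a raw image.

```python
import numpy as np
from scipy import ndimage

# Sobel kernels for the horizontal and vertical edge components.
KX = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]])
KY = KX.T

def gradient_transform(img):
    """Edge-accentuated rendering: gradient magnitude via Sobel convolution."""
    gx = ndimage.convolve(img.astype(float), KX)
    gy = ndimage.convolve(img.astype(float), KY)
    return np.hypot(gx, gy)

# A step edge: gradient is zero in flat regions and large at the transition.
img = np.zeros((8, 8))
img[:, 4:] = 10.0
g = gradient_transform(img)
```

Applied to the colon map, the accentuated edges mark sharp transitions in radial height, which is what the subsequent watershed segmentation floods.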
- the edges to be accentuated are the transitions of radial height of each feature along the colon mucosa.
- the result of these operations is identification of a set of objects situated along the mapped internal surface of the colon.
- the boundary of each object is composed of the points at which the local colon surface diverges sharply inward. What follows are two methods to describe and classify these objects identified by this process of segmentation.
- the first method is based on statistical description (i.e. it is a statistical approach to polyp detection).
- the centroid is the weighted average of the object, corresponding to its center of mass.
- the mean and variance are respectively, the standard statistical average and second moment about this average.
- the internal texture of an object also contains useful classifying information. For example, the standard deviation of pixels within an object and the average entropy of pixels that make up each object have been shown to be useful for object discrimination.
- the entropy of a set of pixels is defined as: -Σ_i [ p(i) log(p(i)) ], where p(i) represents a histogram of the texture (CT density) values. Summation is performed over each i in the range of possible texture values in the image.
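The entropy definition above can be computed from a normalized histogram as follows. The bin count and the two synthetic pixel sets (homogeneous "tissue" versus a tissue/air mixture) are illustrative assumptions; they mirror the polyp-versus-feces distinction discussed next.

```python
import numpy as np

def texture_entropy(pixels, bins=16):
    """Entropy -sum(p_i * log(p_i)) of the normalized histogram of
    CT-density values within an object (bin count is illustrative)."""
    hist, _ = np.histogram(pixels, bins=bins)
    p = hist / hist.sum()
    p = p[p > 0]                      # log(0) terms contribute nothing
    return float(-np.sum(p * np.log(p)))

uniform = np.full(100, 40.0)          # homogeneous soft-tissue density
mixed = np.concatenate([np.full(50, 40.0),
                        np.full(50, -900.0)])  # tissue plus air bubbles

e_uniform = texture_entropy(uniform)  # 0: one bin holds everything
e_mixed = texture_entropy(mixed)      # ln(2): two equally likely bins
```

The homogeneous set yields zero entropy while the heterogeneous set does not, which is the basis for discriminating uniform polyps from gas-containing fecal pseudolesions.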
- a frequent mimicker of the colon polyp is retained fecal material. Like polyps, retained fecal material can demonstrate a nearly spherical contour.
- Polyps however, demonstrate a uniformly soft tissue density, whereas retained feces generally contain small bubbles of air. As a result of this heterogeneity, one would expect the texture variance and entropy of polyps and fecal pseudolesions to be distinct.
- pattern recognition can be facilitated by the combination of boundary and texture descriptors into a single multi-dimensional feature vector. It is believed that there are no published studies combining both contour and textural analyses for the purpose of polyp detection; however, there is an extensive literature describing their utility for pattern recognition.
- feature vectors can be compared for the purpose of pattern classification by calculating the Euclidean distance between them.
- Objects that are similar in feature will demonstrate a small Euclidean distance, and by empirically setting a threshold, or discriminator function, one can classify an unknown object based on the distance of its feature vector to the vector of a known object.
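A minimal nearest-prototype classifier of the kind described can be sketched as below. The prototype vectors, the feature ordering (boundary variance, compactness, texture variance, texture entropy), and the threshold value are all hypothetical.

```python
import numpy as np

def classify(x, prototypes, threshold):
    """Assign x to the nearest known feature vector if it lies within
    `threshold` Euclidean distance; otherwise leave it unclassified."""
    best, best_d = "unknown", float("inf")
    for name, m in prototypes.items():
        d = float(np.linalg.norm(np.asarray(x, float) - np.asarray(m, float)))
        if d < best_d:
            best, best_d = name, d
    return best if best_d <= threshold else "unknown"

# Hypothetical 4-D feature vectors (boundary variance, compactness,
# texture variance, texture entropy); values are illustrative only.
prototypes = {"polyp": [0.1, 1.2, 5.0, 0.2],
              "feces": [0.4, 1.5, 90.0, 0.9]}
label = classify([0.12, 1.25, 6.0, 0.25], prototypes, threshold=3.0)
```

The empirically chosen threshold acts as the discriminator function: vectors far from every known prototype are left unclassified rather than forced into a class.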
- each class of objects, designated ω_j, is best described by the statistical parameters: mean feature vector m_j, covariance matrix C_j, and probability of occurrence P(ω_j). These parameters are analogous to the two-dimensional evaluation of populations according to mean, variance, and probability density. If these parameters are known for each class of objects to be encountered, it is in theory possible to formulate an explicit discriminator function to separate them.
- This function, called the Bayesian discriminator d() for a group of classes j, takes the form: d_j(x) = ln P(ω_j) - (1/2) ln |C_j| - (1/2)(x - m_j)^T C_j^{-1} (x - m_j), where x is a feature vector, x^T is the transpose of x, C_j and C_j^{-1} are the covariance matrix and its inverse, m_j and m_j^T are the mean feature vector and its transpose, and P(ω_j) is the probability of class ω_j occurring.
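The standard Gaussian form of the Bayesian discriminator, d_j(x) = ln P(ω_j) - ½ ln|C_j| - ½ (x - m_j)^T C_j^{-1} (x - m_j), can be sketched directly; the two classes, their means, identity covariance, and equal priors below are illustrative assumptions.

```python
import numpy as np

def bayes_discriminant(x, m, C, prior):
    """Gaussian Bayesian discriminator value for one class; the class
    whose discriminator value is largest wins."""
    diff = x - m
    return (np.log(prior)
            - 0.5 * np.log(np.linalg.det(C))
            - 0.5 * diff @ np.linalg.inv(C) @ diff)

# Two hypothetical classes with identity covariance and equal priors.
m_polyp, m_fold = np.array([0.0, 0.0]), np.array([4.0, 4.0])
C = np.eye(2)

x = np.array([0.5, 0.2])   # an unknown feature vector near the polyp mean
d_polyp = bayes_discriminant(x, m_polyp, C, 0.5)
d_fold = bayes_discriminant(x, m_fold, C, 0.5)
```

With equal priors and identical covariances the rule reduces to assigning x to the class with the nearer mean under the Mahalanobis distance.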
- the different kinds of soft tissue density structures to be encountered in the colon are relatively few in number, and include polyps, haustral folds, and retained feces.
- One approach to polyp detection is to construct a library of known colon structures and derive the mean, co-variance, and probability parameters necessary to form an explicit discriminator. While it is possible that the feature vectors of these structures will cluster sufficiently to permit construction of an explicit discriminator function, it is believed that these parameters have not yet been catalogued for the human colon.
- in a neural network approach, the feature vector of an object is evaluated in a set of weighted nodes, analogous to neurons.
- Each node has the property that it generates a non-linear output in response to a set of input values and associated weights.
- the input nodes typically correspond in number to the dimension of the input feature vector, and similarly, the output nodes correspond in number to the different object classes to be identified. Determining the node weights is performed by exposing the network to a set of training cases.
- vectors of known class are fed forward through the network and the error of the resulting output classification is stepwise propagated backward through the network.
- Each weight is adjusted in order to minimize the local error associated at a given node with the inputs of the preceding layer.
- unknown feature vectors are fed forward through the network without backward feedback and class membership is determined by the final state of the output nodes. It has been shown that a three-layer network, comprised of an input layer, a hidden layer, and an output layer is in theory capable of separating arbitrarily complex groups of object classes. Hence, another means exists to analyze the feature vectors of colon objects if the cataloguing method proves unfeasible.
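A generic back-propagation sketch of the three-layer network described (four input, four hidden, and three output nodes, matching the configuration given later in the text) follows. The toy training vectors and learning rate are assumptions; this illustrates the feed-forward/back-propagate mechanics only, not a trained polyp classifier.

```python
import numpy as np

rng = np.random.default_rng(0)

# Small zero-mean random initial weights, as described for network training.
W1 = rng.normal(0, 0.1, (4, 4))   # input layer -> hidden layer
W2 = rng.normal(0, 0.1, (4, 3))   # hidden layer -> output layer

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def forward(x):
    h = sigmoid(x @ W1)
    return h, sigmoid(h @ W2)

# Toy training set: feature vectors of known class, one-hot targets.
X = np.array([[1, 0, 0, 0], [0, 1, 0, 0], [0, 0, 1, 1.0]])
T = np.eye(3)

def epoch(lr=1.0):
    """One pass of back propagation over the training set; returns error."""
    global W1, W2
    err = 0.0
    for x, t in zip(X, T):
        h, y = forward(x)
        # Output error, propagated stepwise backward through the layers.
        dy = (y - t) * y * (1 - y)
        dh = (dy @ W2.T) * h * (1 - h)
        W2 -= lr * np.outer(h, dy)
        W1 -= lr * np.outer(x, dh)
        err += float(np.sum((y - t) ** 2))
    return err

errors = [epoch() for _ in range(200)]
```

Each weight update reduces the local error at its node given the preceding layer's inputs, so the total classification error falls over training; at test time, vectors are fed forward only and the most active output node gives the class.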
- a second approach taken for polyp classification combines a planar colon map with template matching.
- the segmented representations of objects in the colon map are further modified to normalize their size and to incorporate a description of their internal texture. Normalization for size can be accomplished by analysis of the boundary points of each object around the object centroid.
- the boundary points of an object are repositioned along their respective radii toward the centroid by the minimum radius observed in the set of boundary points for that object. This process retains the basic morphology of the boundary contour, and reduces the variations in boundary points for larger objects.
- a description of the internal composition of an object can be represented by the standard deviation of pixels contained within the object boundary.
- the pixels within the normalized representation of the object are set to the value of this standard deviation.
- the result of these steps is a planar object having a contour normalized for size, and having internal pixel values set to the standard deviation of the object's internal texture.
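The size-normalization step can be sketched as follows, using the "divide each boundary-point radius by r_min" formulation given later in the text; the ellipse-like boundary is an illustrative assumption.

```python
import numpy as np

def normalize_object(boundary, centroid):
    """Normalize an object's size: divide each boundary-point radius
    (distance to the centroid) by the minimum radius r_min, so the
    smallest radius becomes 1 while the contour shape is retained."""
    pts = np.asarray(boundary, dtype=float) - centroid
    radii = np.linalg.norm(pts, axis=1)
    r_min = radii.min()
    # Dividing every radius by r_min keeps each point on its original
    # radial direction but rescales the whole object.
    return pts / r_min

# An ellipse-like boundary with radii 2 and 4 (centroid at the origin).
boundary = np.array([[2.0, 0], [0, 4.0], [-2.0, 0], [0, -4.0]])
norm = normalize_object(boundary, centroid=np.zeros(2))
radii = np.linalg.norm(norm, axis=1)
```

The 2:1 aspect of the contour survives normalization while the absolute size is removed, which is what allows objects of different sizes to be compared against a single template.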
- a similar mapping is performed for the template polyp — its contour and internal texture are respectively normalized for size and modified to reflect internal homogeneity.
- the modified map is then combined with the modified template in the process of correlation, described previously.
- the pixel values in the correlation image reflect the similarity of each region of the modified map with the modified template. Sharp peaks of pixel value correspond to regions of high similarity and are taken to represent the location of polyps. Correlation peaks of this kind can be selected by means of a high pass filter and threshold.
- another approach to further develop a polyp detection system can be provided by implementing colon mapping and segmentation as follows.
- the centerline of the colon can be found using prior art techniques such as the algorithm described by Ferreira and Russ, with the modifications described by Zhang for handling colon flexures. With the colon centerline identified, a colon mapping process can be performed.
- the smallest size polyp of clinical interest is approximately seven (7) mm in size, which represents ten (10) isometric voxels from thirty-five (35) cm field-of-view CTC data.
- the software will sample the centerline at five (5) voxel intervals.
- the angular sampling interval at each stop along the centerline derives from a similar calculation: the maximum circumference of normal, air distended colon in CTC is approximately 180 mm, leading to a seven degree (7°) interval in order to achieve fifty percent (50%) sampling overlap.
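The sampling-interval arithmetic above can be checked directly. The 512-pixel matrix size is an assumption (it is not stated in the text); the other figures are taken from the passage.

```python
# 50% sampling overlap means the step is half the minimum polyp size.
fov_mm = 350.0                  # 35 cm field of view
matrix = 512                    # assumed CT matrix size
voxel_mm = fov_mm / matrix      # ~0.68 mm isometric voxels

min_polyp_mm = 7.0
polyp_voxels = min_polyp_mm / voxel_mm         # ~10 voxels across a 7 mm polyp
step_voxels = polyp_voxels / 2                 # ~5-voxel centerline step

circumference_mm = 180.0                       # max distended colon circumference
arc_mm = min_polyp_mm / 2                      # 3.5 mm arc for 50% overlap
angle_deg = 360.0 * arc_mm / circumference_mm  # 7 degrees
```

Both stated intervals fall out of the same rule: sample at half the smallest feature of interest, once along the centerline (5 voxels) and once around it (7 degrees).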
- the technique utilizes the radial distance from the centerline to an inner surface of the colon using a three-dimensional Euclidean distance formula.
- the mucosal surface is segmented out of the CT data using a global threshold set to exclude pixels above -50 Hounsfield units. Previous experience has shown that this threshold value clearly outlines the inner aspect of the mucosa.
- the software will tabulate the maximal radius observed, r_max.
- the radial height of each point, i, along the signature will be calculated as: r_max - r_i.
- the data construct for the colon map comprises a three dimensional matrix, with two axes describing position along the mucosal (inner) surface of the colon, and the third dimension representing the radial height of features from the surface.
- the internal voxels of structures protruding into the colon lumen will be incorporated into the matrix in two steps, based on the calculation of the radial signature of the segmented colon.
- the maximum of the radial signature, r_max, will be taken to represent the most distended region of colon wall at each position along the colon centerline for which it is calculated.
- voxels located along the radial segment stretching from the inner mucosal surface to r_max will be included in the map matrix.
- the map will comprise strata representing the features protruding into the colon lumen.
- Three additional levels along the height axis of the map will be used. The first of these, designated z_0, will hold the radial height data. The second, z_1, will hold the gradient transform of the map, and the third, z_2, will hold a normalized form of each object for use in correlation matching.
- the gradient transform of the colon map, located in level z_1, will be calculated using the Sobel operator. The gradient will be calculated by convolution of the Sobel operator with the z_0 level data, and the result will be taken to represent the edges along the height-axis of structures protruding into the colon lumen.
- a watershed segmentation on the z_1 transform data is then performed.
- This watershed segmentation can be accomplished using any prior art technique such as the technique of Bieniek. While the watershed technique has several known advantages in terms of its function and output, it is also known that the algorithm can generate unwanted boundaries due to minor variations in the surface being analyzed. This problem, known as over-segmentation, can be addressed in the following manner. It has been shown that preprocessing the surface to be analyzed in order to bring additional information to the segmentation step can improve the output of the segmentation. For example, application of a smoothing filter can diminish the presence of minor surface variations, removing them as possible false targets of segmentation.
- the choice of a sufficiently high gradient threshold in the gradient transform may permit selection of only the larger features of interest along the colon surface.
- the boundary variance, boundary compactness, textural variance, and textural entropy are determined.
- the centroid of the boundary points of each object represented in the z_1 level of the data structure is determined. This may be accomplished using conventional techniques.
- the variance of the distance of boundary points to the centroid as well as the compactness can then be determined. This may also be accomplished using conventional techniques.
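The boundary descriptors named above can be sketched conventionally. Compactness is taken here as perimeter²/area, a common convention; the text itself does not fix the formula, so that choice is an assumption, as is the square test boundary.

```python
import numpy as np

def boundary_descriptors(boundary):
    """Centroid, variance of boundary-point distance to the centroid,
    and compactness (perimeter^2 / area) for a closed boundary given
    as an ordered list of points."""
    pts = np.asarray(boundary, dtype=float)
    centroid = pts.mean(axis=0)
    radii = np.linalg.norm(pts - centroid, axis=1)
    variance = float(radii.var())
    # Perimeter from successive point-to-point segments.
    d = np.diff(np.vstack([pts, pts[:1]]), axis=0)
    perimeter = float(np.linalg.norm(d, axis=1).sum())
    # Enclosed area via the shoelace formula.
    x, y = pts[:, 0], pts[:, 1]
    area = 0.5 * abs(float(np.dot(x, np.roll(y, -1)) - np.dot(y, np.roll(x, -1))))
    return centroid, variance, perimeter ** 2 / area

# A square boundary: corner radii are all equal, so radial variance is 0,
# and compactness is (4s)^2 / s^2 = 16.
square = [[0, 0], [2, 0], [2, 2], [0, 2]]
c, var, compact = boundary_descriptors(square)
```

Rounder, polyp-like contours drive both the radial variance and the compactness toward their minima, which is why these descriptors help separate polyps from elongated folds.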
- the composite voxels for each three-dimensional object arising from the colon mucosa are gathered. In one approach, this can be accomplished in two steps.
- the z_1 boundary points can be used to define, within the plane of z_1, the set of internal points of each object, designated z_1,internal,j.
- the data structure is designed so that, for each point p_i in the set z_1,internal,j, the composite elements of each three-dimensional object are held in the column of strata above p_i.
- the vertical delimiter of these elements is the radial signature stored in each corresponding position of the z_0 layer.
- a normalized version of each segmented object is determined and these representations are stored in the z_2 layer of the data structure.
- the normalized version of each object will be centered on the object centroid calculated previously and stored in the z_1 layer. Normalization for size for each object will proceed as follows. The radial distance separating the set of boundary points and the centroid for each object is determined. The minimum radial distance observed for each set of boundary points, designated r_min, will be used to displace each boundary point in a radial direction toward the centroid by dividing each boundary point radius by r_min. The result will be a set of boundary points that incorporate the shape of each object, but reduced in size.
- each object in z_2 will be incorporated into this representation by calculating the standard deviation of voxel density.
- the internal elements comprising each three-dimensional object represented in z_1 and z_2 will be gathered together as described above for the statistical approach.
- the result in the z_2 level will be a set of objects whose boundaries correspond in morphology to the objects in z_1, but normalized for size, and whose internal elements incorporate the object texture.
- Lesions may be catalogued using either a statistical approach or a correlation approach.
- an estimate is made of the boundary variance, boundary compactness, textural variance and textural entropy of a group of colonoscopy-confirmed polyps, haustral folds, and fecal pseudolesions.
- the polyps and other objects will be extracted from the large group of research CTC cases performed previously. Initially, objects that have been identified on the traditionally cleansed CTC exams can be used, in order to exclude noise introduced by the DSBC processing.
- fifty lesions already present in a library include a spectrum of polyps ranging in size from 5 to 35 mm.
- the four statistical boundary and textural descriptors can be calculated from the CTC datasets, following the segmentation steps described above. These data will be used to estimate a mean feature vector, m, covariance matrix C, and probability of occurrence for polyps, haustral folds, and fecal pseudolesions.
- the mean vector and covariance matrix can serve as the basis for constructing a Bayesian pattern discriminator function, utilizing prior art techniques such as the method described by Gonzalez.
- the software will flag objects assigned to the polyp class by resetting the centroid of the z_1 layer of each object to an arbitrarily high value.
- the group of marked centroids will be taken to represent the group of polyp candidates and these data will be passed to a polyp mark-up routine for identification to the evaluating radiologist.
- it is possible that the Bayesian discriminator will be unable to adequately distinguish between object classes; it is conceivable that the variety of polyp morphologies, from sessile to pedunculated, will prove too complex for the above-described approach. If the Bayesian analysis of the feature vectors outlined above proves too limited, the feature vectors may be analyzed by means of a three-layer artificial neural network. Determination of inadequate performance of the Bayesian technique will be made based upon evaluation of polyps within the colon phantom, a series of steps described below. It is believed that the neural network approach may be necessary if the Bayesian approach yields a sensitivity for polyps in the colon phantom of < 70%.
- the three-layer neural network will be composed of four input nodes, four hidden nodes, and three output nodes.
- Initial training of the network will take place with a set of polyps, folds, and pseudolesions obtained from traditionally cleansed CTC.
- Network node weights will initially be set to small random values with zero mean. These will be adjusted according to the method of back propagation, and the output nodes will be monitored during training.
- the process continues by exposing the network to a set of known phantom structures obtained using the DSBC technique, as it has been shown that the performance of an artificial neural network can increase with graded exposure to noise during training. Training will be deemed complete for the network when, upon presentation with an object of class i, the output for node o_i is > 0.95 and the output for all other nodes o_j, j ≠ i, is < 0.05.
- a polyp template can be generated from a library of previous data by applying the processing steps outlined above to generate a family of normalized polyp representations. These polyp templates are convolved with the normalized representation of the colon stored in the z 2 level of the imaging data structure.
- the resulting correlation image is filtered using a two-dimensional high pass filter, in order to emphasize sharp peaks of correlation, as these are most likely to represent regions of match.
Landscapes
- Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- Computer Vision & Pattern Recognition (AREA)
- General Physics & Mathematics (AREA)
- Health & Medical Sciences (AREA)
- Geometry (AREA)
- General Health & Medical Sciences (AREA)
- Medical Informatics (AREA)
- Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
- Radiology & Medical Imaging (AREA)
- Quality & Reliability (AREA)
- Image Processing (AREA)
- Image Analysis (AREA)
- Apparatus For Radiation Diagnosis (AREA)
Abstract
Description
Claims
Priority Applications (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
JP2007508461A JP2007532251A (en) | 2004-04-12 | 2005-04-12 | Method and apparatus for image processing in an intestinal deduction system |
EP05736176A EP1735750A2 (en) | 2004-04-12 | 2005-04-12 | Method and apparatus for processing images in a bowel subtraction system |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US56146504P | 2004-04-12 | 2004-04-12 | |
US60/561,465 | 2004-04-12 |
Publications (2)
Publication Number | Publication Date |
---|---|
WO2005101314A2 true WO2005101314A2 (en) | 2005-10-27 |
WO2005101314A3 WO2005101314A3 (en) | 2006-06-01 |
Family
ID=34966077
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/US2005/012325 WO2005101314A2 (en) | 2004-04-12 | 2005-04-12 | Method and apparatus for processing images in a bowel subtraction system |
Country Status (3)
Country | Link |
---|---|
EP (1) | EP1735750A2 (en) |
JP (1) | JP2007532251A (en) |
WO (1) | WO2005101314A2 (en) |
Cited By (14)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2007064769A1 (en) * | 2005-11-30 | 2007-06-07 | The General Hospital Corporation | Adaptive density mapping in computed tomographic images |
JP2007209538A (en) * | 2006-02-09 | 2007-08-23 | Ziosoft Inc | Image processing method and program |
EP1884894A1 (en) * | 2006-07-31 | 2008-02-06 | iCad, Inc. | Electronic subtraction of colonic fluid and rectal tube in computed colonography |
JP2008093172A (en) * | 2006-10-11 | 2008-04-24 | Olympus Corp | Image processing device, image processing method, and image processing program |
JP2009022411A (en) * | 2007-07-18 | 2009-02-05 | Hitachi Medical Corp | Medical image processor |
EP2033567A1 (en) * | 2006-05-26 | 2009-03-11 | Olympus Corporation | Image processing device and image processing program |
DE102007058687A1 (en) * | 2007-12-06 | 2009-06-10 | Siemens Ag | Colon representing method for patient, involves producing processed data by segmenting and removing image information of colon contents marked with contrast medium during recording of image data detected by colon folds |
JP2010504794A (en) * | 2006-09-29 | 2010-02-18 | コーニンクレッカ フィリップス エレクトロニクス エヌ ヴィ | Protrusion detection method, system, and computer program |
DE102009033452A1 (en) * | 2009-07-16 | 2011-01-20 | Siemens Aktiengesellschaft | Method and device for providing a segmented volume dataset for a virtual colonoscopy and computer program product |
US8031921B2 (en) | 2005-02-14 | 2011-10-04 | Mayo Foundation For Medical Education And Research | Electronic stool subtraction in CT colonography |
EP2472473A1 (en) * | 2006-03-14 | 2012-07-04 | Olympus Medical Systems Corp. | Image analysis device |
WO2015063192A1 (en) * | 2013-10-30 | 2015-05-07 | Koninklijke Philips N.V. | Registration of tissue slice image |
CN109313803A (en) * | 2016-06-16 | 2019-02-05 | 皇家飞利浦有限公司 | A kind of at least part of method and apparatus of structure in at least part of image of body for mapping object |
CN112116694A (en) * | 2020-09-22 | 2020-12-22 | 青岛海信医疗设备股份有限公司 | Method and device for drawing three-dimensional model in virtual bronchoscope auxiliary system |
Families Citing this family (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP4845566B2 (en) * | 2006-04-03 | 2011-12-28 | 株式会社日立メディコ | Image display device |
US10089729B2 (en) * | 2014-04-23 | 2018-10-02 | Toshiba Medical Systems Corporation | Merging magnetic resonance (MR) magnitude and phase images |
-
2005
- 2005-04-12 EP EP05736176A patent/EP1735750A2/en not_active Withdrawn
- 2005-04-12 JP JP2007508461A patent/JP2007532251A/en not_active Withdrawn
- 2005-04-12 WO PCT/US2005/012325 patent/WO2005101314A2/en not_active Application Discontinuation
Non-Patent Citations (1)
Title |
---|
None |
Cited By (32)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US8031921B2 (en) | 2005-02-14 | 2011-10-04 | Mayo Foundation For Medical Education And Research | Electronic stool subtraction in CT colonography |
US7809177B2 (en) | 2005-11-30 | 2010-10-05 | The General Hospital Corporation | Lumen tracking in computed tomographic images |
US7961967B2 (en) | 2005-11-30 | 2011-06-14 | The General Hospital Corporation | Adaptive density mapping in computed tomographic images |
US8000550B2 (en) | 2005-11-30 | 2011-08-16 | The General Hospital Corporation | Adaptive density correction in computed tomographic images |
US7965880B2 (en) | 2005-11-30 | 2011-06-21 | The General Hospital Corporation | Lumen tracking in computed tomographic images |
WO2007064769A1 (en) * | 2005-11-30 | 2007-06-07 | The General Hospital Corporation | Adaptive density mapping in computed tomographic images |
JP2007209538A (en) * | 2006-02-09 | 2007-08-23 | Ziosoft Inc | Image processing method and program |
US7860284B2 (en) | 2006-02-09 | 2010-12-28 | Ziosoft, Inc. | Image processing method and computer readable medium for image processing |
US8244009B2 (en) | 2006-03-14 | 2012-08-14 | Olympus Medical Systems Corp. | Image analysis device |
EP2472473A1 (en) * | 2006-03-14 | 2012-07-04 | Olympus Medical Systems Corp. | Image analysis device |
US8116531B2 (en) | 2006-05-26 | 2012-02-14 | Olympus Corporation | Image processing apparatus, image processing method, and image processing program product |
EP2033567A4 (en) * | 2006-05-26 | 2010-03-10 | Olympus Corp | Image processing device and image processing program |
EP2033567A1 (en) * | 2006-05-26 | 2009-03-11 | Olympus Corporation | Image processing device and image processing program |
EP1884894A1 (en) * | 2006-07-31 | 2008-02-06 | iCad, Inc. | Electronic subtraction of colonic fluid and rectal tube in computed colonography |
JP2010504794A (en) * | 2006-09-29 | 2010-02-18 | コーニンクレッカ フィリップス エレクトロニクス エヌ ヴィ | Protrusion detection method, system, and computer program |
US8594396B2 (en) | 2006-10-11 | 2013-11-26 | Olympus Corporation | Image processing apparatus, image processing method, and computer program product |
US8917920B2 (en) | 2006-10-11 | 2014-12-23 | Olympus Corporation | Image processing apparatus, image processing method, and computer program product |
EP2085019A1 (en) * | 2006-10-11 | 2009-08-05 | Olympus Corporation | Image processing device, image processing method, and image processing program |
EP2085019A4 (en) * | 2006-10-11 | 2011-11-30 | Olympus Corp | Image processing device, image processing method, and image processing program |
JP2008093172A (en) * | 2006-10-11 | 2008-04-24 | Olympus Corp | Image processing device, image processing method, and image processing program |
JP2009022411A (en) * | 2007-07-18 | 2009-02-05 | Hitachi Medical Corp | Medical image processor |
DE102007058687A1 (en) * | 2007-12-06 | 2009-06-10 | Siemens Ag | Colon representing method for patient, involves producing processed data by segmenting and removing image information of colon contents marked with contrast medium during recording of image data detected by colon folds |
US8908938B2 (en) | 2009-07-16 | 2014-12-09 | Siemens Aktiengesellschaft | Method and device for providing a segmented volume data record for a virtual colonoscopy, and computer program product |
DE102009033452B4 (en) * | 2009-07-16 | 2011-06-30 | Siemens Aktiengesellschaft, 80333 | Method for providing a segmented volume dataset for a virtual colonoscopy and related items |
DE102009033452A1 (en) * | 2009-07-16 | 2011-01-20 | Siemens Aktiengesellschaft | Method and device for providing a segmented volume dataset for a virtual colonoscopy and computer program product |
WO2015063192A1 (en) * | 2013-10-30 | 2015-05-07 | Koninklijke Philips N.V. | Registration of tissue slice image |
US10043273B2 (en) | 2013-10-30 | 2018-08-07 | Koninklijke Philips N.V. | Registration of tissue slice image |
US10699423B2 (en) | 2013-10-30 | 2020-06-30 | Koninklijke Philips N.V. | Registration of tissue slice image |
CN109313803A (en) * | 2016-06-16 | 2019-02-05 | 皇家飞利浦有限公司 | Method and apparatus for mapping at least part of a structure in an image of at least part of a body of a subject |
CN109313803B (en) * | 2016-06-16 | 2023-05-09 | 皇家飞利浦有限公司 | Method and apparatus for mapping at least part of a structure in an image of at least part of a body of a subject |
CN112116694A (en) * | 2020-09-22 | 2020-12-22 | 青岛海信医疗设备股份有限公司 | Method and device for drawing three-dimensional model in virtual bronchoscope auxiliary system |
CN112116694B (en) * | 2020-09-22 | 2024-03-05 | 青岛海信医疗设备股份有限公司 | Method and device for drawing three-dimensional model in virtual bronchoscope auxiliary system |
Also Published As
Publication number | Publication date |
---|---|
WO2005101314A3 (en) | 2006-06-01 |
JP2007532251A (en) | 2007-11-15 |
EP1735750A2 (en) | 2006-12-27 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US7630529B2 (en) | | Methods for digital bowel subtraction and polyp detection |
US7876947B2 (en) | | System and method for detecting tagged material using alpha matting |
US8170642B2 (en) | | Method and system for lymph node detection using multiple MR sequences |
US11896407B2 (en) | | Medical imaging based on calibrated post contrast timing |
EP1735750A2 (en) | | Method and apparatus for processing images in a bowel subtraction system |
EP4118617A1 (en) | | Automated detection of tumors based on image processing |
Casiraghi et al. | | Automatic abdominal organ segmentation from CT images |
Alush et al. | | Automated and interactive lesion detection and segmentation in uterine cervix images |
Vukadinovic et al. | | Segmentation of the outer vessel wall of the common carotid artery in CTA |
Ratheesh et al. | | Advanced algorithm for polyp detection using depth segmentation in colon endoscopy |
US8515200B2 (en) | | System, software arrangement and method for segmenting an image |
WO2010034968A1 (en) | | Computer-implemented lesion detection method and apparatus |
Anjaiah et al. | | Effective texture features for segmented mammogram images |
Afifi et al. | | Unsupervised detection of liver lesions in CT images |
Nagy et al. | | On classical and fuzzy Hough transform in colonoscopy image processing |
Wei et al. | | Segmentation of lung lobes in volumetric CT images for surgical planning of treating lung cancer |
Tamiselvi | | Effective segmentation approaches for renal calculi segmentation |
Mostafavi | | A Comparison of Virtual Colonoscopy Methods for Colon Cleansing |
Prakash | | Medical image processing methodology for liver tumour diagnosis |
Naeppi et al. | | Computer-aided detection of polyps and masses for CT colonography |
Hiraman | | Liver Segmentation Using 3D CT Scans |
Ko et al. | | Interactive polyp biopsy based on automatic segmentation of virtual colonoscopy |
Gomalavalli et al. | | Feature Extraction of kidney Tumor implemented with Fuzzy Inference System |
Yoshida et al. | | Computer-aided diagnosis in CT colonography: detection of polyps based on geometric and texture features |
Kim | | Multiresolutional watershed segmentation with user-guided grouping |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
AK | Designated states |
Kind code of ref document: A2 Designated state(s): AE AG AL AM AT AU AZ BA BB BG BR BW BY BZ CA CH CN CO CR CU CZ DE DK DM DZ EC EE EG ES FI GB GD GE GH GM HR HU ID IL IN IS JP KE KG KM KP KR KZ LC LK LR LS LT LU LV MA MD MG MK MN MW MX MZ NA NI NO NZ OM PG PH PL PT RO RU SC SD SE SG SK SL SM SY TJ TM TN TR TT TZ UA UG US UZ VC VN YU ZA ZM ZW |
|
AL | Designated countries for regional patents |
Kind code of ref document: A2 Designated state(s): BW GH GM KE LS MW MZ NA SD SL SZ TZ UG ZM ZW AM AZ BY KG KZ MD RU TJ TM AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HU IE IS IT LT LU MC NL PL PT RO SE SI SK TR BF BJ CF CG CI CM GA GN GQ GW ML MR NE SN TD TG |
|
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
REEP | Request for entry into the european phase |
Ref document number: 2005736176 Country of ref document: EP |
|
WWE | Wipo information: entry into national phase |
Ref document number: 2005736176 Country of ref document: EP |
|
WWE | Wipo information: entry into national phase |
Ref document number: 2007508461 Country of ref document: JP |
|
NENP | Non-entry into the national phase |
Ref country code: DE |
|
WWW | Wipo information: withdrawn in national office |
Ref document number: DE |
|
WWP | Wipo information: published in national office |
Ref document number: 2005736176 Country of ref document: EP |