US20180218501A1 - System and method for detecting brain metastases - Google Patents
- Publication number: US20180218501A1
- Application number: US15/423,220
- Authority
- US
- United States
- Prior art keywords
- objects
- subset
- anatomical region
- medical image
- classified
- Prior art date
- Legal status: Granted
Classifications
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61N—ELECTROTHERAPY; MAGNETOTHERAPY; RADIATION THERAPY; ULTRASOUND THERAPY
- A61N5/00—Radiation therapy
- A61N5/10—X-ray therapy; Gamma-ray therapy; Particle-irradiation therapy
- A61N5/103—Treatment planning systems
- A61N5/1039—Treatment planning systems using functional images, e.g. PET or MRI
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61N—ELECTROTHERAPY; MAGNETOTHERAPY; RADIATION THERAPY; ULTRASOUND THERAPY
- A61N5/00—Radiation therapy
- A61N5/10—X-ray therapy; Gamma-ray therapy; Particle-irradiation therapy
- A61N5/1077—Beam delivery systems
- A61N5/1084—Beam delivery systems for delivering multiple intersecting beams at the same time, e.g. gamma knives
-
- G06K9/6267—
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T11/00—2D [Two Dimensional] image generation
- G06T11/003—Reconstruction from projections, e.g. tomography
- G06T11/008—Specific post-processing after tomographic reconstruction, e.g. voxelisation, metal artifact correction
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/0002—Inspection of images, e.g. flaw detection
- G06T7/0012—Biomedical image inspection
- G06T7/0014—Biomedical image inspection using an image reference approach
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/10—Segmentation; Edge detection
- G06T7/11—Region-based segmentation
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/60—Analysis of geometric attributes
- G06T7/64—Analysis of geometric attributes of convexity or concavity
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/70—Determining position or orientation of objects or cameras
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/40—Extraction of image or video features
- G06V10/44—Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10072—Tomographic images
- G06T2207/10081—Computed x-ray tomography [CT]
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10072—Tomographic images
- G06T2207/10088—Magnetic resonance imaging [MRI]
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10132—Ultrasound image
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20024—Filtering details
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20036—Morphological image processing
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/30—Subject of image; Context of image processing
- G06T2207/30004—Biomedical image processing
- G06T2207/30016—Brain
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/30—Subject of image; Context of image processing
- G06T2207/30004—Biomedical image processing
- G06T2207/30096—Tumor; Lesion
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V2201/00—Indexing scheme relating to image or video recognition or understanding
- G06V2201/03—Recognition of patterns in medical or anatomical images
- G06V2201/031—Recognition of patterns in medical or anatomical images of internal organs
Description
- This disclosure relates generally to radiotherapy treatment planning. More specifically, this disclosure relates to systems and methods for detecting brain metastases in medical images for developing a radiotherapy treatment plan to be used during radiotherapy.
- Radiotherapy is used to treat cancers and other ailments in mammalian (e.g., human and animal) tissue.
- One such radiotherapy technique is a Gamma Knife, by which a patient is irradiated by a large number of low-intensity gamma rays that converge with high intensity and high precision at a target (e.g., a tumor).
- In another embodiment, radiotherapy is provided using a linear accelerator, whereby a tumor is irradiated by high-energy particles (e.g., electrons, protons, ions, and the like).
- The placement and dose of the radiation beam must be accurately controlled to ensure the tumor receives the prescribed radiation, and the placement of the beam should be such as to minimize damage to the surrounding healthy tissue.
- Before administering radiation doses to treat a patient, a treatment plan needs to be created, in which the manner of applying the radiation doses is specified.
- A treatment plan is usually created based on a medical image (or a series of images) of the patient, in which an internal anatomical region of the patient is shown. From the medical image, the target to be treated is ascertained, as well as its location, size, and/or shape, based on which the directions and intensities of multiple radiation beams are determined such that the beams converge at the target location to provide the necessary radiation dose for treating the patient. While a physician may determine whether a particular object in the medical image is a target by visually observing the medical image, this process is often tedious and time consuming. Computer-aided image classification techniques can reduce the time needed to extract some or all of the required information from the medical image.
- For example, some methods rely on training data to train a statistical model, and the trained statistical model may then be used to identify a target.
- However, the effectiveness of such methods depends largely on the quality of the training data.
- In order to obtain acceptable results, the training data have to contain accurately identified targets in terms of their location and segmentation. Usually, such high-quality training data are in short supply.
- In another example, pure image processing methods have been used to enhance the visibility of the medical image to allow the physician to better observe the medical image.
- Such methods, however, lack the classification ability to determine whether a particular object in the medical image is a target or not.
- The present disclosure is directed to overcoming or mitigating one or more of the problems set forth above.
- One aspect of the present disclosure relates to a system for detecting an anatomical region of interest. The system may include a memory device storing computer-executable instructions and at least one processor device communicatively coupled to the memory device.
- The computer-executable instructions, when executed by the at least one processor device, cause the processor device to perform various operations.
- The operations may include identifying a plurality of objects in a medical image.
- The operations may also include selecting a subset of the objects by applying a morphology filter to the plurality of objects.
- The morphology filter may determine a morphological feature associated with each of the plurality of objects and exclude at least one object from the subset when the morphological feature of the at least one object exceeds a predetermined threshold.
- The operations may also include classifying the objects in the subset into one of a predetermined set of shapes.
- Moreover, the operations may include detecting the anatomical region of interest based on the classified objects in the subset.
- Another aspect of the present disclosure relates to a method for detecting an anatomical region of interest. The method may be implemented by at least one processor device executing computer-executable instructions.
- The method may include identifying a plurality of objects in a medical image.
- The method may also include selecting a subset of the objects by applying a morphology filter to the plurality of objects.
- The morphology filter may determine a morphological feature associated with each of the plurality of objects and exclude at least one object from the subset when the morphological feature of the at least one object exceeds a predetermined threshold.
- The method may also include classifying the objects in the subset into one of a predetermined set of shapes.
- Moreover, the method may include detecting the anatomical region of interest based on the classified objects in the subset.
- A further aspect of the present disclosure relates to a non-transitory computer-readable medium that stores a set of instructions that is executable by at least one processor of a device to cause the device to perform a method for detecting an anatomical region of interest.
- The method may include identifying a plurality of objects in a medical image.
- The method may also include selecting a subset of the objects by applying a morphology filter to the plurality of objects.
- The morphology filter may determine a morphological feature associated with each of the plurality of objects and exclude at least one object from the subset when the morphological feature of the at least one object exceeds a predetermined threshold.
- The method may also include classifying the objects in the subset into one of a predetermined set of shapes.
- Moreover, the method may include detecting the anatomical region of interest based on the classified objects in the subset.
- Additional objects and advantages of the present disclosure will be set forth in part in the following detailed description, and in part will be obvious from the description, or may be learned by practice of the present disclosure. The objects and advantages of the present disclosure will be realized and attained by means of the elements and combinations particularly pointed out in the appended claims.
- It is to be understood that the foregoing general description and the following detailed description are exemplary and explanatory only, and are not restrictive of the invention, as claimed.
- The accompanying drawings, which constitute a part of this specification, illustrate several embodiments and, together with the description, serve to explain the disclosed principles.
- FIG. 1 illustrates an exemplary radiotherapy system including a target detection system, according to some embodiments of the present disclosure.
- FIG. 2A illustrates an exemplary radiotherapy device, a Gamma Knife, according to some embodiments of the present disclosure.
- FIG. 2B illustrates another exemplary radiotherapy device, a linear accelerator (LINAC), according to some embodiments of the present disclosure.
- FIG. 3 illustrates an exemplary data processing device implementing the target detection system of FIG. 1, according to some embodiments of the present disclosure.
- FIG. 4 is an exemplary medical image showing brain metastases.
- FIG. 5 is a block diagram showing exemplary components of the target detection system of FIG. 1, according to some embodiments of the present disclosure.
- FIG. 6 schematically illustrates an object having convexity defects.
- FIG. 7 shows an exemplary output of the target detection system shown in FIG. 1.
- FIG. 8 is a flowchart illustrating an exemplary method for detecting a target, according to some embodiments of the present disclosure.
- Exemplary embodiments are described with reference to the accompanying drawings. Wherever convenient, the same reference numbers are used throughout the drawings to refer to the same or like parts.
- Systems and methods consistent with the present disclosure are directed to detecting an anatomical region of interest in a medical image (or a series of medical images) for developing a radiotherapy treatment plan.
- The anatomical region of interest may include a tumor, a cancer, an organ at risk (OAR), etc.
- In some embodiments, the anatomical region of interest may include a brain metastasis.
- An anatomical region of interest may also be referred to as a target. As used herein, the term “anatomical region of interest” is interchangeable with the term “target.”
- Detection of a target may include one or more of the following aspects: (1) classification or identification, indicating whether a target is present in the medical image or whether a particular object shown in the medical image is a target; (2) positioning, indicating the position or location of the target, once identified, in two-dimensional (2D) and/or three-dimensional (3D) space; and (3) outlining or segmentation, indicating one or more ways of separating, emphasizing, or differentiating the identified target from other features or the background of the medical image.
- For example, an object shown in the medical image may be classified/identified as a target in a binary manner: is a target or is not a target.
- In another example, the classification/identification may be carried out based on probability or likelihood.
- In some embodiments, an object may be identified as being, for example, highly likely a target, indicating a high probability that the object is in fact a target.
- Similarly, an object may be identified as being less likely a target.
- The probability or likelihood associated with a classification or identification may be described by a confidence measure, such as a percentage number, to quantify the probability or likelihood.
- The spatial location may be indicated by the (X, Y, Z) coordinates of the target in a Cartesian system, or other appropriate coordinates if other spatial coordinate systems (e.g., cylindrical, spherical, etc.) are used.
- Various methods may be used to outline or segment a target. For example, the target may be outlined by a contour surrounding the target.
- In another example, the target may be rendered in a different color from the background or other features shown in the medical image. It would be apparent to a person skilled in the art to use other appropriate methods to separate, emphasize, or differentiate the identified target.
- The medical image(s) may include images generated from various imaging modalities.
- For example, the medical image(s) may include a Magnetic Resonance Imaging (MRI) image, a computed tomography (CT) image, an ultrasound image, or the like.
- The medical image(s) may be 2D or 3D.
- A 3D image may include a plurality of 2D slices.
- A certain kind of target to be detected may have one or more morphology features that are specific to that kind of target. For example, most brain metastases are round in shape. Such morphology features may provide useful information for the detection of the corresponding kind of target and may lead to improved efficiency and accuracy. Embodiments of the present disclosure provide exemplary systems and methods that utilize such morphology features in the detection of targets such as brain metastases. The detection result may be used to develop a treatment plan for conducting radiotherapy treatment.
- FIG. 1 illustrates an exemplary radiotherapy system 100, according to some embodiments of the present disclosure.
- Radiotherapy system 100 may include a treatment planning system 110, a target detection system 120, a radiotherapy device 130, and a medical imaging device 140.
- In addition, radiotherapy system 100 may include a display device and a user interface (not shown).
- As shown in FIG. 1, target detection system 120 may communicate with medical imaging device 140 to receive one or more medical images.
- Target detection system 120 may detect one or more targets in the medical image(s) and communicate the detection result with treatment planning system 110.
- Treatment planning system 110 may provide a treatment plan based, at least partially, on the detection result, and communicate the treatment plan with radiotherapy device 130.
- Treatment planning system 110 may also communicate with medical imaging device 140 to receive medical images directly, such as updated medical image(s) before or during the treatment process.
- During the treatment, radiotherapy device 130 may administer radiation doses to a patient according to the treatment plan.
- In some embodiments, radiotherapy device 130 may be local with respect to treatment planning system 110.
- For example, radiotherapy device 130 and treatment planning system 110 may be located in the same room of a medical facility/clinic.
- In other embodiments, radiotherapy device 130 may be remote with respect to treatment planning system 110, and the data communication between radiotherapy device 130 and treatment planning system 110 may be carried out through a network (e.g., a local area network (LAN); a wireless network; a cloud computing environment such as software as a service, platform as a service, or infrastructure as a service; a client-server; a wide area network (WAN); or the like).
- Similarly, the communication links between target detection system 120 and treatment planning system 110, between target detection system 120 and medical imaging device 140, and between treatment planning system 110 and medical imaging device 140 may also be implemented in a local or remote manner.
- In some embodiments, treatment planning system 110 and target detection system 120 may be implemented in a single data processing device, as indicated by the dashed line box in FIG. 1.
- For example, treatment planning system 110 and target detection system 120 may be implemented as different software programs operating on the same hardware device.
- In other embodiments, treatment planning system 110 and target detection system 120 may be implemented using different data processing devices.
- Medical imaging device 140 may include an MRI imaging device, a CT imaging device, an X-ray imaging device, a positron emission tomography (PET) imaging device, an ultrasound imaging device, a fluoroscopic device, a single-photon emission computed tomography (SPECT) imaging device, or other medical imaging devices for obtaining one or more medical images of a patient. Accordingly, medical imaging device 140 may provide various kinds of medical images. For example, the medical images may include MRI images, CT images, PET images, X-ray images, ultrasound images, SPECT images, etc.
- FIG. 2A illustrates an example of one type of radiotherapy device 130 (e.g., Leksell Gamma Knife), according to some embodiments of the present disclosure.
- As shown in FIG. 2A, in a radiotherapy treatment session, a patient 202 may wear a coordinate frame 220 to keep stable the patient's body part (e.g., the head) undergoing surgery or radiotherapy.
- Coordinate frame 220 and a patient positioning system 222 may establish a spatial coordinate system, which may be used while imaging a patient or during radiation surgery.
- Radiotherapy device 130 may include a protective housing 214 to enclose a plurality of radiation sources 212.
- Radiation sources 212 may generate a plurality of radiation beams (e.g., beamlets) through beam channels 216.
- The plurality of radiation beams may be configured to focus on an isocenter 218 from different directions. While each individual radiation beam may have a relatively low intensity, isocenter 218 may receive a relatively high level of radiation when multiple doses from different radiation beams accumulate at isocenter 218. In some embodiments, isocenter 218 may correspond to a target under surgery or treatment, such as a tumor.
- FIG. 2B illustrates another example of radiotherapy device 130 (e.g., a linear accelerator), according to some embodiments of the present disclosure.
- Using a linear accelerator, a patient 242 may be positioned on a patient table 243 to receive the radiation dose according to the treatment plan.
- Linear accelerator 130 may include a radiation head 245 that generates a radiation beam 246.
- Radiation head 245 may be rotatable around a horizontal axis 247.
- In addition, below the patient table 243 there may be provided a flat panel scintillator detector 244, which may rotate synchronously with radiation head 245 around an isocenter 241. The intersection of axis 247 with the center of beam 246, produced by radiation head 245, is usually referred to as the “isocenter.”
- Patient table 243 may be motorized so that patient 242 can be positioned with the tumor site at or close to the isocenter 241.
- Radiation head 245 may rotate about a gantry 248 to provide patient 242 with a plurality of varying dosages of radiation according to the treatment plan.
- FIG. 3 illustrates an embodiment of a data processing device 111.
- Data processing device 111 may implement target detection system 120, treatment planning system 110, or both.
- As shown in FIG. 3, data processing device 111 may include one or more processors 250, a memory or storage device 260, and a communication interface 270.
- Memory/storage device 260 may store computer-executable instructions, such as target detection software 264.
- Memory/storage device 260 may optionally store treatment planning software 262.
- Processor 250 may include one or more processor units, or a single processor unit with one or more cores.
- A “computing core” of processor 250 may refer to either a processor unit or a core of a processor unit that is capable of executing instructions in a parallel-computing manner. For example, a computation task may be partitioned into multiple parallel branches or “threads,” and each branch/thread may be executed by a computing core in parallel with other computing core(s).
- Processor 250 may be communicatively coupled to memory/storage device 260 and configured to execute the computer-executable instructions stored thereon.
- Processor 250 may execute target detection software 264 to implement functionalities of target detection system 120.
- Processor device 250 may execute treatment planning software 262 (e.g., Monaco® software manufactured by Elekta) that may interface with target detection software 264.
- Processor 250 may communicate with a database 150 through communication interface 270 to send/receive data to/from database 150.
- Database 150 may communicate with medical imaging device 140 and store medical image data obtained by medical imaging device 140.
- Database 150 may include a plurality of devices located either in a central or distributed manner.
- Processor 250 may also communicate with medical imaging device 140 directly through communication interface 270.
- Processor 250 may include one or more general-purpose processing devices such as a microprocessor, a central processing unit (CPU), a graphics processing unit (GPU), or the like. More particularly, processor 250 may include a complex instruction set computing (CISC) microprocessor, a reduced instruction set computing (RISC) microprocessor, a very long instruction word (VLIW) microprocessor, a processor implementing other instruction sets, or a processor implementing a combination of instruction sets. Processor 250 may also include one or more special-purpose processing devices such as an application specific integrated circuit (ASIC), a field programmable gate array (FPGA), a digital signal processor (DSP), a system on a chip (SoC), or the like.
- Memory/storage device 260 may include a read-only memory (ROM), a flash memory, a random access memory (RAM), a static memory, a hard drive, etc.
- Memory/storage device 260 may include a computer-readable medium. While the computer-readable medium in an embodiment may be a single medium, the term “computer-readable medium” should be taken to include a single medium or multiple media (e.g., a centralized or distributed database, and/or associated caches and servers) that store the one or more sets of computer-executable instructions or data.
- The term “computer-readable medium” shall also be taken to include any medium that is capable of storing or encoding a set of instructions for execution by a computer and that causes the computer to perform any one or more of the methodologies of the present disclosure.
- The term “computer-readable medium” should accordingly be taken to include, but not be limited to, solid-state memories, and optical and magnetic media.
- Communication interface 270 may include a network adaptor; a cable connector; a serial connector; a USB connector; a parallel connector; a high-speed data transmission adaptor such as fiber, USB 3.0, or Thunderbolt; a wireless network adaptor such as a Wi-Fi adaptor; a telecommunication (3G, 4G, LTE, and the like) adaptor; and the like.
- Communication interface 270 may provide the functionality of a local area network (LAN), a wireless network, a cloud computing environment (e.g., software as a service, platform as a service, infrastructure as a service, etc.), a client-server, a wide area network (WAN), and the like.
- Processor 250 may communicate with database 150 and medical imaging device 140 via communication interface 270.
- Radiotherapy treatment planning may require detection or delineation of a target, such as a tumor, an OAR, or healthy tissue surrounding the tumor or in close proximity to the tumor.
- Classification, positioning, and segmentation of the target may be performed to allow study of the dose distribution in or around the target.
- One or more medical images, such as MRI images, CT images, PET images, fMRI images, X-ray images, ultrasound images, radiotherapy portal images, SPECT images, and the like, of the patient undergoing radiotherapy may be obtained by medical imaging device 140 to reveal the internal structure of a body part.
- FIG. 4 shows an exemplary medical image of a patient's brain, in which several brain metastases are shown as white rounded objects 402.
- Segmentation may be carried out. For example, a 3D structure of one or more identified targets may be generated.
- The 3D structure may be obtained by contouring the target within each 2D layer or slice of an MRI or CT image and combining the contours of multiple 2D layers or slices.
- The contours may be generated manually (e.g., by a physician, dosimetrist, or health care worker) or automatically (e.g., using a program such as the Atlas-based Autosegmentation software, ABAS®, manufactured by Elekta).
- Embodiments of the present disclosure may perform automatic classification, segmentation, and positioning of a target from one or more medical images based on morphological feature(s) of the target being detected.
- An exemplary workflow using target detection system 120 to detect one or more targets in a medical image is shown in FIG. 5.
- Target detection system 120 may include components forming one or more operational branches that can be executed in parallel.
- One such branch includes a visibility filter 512, an object identifier 514, a morphology filter 516, and an object classifier 518.
- A parallel branch includes similar components 522, 524, 526, and 528.
- Target detection system 120 may receive a 3D medical image including a plurality of 2D slices, and each 2D slice may be processed by a respective branch. In this way, multiple 2D slices may be processed in parallel, increasing the processing speed.
- One or more 2D medical images may also be processed by a single branch in series (e.g., for debugging or review purposes). In the following, the detailed functions of each component shown in FIG. 5 will be described in the context of detecting a brain metastasis target in a 3D MRI image. Other types of targets in other types of medical images may be similarly detected.
- A 3D MRI image may include a plurality of 2D slices, and each slice may correspond to an X-Y plane at a fixed Z coordinate.
- Each 2D slice may be separately processed by a branch to achieve parallel processing.
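- As a rough illustration of this per-slice parallelism (a sketch, not the patent's implementation; `process_slice` is a hypothetical stand-in for the visibility-filter, object-identifier, morphology-filter, and object-classifier chain described below), the 2D slices of a 3D volume could be distributed across worker processes:

```python
from multiprocessing import Pool

import numpy as np


def process_slice(slice_2d: np.ndarray) -> list:
    """Hypothetical single-branch pipeline; returns candidate contours for one slice."""
    return []  # placeholder for the per-slice filtering and classification steps


def process_volume(volume: np.ndarray, workers: int = 4) -> list:
    # volume is indexed as (Z, Y, X): one 2D slice per fixed Z coordinate.
    slices = [volume[z] for z in range(volume.shape[0])]
    with Pool(processes=workers) as pool:
        # Each slice is handled by its own "branch"; results are kept in Z order
        # so a later adder-style step can merge contours across adjacent slices.
        per_slice_results = pool.map(process_slice, slices)
    return per_slice_results
```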
- A medical image (e.g., a 2D slice) may first be processed by visibility filter 512 to improve its visibility.
- Visibility filter 512 may include an anisotropic diffusion filter to reduce noise in the medical image without removing significant features such as edges, lines, or other relevant information for detecting brain metastases.
- Visibility filter 512 may also include an intensity thresholding filter to remove most background noise through an adaptive/auto contrast processing. Visibility filter 512 may also include a grey level filter to quantize the grey level of the medical image. One or more of the above filters may be used to pre-process the medical image.
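- A minimal sketch of such a pre-processing chain is shown below; it is one assumed realization (Perona-Malik diffusion for edge-preserving smoothing, Otsu thresholding for adaptive background removal, and coarse grey-level quantization), not the exact filters used by the patent:

```python
import cv2
import numpy as np


def anisotropic_diffusion(img: np.ndarray, n_iter: int = 10,
                          kappa: float = 30.0, lam: float = 0.2) -> np.ndarray:
    """Edge-preserving Perona-Malik smoothing (wrap-around borders, for brevity)."""
    u = img.astype(np.float64)
    for _ in range(n_iter):
        dn = np.roll(u, -1, axis=0) - u  # differences to the four neighbours
        ds = np.roll(u, 1, axis=0) - u
        de = np.roll(u, -1, axis=1) - u
        dw = np.roll(u, 1, axis=1) - u
        cn, cs = np.exp(-(dn / kappa) ** 2), np.exp(-(ds / kappa) ** 2)
        ce, cw = np.exp(-(de / kappa) ** 2), np.exp(-(dw / kappa) ** 2)
        u += lam * (cn * dn + cs * ds + ce * de + cw * dw)
    return u


def visibility_filter(slice_2d: np.ndarray, grey_levels: int = 8) -> np.ndarray:
    smoothed = anisotropic_diffusion(slice_2d)
    u8 = cv2.normalize(smoothed, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)
    # Adaptive (Otsu) intensity threshold suppresses most of the background noise.
    thresh, _ = cv2.threshold(u8, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    u8[u8 < thresh] = 0
    # Quantize the grey level of the image to a small number of levels.
    step = 256 // grey_levels
    return (u8 // step) * step
```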
- Object identifier 514 may identify one or more objects in the medical image.
- The objects may include any shape that appears to be different from the background in the medical image.
- Object identifier 514 may further find the contour of each identified object.
- The contour may enclose part of or the entire region of the object.
- An object may also be identified by its corresponding contour.
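- As an illustrative sketch (assuming the pre-processed slice is an 8-bit image with a dark background, and OpenCV 4.x), objects and their contours could be identified as follows:

```python
import cv2
import numpy as np


def identify_objects(filtered_slice: np.ndarray, min_area: float = 4.0) -> list:
    """Return the external contour of each bright object in a pre-processed slice."""
    _, binary = cv2.threshold(filtered_slice, 0, 255, cv2.THRESH_BINARY)
    contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    # Discard tiny specks; each remaining contour represents one identified object.
    return [c for c in contours if cv2.contourArea(c) >= min_area]
```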
- The medical image may be processed by morphology filter 516 to remove any object that is likely not a brain metastasis.
- A certain type of target may have a target-specific morphological feature that can be used to differentiate that particular type of target from others. For example, most brain metastases resemble a roundish shape. Therefore, if an object does not have a round shape or a shape close to round, that object is likely not a brain metastasis. Accordingly, morphology filter 516 may exclude one or more objects identified by object identifier 514 based on the shape of these objects. For example, morphology filter 516 may determine the depth of a convexity defect found in an object to evaluate the shape of that object.
- FIG. 6 shows a schematic representation of an object having convexity defects.
- Object 602 may be one that has been identified by object identifier 514, with an identified contour shown as a solid line enclosing the object.
- Morphology filter 516 may first determine a convex hull 606 enclosing object 602, shown as a dashed-line enclosure in FIG. 6.
- Convex hull 606, as shown in FIG. 6, may be visualized as a “rubber band” stretched around object 602 as a shape envelope.
- Morphology filter 516 may then determine all convexity defects in the contour of object 602.
- The contour of object 602 has three convexity defects 612, 614, and 616, indicating portions of the contour extending inward from convex hull 606.
- Morphology filter 516 may determine the depth of each convexity defect, such as depths 622, 624, and 626.
- The depth may be determined based on a distance from the convexity defect to the convex hull. In some embodiments, the distance may be measured from convex hull 606 to the deepest point inward, toward the center (604) of object 602.
- Morphology filter 516 may compare all depths and use the depth of the deepest convexity defect (e.g., 622) as an indicator of the roundishness (or non-roundishness) of object 602.
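- This roundishness test can be sketched with OpenCV's convexity-defect support (one possible realization, assuming contours from `cv2.findContours`; the depth threshold is illustrative only):

```python
import cv2
import numpy as np


def deepest_defect_depth(contour: np.ndarray) -> float:
    """Depth (in pixels) of the deepest convexity defect of a contour; 0 if none."""
    if len(contour) < 4:
        return 0.0
    hull_idx = cv2.convexHull(contour, returnPoints=False)
    if len(hull_idx) < 3:
        return 0.0
    defects = cv2.convexityDefects(contour, hull_idx)
    if defects is None:
        return 0.0
    # Each defect row holds (start index, end index, farthest point index, depth * 256).
    return float(defects[:, 0, 3].max()) / 256.0


def is_roundish(contour: np.ndarray, max_defect_depth: float = 3.0) -> bool:
    # Objects whose deepest convexity defect exceeds the threshold are treated
    # as likely non-metastases.
    return deepest_defect_depth(contour) <= max_defect_depth
```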
- Morphology filter 516 may determine other shape-related factors. For example, morphology filter 516 may determine a rectangle surrounding an object/contour and determine the area of the rectangle. Then, morphology filter 516 may determine the area occupied by the object and determine a ratio between the area occupied by the object and the area of the rectangle. In another example, morphology filter 516 may determine the height and width of the convex hull and determine a ratio between the height and the width. In a further example, morphology filter 516 may determine the number of convexity defects, the number of turns when moving along the contour of the object, etc. All of these factors may be used to evaluate the shape of the object, for example, to determine whether the object is roundish enough.
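- The bounding-rectangle and convex-hull ratios mentioned above could be computed along the following lines (again only a sketch; how the factors are combined is not specified here):

```python
import cv2
import numpy as np


def shape_factors(contour: np.ndarray) -> dict:
    x, y, w, h = cv2.boundingRect(contour)   # rectangle surrounding the object/contour
    area = cv2.contourArea(contour)          # area occupied by the object
    hull = cv2.convexHull(contour)
    _, _, hw, hh = cv2.boundingRect(hull)    # width and height of the convex hull
    return {
        "extent": area / float(w * h) if w * h else 0.0,  # object area / rectangle area
        "hull_aspect": hh / float(hw) if hw else 0.0,     # hull height / hull width
        "n_vertices": len(contour),                       # rough proxy for turns along the contour
    }
```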
- Based on such factors, an object may be excluded from a subset of candidate objects that are subject to further classification. For example, if the depth of the deepest convexity defect found in an object exceeds a predetermined threshold, then morphology filter 516 may determine that the object is likely not a brain metastasis. The object may then be excluded from the candidate subset for classification purposes. In some embodiments, the object may be removed from the medical image by, for example, flood-filling the area enclosed by the contour of the object with the background color. In some embodiments, morphology filter 516 may exclude contour(s) of the skull from the candidate subset.
- Morphology filter 516 may exclude, for example, as many objects as possible from the candidate subset based on the morphological feature associated with the target subject to classification. In this way, morphology filter 516 effectively reduces the number of objects that need to be further processed. Therefore, morphology filter 516 may also be viewed as selecting the candidate subset (e.g., the object(s) that still remain after the filtering) from all of the objects identified by object identifier 514.
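- A sketch of this exclusion step is shown below (illustrative only; `is_roundish` is the hypothetical test from the earlier sketch). Rejected objects are also erased from the slice by filling the region enclosed by their contours with the background value:

```python
import cv2
import numpy as np


def morphology_filter(slice_img: np.ndarray, contours: list, background: int = 0) -> list:
    candidates = []
    for contour in contours:
        if is_roundish(contour):
            candidates.append(contour)  # keep roundish objects in the candidate subset
        else:
            # Remove the excluded object from the image by flood-filling the area
            # enclosed by its contour with the background colour.
            cv2.drawContours(slice_img, [contour], -1, background, thickness=cv2.FILLED)
    return candidates
```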
- The object(s) in the selected candidate subset may be classified by object classifier 518.
- Object classifier 518 may classify the objects into a predetermined set of shapes, such as points, lines, rounds, and complex shapes.
- The object(s)/contour(s) in the candidate subset, as well as their respective shape(s) as classified by object classifier 518, may be stored for further processing.
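- The patent does not spell out the classification rules, so the following is only an assumed heuristic mapping of simple geometric measurements onto the point, line, round, and complex categories:

```python
import cv2
import numpy as np


def classify_shape(contour: np.ndarray) -> str:
    area = cv2.contourArea(contour)
    if area < 4:                       # only a few pixels
        return "point"
    _, _, w, h = cv2.boundingRect(contour)
    elongation = max(w, h) / float(min(w, h) or 1)
    if elongation > 4:                 # long and thin
        return "line"
    (_, _), radius = cv2.minEnclosingCircle(contour)
    circularity = area / (np.pi * radius ** 2 + 1e-9)
    if circularity > 0.6:              # fills most of its enclosing circle
        return "round"
    return "complex"
```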
- The processing results may be combined by an adder 530.
- All of the 2D slices may be stacked together in the original order (e.g., along the Z axis).
- Adder 530 may merge adjacent contours (e.g., adjacent along the Z axis across multiple 2D slices) into a single contour.
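- One simple way to realize this merging step, under the assumption that two contours in neighbouring slices belong to the same 3D object when their filled regions overlap, is sketched below:

```python
import cv2
import numpy as np


def overlaps(c1: np.ndarray, c2: np.ndarray, shape: tuple) -> bool:
    """True if the filled regions of two contours share at least one pixel."""
    m1 = np.zeros(shape, dtype=np.uint8)
    m2 = np.zeros(shape, dtype=np.uint8)
    cv2.drawContours(m1, [c1], -1, 255, thickness=cv2.FILLED)
    cv2.drawContours(m2, [c2], -1, 255, thickness=cv2.FILLED)
    return bool(np.any(cv2.bitwise_and(m1, m2)))


def merge_across_slices(per_slice_contours: list, shape: tuple) -> list:
    """Group contours from adjacent Z slices into candidate 3D objects."""
    objects = []  # each object is a list of (z, contour) pairs
    for z, contours in enumerate(per_slice_contours):
        for contour in contours:
            for obj in objects:
                last_z, last_contour = obj[-1]
                if last_z == z - 1 and overlaps(last_contour, contour, shape):
                    obj.append((z, contour))
                    break
            else:
                objects.append([(z, contour)])
    return objects
```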
- The processing result obtained from each 2D slice may be validated in a 3D context by shape validator 540.
- Shape validator 540 may further remove or exclude object(s) from a collection of candidate subsets resulting from the processing conducted by the one or more branches. For example, shape validator 540 may remove objects classified as points or lines, leaving only those objects classified as rounds and complex shapes.
- In some embodiments, shape validator 540 may remove all non-circular shapes from their respective subsets.
- In other embodiments, shape validator 540 may remove one or more objects classified as non-circular shapes from their respective subsets when the respective subsets do not include any object classified as the round shape.
- The removal of objects based on their classified shape type may be made more inclusive (e.g., removing fewer objects) or more exclusive (e.g., removing more objects). The particular choice may depend on the balance between efficiency and recall rate (e.g., the likelihood of missing a true target).
- Shape validator 540 may also remove an object from its respective subset when the object is not adjacent to another object located in an adjacent 2D slice. For example, a brain metastasis, being a roundish object, normally will show in multiple adjacent 2D slices that are located next to each other along the Z axis, and these adjacent objects may be merged by adder 530, as described above. If an object is isolated in a single 2D slice without adjacent counterparts shown in adjacent slices, it is likely that the object is not a brain metastasis.
- Shape validator 540 may also remove a pair of objects mirroring each other from their respective subsets. These mirroring objects are likely ordinary brain structures, not brain metastases.
- Shape validator 540 may also remove one or more objects located at a predetermined anatomical area from their respective subsets. For example, objects located along the horizontal middle blood vessel are likely not brain metastases.
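- Two of these validation rules (dropping objects that consist only of point/line shapes and dropping objects isolated in a single slice) could be sketched as follows; `classify_shape` is the hypothetical helper from the earlier sketch, and the mirrored-pair and anatomical-location rules are omitted:

```python
def validate_objects(objects: list, min_slices: int = 2) -> list:
    """Keep only merged candidate objects that remain plausible metastases in 3D."""
    validated = []
    for obj in objects:  # obj is a list of (z, contour) pairs from merge_across_slices
        labels = {classify_shape(contour) for _, contour in obj}
        if labels <= {"point", "line"}:
            continue  # points and lines are removed
        if len(obj) < min_slices:
            continue  # isolated in a single 2D slice: likely not a brain metastasis
        validated.append(obj)
    return validated
```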
- Target classifier 550 may evaluate the factors determined for each object in previous processing and determine a confidence measure for each of the remaining objects.
- The confidence measure may indicate a likelihood that a particular object is a brain metastasis.
- The confidence measure may be determined based on the degree of roundishness, the depth of any convexity defect, the number of convexity defects, the ratio between the area occupied by the object and the area of the rectangle enclosing the object, the ratio between the height of the convex hull and the width of the convex hull, etc.
- Target classifier 550 may divide the remaining objects into a plurality of groups based on their respective confidence measures. For example, objects having high confidence measures, indicating that the objects are highly likely brain metastases, may be placed in a primary group 562. Similarly, objects having lower confidence measures may be placed in a secondary group 564, and objects having even lower confidence measures, but that may still be brain metastases, may be placed in a tertiary group 566. Each object may be outlined by a contour and may be displayed to a user.
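- One illustrative (and entirely assumed) way to turn such factors into a confidence score and into the primary/secondary/tertiary grouping; the weights and thresholds below are placeholders, not values from the patent:

```python
def confidence_measure(factors: dict) -> float:
    """Combine the illustrative shape factors into a rough 0..1 score."""
    extent = min(factors.get("extent", 0.0), 1.0)
    aspect = factors.get("hull_aspect", 1.0)
    symmetry = 1.0 - min(abs(1.0 - aspect), 1.0)              # closer to 1:1 is more roundish
    shallow = 1.0 - min(factors.get("deepest_defect_depth", 0.0) / 10.0, 1.0)
    return 0.4 * extent + 0.3 * symmetry + 0.3 * shallow


def group_targets(candidates: list) -> dict:
    groups = {"primary": [], "secondary": [], "tertiary": []}
    for obj, factors in candidates:
        c = confidence_measure(factors)
        if c >= 0.8:
            groups["primary"].append(obj)    # highly likely a brain metastasis
        elif c >= 0.5:
            groups["secondary"].append(obj)
        else:
            groups["tertiary"].append(obj)   # lower confidence, but still possible
    return groups
```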
- FIG. 7 shows an exemplary output of target detection system 120.
- A 2D slice of an MRI image of a patient's brain is shown in the center.
- The image includes a classified target 702 outlined by a highlighted contour.
- A bar under the image indicates the slice position in the 3D slice deck (e.g., along the Z axis) and the position of the slice(s) in which a target is found.
- A solid circle means that the 2D slice at the indicated position includes a target in the primary group; a shadowed circle means that the 2D slice at the indicated position includes a target in the secondary group; and a white circle means that the 2D slice at the indicated position includes a target in the tertiary group.
- A dosimetrist, physician, or healthcare worker may determine a dose of radiation to be applied to the target and any other anatomical structures proximate to the target.
- A process known as inverse planning may be performed to determine one or more plan parameters, such as volume delineation (e.g., defining target volumes, contouring sensitive structures), margins around the tumor and OARs, dose constraints (e.g., full dose to the tumor and zero dose to any OAR; 95% of the dose to the PTV while keeping the spinal cord ≤45 Gy, the brain stem ≤55 Gy, and the optic structures ≤54 Gy; etc.), beam angle selection, collimator settings, and beam-on times.
- The result of inverse planning may constitute a radiotherapy treatment plan that may be stored in treatment planning system 110. Radiotherapy device 130 may then use the generated treatment plan having these parameters to deliver radiotherapy to a patient.
- FIG. 8 is a flowchart illustrating an exemplary method 800 for detecting a target.
- In method 800, object identifier 514 may identify a plurality of objects in a medical image (e.g., an MRI image).
- Morphology filter 516 may select a subset of the objects based on a morphological feature (e.g., a depth of a convexity defect). For example, morphology filter 516 may exclude an object from the subset when the depth of the deepest convexity defect found on the object exceeds a predetermined threshold.
- Object classifier 518 may classify the objects in the subset into one of a predetermined set of shapes, such as the shapes of point, line, round, and complex shape.
- Target classifier 550 may detect an anatomical region of interest (e.g., a brain metastasis) based on the classified objects in the subset.
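- Tying the hypothetical helpers from the earlier sketches together, the overall flow of method 800 could be approximated as follows (an illustration under the stated assumptions, not the claimed implementation):

```python
import numpy as np


def detect_metastasis_candidates(volume: np.ndarray) -> dict:
    """volume is a (Z, Y, X) stack of 2D slices; returns grouped candidate objects."""
    per_slice = []
    for z in range(volume.shape[0]):
        filtered = visibility_filter(volume[z])                   # pre-processing
        contours = identify_objects(filtered)                     # object identification
        per_slice.append(morphology_filter(filtered, contours))   # morphology filtering
    objects = merge_across_slices(per_slice, volume.shape[1:])    # adder: merge along Z
    objects = validate_objects(objects)                           # 3D shape validation
    candidates = []
    for obj in objects:
        contour = obj[0][1]                                       # a representative contour
        factors = shape_factors(contour)
        factors["deepest_defect_depth"] = deepest_defect_depth(contour)
        candidates.append((obj, factors))
    return group_targets(candidates)                              # confidence-based grouping
```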
- A machine- or computer-readable storage medium may cause a machine to perform the functions or operations described, and includes any mechanism that stores information in a form accessible by a machine (e.g., a computing device, an electronic system, and the like), such as recordable/non-recordable media (e.g., read-only memory (ROM), random access memory (RAM), magnetic disk storage media, optical storage media, flash memory devices, and the like).
- A communication interface includes any mechanism that interfaces to a hardwired, wireless, optical, or similar medium to communicate with another device, such as a memory bus interface, a processor bus interface, an Internet connection, a disk controller, and the like.
- The communication interface can be configured by providing configuration parameters and/or sending signals to prepare the communication interface to provide a data signal describing the software content.
- The communication interface can be accessed via one or more commands or signals sent to the communication interface.
- The present invention also relates to a system for performing the operations herein.
- This system may be specially constructed for the required purposes, or it may comprise a general-purpose computer selectively activated or reconfigured by a computer program stored in the computer.
- A computer program may be stored in a computer-readable storage medium, such as, but not limited to, any type of disk including floppy disks, optical disks, CD-ROMs, and magneto-optical disks, read-only memories (ROMs), random access memories (RAMs), EPROMs, EEPROMs, magnetic or optical cards, or any type of media suitable for storing electronic instructions, each coupled to a computer system bus.
- Embodiments of the invention may be implemented with computer-executable instructions.
- The computer-executable instructions may be organized into one or more computer-executable components or modules.
- Aspects of the invention may be implemented with any number and organization of such components or modules. For example, aspects of the invention are not limited to the specific computer-executable instructions or the specific components or modules illustrated in the figures and described herein.
- Other embodiments of the invention may include different computer-executable instructions or components having more or less functionality than illustrated and described herein.
Landscapes
- Engineering & Computer Science (AREA)
- Health & Medical Sciences (AREA)
- Physics & Mathematics (AREA)
- Biomedical Technology (AREA)
- Theoretical Computer Science (AREA)
- General Physics & Mathematics (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Radiology & Medical Imaging (AREA)
- General Health & Medical Sciences (AREA)
- Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
- Public Health (AREA)
- Veterinary Medicine (AREA)
- Animal Behavior & Ethology (AREA)
- Life Sciences & Earth Sciences (AREA)
- Pathology (AREA)
- Multimedia (AREA)
- Medical Informatics (AREA)
- Quality & Reliability (AREA)
- Geometry (AREA)
- Magnetic Resonance Imaging Apparatus (AREA)
Abstract
The present disclosure provides systems, methods, and computer-readable storage media for detecting an anatomical region of interest for radiotherapy planning. Embodiments of the present disclosure may identify a plurality of objects in a medical image and select a subset of the objects by applying a morphology filter to the plurality of objects. The morphology filter may determine a morphological feature associated with each of the plurality of objects and exclude at least one object from the subset when the morphological feature of the at least one object exceeds a predetermined threshold. Embodiments of the present disclosure may also classify the objects in the subset into one of a predetermined set of shapes and detect the anatomical region of interest based on the classified objects in the subset.
Description
- This disclosure relates generally to radiotherapy treatment planning. More specifically, this disclosure relates to systems and methods for detecting brain metastases in medical images for developing a radiotherapy treatment plan to be used during radiotherapy.
- Radiotherapy is used to treat cancers and other ailments in mammalian (e.g., human and animal) tissue. One such radiotherapy technique is a Gamma Knife, by which a patient is irradiated by a large number of low-intensity gamma rays that converge with high intensity and high precision at a target (e.g., a tumor). In another embodiment, radiotherapy is provided using a linear accelerator, whereby a tumor is irradiated by high-energy particles (e.g., electrons, protons, ions and the like). The placement and dose of the radiation beam must be accurately controlled to ensure the tumor receives the prescribed radiation, and the placement of the beam should be such as to minimize damage to the surrounding healthy tissue.
- Before administrating radiation doses to treat a patient, a treatment plan needs to be created, in which the manner of applying radiation doses are specified. A treatment plan is usually created based on a medical image (or a series of images) of the patient, in which an internal anatomical region of the patient is shown. From the medical image, the target to be treated is ascertained, as well as its location, size, and/or shape, based on which the directions and intensities of multiple radiation beams are determined such that the beams converge at the target location to provide the necessary radiation dose for treating the patient. While a physician may determine whether a particular object in the medical image is a target by visually observing the medical image, this process is often tedious and time consuming. Computer-aided image classification techniques can reduce the time to extract some or all required information from the medical image.
- For example, some methods rely on training data to train a statistical model, and the trained statistical model may then be used to identify a target. However, the effectiveness of such methods depends largely on the quality of the training data. In order to obtain acceptable results, the training data have to contain accurately identified targets in terms of their location and segmentation. Usually, such high quality training data are in short supply.
- In another example, pure image processing methods have been used to enhance the visibility of the medical image to allow the physician to better observe the medical image. Such methods, however, lack the classification ability to determine whether a particular object in the medical image is a target or not.
- The present disclosure is directed to overcoming or mitigating one or more of these problems set forth.
- One aspect of the present disclosure relates to a system for detecting an anatomical region of interest. The system may include a memory device storing computer-executable instructions and at least one processor device communicatively coupled to the memory device. The computer-executable instructions, when executed by the at least one processor device, cause the processor device to perform various operations. The operations may include identifying a plurality of objects in a medical image. The operations may also include selecting a subset of the objects by applying a morphology filter to the plurality of objects. The morphology filter may determine a morphological feature associated with each of the plurality of objects and exclude at least one object from the subset when the morphological feature of the at least one object exceeds a predetermined threshold. The operations may also include classifying the objects in the subset into one of a predetermined set of shapes. Moreover, the operations may include detecting the anatomical region of interest based on the classified objects in the subset.
- Another aspect of the present disclosure relates to a method for detecting an anatomical region of interest. The method may be implemented by at least one processor device executing computer-executable instructions. The method may include identifying a plurality of objects in a medical image. The method may also include selecting a subset of the objects by applying a morphology filter to the plurality of objects. The morphology filter may determine a morphological feature associated with each of the plurality of objects and exclude at least one object from the subset when the morphological feature of the at least one object exceeds a predetermined threshold. The method may also include classifying the objects in the subset into one of a predetermined set of shapes. Moreover, the method may include detecting the anatomical region of interest based on the classified objects in the subset.
- A further aspect of the present disclosure relates to a non-transitory computer-readable medium that stores a set of instructions that is executable by at least one processor of a device to cause the device to perform a method for detecting an anatomical region of interest. The method may include identifying a plurality of objects in a medical image. The method may also include selecting a subset of the objects by applying a morphology filter to the plurality of objects. The morphology filter may determine a morphological feature associated with each of the plurality of objects and exclude at least one object from the subset when the morphological feature of the at least one object exceeds a predetermined threshold. The method may also include classifying the objects in the subset into one of a predetermined set of shapes. Moreover, the method may include detecting the anatomical region of interest based on the classified objects in the subset.
- Additional objects and advantages of the present disclosure will be set forth in part in the following detailed description, and in part will be obvious from the description, or may be learned by practice of the present disclosure. The objects and advantages of the present disclosure will be realized and attained by means of the elements and combinations particularly pointed out in the appended claims.
- It is to be understood that the foregoing general description and the following detailed description are exemplary and explanatory only, and are not restrictive of the invention, as claimed.
- The accompanying drawings, which constitute a part of this specification, illustrate several embodiments and, together with the description, serve to explain the disclosed principles.
-
FIG. 1 illustrates an exemplary radiotherapy system including a target detection system, according to some embodiments of the present disclosure. -
FIG. 2A illustrates an exemplary radiotherapy device, a Gamma Knife, according to some embodiments of the present disclosure. -
FIG. 2B illustrates another examplary radiotherapy device, a linear accelerator (LINAC), according to some embodiments of the present disclosure. -
FIG. 3 illustrates an exemplary data processing device implementing the target detection system ofFIG. 1 , according to some embodiments of the present disclosure. -
FIG. 4 is an exemplary medical image showing brain metastases. -
FIG. 5 is a block diagram showing exemplary components of the target detection system ofFIG. 1 , according to some embodiments of the present disclosure. -
FIG. 6 schematically illustrates an object having convexity defects. -
FIG. 7 shows an exemplary output of the target detection system shown inFIG. 1 . -
FIG. 8 is a flowchart illustrating an exemplary method for detecting a target, according to some embodiments of the present disclosure. - Exemplary embodiments are described with reference to the accompanying drawings. Wherever convenient, the same reference numbers are used throughout the drawings to refer to the same or like parts.
- Systems and methods consistent with the present disclosure are directed to detecting an anatomical region of interest in a medical image (or a series of medical images) for developing a radiotherapy treatment plan. The anatomical region of interest may include a tumor, a cancer, an organ at risk, etc. In some embodiments, the anatomical region of interest may include a brain metastasis. An anatomical region of interest may also be referred to as a target. As used herein, the term “anatomical region of interest” is interchangeable with the term “target.” Detection of a target may include one or more of the following aspect: (1) classification or identification, indicating whether a target is present in the medical image or whether a particular object shown in the medical image is a target; (2) positioning, indicating the position or location of the target, once identified, in two-dimensional (2D) and/or three-dimensional (3D) space; and (3) outlining or segmentation, indicating one or more ways of separating, emphasizing, or differentiating the identified target from other features or the background of the medical image.
- For example, an object shown in the medical image may be classified/identified as a target in a binary manner: is a target or is not a target. In another example, the classification/identification may be carried out based on probability or likelihood. In some embodiments, an object may be identified as being, for example, highly likely a target, indicating a high probability that the object is in fact a target. Similarly, an object may be identified as being less likely a target. The probability or likelihood associated with an classification or identification may be described by a confidence measure, such as a percentage number, to quantify the probability or likelihood.
- The spatial location may be indicated by the (X, Y, Z) coordinates of the target in a Cartesian system, or other appropriate coordinates if other spatial coordinate systems (e.g., cylindrical, spherical, etc.) are used.
- Various methods may be used to outline or segment a target. For example, the target may be outlined by a contour surrounding the target. In another example, the target may be rendered in a different color from the background or other features shown in the medical image. It would be apparent to a person skilled in the art to use other appropriate methods to separate, emphasize, or differentiate the identified target.
- The medical image(s) may include images generated from various imaging modalities. For example, the medical image(s) may include a Magnetic Resonance Imaging (MRI) image, a computed tomography (CT) image, an ultrasound image, or the like. The medical image(s) may be 2D or 3D. A 3D image may include a plurality of 2D slices.
- A certain kind of target to be detected may have one or more morphology features that are specific to that kind of target. For example, most brain metastases are round in shape. Such morphology features may provide useful information to the detection of the corresponding kind of target and may lead to improved efficiency and accuracy. Embodiments of the present disclosure provide exemplary systems and methods that utilize such morphology features in the detection of targets such as brain metastases. The detection result may be used to develop a treatment plan for conducting radiotherapy treatment.
-
FIG. 1 illustrates anexemplary radiotherapy system 100, according to some embodiments of the present disclosure.Radiotherapy system 100 may include atreatment planning system 110, atarget detection system 120, aradiotherapy device 130, and amedical imaging device 140. In addition,radiotherapy system 100 may include a display device and a user interface (not shown). - As shown in
FIG. 1 ,target detection system 120 may communicate withmedical imaging device 140 to receive one or more medical images.Target detection system 120 may detect one or more targets in the medical image(s) and communicate the detection result withtreatment planning system 110.Treatment planning system 110 may provide a treatment plan based, at least partially, on the detection result, and communicate the treatment plan withradiotherapy device 130. Treatment planning system may also communicate withmedical imaging device 140 to receive medical images directly, such as updated medical image(s) before or during the treatment process. During the treatment,radiotherapy device 130 may administer radiation doses to a patient according to the treatment plan. - In some embodiments,
radiotherapy device 130 may be local with respect totreatment planning system 110. For example,radiotherapy device 130 andtreatment planning system 110 may be located in the same room of a medical facility/clinic. In other embodiments,radiotherapy device 130 may be remote with respect totreatment planning system 110 and the data communication betweenradiotherapy device 130 andtreatment planning system 110 may be carried out through a network (e.g., a local area network (LAN); a wireless network; a cloud computing environment such as software as a service, platform as a service, infrastructure as a service; a client-server; a wide area network (WAN); or the like). Similarly, the communication links betweentarget detection system 120 andtreatment planning system 110, betweentarget detection system 120 andmedical imaging device 140, and betweentreatment planning system 110 andmedical imaging device 140, may also be implemented in a local or remote manner. - In some embodiments,
- In some embodiments, treatment planning system 110 and target detection system 120 may be implemented in a single data processing device, as indicated by the dashed-line box in FIG. 1. For example, treatment planning system 110 and target detection system 120 may be implemented as different software programs operating on the same hardware device. In other embodiments, treatment planning system 110 and target detection system 120 may be implemented using different data processing devices.
- Medical imaging device 140 may include an MRI imaging device, a CT imaging device, an X-ray imaging device, a positron emission tomography (PET) imaging device, an ultrasound imaging device, a fluoroscopic device, a single-photon emission computed tomography (SPECT) imaging device, or other medical imaging devices for obtaining one or more medical images of a patient. Accordingly, medical imaging device 140 may provide various kinds of medical images. For example, the medical images may include MRI images, CT images, PET images, X-ray images, ultrasound images, SPECT images, etc.
- FIG. 2A illustrates an example of one type of radiotherapy device 130 (e.g., a Leksell Gamma Knife), according to some embodiments of the present disclosure. As shown in FIG. 2A, in a radiotherapy treatment session, a patient 202 may wear a coordinate frame 220 to keep the body part undergoing surgery or radiotherapy (e.g., the head) stable. Coordinate frame 220 and a patient positioning system 222 may establish a spatial coordinate system, which may be used while imaging a patient or during radiation surgery. Radiotherapy device 130 may include a protective housing 214 to enclose a plurality of radiation sources 212. Radiation sources 212 may generate a plurality of radiation beams (e.g., beamlets) through beam channels 216. The plurality of radiation beams may be configured to focus on an isocenter 218 from different directions. While each individual radiation beam may have a relatively low intensity, isocenter 218 may receive a relatively high level of radiation when multiple doses from different radiation beams accumulate at isocenter 218. In some embodiments, isocenter 218 may correspond to a target under surgery or treatment, such as a tumor.
- FIG. 2B illustrates another example of radiotherapy device 130 (e.g., a linear accelerator), according to some embodiments of the present disclosure. Using a linear accelerator, a patient 242 may be positioned on a patient table 243 to receive the radiation dose according to the treatment plan. Linear accelerator 130 may include a radiation head 245 that generates a radiation beam 246. Radiation head 245 may be rotatable around a horizontal axis 247. In addition, below the patient table 243 there may be provided a flat panel scintillator detector 244, which may rotate synchronously with radiation head 245 around an isocenter 241. The intersection of axis 247 with the center of beam 246, produced by radiation head 245, is usually referred to as the "isocenter." Patient table 243 may be motorized so that patient 242 can be positioned with the tumor site at or close to isocenter 241. Radiation head 245 may rotate about a gantry 248 to provide patient 242 with a plurality of varying dosages of radiation according to the treatment plan.
- FIG. 3 illustrates an embodiment of a data processing device 111. Data processing device 111 may implement target detection system 120, treatment planning system 110, or both. As shown in FIG. 3, data processing device 111 may include one or more processors 250, a memory or storage device 260, and a communication interface 270. Memory/storage device 260 may store computer-executable instructions, such as target detection software 264. Memory/storage device 260 may optionally store treatment planning software 262.
- Processor 250 may include one or more processor units, or a single processor unit with one or more cores. As used herein, a "computing core" of processor 250 may refer to either a processor unit or a core of a processor unit that is capable of executing instructions in a parallel-computing manner. For example, a computation task may be partitioned into multiple parallel branches or "threads," and each branch/thread may be executed by a computing core in parallel with other computing core(s).
- Processor 250 may be communicatively coupled to memory/storage device 260 and configured to execute the computer-executable instructions stored thereon. For example, processor 250 may execute target detection software 264 to implement functionalities of target detection system 120. Optionally, processor 250 may execute treatment planning software 262 (e.g., Monaco® software manufactured by Elekta) that may interface with target detection software 264.
- Processor 250 may communicate with a database 150 through communication interface 270 to send/receive data to/from database 150. Database 150 may communicate with medical imaging device 140 and store medical image data obtained by medical imaging device 140. Database 150 may include a plurality of devices located either in a central or distributed manner. Processor 250 may also communicate with medical imaging device 140 directly through communication interface 270.
- Processor 250 may include one or more general-purpose processing devices such as a microprocessor, a central processing unit (CPU), a graphics processing unit (GPU), or the like. More particularly, processor 250 may include a complex instruction set computing (CISC) microprocessor, a reduced instruction set computing (RISC) microprocessor, a very long instruction word (VLIW) microprocessor, a processor implementing other instruction sets, or a processor implementing a combination of instruction sets. Processor 250 may also include one or more special-purpose processing devices such as an application specific integrated circuit (ASIC), a field programmable gate array (FPGA), a digital signal processor (DSP), a system on a chip (SoC), or the like.
- Memory/storage device 260 may include a read-only memory (ROM), a flash memory, a random access memory (RAM), a static memory, a hard drive, etc. In some embodiments, memory/storage device 260 may include a computer-readable medium. While the computer-readable medium in an embodiment may be a single medium, the term "computer-readable medium" should be taken to include a single medium or multiple media (e.g., a centralized or distributed database, and/or associated caches and servers) that store the one or more sets of computer-executable instructions or data. The term "computer-readable medium" shall also be taken to include any medium that is capable of storing or encoding a set of instructions for execution by a computer and that causes the computer to perform any one or more of the methodologies of the present disclosure. The term "computer-readable medium" should accordingly be taken to include, but not be limited to, solid-state memories, and optical and magnetic media.
- Communication interface 270 may include a network adaptor, a cable connector, a serial connector, a USB connector, a parallel connector, a high-speed data transmission adaptor such as fiber, USB 3.0, Thunderbolt, and the like, a wireless network adaptor such as a WiFi adaptor, a telecommunication (3G, 4G, LTE, and the like) adaptor, and the like. Communication interface 270 may provide the functionality of a local area network (LAN), a wireless network, a cloud computing environment (e.g., software as a service, platform as a service, infrastructure as a service, etc.), a client-server, a wide area network (WAN), and the like. Processor 250 may communicate with database 150 and medical imaging device 140 via communication interface 270.
- Radiotherapy treatment planning may require detection or delineation of a target, such as a tumor, an OAR, or healthy tissue surrounding the tumor or in close proximity to the tumor. As discussed above, classification, positioning, and segmentation of the target may be performed to allow study of the dose distribution in or around the target.
- During target detection, one or more medical images, such as MRI images, CT images, PET images, fMRI images, X-ray images, ultrasound images, radiotherapy portal images, SPECT images, and the like, of the patient undergoing radiotherapy may be obtained by medical imaging device 140 to reveal the internal structure of a body part. FIG. 4 shows an exemplary medical image of a patient's brain, in which several brain metastases are shown as white rounded objects 402. After one or more objects in the medical image(s) are identified as target(s), segmentation may be carried out. For example, a 3D structure of one or more identified targets may be generated. The 3D structure may be obtained by contouring the target within each 2D layer or slice of an MRI or CT image and combining the contours of multiple 2D layers or slices. The contours may be generated manually (e.g., by a physician, dosimetrist, or health care worker) or automatically (e.g., using a program such as the Atlas-based Autosegmentation software, ABAS®, manufactured by Elekta).
- Embodiments of the present disclosure may perform automatic classification, segmentation, and positioning of a target from one or more medical images based on morphological feature(s) of the target being detected. An exemplary workflow using target detection system 120 to detect one or more targets in a medical image is shown in FIG. 5.
- Referring to FIG. 5, target detection system 120 may include components forming one or more operational branches that can be executed in parallel. For example, one such branch includes a visibility filter 512, an object identifier 514, a morphology filter 516, and an object classifier 518. A parallel branch includes similar components. Target detection system 120 may receive a 3D medical image including a plurality of 2D slices, and each 2D slice may be processed by a respective branch. In this way, multiple 2D slices may be processed in parallel, increasing the processing speed. In some embodiments, one or more 2D medical images may also be processed by a single branch in series (e.g., for debugging or review purposes). In the following, the detailed functions of each component shown in FIG. 5 are described in the context of detecting a brain metastasis target in a 3D MRI image. Other types of targets in other types of medical images may be similarly detected.
- As discussed above, a 3D MRI image may include a plurality of 2D slices, and each slice may correspond to an X-Y plane at a fixed Z coordinate. Each 2D slice may be separately processed by a branch to achieve parallel processing.
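- For illustration only, the following Python sketch shows one way the per-slice branches of FIG. 5 could be dispatched in parallel. The four stage functions are stand-in stubs for components 512, 514, 516, and 518 (their roles are discussed in the paragraphs that follow); the function names, the stub bodies, and the use of a process pool are assumptions rather than the disclosed implementation.

```python
# Illustrative sketch only: one "branch" of FIG. 5 per 2D slice, run over a
# process pool. The four stage functions are placeholders standing in for
# components 512, 514, 516, and 518; their bodies are assumptions.
from concurrent.futures import ProcessPoolExecutor

import numpy as np


def visibility_filter(slice_2d: np.ndarray) -> np.ndarray:
    return slice_2d      # e.g., denoising and thresholding (see a later sketch)


def object_identifier(slice_2d: np.ndarray) -> list:
    return []            # e.g., contour extraction (see a later sketch)


def morphology_filter(objects: list) -> list:
    return objects       # e.g., convexity-defect screening (see a later sketch)


def object_classifier(objects: list) -> list:
    return objects       # e.g., point/line/round/complex labelling


def process_branch(slice_2d: np.ndarray) -> list:
    """Run one branch on a single 2D slice."""
    pre = visibility_filter(slice_2d)
    objects = object_identifier(pre)
    candidates = morphology_filter(objects)
    return object_classifier(candidates)


def process_volume(volume: np.ndarray) -> list:
    """Process every Z slice of a 3D volume, one branch per slice, in parallel."""
    slices = [volume[z] for z in range(volume.shape[0])]
    with ProcessPoolExecutor() as pool:
        return list(pool.map(process_branch, slices))


if __name__ == "__main__":
    results = process_volume(np.zeros((8, 64, 64), dtype=np.uint8))
    print(len(results), "slices processed")
```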
- In each branch, for example the branch including components 512, 514, 516, and 518, a medical image (e.g., a 2D slice) may first undergo a pre-processing stage in which visibility filter 512 can enhance the visibility of the medical image. For example, visibility filter 512 may include an anisotropic diffusion filter to reduce noise in the medical image without removing significant features such as edges, lines, or other relevant information for detecting brain metastases. Visibility filter 512 may also include an intensity thresholding filter to remove most background noise through adaptive/auto contrast processing. Visibility filter 512 may also include a grey level filter to quantize the grey levels of the medical image. One or more of the above filters may be used to pre-process the medical image.
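- The sketch below illustrates one possible pre-processing chain of the kind described for visibility filter 512: a Perona-Malik anisotropic diffusion pass, Otsu-based intensity thresholding standing in for the adaptive contrast step, and simple grey-level quantization. The specific filters, parameter values, and OpenCV calls are assumptions made for illustration, not the disclosed implementation.

```python
# Illustrative pre-processing sketch (assumptions, not the disclosed code):
# Perona-Malik anisotropic diffusion, Otsu intensity thresholding, and
# grey-level quantization applied to a single 2D slice.
import cv2
import numpy as np


def anisotropic_diffusion(img: np.ndarray, n_iter: int = 10,
                          kappa: float = 30.0, gamma: float = 0.2) -> np.ndarray:
    """Edge-preserving smoothing (Perona-Malik); borders handled crudely via np.roll."""
    out = img.astype(np.float32)
    for _ in range(n_iter):
        dn = np.roll(out, -1, axis=0) - out
        ds = np.roll(out, 1, axis=0) - out
        de = np.roll(out, -1, axis=1) - out
        dw = np.roll(out, 1, axis=1) - out
        cn, cs = np.exp(-(dn / kappa) ** 2), np.exp(-(ds / kappa) ** 2)
        ce, cw = np.exp(-(de / kappa) ** 2), np.exp(-(dw / kappa) ** 2)
        out += gamma * (cn * dn + cs * ds + ce * de + cw * dw)
    return out


def visibility_filter(slice_2d: np.ndarray, n_levels: int = 8) -> np.ndarray:
    """Denoise, suppress background below an automatic threshold, quantize grey levels."""
    smoothed = anisotropic_diffusion(slice_2d)
    img8 = cv2.normalize(smoothed, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)
    otsu_thresh, _ = cv2.threshold(img8, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    img8[img8 < otsu_thresh] = 0              # remove most of the background
    step = 256 // n_levels
    return (img8 // step) * step              # quantized grey levels


if __name__ == "__main__":
    demo = (np.random.rand(128, 128) * 255).astype(np.uint8)
    out = visibility_filter(demo)
    print(out.dtype, int(out.max()))
```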
- After pre-processing, object identifier 514 may identify one or more objects in the medical image. The objects may include any shape that appears to be different from the background in the medical image. Object identifier 514 may further find the contour of each identified object. The contour may enclose part of or the entire region of the object. An object may also be identified by its corresponding contour.
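- A minimal sketch of contour-based object identification follows, assuming OpenCV's contour extraction as a stand-in for object identifier 514; the minimum-area cutoff is a hypothetical parameter, not one taken from the disclosure.

```python
# Illustrative sketch (OpenCV 4.x API assumed): identify objects in a
# pre-processed slice as external contours; not the disclosed implementation.
import cv2
import numpy as np


def identify_objects(pre_processed: np.ndarray, min_area: float = 10.0) -> list:
    """Return contours of connected regions that differ from the background."""
    # After pre-processing, any non-zero pixel is treated as foreground.
    _, binary = cv2.threshold(pre_processed, 0, 255, cv2.THRESH_BINARY)
    contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)
    # Drop tiny specks; each remaining contour stands for one identified object.
    return [c for c in contours if cv2.contourArea(c) >= min_area]


if __name__ == "__main__":
    demo = np.zeros((128, 128), dtype=np.uint8)
    cv2.circle(demo, (64, 64), 20, 255, -1)   # one synthetic "object"
    print(len(identify_objects(demo)), "object(s) found")
```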
- After one or more objects and their contours have been identified, the medical image may be processed by morphology filter 516 to remove any object that is likely not a brain metastasis. As described above, a certain type of target may have a target-specific morphological feature that can be used to differentiate that particular type of target from others. For example, most brain metastases resemble a roundish shape; if an object is not round or close to round, that object is likely not a brain metastasis. Therefore, morphology filter 516 may exclude one or more objects identified by object identifier 514 based on the shape of these objects. For example, morphology filter 516 may determine the depth of a convexity defect found in an object to evaluate the shape of that object.
- FIG. 6 shows a schematic representation of an object having convexity defects. Referring to FIG. 6, object 602 may be one that has been identified by object identifier 514, with an identified contour shown as a solid line enclosing the object. To evaluate the shape of object 602, morphology filter 516 may first determine a convex hull 606 enclosing object 602, shown as a dashed-line enclosure in FIG. 6. Convex hull 606, as shown in FIG. 6, may be visualized as a "rubber band" stretched around object 602 as a shape envelope. Morphology filter 516 may then determine all convexity defects in the contour of object 602. For example, the contour of object 602 has three convexity defects relative to convex hull 606. For each convexity defect, morphology filter 516 may determine its depth, measured from convex hull 606 to the deepest point inward toward the center 604 of object 602. Morphology filter 516 may compare all depths and use the depth of the deepest convexity defect (e.g., 622) as an indicator of the roundishness (or non-roundishness) of object 602.
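- The following sketch shows how the depth of the deepest convexity defect could be computed with OpenCV's convex hull and convexity-defect routines; the depth threshold used in the roundishness test is a hypothetical value chosen for illustration, not a value taken from the disclosure.

```python
# Illustrative sketch (assumption, not the disclosed code): depth of the
# deepest convexity defect of a contour, used as a roundishness indicator.
import cv2
import numpy as np


def deepest_convexity_defect(contour: np.ndarray) -> float:
    """Largest distance (in pixels) from the convex hull inward to the contour."""
    if len(contour) < 4:
        return 0.0
    hull_idx = cv2.convexHull(contour, returnPoints=False)
    if len(hull_idx) < 3:
        return 0.0
    defects = cv2.convexityDefects(contour, hull_idx)
    if defects is None:
        return 0.0
    # The fourth column holds the fixed-point defect depth; divide by 256 for pixels.
    return float(defects[:, 0, 3].max()) / 256.0


def is_roundish(contour: np.ndarray, max_defect_depth: float = 3.0) -> bool:
    """Keep only contours whose deepest defect stays under a chosen (hypothetical) threshold."""
    return deepest_convexity_defect(contour) <= max_defect_depth


if __name__ == "__main__":
    img = np.zeros((128, 128), dtype=np.uint8)
    cv2.circle(img, (64, 64), 25, 255, -1)    # a roundish synthetic object
    cnts, _ = cv2.findContours(img, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)
    print(is_roundish(cnts[0]))               # expected: True
```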
- In addition to the convexity defect, morphology filter 516 may determine other shape-related factors. For example, morphology filter 516 may determine a rectangle surrounding an object/contour and determine the area of the rectangle. Then, morphology filter 516 may determine the area occupied by the object and determine a ratio between the area occupied by the object and the area of the rectangle. In another example, morphology filter 516 may determine the height and width of the convex hull and determine a ratio between the height and the width. In a further example, morphology filter 516 may determine the number of convexity defects, the number of turns when moving along the contour of the object, etc. All these factors may be used to evaluate the shape of the object, for example, to determine whether the object is roundish enough. If not, the object may be excluded from a subset of candidate objects that are subject to further classification. For example, if the depth of the deepest convexity defect found in an object exceeds a predetermined threshold, then morphology filter 516 may determine that the object is likely not a brain metastasis. The object may then be excluded from the candidate subset for classification purposes. In some embodiments, the object may be removed from the medical image by, for example, flood filling the area enclosed by the contour of the object using the background color. In some embodiments, morphology filter 516 may exclude contour(s) of the skull from the candidate subset.
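- A sketch of some of these additional shape factors is given below: the ratio of the object area to the area of its surrounding rectangle, the aspect ratio of the convex hull, and the number of convexity defects. The acceptance rule and its thresholds are hypothetical and are shown only to make the filtering idea concrete.

```python
# Illustrative sketch (assumptions throughout): a few of the shape factors
# mentioned above, computed with OpenCV for a single contour.
import cv2
import numpy as np


def shape_factors(contour: np.ndarray) -> dict:
    """Bounding-rectangle fill ratio, convex-hull aspect ratio, and defect count."""
    _, _, w, h = cv2.boundingRect(contour)          # rectangle surrounding the object
    rect_area = float(w * h)
    obj_area = float(cv2.contourArea(contour))      # area occupied by the object
    hull_pts = cv2.convexHull(contour)
    _, _, hw, hh = cv2.boundingRect(hull_pts)       # width and height of the convex hull
    hull_idx = cv2.convexHull(contour, returnPoints=False)
    defects = None
    if len(contour) > 3 and len(hull_idx) >= 3:
        defects = cv2.convexityDefects(contour, hull_idx)
    return {
        "area_ratio": obj_area / rect_area if rect_area else 0.0,
        "hull_aspect": hh / hw if hw else 0.0,
        "n_defects": 0 if defects is None else len(defects),
    }


def passes_morphology_filter(contour: np.ndarray,
                             min_area_ratio: float = 0.5,
                             aspect_range: tuple = (0.5, 2.0)) -> bool:
    """Hypothetical acceptance rule; the thresholds are illustrative only."""
    f = shape_factors(contour)
    lo, hi = aspect_range
    return f["area_ratio"] >= min_area_ratio and lo <= f["hull_aspect"] <= hi


if __name__ == "__main__":
    img = np.zeros((128, 128), dtype=np.uint8)
    cv2.circle(img, (64, 64), 20, 255, -1)
    cnts, _ = cv2.findContours(img, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)
    print(shape_factors(cnts[0]), passes_morphology_filter(cnts[0]))
```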
- Referring back to FIG. 5, morphology filter 516 may exclude, for example, as many objects as possible from the candidate subset based on the morphological feature associated with the target subject to classification. In this way, morphology filter 516 effectively reduces the number of objects that need to be further processed. Therefore, morphology filter 516 may also be viewed as selecting the candidate subset (e.g., the object(s) that remain after the filtering) from all of the objects identified by object identifier 514. The object(s) in the selected candidate subset may be classified by object classifier 518. In some embodiments, object classifier 518 may classify the objects into a predetermined set of shapes, such as points, lines, rounds, and complex shapes. The object(s)/contour(s) in the candidate subset, as well as their respective shape(s) as classified by object classifier 518, may be stored for further processing.
- After all branches finish processing their respective 2D slices, the processing results may be combined by an adder 530. For example, all of the 2D slices may be stacked together in the original order (e.g., along the Z axis). Adder 530 may merge adjacent contours (e.g., contours adjacent along the Z axis across multiple 2D slices) into a single contour. The processing result obtained from each 2D slice may be validated in a 3D context by shape validator 540. Shape validator 540 may further remove or exclude object(s) from a collection of candidate subsets resulting from the processing conducted by the one or more branches. For example, shape validator 540 may remove objects classified as points or lines, leaving only those objects classified as rounds and complex shapes. In another example, shape validator 540 may remove all non-circular shapes from their respective subsets. In a further example, shape validator 540 may remove one or more objects classified as non-circular shapes from their respective subsets when the respective subsets do not include any object classified as the round shape. Depending on the desired recall rate (e.g., the likelihood of missing a true target), the removal of objects based on their classified shape type may be made more inclusive (e.g., removing fewer objects) or more exclusive (e.g., removing more objects). The particular choice may depend on the balance between efficiency and recall rate.
- Shape validator 540 may also remove an object from its respective subset when the object is not adjacent to another object located in an adjacent 2D slice. For example, a brain metastasis, being a roundish object, will normally show in multiple adjacent 2D slices that are located next to each other along the Z axis, and these adjacent objects may be merged by adder 530, as described above. If an object is isolated in a single 2D slice without adjacent counterparts in neighboring slices, it is likely that the object is not a brain metastasis.
- Shape validator 540 may also remove a pair of objects mirroring each other from their respective subsets. These mirroring objects are likely ordinary brain structures, not brain metastases.
- Shape validator 540 may also remove one or more objects located at a predetermined anatomical area from their respective subsets. For example, objects located along the horizontal middle blood vessel are likely not brain metastases.
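- The sketch below illustrates, under simplifying assumptions, how per-slice detections could be stacked along the Z axis and validated as described above: objects classified as points or lines are dropped, and objects with no counterpart in an adjacent slice are dropped as isolated. The centroid-distance linking rule and its tolerance are assumptions, and the mirror-pair and anatomical-area rules are omitted for brevity.

```python
# Illustrative sketch (assumptions, not the disclosed code): stack per-slice
# detections along the Z axis, then validate them: drop "point"/"line" objects
# and drop objects with no counterpart in an adjacent slice.
from dataclasses import dataclass
from typing import List


@dataclass
class Detected:
    z: int          # slice index along the Z axis
    cx: float       # contour centroid, X
    cy: float       # contour centroid, Y
    shape: str      # "point", "line", "round", or "complex"


def validate(detections: List[Detected], xy_tol: float = 5.0) -> List[Detected]:
    keep: List[Detected] = []
    for d in detections:
        if d.shape in ("point", "line"):
            continue                              # non-candidate shape classes
        has_neighbour = any(
            abs(o.z - d.z) == 1
            and abs(o.cx - d.cx) <= xy_tol
            and abs(o.cy - d.cy) <= xy_tol
            for o in detections
        )
        if has_neighbour:                         # isolated objects are dropped
            keep.append(d)
    return keep


if __name__ == "__main__":
    dets = [Detected(3, 40.0, 40.0, "round"), Detected(4, 41.0, 39.0, "round"),
            Detected(9, 90.0, 10.0, "round"),     # isolated -> removed
            Detected(5, 70.0, 70.0, "line")]      # line -> removed
    print(validate(dets))
```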
- After further removal of objects by shape validator 540, the remaining objects in the collection of subsets may be passed to target classifier 550 for classification. Target classifier 550 may evaluate the factors determined for each object in previous processing and determine a confidence measure for each of the remaining objects. The confidence measure may indicate a likelihood that a particular object is a brain metastasis. For example, the confidence measure may be determined based on the degree of roundishness, the depth of any convexity defect, the number of convexity defects, the ratio between the area occupied by the object and the area of the rectangle enclosing the object, the ratio between the height of the convex hull and the width of the convex hull, etc.
- Target classifier 550 may divide the remaining objects into a plurality of groups based on their respective confidence measures. For example, objects having high confidence measures, indicating that the objects are highly likely brain metastases, may be placed in a primary group 562. Similarly, objects having lower confidence measures may be placed in a secondary group 564, and objects having even lower confidence measures, but that may still be brain metastases, may be placed in a tertiary group 566. Each object may be outlined by a contour and may be displayed to a user.
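- For illustration, the sketch below combines a few shape factors into a confidence measure and buckets objects into primary, secondary, and tertiary groups; the weights and thresholds are hypothetical values that only show the general shape of such a classifier, not the disclosed one.

```python
# Illustrative sketch (assumptions only): combine a few shape factors into a
# confidence measure and bucket objects into primary/secondary/tertiary groups.
# The weighting and the thresholds are hypothetical, not taken from the patent.
from typing import Dict, List


def confidence(factors: Dict[str, float]) -> float:
    """Map shape factors (each roughly in [0, 1]) to a score in [0, 1]."""
    roundishness = 1.0 - min(factors.get("deepest_defect_px", 0.0) / 10.0, 1.0)
    fill = factors.get("area_ratio", 0.0)             # object area / bounding rect area
    aspect = factors.get("hull_aspect", 1.0)
    aspect_score = 1.0 - min(abs(aspect - 1.0), 1.0)  # 1.0 when the hull is square-ish
    return 0.5 * roundishness + 0.3 * fill + 0.2 * aspect_score


def group(objects: List[Dict[str, float]]) -> Dict[str, List[Dict[str, float]]]:
    groups = {"primary": [], "secondary": [], "tertiary": []}
    for obj in objects:
        score = confidence(obj)
        if score >= 0.8:
            groups["primary"].append(obj)
        elif score >= 0.6:
            groups["secondary"].append(obj)
        else:
            groups["tertiary"].append(obj)
    return groups


if __name__ == "__main__":
    candidates = [
        {"deepest_defect_px": 1.0, "area_ratio": 0.78, "hull_aspect": 1.05},
        {"deepest_defect_px": 6.0, "area_ratio": 0.55, "hull_aspect": 1.6},
    ]
    for name, members in group(candidates).items():
        print(name, len(members))
```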
- FIG. 7 shows an exemplary output of target detection system 120. As shown in FIG. 7, a 2D slice of an MRI image of a patient's brain is shown in the center. The image includes a classified target 702 outlined by a highlighted contour. A bar under the image indicates the slice position in the 3D slice deck (e.g., along the Z axis) and the position of the slice(s) in which a target is found. As indicated by the legend on the left, a solid circle means that the 2D slice at the indicated position includes a target in the primary group, a shadowed circle means that the slice includes a target in the secondary group, and a white circle means that the slice includes a target in the tertiary group. A user may choose a particular position on the slice bar to view the corresponding slice. When the chosen slice includes a classified target, the target is shown by a highlighted contour.
- After the target has been detected, a dosimetrist, physician, or healthcare worker may determine a dose of radiation to be applied to the target and any other anatomical structures proximate to the target. After the dose is determined for each anatomical structure (e.g., tumor, OAR, etc.), a process known as inverse planning may be performed to determine one or more plan parameters, such as volume delineation (e.g., defining target volumes and contouring sensitive structures), margins around the tumor and OARs, dose constraints (e.g., full dose to the tumor and zero dose to any OAR; 95% of dose to the PTV while spinal cord ≤45 Gy, brain stem ≤55 Gy, and optic structures <54 Gy; etc.), beam angle selection, collimator settings, and beam-on times. The result of inverse planning may constitute a radiotherapy treatment plan that may be stored in
treatment planning system 110. Radiotherapy device 130 may then use the generated treatment plan having these parameters to deliver radiotherapy to a patient.
- FIG. 8 is a flowchart illustrating an exemplary method 800 for detecting a target. In step 810, object identifier 514 may identify a plurality of objects in a medical image (e.g., an MRI image). In step 820, morphology filter 516 may select a subset of the objects based on a morphological feature (e.g., a depth of a convexity defect). For example, morphology filter 516 may exclude an object from the subset when the depth of the deepest convexity defect found on the object exceeds a predetermined threshold. In step 830, object classifier 518 may classify the objects in the subset into one of a predetermined set of shapes, such as a point, a line, a round, and a complex shape. In step 840, target classifier 550 may detect an anatomical region of interest (e.g., a brain metastasis) based on the classified objects in the subset.
- Various operations or functions are described herein, which may be implemented or defined as software code or instructions. Such content may be directly executable ("object" or "executable" form), source code, or difference code ("delta" or "patch" code). Software implementations of the embodiments described herein may be provided via an article of manufacture with the code or instructions stored thereon, or via a method of operating a communication interface to send data via the communication interface. A machine- or computer-readable storage medium may cause a machine to perform the functions or operations described, and includes any mechanism that stores information in a form accessible by a machine (e.g., a computing device, an electronic system, and the like), such as recordable/non-recordable media (e.g., read-only memory (ROM), random access memory (RAM), magnetic disk storage media, optical storage media, flash memory devices, and the like). A communication interface includes any mechanism that interfaces to any of a hardwired, wireless, optical, or similar medium to communicate to another device, such as a memory bus interface, a processor bus interface, an Internet connection, a disk controller, and the like. The communication interface can be configured by providing configuration parameters and/or sending signals to prepare the communication interface to provide a data signal describing the software content. The communication interface can be accessed via one or more commands or signals sent to the communication interface.
- The present invention also relates to a system for performing the operations herein. This system may be specially constructed for the required purposes, or it may comprise a general-purpose computer selectively activated or reconfigured by a computer program stored in the computer. Such a computer program may be stored in a computer readable storage medium, such as, but not limited to, any type of disk including floppy disks, optical disks, CDROMs, and magnetic-optical disks, read-only memories (ROMs), random access memories (RAMs), EPROMs, EEPROMs, magnetic or optical cards, or any type of media suitable for storing electronic instructions, each coupled to a computer system bus.
- The order of execution or performance of the operations in embodiments of the invention illustrated and described herein is not essential, unless otherwise specified. That is, the operations may be performed in any order, unless otherwise specified, and embodiments of the invention may include additional or fewer operations than those disclosed herein. For example, it is contemplated that executing or performing a particular operation before, contemporaneously with, or after another operation is within the scope of aspects of the invention.
- Embodiments of the invention may be implemented with computer-executable instructions. The computer-executable instructions may be organized into one or more computer-executable components or modules. Aspects of the invention may be implemented with any number and organization of such components or modules. For example, aspects of the invention are not limited to the specific computer-executable instructions or the specific components or modules illustrated in the figures and described herein. Other embodiments of the invention may include different computer-executable instructions or components having more or less functionality than illustrated and described herein.
- The words “comprising,” “having,” “containing,” and “including,” and other similar forms are intended to be equivalent in meaning and be interpreted as open ended, in that, an item or items following any one of these words is not meant to be an exhaustive listing of such item or items, or meant to be limited to only the listed item or items. In addition, the singular forms “a,” “an,” and “the” are intended to include plural references, unless the context clearly dictates otherwise.
- Having described aspects of the invention in detail, it will be apparent that modifications and variations are possible without departing from the scope of aspects of the invention as defined in the appended claims. As various changes could be made in the above constructions, products, and methods without departing from the scope of aspects of the invention, it is intended that all matter contained in the above description and shown in the accompanying drawings shall be interpreted as illustrative and not in a limiting sense.
Claims (29)
1. A system for detecting an anatomical region of interest, comprising:
a memory device storing computer-executable instructions; and
at least one processor device communicatively coupled to the memory device, wherein the computer-executable instructions, when executed by the at least one processor device, cause the processor device to perform operations including:
identifying a plurality of objects in a medical image;
selecting a subset of the objects by applying a morphology filter to the plurality of objects, wherein the morphology filter determines a morphological feature associated with each of the plurality of objects and excludes at least one object from the subset when the morphological feature of the at least one object exceeds a predetermined threshold;
classifying the objects in the subset into one of a predetermined set of shapes; and
detecting the anatomical region of interest based on the classified objects in the subset.
2. The system of claim 1 , wherein the anatomical region of interest includes a brain metastasis.
3. The system of claim 1 , wherein the morphological feature includes a depth of a convexity defect and the operations further comprise:
determining the depth of the convexity defect based on a distance from the convexity defect to a convex hull enclosing a corresponding object.
4. The system of claim 3 , wherein the object includes a plurality of convexity defects and the morphological feature includes the depth of a deepest convexity defect.
5. The system of claim 3 , wherein the operations further comprise:
determining the convexity defect based on a contour of the corresponding object.
6. The system of claim 1 , wherein the operations further comprise:
removing the at least one object excluded from the subset from the medical image.
7. The system of claim 1 , wherein the predetermined set of shapes includes a point, a line, a round, and a complex shape.
8. The system of claim 7, wherein the operations further comprise:
removing one or more objects classified as a point or a line from the subset; and
detecting the anatomical region of interest based on the subset after removal of the one or more objects classified as a point or a line.
9. The system of claim 1 , wherein the medical image includes a Magnetic Resonance Imaging (MRI) image.
10. The system of claim 1 , wherein the medical image comprises a two-dimensional (2D) image.
11. The system of claim 10, wherein the medical image is a 2D slice of a three-dimensional (3D) image, the 3D image including a plurality of 2D slices, and wherein the operations further comprise:
selecting a respective subset of objects for each of multiple 2D slices by applying the morphology filter to each of the multiple 2D slices;
classifying the objects in the respective subset into one of the predetermined set of shapes; and
detecting the anatomical region of interest based on the classified objects in a collection of the subsets corresponding to the multiple 2D slices.
12. The system of claim 11, wherein the operations further comprise validating the classified objects in the collection of the subsets, including at least one of:
removing one or more objects classified as non-circular shapes from their respective subsets;
removing an object from its corresponding subset when the object is not adjacent to another object located in an adjacent 2D slice;
removing a pair of objects mirroring each other from their respective subsets; or
removing one or more objects located at a predetermined anatomical area from their respective subsets.
13. The system of claim 12 , wherein the operations further comprise:
determining a confidence measure for each of the remaining objects in the collection of the subsets after the validation, wherein the confidence measure indicates a likelihood that a particular object is the anatomical region of interest; and
dividing the remaining objects into a plurality of groups according to their respective confidence measures.
14. The system of claim 11 , wherein the at least one processor device includes a plurality of computing cores, and wherein the operations further comprise:
executing, by each of the plurality of computing cores, application of the morphology filter to a separate 2D slice.
15. A method, implemented by at least one processor device executing computer-executable instructions, for detecting an anatomical region of interest, comprising:
identifying a plurality of objects in a medical image;
selecting a subset of the objects by applying a morphology filter to the plurality of objects, wherein the morphology filter determines a morphological feature associated with each of the plurality of objects and excludes at least one object from the subset when the morphological feature of the at least one object exceeds a predetermined threshold;
classifying the objects in the subset into one of a predetermined set of shapes; and
detecting the anatomical region of interest based on the classified objects in the subset.
16. The method of claim 15 , wherein the anatomical region of interest includes a brain metastasis.
17. The method of claim 15 , wherein the morphological feature includes a depth of a convexity defect and the method further comprises:
determining the depth of the convexity defect based on a distance from the convexity defect to a convex hull enclosing a corresponding object.
18. The method of claim 17 , wherein the object includes a plurality of convexity defects and the morphological feature includes the depth of a deepest convexity defect.
19. The method of claim 17 , further comprising:
determining the convexity defect based on a contour of the corresponding object.
20. The method of claim 15 , further comprising:
removing the at least one object excluded from the subset from the medical image.
21. The method of claim 15 , wherein the predetermined set of shapes includes a point, a line, a round, and a complex shape.
22. The method of claim 21 , further comprising:
removing one or more objects classified as a point or a line from the subset; and
detecting the anatomical region of interest based on the subset after removal of the one or more objects classified as a point or a line.
23. The method of claim 15 , wherein the medical image includes a Magnetic Resonance Imaging (MRI) image.
24. The method of claim 15 , wherein the medical image comprises a two-dimensional (2D) image.
25. The method of claim 24 , wherein the medical image is a 2D slice of a three-dimensional (3D) image, the 3D image including a plurality of 2D slices, and the method further comprises:
selecting a respective subset of objects for each of multiple 2D slices by applying the morphology filter to each of the multiple 2D slices;
classifying the objects in the respective subset into one of the predetermined set of shapes; and
detecting the anatomical region of interest based on the classified objects in a collection of the subsets corresponding to the multiple 2D slices.
26. The method of claim 25 , further comprising validating the classified objects in the collection of the subsets, including at least one of:
removing one or more objects classified as non-circular shapes from their respective subsets;
removing an object from its corresponding subset when the object is not adjacent to another object located in an adjacent 2D slice;
removing a pair of objects mirroring each other from their respective subsets; or
removing one or more objects located at a predetermined anatomical area from their respective subsets.
27. The method of claim 26, further comprising:
determining a confidence measure for each of the remaining objects in the collection of the subsets after the validation, wherein the confidence measure indicates a likelihood that a particular object is the anatomical region of interest; and
dividing the remaining objects into a plurality of groups according to their respective confidence measures.
28. The method of claim 25 , wherein the at least one processor device includes a plurality of computing cores, and the method further comprises:
executing, by each of the plurality of computing cores, application of the morphology filter to a separate 2D slice.
29. A non-transitory computer-readable medium that stores a set of instructions that is executable by at least one processor of a device to cause the device to perform a method for detecting an anatomical region of interest, the method comprising:
identifying a plurality of objects in a medical image;
selecting a subset of the objects by applying a morphology filter to the plurality of objects, wherein the morphology filter determines a morphological feature associated with each of the plurality of objects and excludes at least one object from the subset when the morphological feature of the at least one object exceeds a predetermined threshold;
classifying the objects in the subset into one of a predetermined set of shapes; and
detecting the anatomical region of interest based on the classified objects in the subset.
Priority Applications (4)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US15/423,220 US10223792B2 (en) | 2017-02-02 | 2017-02-02 | System and method for detecting brain metastases |
EP18702191.0A EP3577598B1 (en) | 2017-02-02 | 2018-01-24 | System and method for detecting brain metastases |
PCT/EP2018/051760 WO2018141607A1 (en) | 2017-02-02 | 2018-01-24 | System and method for detecting brain metastases |
CN201880011829.8A CN110291537A (en) | 2017-02-02 | 2018-01-24 | System and method for detecting brain metastes |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US15/423,220 US10223792B2 (en) | 2017-02-02 | 2017-02-02 | System and method for detecting brain metastases |
Publications (2)
Publication Number | Publication Date |
---|---|
US20180218501A1 true US20180218501A1 (en) | 2018-08-02 |
US10223792B2 US10223792B2 (en) | 2019-03-05 |
Family
ID=61094476
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US15/423,220 Active 2037-05-28 US10223792B2 (en) | 2017-02-02 | 2017-02-02 | System and method for detecting brain metastases |
Country Status (4)
Country | Link |
---|---|
US (1) | US10223792B2 (en) |
EP (1) | EP3577598B1 (en) |
CN (1) | CN110291537A (en) |
WO (1) | WO2018141607A1 (en) |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US11024000B2 (en) * | 2017-08-31 | 2021-06-01 | Siemens Healthcare Gmbh | Controlling a medical imaging system |
Families Citing this family (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111358484B (en) * | 2020-03-23 | 2021-12-24 | 广州医科大学附属第一医院(广州呼吸中心) | Nuclear medicine lung perfusion imaging quantitative analysis method, analysis equipment and storage medium |
CN115366711B (en) * | 2022-07-29 | 2024-10-15 | 哈尔滨工业大学 | Foreign matter detection system and method for wireless charging system of electric automobile based on thermal infrared image processing |
Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20120178093A1 (en) * | 2009-07-31 | 2012-07-12 | Scottsdale Healthcare | Methods of assessing a risk of cancer progression |
US20150371420A1 (en) * | 2014-06-19 | 2015-12-24 | Samsung Electronics Co., Ltd. | Systems and methods for extending a field of view of medical images |
US20160070949A1 (en) * | 2013-05-14 | 2016-03-10 | Pathxl Limited | Method And Apparatus |
Family Cites Families (12)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2003070102A2 (en) * | 2002-02-15 | 2003-08-28 | The Regents Of The University Of Michigan | Lung nodule detection and classification |
US8059900B2 (en) * | 2004-10-08 | 2011-11-15 | General Electric Company | Method and apparatus to facilitate visualization and detection of anatomical shapes using post-processing of 3D shape filtering |
US7646902B2 (en) * | 2005-02-08 | 2010-01-12 | Regents Of The University Of Michigan | Computerized detection of breast cancer on digital tomosynthesis mammograms |
CN102656607B (en) * | 2009-12-16 | 2015-11-25 | 皇家飞利浦电子股份有限公司 | The target developing new optimization is collected in application plan |
CN103298406B (en) * | 2011-01-06 | 2017-06-09 | 美国医软科技公司 | System and method for carrying out treating planning to organ disease in function and dissection level |
WO2013183051A1 (en) * | 2012-06-04 | 2013-12-12 | Tel Hashomer Medical Research Infrastructure And Services Ltd. | Ultrasonographic images processing |
JP6643899B2 (en) * | 2012-06-20 | 2020-02-12 | コーニンクレッカ フィリップス エヌ ヴェKoninklijke Philips N.V. | Graphical user interface for medical devices |
KR101683706B1 (en) * | 2012-09-28 | 2016-12-07 | 제이엑스 에네루기 가부시키가이샤 | Device for inspecting substrate having irregular rough surface and inspection method using same |
GB201316853D0 (en) * | 2013-09-23 | 2013-11-06 | St Georges Hosp Medical School | Analysing MRI Data to Determine tumour type |
CN106255531B (en) * | 2014-04-15 | 2019-07-26 | 医科达公司 | Method and system for calibration |
CN104657984B (en) * | 2015-01-28 | 2018-10-16 | 复旦大学 | The extraction method of three-D ultrasonic mammary gland total volume interesting image regions |
CN105138990A (en) * | 2015-08-27 | 2015-12-09 | 湖北师范学院 | Single-camera-based gesture convex hull detection and palm positioning method |
-
2017
- 2017-02-02 US US15/423,220 patent/US10223792B2/en active Active
-
2018
- 2018-01-24 CN CN201880011829.8A patent/CN110291537A/en active Pending
- 2018-01-24 WO PCT/EP2018/051760 patent/WO2018141607A1/en unknown
- 2018-01-24 EP EP18702191.0A patent/EP3577598B1/en active Active
Patent Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20120178093A1 (en) * | 2009-07-31 | 2012-07-12 | Scottsdale Healthcare | Methods of assessing a risk of cancer progression |
US20160070949A1 (en) * | 2013-05-14 | 2016-03-10 | Pathxl Limited | Method And Apparatus |
US20150371420A1 (en) * | 2014-06-19 | 2015-12-24 | Samsung Electronics Co., Ltd. | Systems and methods for extending a field of view of medical images |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US11024000B2 (en) * | 2017-08-31 | 2021-06-01 | Siemens Healthcare Gmbh | Controlling a medical imaging system |
Also Published As
Publication number | Publication date |
---|---|
US10223792B2 (en) | 2019-03-05 |
WO2018141607A1 (en) | 2018-08-09 |
EP3577598B1 (en) | 2022-05-04 |
CN110291537A (en) | 2019-09-27 |
EP3577598A1 (en) | 2019-12-11 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US10987522B2 (en) | Three dimensional localization and tracking for adaptive radiation therapy | |
RU2696428C2 (en) | Target object tracking system and method using a quality indicator during radiation therapy | |
US10152790B2 (en) | Three dimensional localization of a moving target for adaptive radiation therapy | |
US20190362522A1 (en) | Neural network for generating synthetic medical images | |
EP3751582B1 (en) | Radiotherapy system, and therapy planning method | |
US11426606B2 (en) | Method and system of evaluating a radiation therapy treatment plan | |
EP3577598B1 (en) | System and method for detecting brain metastases | |
CN110975172A (en) | Flux map reconstruction method and system | |
US10238893B2 (en) | Dose rate modulated stereotactic radio surgery | |
Oria | Proton radiography for in vivo range verification in adaptive proton therapy |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
AS | Assignment |
Owner name: ELEKTA AB (PUBL), SWEDEN Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:HILLBORG, MIKAEL;REEL/FRAME:041161/0713 Effective date: 20170130 |
|
STCF | Information on status: patent grant |
Free format text: PATENTED CASE |
|
MAFP | Maintenance fee payment |
Free format text: PAYMENT OF MAINTENANCE FEE, 4TH YEAR, LARGE ENTITY (ORIGINAL EVENT CODE: M1551); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY Year of fee payment: 4 |