WO2016033458A1 - Restoring image quality of reduced radiotracer dose positron emission tomography (pet) images using combined pet and magnetic resonance (mr) - Google Patents
- Publication number
- WO2016033458A1 (application PCT/US2015/047425)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- image
- dose
- pet
- low
- dose pet
- Prior art date
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T5/00—Image enhancement or restoration
- G06T5/50—Image enhancement or restoration using two or more images, e.g. averaging or subtraction
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B6/00—Apparatus or devices for radiation diagnosis; Apparatus or devices for radiation diagnosis combined with radiation therapy equipment
- A61B6/02—Arrangements for diagnosis sequentially in different planes; Stereoscopic radiation diagnosis
- A61B6/03—Computed tomography [CT]
- A61B6/037—Emission tomography
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B6/00—Apparatus or devices for radiation diagnosis; Apparatus or devices for radiation diagnosis combined with radiation therapy equipment
- A61B6/44—Constructional features of apparatus for radiation diagnosis
- A61B6/4417—Constructional features of apparatus for radiation diagnosis related to combined acquisition of different diagnostic modalities
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B6/00—Apparatus or devices for radiation diagnosis; Apparatus or devices for radiation diagnosis combined with radiation therapy equipment
- A61B6/50—Apparatus or devices for radiation diagnosis; Apparatus or devices for radiation diagnosis combined with radiation therapy equipment specially adapted for specific body parts; specially adapted for specific clinical applications
- A61B6/501—Apparatus or devices for radiation diagnosis; Apparatus or devices for radiation diagnosis combined with radiation therapy equipment specially adapted for specific body parts; specially adapted for specific clinical applications for diagnosis of the head, e.g. neuroimaging or craniography
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B6/00—Apparatus or devices for radiation diagnosis; Apparatus or devices for radiation diagnosis combined with radiation therapy equipment
- A61B6/52—Devices using data or image processing specially adapted for radiation diagnosis
- A61B6/5205—Devices using data or image processing specially adapted for radiation diagnosis involving processing of raw data to produce diagnostic data
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B6/00—Apparatus or devices for radiation diagnosis; Apparatus or devices for radiation diagnosis combined with radiation therapy equipment
- A61B6/52—Devices using data or image processing specially adapted for radiation diagnosis
- A61B6/5211—Devices using data or image processing specially adapted for radiation diagnosis involving processing of medical diagnostic data
- A61B6/5229—Devices using data or image processing specially adapted for radiation diagnosis involving processing of medical diagnostic data combining image data of a patient, e.g. combining a functional image with an anatomical image
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B6/00—Apparatus or devices for radiation diagnosis; Apparatus or devices for radiation diagnosis combined with radiation therapy equipment
- A61B6/52—Devices using data or image processing specially adapted for radiation diagnosis
- A61B6/5258—Devices using data or image processing specially adapted for radiation diagnosis involving detection or reduction of artifacts or noise
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01R—MEASURING ELECTRIC VARIABLES; MEASURING MAGNETIC VARIABLES
- G01R33/00—Arrangements or instruments for measuring magnetic variables
- G01R33/20—Arrangements or instruments for measuring magnetic variables involving magnetic resonance
- G01R33/44—Arrangements or instruments for measuring magnetic variables involving magnetic resonance using nuclear magnetic resonance [NMR]
- G01R33/48—NMR imaging systems
- G01R33/4808—Multimodal MR, e.g. MR combined with positron emission tomography [PET], MR combined with ultrasound or MR combined with computed tomography [CT]
- G01R33/481—MR combined with positron emission tomography [PET] or single photon emission computed tomography [SPECT]
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B5/00—Measuring for diagnostic purposes; Identification of persons
- A61B5/0033—Features or image-related aspects of imaging apparatus, e.g. for MRI, optical tomography or impedance tomography apparatus; Arrangements of imaging apparatus in a room
- A61B5/0035—Features or image-related aspects of imaging apparatus, e.g. for MRI, optical tomography or impedance tomography apparatus; Arrangements of imaging apparatus in a room adapted for acquisition of images from more than one imaging mode, e.g. combining MRI and optical tomography
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B5/00—Measuring for diagnostic purposes; Identification of persons
- A61B5/0033—Features or image-related aspects of imaging apparatus, e.g. for MRI, optical tomography or impedance tomography apparatus; Arrangements of imaging apparatus in a room
- A61B5/004—Features or image-related aspects of imaging apparatus, e.g. for MRI, optical tomography or impedance tomography apparatus; Arrangements of imaging apparatus in a room adapted for image acquisition of a particular organ or body part
- A61B5/0042—Features or image-related aspects of imaging apparatus, e.g. for MRI, optical tomography or impedance tomography apparatus; Arrangements of imaging apparatus in a room adapted for image acquisition of a particular organ or body part for the brain
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B6/00—Apparatus or devices for radiation diagnosis; Apparatus or devices for radiation diagnosis combined with radiation therapy equipment
- A61B6/54—Control of apparatus or devices for radiation diagnosis
- A61B6/542—Control of apparatus or devices for radiation diagnosis involving control of exposure
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10072—Tomographic images
- G06T2207/10088—Magnetic resonance imaging [MRI]
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10072—Tomographic images
- G06T2207/10104—Positron emission tomography [PET]
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/30—Subject of image; Context of image processing
- G06T2207/30004—Biomedical image processing
- G06T2207/30016—Brain
Definitions
- the subject matter herein generally relates to positron emission tomography (PET) images, and more particularly, to methods, systems, and computer readable media for predicting, estimating, and/or generating high (diagnostic) quality PET images using PET images acquired with a dose substantially lower than the widely used clinical dose (low-dose PET) and magnetic resonance imaging (MRI) acquired from the same subject.
- PET positron emission tomography
- MRI magnetic resonance imaging
- PSNR peak signal-to-noise ratio
- the image quality of PET is largely determined by two factors: the dosage of radionuclide (tracer) injected into the patient and the image acquisition time. Although the latter can easily be increased, a long acquisition time leads to more motion-related artifacts and is not applicable to radiotracers with a short half-life. The former is easily understood: a higher dose generates more detected events and thus yields images with a higher PSNR. However, due to concerns about internal radiation exposure in patients, scientists have actively pursued efforts to reduce the currently used clinical dose while preserving PET image quality and the ability to make an accurate diagnosis.
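The dose/quality trade-off above can be sketched numerically. The helper below is an illustrative PSNR implementation (not from the patent); a noisier low-count image scores lower against the same reference:

```python
import numpy as np

def psnr(reference, test, peak=None):
    """Peak signal-to-noise ratio (in dB) of `test` against `reference`."""
    reference = np.asarray(reference, dtype=np.float64)
    test = np.asarray(test, dtype=np.float64)
    mse = np.mean((reference - test) ** 2)
    if mse == 0:
        return float("inf")        # identical images: infinite PSNR
    if peak is None:
        peak = reference.max()     # use the reference's peak intensity
    return 10.0 * np.log10(peak ** 2 / mse)

# A noisier (lower-count) image yields a lower PSNR against the same reference.
rng = np.random.default_rng(0)
ref = rng.uniform(0, 1, size=(32, 32))
assert psnr(ref, ref + 0.10 * rng.standard_normal(ref.shape)) < \
       psnr(ref, ref + 0.01 * rng.standard_normal(ref.shape))
```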
- the image (A) on the left is an example of a low-dose PET image
- a corresponding standard clinical dose PET image (B) is on the right.
- both the quality and PSNR of a low-dose PET image are inferior to those of a standard clinical dose PET image.
- the difference is visibly noticeable, for example, in the contrast between image (A) and image (B).
- the quality of the low-dose PET image is further decreased due to various factors during the process of acquisition and transmission. Consequently, the tracer dosage and process variability affect the accurate diagnosis of diseases/disorders.
- a higher dosage of radionuclide (tracer) needs to be injected into the patient's body.
- MR magnetic resonance
- MRI MR imaging
- a combined PET/MRI system provides the benefit of scanning low-dose PET and MRI images simultaneously, enabling generation of standard clinical dose PET images by using the combination of low-dose PET and MRI images to predict clinical dose PET values.
- An exemplary method for predicting and/or generating an estimated high-dose PET image, without injecting a high-dose radiotracer into the patient, includes extracting appearance features from at least one magnetic resonance (MR) image, extracting appearance features from at least one low-dose PET image, and generating a predicted (estimated) high-dose PET image using the appearance features of the at least one MR image and the at least one low-dose PET image.
- the system includes a hardware computing processor and a high-dose PET Prediction Module (HDPPM) implemented using the processor.
- the HDPPM is configured to extract appearance features from at least one MR image and at least one corresponding low-dose PET image, and to generate a high-dose PET image using the appearance features of the at least one MR image and the at least one low-dose PET image.
- a non-transitory computer readable medium has stored thereon executable instructions that when executed by a processor of a computer control the computer to perform steps.
- the steps include extracting appearance features from at least one MR image, extracting appearance features from at least one low-dose PET image, and generating an estimated high-dose PET image using the appearance features of the at least one MR image and the at least one low-dose PET image.
- the subject matter described herein can be implemented in software in combination with hardware and/or firmware.
- the subject matter described herein can be implemented in software executed by one or more processors.
- the subject matter described herein may be implemented using a non-transitory computer readable medium having stored thereon computer executable instructions that when executed by the processor of a computer control the computer to perform steps.
- Exemplary computer readable media suitable for implementing the subject matter described herein include non-transitory devices, such as disk memory devices, chip memory devices, programmable logic devices, and application specific integrated circuits.
- a computer readable medium that implements the subject matter described herein may be located on a single device or computing platform or may be distributed across multiple devices or computing platforms.
- module refers to hardware, firmware, or software in combination with hardware and/or firmware for implementing features described herein.
- low-dose and high-dose refer to the dosage (e.g., quantity, amount, or measurement) of a radionuclide or radioactive tracer, injected into a patient prior to PET imaging.
- for brain 18F-FDG imaging, the standard, high-dose is approximately 5 millicuries (mCi).
- for body (i.e., non-brain) 18F-FDG imaging, the standard, high-dose is approximately 10 mCi.
- a target, low-dose is any amount less than approximately 2.5 mCi, i.e., less than approximately 1/2 of the standard, high-dose.
- "standard-dose", "clinical dose", and "high-dose" as used herein are synonymous, and refer to the conventional dosage amounts clinically accepted by the medical community, which generate more detected events and obtain PET images having a higher PSNR.
- FIG. 1 illustrates exemplary low-dose and high-dose positron emission tomography (PET) images
- Figure 2 is a schematic diagram illustrating methods, systems, and computer readable media for predicting high-dose PET values for generating high-dose PET images according to an embodiment of the subject matter described herein;
- Figure 3 is a schematic diagram illustrating an exemplary method for predicting high-dose PET values for generating high-dose PET images according to an embodiment of the subject matter described herein;
- Figure 4 is another exemplary method for predicting high-dose PET values for generating high-dose PET images according to an embodiment of the subject matter described herein;
- Figures 5 to 9 illustrate graphical information regarding prediction of high-dose PET images according to embodiments of the subject matter described herein;
- Figure 10 is an illustration of prediction results regarding prediction of high-dose PET images according to embodiments of the subject matter described herein;
- Figures 11 and 12 illustrate graphical information regarding prediction of high-dose PET images according to embodiments of the subject matter described herein;
- Figure 13 is a block diagram of an exemplary system for predicting high-dose PET values for generating an estimated high-dose PET image according to an embodiment of the subject matter described herein;
- Figure 14 is an example of dynamic image acquisition acquired via methods, systems, and computer readable media described herein;
- Figure 15 is a block diagram illustrating an exemplary method for predicting high-dose PET values for generating an estimated high-dose PET image according to an embodiment of the subject matter described herein;
- Figure 16 is a schematic block diagram illustrating an overview of a model learning methodology
- Figure 17 is an overview of a constructed regression forest (RF); and Figure 18 is a schematic block diagram and a specific example of a decision tree for the regression model used in predicting and generating estimated high-dose PET images.
- RF constructed regression forest
- a regression forest (RF) based framework is used in predicting and generating an estimate of a standard, high-dose PET image by using both low-dose PET and MRI images.
- the prediction method includes two approaches. One approach includes prediction of a standard, high-dose PET image by tissue-specific regression forest (RF) models with the image appearance features extracted from both low-dose PET and MRI images. Another approach includes incremental refinement of a predicted standard-dose PET image by iteratively estimating the image difference between the current prediction and the target standard-dose PET. By incrementally adding the estimated image difference towards the target standard-dose PET, methods and systems described herein are able to gradually improve the quality of predicted standard, high-dose PET.
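The incremental refinement in the second approach can be sketched as an iterative loop. This is an illustrative stand-in, not the patent's implementation: each "difference model" is a hypothetical callable (e.g., a trained regression forest) that estimates the residual between the current prediction and the unseen standard-dose target:

```python
import numpy as np

def refine_prediction(initial_pred, difference_models):
    """Incremental refinement sketch: each model estimates the residual
    between the current prediction and the (unseen) standard-dose target;
    adding the estimated residual moves the prediction closer to it."""
    pred = np.asarray(initial_pred, dtype=np.float64)
    for estimate_difference in difference_models:
        pred = pred + estimate_difference(pred)
    return pred

# Toy stand-in: each "model" recovers half of the remaining gap to a target,
# so three refinement rounds shrink the error to 1/8 of the original gap.
target = np.full((4, 4), 10.0)
models = [lambda p: 0.5 * (target - p)] * 3
refined = refine_prediction(np.zeros((4, 4)), models)
assert np.allclose(refined, 10.0 * (1 - 0.5 ** 3))
```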
- low-dose and high-dose refer to the dosage (e.g., quantity, amount, or measurement) of radionuclide or radioactive tracer, injected into a patient prior to and/or during PET imaging.
- Methods, systems, and computer readable media herein can minimize radiation exposure in patients, by predicting higher quality high-dose PET image values and, thereby, generating an estimated high-dose PET image from a combination of low-dose PET and MR images.
- "standard-dose" and "high-dose" as used herein are synonymous, and refer to the conventional dosage amounts clinically accepted by the medical community, which generate sufficient detected events and obtain clinically acceptable PET images having a sufficiently high PSNR.
- Standard, high-dose images are obtained as a result of injecting a patient with the standard, high-dose quantity or medically accepted amount of tracers.
- for brain 18F-FDG imaging, the standard, high-dose is approximately 5 millicuries (mCi).
- for body (i.e., non-brain) 18F-FDG imaging, the standard, high-dose is approximately 10 mCi.
- Such amounts may also refer to any other clinically accepted dose calculated in view of a patient's body weight and/or a body mass index (BMI).
- BMI body mass index
- "target-dose" and "low-dose" are synonymous, and refer to a minimized dose of tracer injected into a patient for PET imaging.
- the low-dose image is then used, in part, to predict a high-dose PET image.
- the target, low-dose amount of tracer injected into a patient is advantageous in minimizing radiation exposure.
- the target, low-dose or dosage amount of tracer injected into a patient is anywhere from approximately 1/2 to 1/10, or less, of the standard, high-dose (i.e., at least less than 50% of the standard, high-dose). That is, for brain imaging, the target, low-dose is approximately 2.5 mCi or less, approximately 1.25 mCi or less, or approximately 0.5 mCi or less.
- the target, low-dose or dosage amount of tracer injected into a patient is also at least less than 50% of the standard, high-dose.
- the target, low-dose is approximately 5 mCi or less, approximately 2.5 mCi or less, or approximately 1 mCi or less. While the above examples focus on the use of FDG, it should be noted that the approach may be generalized to any other radiotracers.
- a voxel is defined as a value or position on a regular grid in three-dimensional (3D) space. "Voxel" is a combination of the terms "volume" and "pixel", where "pixel" is itself a combination of "picture" and "element".
- a voxel is analogous to a texel, which represents 2D image data in a bitmap (which is sometimes referred to as a pixmap). The position of a voxel is inferred based upon its position relative to other voxels (i.e., its position in the data structure that makes up a single volumetric image) in an image.
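The point that a voxel's position is inferred from its place in the data structure, rather than stored explicitly, can be illustrated with a plain 3D array (the array shape and values below are arbitrary examples):

```python
import numpy as np

# A volumetric image as a 3D array: each entry is one voxel, and its
# position is implicit in its (z, y, x) index within the grid -- no
# coordinates are stored per voxel.
volume = np.zeros((4, 5, 6))      # 4 slices, each 5 x 6
volume[2, 3, 1] = 7.0             # address one voxel purely by grid index
assert volume[2, 3, 1] == 7.0
assert volume.size == 4 * 5 * 6   # total number of voxels in the image
```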
- a model, generally designated "Model 1", is generated.
- Model 1 may be determined or generated from data obtained from a plurality of MRI, low-dose PET images, and high, standard-dose PET images.
- a prediction model can be calculated such that the high, standard-dose PET images can be predicted using the model built or trained from the data set.
- tissue-specific models can be built using low-dose PET and MRI images.
- the first model, "Model 1", can be refined and iteratively updated into refined models designated "Model 2" through "Model N" (e.g., where N is an integer > 2) via estimating the image difference between the predicted and actual high-dose PET image.
- PET is a nuclear medical imaging technology that produces 3D images reflecting tissue metabolic activity in a human body.
- PET has been widely used in various clinical applications, such as diagnosis of tumors, diseases, and diffuse brain disorders.
- High quality PET images play an essential role in diagnosing diseases/disorders and assessing the response to therapy.
- a standard or high-dose radionuclide tracer
- a standard or high-dose radionuclide needs to be injected into the patient's living body.
- the risk of radiation exposure increases.
- researchers have attempted to acquire low-dose PET images, as opposed to high-dose images, to minimize the radiation risk, at the cost of reduced image quality or a longer image acquisition time.
- a regression forest (RF) based framework is used for generating an estimated standard or high-dose PET image by using values predicted from a low-dose PET image and its corresponding magnetic resonance imaging (MRI) image.
- Exemplary embodiments herein include prediction of standard-dose PET images of brain tissue using simultaneously acquired low-dose PET/MRI images. Prediction of standard-dose PET images for any non-brain tissue (e.g., body tissue) can also be provided.
- Systems and methods herein are not limited to predicting standard-dose PET images of the brain; rather, they can be used to predict standard-dose PET images of any anatomical member of a patient's body, or tissue thereof, such as the foot, knee, back, shoulder, stomach, lung, neck, etc.
- any standard-dose PET scan (even whole body scans) can be predicted using systems and methods described herein.
- prediction methods, systems, and computer readable media herein are used to transform MR and low-dose PET images, or data obtained therefrom, into a high-dose PET image.
- the proposed method includes two steps. First, based on the tissues segmented in the MRI image (i.e., cerebrospinal fluid (CSF), gray matter (GM), and white matter (WM) in this example), appearance features for each patch in the brain image are extracted from both the low-dose PET and MRI images to build tissue-specific models that can be used to predict standard, high-dose PET images. Second, a refinement strategy via estimating the predicted image difference is used to further improve the prediction accuracy.
- the proposed approach has been evaluated on a dataset consisting of eight (8) subjects with MRI, low-dose PET, and high-dose PET images, using leave-one-out cross-validation. The proposed method is also compared with the sparse representation (SR) based method. Both qualitative and quantitative results indicate better performance using methods, systems, and computer readable media described herein.
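Leave-one-out cross-validation over a small subject pool can be sketched in a few lines. This is a generic illustration of the protocol, not the patent's evaluation code: with 8 subjects, each fold trains on 7 and tests on the held-out one:

```python
def leave_one_out_folds(n_subjects):
    """Yield (train_indices, test_index) pairs: each subject is held out
    once while the remaining n - 1 subjects form the training set."""
    for held_out in range(n_subjects):
        train = [i for i in range(n_subjects) if i != held_out]
        yield train, held_out

folds = list(leave_one_out_folds(8))
assert len(folds) == 8                                  # one fold per subject
assert all(test not in train for train, test in folds)  # held-out never trains
assert all(len(train) == 7 for train, _ in folds)
```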
- SR sparse representation
- Random forest, often called a regression forest (RF) when applied to non-linear regression tasks, was originally proposed by Breiman [4]. It consists of multiple binary decision trees, with each tree trained independently with random features and thresholds. The final prediction of a random forest is the average over the predictions of all its individual trees. As an ensemble method, it has proved to be a powerful tool in the machine learning field, and has recently gained much popularity on both classification and regression problems, such as remote sensing image classification [29, 15], medical image segmentation [22, 27, 46], diagnosis of human diseases/disorders [2, 13, 39], facial analysis [6], and so on. Similar to other supervised models, the use of a regression forest involves both training and testing stages.
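The training and testing stages of a regression forest can be sketched with scikit-learn's `RandomForestRegressor` (the patent names no library, and the data here is synthetic): fit on feature vectors f against targets t, then predict by averaging over the trees.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
f_train = rng.uniform(-1, 1, size=(200, 3))                      # features f
t_train = f_train[:, 0] ** 2 + 0.05 * rng.standard_normal(200)   # non-linear target t

forest = RandomForestRegressor(n_estimators=50, random_state=0)
forest.fit(f_train, t_train)                  # training stage
t_pred = forest.predict([[0.5, 0.0, 0.0]])    # testing stage: average over trees
assert abs(t_pred[0] - 0.25) < 0.15           # approximately recovers f0**2
```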
- RF regression forest
- regression forest aims to learn a non-linear model for predicting the target t based on the input features f.
- each binary decision tree is trained independently.
- a binary decision tree consists of two types of nodes, namely split nodes (non-leaf nodes) and leaf nodes.
- the optimal combination of the feature index j and the threshold θ is learned by maximizing the average variance decrease in each dimension of the regression target after splitting.
- the leaf node stores the average regression target of training samples falling into this node.
- the training of binary decision tree starts with finding the optimal split at the root node, and recursively proceeds on child nodes until either the maximum tree depth is reached or the number of training samples is too small to split.
- a new testing sample is pushed through each learned decision tree, starting at the root node.
- the associated decision stump function g(f; j, θ) is applied to the testing sample. If the result is false, then this testing sample is sent to the left child; otherwise, it is sent to the right child.
- the average regression target stored in that leaf node will be taken as the output of this binary decision tree.
- the final prediction value of the entire forest is the average of outputs from all binary decision trees.
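The tree traversal described above can be sketched as follows. This is an illustrative toy, not the patent's implementation; in particular, the exact form of the stump g(f; j, θ) is an assumption (here it tests whether feature j exceeds the threshold, with false sent left and true sent right, as in the text):

```python
import numpy as np

class Leaf:
    """Leaf node: stores the average regression target of the
    training samples that fell into it."""
    def __init__(self, targets):
        self.value = float(np.mean(targets))

class Split:
    """Split node: holds the stump parameters (j, theta) and two children."""
    def __init__(self, j, theta, left, right):
        self.j, self.theta, self.left, self.right = j, theta, left, right

def tree_predict(node, f):
    """Push sample f from the root to a leaf; g false -> left, true -> right."""
    while isinstance(node, Split):
        node = node.right if f[node.j] > node.theta else node.left
    return node.value

# Hand-built depth-1 tree: split on feature 0 at threshold 0.5.
tree = Split(0, 0.5, Leaf([1.0, 3.0]), Leaf([10.0]))
assert tree_predict(tree, [0.2]) == 2.0    # false branch -> mean(1, 3)
assert tree_predict(tree, [0.9]) == 10.0   # true branch
```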
- the goal of an RF based approach is to predict the intensity of each voxel in a new subject.
- the approach consists of two major steps: the initial standard-dose PET prediction step and the incremental refinement step via estimating image differences. Both steps adopt regression forest as the non-linear prediction model. Each step is discussed in detail below.
- a human brain Due to the large volume of a human brain (e.g., usually with millions of image voxels), it is intractable to learn a global regression model for predicting the high-dose PET image over the entire brain. Many studies [9, 36] have shown that learning multiple local models would improve the prediction performance, compared with a single global model. Thus, one RF can be learned for each type of tissue.
- brain tissue models are learned where one model corresponds to white matter (WM), one corresponds to gray matter (GM), and/or one corresponds to cerebrospinal fluid (CSF). Since the appearance variation within each brain tissue is much less than that across different brain tissues, tissue-specific regression forest models yield more accurate predictions than a global regression forest model (trained for the entire brain).
- the proposed method consists of a training stage and a testing stage, as follows. Non-brain (e.g., body) tissue-specific models can also be learned, trained, and/or provided.
- training data consists of MRI, low-dose PET, and standard-dose PET from different training patients (i.e., "training subjects").
- Each training subject has one set of MRI images (for example, T1-weighted images), two sets of low-dose PET images (scanned separately, one after the other, with details explained in Section 4.1 below, Datasets and preprocessing), and one set of corresponding standard-dose PET images.
- the four images of all training subjects (e.g., 1 MR image, 2 low-dose PET images, and 1 high-dose PET image)
- FLIRT: FMRIB's Linear Image Registration Tool
- a brain segmentation method [47] is adopted to segment the entire brain region into WM, GM and CSF for each training subject, based on the respective MRI image.
- Figure 3 illustrates extracting training data (features and response) to train the tissue-specific regression forests for predicting the initial standard-dose PET image.
- a prediction node (e.g., 100, Figure 13)
- Prediction node(s) described herein include a computing platform having a hardware processor and memory element configured to execute steps for predicting and generating an estimated high-dose PET image without actually having to perform a high-dose PET scan.
- in the testing stage, given a testing subject with both MRI and low-dose PETs, first linearly align the MRI and low-dose PET images onto a common space (as defined in the training stage) by using FLIRT [8], and automatically segment the MRI image into three brain tissues [47]. Then, the high-dose PET image can be predicted in a voxel-wise manner by using the local image appearance information from the aligned MRI and low-dose PET images. Specifically, for each voxel in the unknown standard-dose PET image, similar to the training stage as shown in Figure 3, the prediction node can extract the local intensity patches at the same location from both MRI and low-dose PET images.
- the prediction node can apply the corresponding tissue-specific regression forest to predict the standard-dose PET value for this voxel. By iterating all image voxels, a standard-dose PET image can be predicted.
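The voxel-wise, tissue-specific testing stage described above might be sketched as below. The model interface (a `predict` method on a concatenated patch vector), the zero-padding at borders, and all names are illustrative assumptions rather than details from the source:

```python
import numpy as np

def predict_standard_dose(mri, low_pet, tissue_map, models, half=4):
    """Predict the standard-dose PET image voxel by voxel: for each voxel,
    concatenate the local MRI and low-dose PET intensity patches (9x9x9
    when half=4) and apply the regression model of that voxel's tissue
    type (e.g., a WM/GM/CSF label stored in tissue_map)."""
    pad = [(half, half)] * 3
    mri_p = np.pad(mri, pad)            # zero-pad so border patches exist
    pet_p = np.pad(low_pet, pad)
    out = np.zeros(low_pet.shape, dtype=float)
    for idx in np.ndindex(low_pet.shape):
        sl = tuple(slice(i, i + 2 * half + 1) for i in idx)
        # feature vector: concatenated MRI and low-dose PET patch intensities
        f = np.concatenate([mri_p[sl].ravel(), pet_p[sl].ravel()])
        out[idx] = models[tissue_map[idx]].predict(f)
    return out
```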
- the initial prediction framework, with both training and testing (prediction) stages, is summarized in Table 1, as follows:
- Sub-framework 1: Initially predicting the high-dose PET image by using tissue-specific regression forests (RF).
- WM white matter
- GM gray matter
- CSF cerebrospinal fluid
- tissue-specific RFs for gradually (i.e., incrementally or iteratively) minimizing the image difference between the predicted image and the target image (i.e., the actual standard-dose PET images obtained during the training stage).
- the tissue-specific RFs at iteration k aim to estimate the image difference between the standard-dose PET image predicted by the previous iterations and the target (actual) standard-dose PET image.
- training tissue-specific RFs as described in the above Subsection 3.1
- Figure 4 illustrates extracting training data (features and response) to train the tissue-specific RFs for predicting (estimating) the image difference.
- a prediction node of a special purpose computing platform as described in detail below can learn three tissue-specific regression forests during the training stage, which are configured to predict (estimate) the image difference within a respective tissue region.
- the new updated prediction may be closer to the target standard-dose PET image, thus improving the prediction accuracy.
- the learned tissue-specific RFs can be applied sequentially to obtain a final predicted standard-dose PET image.
- the first iteration (e.g., to obtain "Model 1", Figure 2)
- the tissue-specific regression forests in the next iterations (e.g., Model 2 to Model N, Figure 2) will be used to sequentially estimate the image difference between the current prediction and the target standard-dose PET image.
- the estimated image differences by the later regression forests will be sequentially added onto the initially predicted high-dose PET image for incremental refinement.
- the incremental refinement framework can further boost the prediction accuracy of tissue- specific regression forests.
- Sub-framework 2: Incremental refinement via estimating the image difference. Given: MRI, low-dose PET, and previously predicted standard-dose PET images.
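The incremental refinement of Sub-framework 2 can be sketched as follows: starting from the initial prediction, each later layer's tissue-specific regressors estimate a per-voxel image difference from local patches of the MRI, low-dose PET, and current prediction, and that difference is added back onto the prediction. The model interface and all names are assumptions for illustration:

```python
import numpy as np

def incremental_refinement(initial_pred, feature_images, tissue_map, diff_models, half=4):
    """Refine an initial standard-dose PET prediction over several layers
    (Model 2 ... Model N): each layer estimates, per voxel, the difference
    between the current prediction and the target standard-dose image, and
    the estimated difference is added onto the current prediction."""
    pred = np.asarray(initial_pred, dtype=float).copy()
    pad = [(half, half)] * 3
    for models_k in diff_models:
        padded = [np.pad(img, pad) for img in list(feature_images) + [pred]]
        diff = np.zeros_like(pred)
        for idx in np.ndindex(pred.shape):
            sl = tuple(slice(i, i + 2 * half + 1) for i in idx)
            # features: local patches from MRI, low-dose PET, and current prediction
            f = np.concatenate([p[sl].ravel() for p in padded])
            diff[idx] = models_k[tissue_map[idx]].predict(f)
        pred += diff   # additive, incremental correction
    return pred
```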
- Leave-one-out cross-validation, which has been adopted in numerous papers [20, 37, 43], can be used to evaluate the performance of the outlined approach. Specifically, in each leave-one-out fold, seven (7) subjects are selected as training images, and the remaining one is used as a testing image. This process is repeated until each image has been used as the testing image once. In both the training and testing stages, all images from each subject are linearly aligned onto a common space via FLIRT [8]. The dataset and preprocessing steps are described in detail in the following subsection.
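The leave-one-out protocol can be sketched generically; the two callables are placeholders, not APIs from the source:

```python
def leave_one_out(subjects, train_fn, test_fn):
    """Each subject is held out once as the test case while the remaining
    subjects are used for training; returns one evaluation result per fold."""
    results = []
    for i in range(len(subjects)):
        train_set = subjects[:i] + subjects[i + 1:]   # all but the held-out subject
        model = train_fn(train_set)
        results.append(test_fn(model, subjects[i]))
    return results
```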
- each element is also investigated, i.e., the effect of MRI to help low-dose PET for predicting standard-dose PET image, the effect of tissue-specific models, the effect of image difference estimation for incremental refinement, and the effect of combining more low-dose PETs with MRI.
- All experiments include the following parameters: patch size: 9x9x9; the number of trees in a forest: 10; the number of randomly selected features: 1000; the maximum tree depth: 15; the minimum number of samples at each leaf: 5; and the number of iterations in incremental refinement is 2.
- the method was evaluated on a dataset consisting of eight (8) patients. Patients were chosen from a group referred for PET scans for clinical indications. In each case, the diagnosis was unknown and not used in the analysis. Patients were administered an average of 203 megabecquerel (MBq) (range: 191 MBq to 229 MBq) of an exemplary radiotracer, such as 18F-fluorodeoxyglucose (18F-FDG).
- MBq megabecquerel
- the first PET scan (the "standard-dose”, aka, the "high-dose” scan) was performed for a full 12 minutes within sixty minutes of injection, in accordance with standard protocols.
- a second PET dataset was acquired in list-mode for 12 minutes, which was broken up into separate three-minute sets (the "low-dose" scans). Note that the reduced acquisition time at standard dose serves as a surrogate for standard acquisition time at reduced dose. In this case, the "low-dose" is approximately 25% of the standard dose.
- in processing, four images for each subject are used: one MRI, two low-dose PETs, and one standard-dose PET. All data were acquired on a Siemens Biograph mMR (a hybrid MR-PET or PET-MR system). Of note, for all subjects, the low-dose PET image sets are completely separate acquisitions from the standard-dose PET image sets. Moreover, each of the low-dose PET images is a separate acquisition (simulating image acquisition at different time points). Meanwhile, a T1-weighted MR image was also scanned and affine-aligned to the PET image space.
- NMSE normalized mean squared error
- PSNR peak signal-to-noise ratio
- H is the ground-truth standard-dose PET image
- Ĥ is the predicted high-dose PET image
- L is the maximal intensity range of images H and Ĥ
- M is the total number of voxels in the image.
- a good algorithm provides lower NMSE and higher PSNR.
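Under the definitions above (H the ground truth, Ĥ the prediction, L the maximal intensity range, M the number of voxels), one common formulation of the two metrics is the following; the exact normalization used in the source is not fully spelled out, so this is a plausible reading rather than a definitive one:

```python
import numpy as np

def nmse(h, h_hat):
    """Normalized mean squared error: ||H - H_hat||^2 / ||H||^2 (lower is better)."""
    return float(np.sum((h - h_hat) ** 2) / np.sum(h ** 2))

def psnr(h, h_hat):
    """Peak signal-to-noise ratio in dB: 10 log10(L^2 / MSE), where
    MSE = ||H - H_hat||^2 / M and L is the maximal intensity range
    spanned by H and H_hat (higher is better)."""
    L = max(h.max(), h_hat.max()) - min(h.min(), h_hat.min())
    mse = np.mean((h - h_hat) ** 2)
    return float(10.0 * np.log10(L ** 2 / mse))
```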
- anatomical information provided by the MRI image can complement the molecular information of PET in the PET/MRI imaging system [41].
- the effect of combining MRI with low-dose PET image for predicting standard-dose PET images is investigated.
- the following are respectively used to build global models for predicting the standard-dose PET image of the entire brain (without separating the model into tissue-specific components): 1) MRI, 2) one of the low-dose PETs, and 3) the combination of MRI and low-dose PET.
- Table 3 lists the prediction performances, in terms of NMSE and PSNR.
- Figure 5 illustrates a further comparison between the model built using low-dose PET 1 and the model built using the combination of MRI and low-dose PET 1.
- Figure 5 is a graphical illustration of performance comparison, in terms of NMSE and PSNR, yielded by two global models built by 1) low-dose PET 1 and 2) MRI + low-dose PET 1.
- tissue-specific models are built for each type of brain tissue (WM, GM, and CSF) and used for predicting standard-dose PET of the respective brain tissue, while the global model is built for the entire brain and used for predicting whole-brain standard-dose PET.
- the tissue-specific model and the global model are built using the same MRI plus low-dose PET 1 .
- Table 4 lists prediction performances, in terms of NMSE and PSNR.
- Figure 6 is a graphical illustration of performance comparison, in terms of NMSE and PSNR, yielded by a global model and tissue-specific models, respectively.
- as Table 4 and Figure 6 collectively illustrate, the tissue-specific models yield better overall performance than the global model, i.e., lower NMSE and higher PSNR.
- Figure 6 illustrates a comparison for the prediction performances by using tissue-specific models and a global model.
- Table 4 compares prediction performances yielded by the global model and the tissue-specific models, respectively. Both show the tissue-specific models yielding better overall performance than the global model, i.e., lower NMSE and higher PSNR.
- the prediction performance can be further improved by auto-context models [9, 35, 40].
- the performance improvement by estimating image differences between the previously predicted standard-dose PET and the original standard-dose PET (ground truth) is examined.
- the term "one-layer model" refers to the above model that directly estimates the high-dose PET.
- the term "two-layer model" refers to the above model plus image difference estimation. Note that both methods use tissue-specific models, built using the MRI plus low-dose PET 1.
- Table 5 lists the prediction performances, in terms of NMSE and PSNR for multiple subjects. Table 5 compares prediction performances yielded by one-layer model and two-layer model, respectively.
- Figure 7 is a graphical illustration of the performance comparison between the one-layer model and the two-layer model. From both Table 5 and Figure 7, it can be seen that, compared with the one-layer model, the two-layer model (ensemble model) achieves better prediction performance, indicated by lower NMSE and higher PSNR values.
- Figure 7 is the comparison, in terms of NMSE and PSNR, yielded by the one-layer tissue-specific model and the two-layer tissue-specific model, respectively.
- both one-layer and two-layer models are built by using the combination of MRI and Low-dose PET 1 .
- Figure 8 is a graphical comparison between the one-layer model (first-layer model, "Model 1") and the two-layer model ("Model 1 + 2") on a sequence of voxels with maximal prediction errors under Model 1. From Table 5 and Figure 7, it is apparent that the overall performance for the entire brain is improved slightly by additionally using Model 2. However, as shown in Figure 8, for the voxels with maximal prediction errors by Model 1, the performance improvement from further using Model 2 (i.e., Model 1 + 2) is visibly apparent, especially for some subjects, as shown by the "one subject" lines in Figure 8.
- Model 1 already achieves very good performance, which limits the measured overall improvement contributed by Model 2.
- Figure 8 is the performance comparison between the proposed Model 1 and the Model 1 + 2, in terms of NMSE and PSNR.
- the "OVERALL" lines in Figure 8 denote the results from all subjects, while "ONE SUBJECT" lines denote the results from a selected subject.
- both Model 1 and Model 1 + 2 use the tissue-specific models built with the combination of MRI and low-dose PET 1.
- 4.6. Effect of combining more low-dose PETs with MRI in predicting standard-dose PET
- Figure 9 shows the comparison of prediction performances yielded by using the models constructed with different combinations of modalities as described above.
- Table 6 compares prediction performances yielded by the models using different combinations of modalities.
- Model 1 refers to a one layer model (the first layer model)
- Model 1 + 2 refers to a two layer model in which the first layer model is used to predict the initial high-dose PET image, and the second layer model is used to estimate the image difference.
- Both Table 6 and Figure 9 demonstrate that the best performance is yielded by the model built using the combination of MRI, low-dose PET 1 , and low-dose PET 2.
- Figure 9 is a comparison of prediction performances, in terms of NMSE and PSNR, yielded by the models built with different combinations of modalities.
- all models (the first layer models (Model 1 ) and the second layer models (Model 2) for three kinds of combinations) use tissue specific models.
- Equation (3): α̂_v = argmin_{α_v} ½‖D_v α_v − f(v)‖₂² + λ₁‖α_v‖₁ + λ₂‖α_v‖₂², where:
- f(v) is the feature vector of voxel v, defined as the vector of concatenated intensities of local patches from both MRI and low-dose PET;
- α_v is the sparse coefficient of voxel v to be estimated;
- D_v is the dictionary of voxel v, consisting of feature vectors of voxels within a small neighborhood of voxel v from all training subjects; λ₁ and λ₂ control the sparsity and smoothness of the estimated sparse coefficient α_v.
- let D̂_v be the dictionary that contains intensity patches from the high-dose PET images corresponding to the elements in the overall dictionary D_v. Then, by taking the center value from the reconstructed patch P(α̂_v) = D̂_v α̂_v, the intensity of voxel v in the new predicted standard-dose PET is obtained, as in Equation (5) below: Î(v) = C(D̂_v α̂_v)
- C(·) is the operation of taking the center value from a column vector.
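A minimal numeric sketch of the sparse-representation prediction: the sparse coefficient is estimated by ISTA for the l1 (sparsity) term of Equation (3) — the smoothness term is omitted here for brevity — and Equation (5) then takes the center value of the reconstructed high-dose patch. All names are illustrative assumptions:

```python
import numpy as np

def soft_threshold(x, t):
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def sparse_code(D, f, lam=0.1, n_iter=500):
    """Estimate alpha_v by ISTA for min 0.5*||D alpha - f||^2 + lam*||alpha||_1
    (only the sparsity term of Eq. (3); the smoothness term is omitted)."""
    L = np.linalg.norm(D, 2) ** 2          # Lipschitz constant of the gradient
    alpha = np.zeros(D.shape[1])
    for _ in range(n_iter):
        grad = D.T @ (D @ alpha - f)       # gradient of the quadratic term
        alpha = soft_threshold(alpha - grad / L, lam / L)
    return alpha

def predict_voxel(D_high, alpha, center_index):
    """Eq. (5): reconstruct the high-dose patch D_hat_v @ alpha_v and take
    its center value C(.) as the predicted intensity of voxel v."""
    return float((D_high @ alpha)[center_index])
```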
- Figure 10 is an illustration of prediction results of SR and the methods above (i.e., RF, Model 1 + 2) on different subjects, shown in the first and second rows, columns (A) to (H) respectively.
- the lower contrast "difference" maps (shown in column (E) and column (G)) are computed between the predicted high-dose PET and the original high-dose PET (ground truth).
- Figure 10 illustrates the qualitative results of predicted standard-dose PET using, respectively, 1 ) the proposed method (regression forest (RF) based method (two-layer model (Model 1 + 2))), and 2), SR based method, on the two randomly selected subjects.
- RF regression forest
- Model 1 + 2 two-layer model
- Table 7 below and Figure 11 show the quantitative comparison between the proposed RF method (including one-layer model (Model 1) and two-layer model (Model 1 + 2)) and an SR-based method, in terms of NMSE and PSNR.
- Model 1 one-layer model
- Model 1 + 2: two-layer model
- Figure 12 shows the quantitative comparison between the proposed RF method (including one-layer model (Model 1) and two-layer model (Model 1 + 2)) and an SR-based method, in terms of NMSE and PSNR.
- in Figure 12, in order to demonstrate the quality improvement of the RF-predicted standard-dose PET image over the original low-dose PET image, both NMSE and PSNR for the low-dose PET with respect to the ground truth (original standard-dose PET) are also calculated.
- the parameters' settings for the instant and improved RF method (and RF-based models) are the same as the settings described in Subsection 4.6 above.
- Table 7 compares prediction performances, in terms of NMSE and PSNR.
- the term "Low-dose PET 1 and 2 (Mean)" is indicative of the average (NMSE or PSNR) of low-dose PET 1 and low-dose PET 2 with respect to the ground truth.
- the RF-based models i.e., RF (Model 1) and RF (Model 1 + 2), use tissue-specific models. All methods, including SR, are built by using the combination of MRI, low-dose PET 1 , and low-dose PET 2.
- RF models (e.g., RF (Model 1 + 2)) improve over SR techniques.
- RF models achieve more desirable predictions than the SR technique, with much smaller difference magnitudes (see, e.g., column (G) in Figure 10) and image appearance more similar to the ground truth.
- Figure 11 is a graphical plot comparing prediction performances, in terms of NMSE and PSNR, with respect to the ground truth (original high-dose PET).
- all models are built by using the combination of MRI, low-dose PET 1 , and low-dose PET 2.
- the regression forest (RF) based models i.e., RF (Model 1) and RF (Model 1 + 2), use tissue-specific models and not a global model.
- Figure 12 is a comparison of image quality, in terms of NMSE and PSNR, with respect to the ground truth (i.e., the original high-dose PET).
- Low-dose PET 1 and 2 is the average (NMSE or PSNR) of low-dose PET 1 and low-dose PET 2 with respect to the ground truth.
- RF(Model 1 + 2) stands for the (NMSE or PSNR) value of predicted high-dose PET image with respect to the ground truth, by using the regression forest based method, i.e., RF(Model 1 + 2).
- RF(Model 1 + 2) also uses tissue-specific models, not a global model.
- the limited prediction accuracy of SR may be due to two reasons: first, both MRI and low-dose PET modalities are treated equally in the sparse representation; second, only linear prediction models are adopted, which might be insufficient to capture the complex relationship among MRI, low-dose PET, and high-dose PET.
- the instant and improved RF-based method adopts RF to simultaneously identify informative features from MRI and low-dose PET for predicting and generating an estimated standard-dose PET image, and to further learn the intrinsic relationship among MRI, low-dose PET, and standard-dose PET. Consequently, by addressing the limitations of SR, the proposed method (e.g., the RF method discussed above) achieves much higher prediction accuracy.
- novel methods, systems, and computer readable media are disclosed, in which a standard, high-dose PET image is predicted using a machine learning based framework (e.g., executed on a computing platform) to generate a prediction of a standard, high-dose PET image.
- the proposed method utilizes low-dose PET, combined with an MR structural image, to predict standard-dose PET. Results shown and described herein illustrate that the instant method substantially improves the quality of low-dose PET. The prediction performance obtained also indicates good practicability of the proposed framework.
- each element in the methods discussed above has its own contribution in improving the prediction performance.
- high-resolution brain anatomical information provided by MRI helps low-dose PET to predict standard-dose PET.
- the complementary information from different modalities significantly improves the prediction results.
- the tissue-specific model gains better prediction performance than the global model. The main reason is that, due to the large volume of the human brain (often with different tissue properties), it is difficult to learn a global regression model for accurate prediction of standard-dose PET over the entire brain. In contrast, learning multiple tissue-specific models improved the prediction performance, as indicated in both Table 4 and Figure 6 discussed above.
- tissue-specific models can be trained simultaneously, thus the training time can also be reduced significantly. Furthermore, by estimating image differences between previously-predicted standard-dose PET and the original standard-dose PET, the prediction accuracy can be further improved, especially for the voxels with maximal prediction errors using the previous layer model, as shown in Figure 8 discussed above.
- Figure 13 is a block diagram illustrating an exemplary system or node
- Node 100 (e.g., a single or multiple processing core computing device or computing platform) for predicting standard, high-dose PET values and for generating estimated high-dose PET images according to embodiments of the subject matter described herein.
- Node 100 may include any suitable entity, such as a computing device or computing platform, for performing one or more aspects of the present subject matter described herein or in the manuscript entitled "Prediction of High-dose PET Image with MRI and Low-dose PET images"; the disclosure of which is incorporated herein by reference in its entirety.
- components, computing modules, and/or portions of node 100 may be implemented or distributed across one or more (e.g., multiple) devices or computing platforms.
- a cluster of nodes 100' may be used to perform various portions of high-dose PET image prediction, refinement, and/or application.
- node 100 and its components and functionality described herein constitute a special purpose test node, special purpose computing device, or machine that improves the technological field of brain and/or body imaging (e.g., MR and/or PET imaging) by allowing prediction and generation of a high-dose PET image without performing a high-dose PET scan, thus advantageously reducing a patient's exposure to radiation.
- MR and PET imaging, and improvements thereto are necessarily rooted in computer technology in order to overcome a problem specifically arising in the realm of computer networks (i.e., the need to predict a high- dose PET scan from MR and low-dose PET scans).
- the methods, systems, and computer readable media described herein are not capable of manual processing (i.e., they cannot be performed manually by a human being); as such, they are achieved upon utilization of physical computing hardware components, devices, and/or machines necessarily rooted in computer technology.
- node 100 includes a computing platform that includes one or more processors 102.
- processor 102 includes a hardware processor or microprocessor, such as a multi-core processor, or any suitable other processing core, including processors for virtual machines, adapted to execute or implement instructions stored in memory 104.
- Memory 104 may include any non-transitory computer readable medium and may be operative to communicate with one or more of processors 102.
- Memory 104 may include and/or have stored therein a standard or high-dose PET Prediction Module (HDPPM) 106 for execution by processor 102.
- HDPPM high-dose PET Prediction Module
- HDPPM 106 may be configured to extract appearance information or features from corresponding MR and low-dose PET image locations or patches, segment the MR image based upon a tissue type (e.g., GM, WM, or CSF), classify the MR and low-dose PET image locations or patches per tissue type, and execute and apply a tissue-specific model to the extracted information for predicting a high-dose PET image, per voxel, and for generating an estimated (i.e., not actually scanned) high-dose PET image.
- Processor 102 may predict, and transmit as output, a high-dose PET image generated from a plurality of predicted high-dose PET voxels via HDPPM 106.
- node 100 obviates the need for performing a high-dose PET scan, and instead predicts and generates a high-dose PET image from at least one low-dose PET image and at least one MR image.
- This is advantageous, as a patient's exposure to radiation is minimized, in some aspects by approximately 1/2, 1/4, or by 1/10 or less.
- the low-dose PET image and MR image may be obtained simultaneously (e.g., via a PET/MRI scanning system) for faster prediction/generation of a high-dose PET image.
- the low-dose PET and MR images may be obtained separately (i.e., non-simultaneously) from separate PET/MR scanning machines or imaging systems.
- HDPPM 106 is configured to implement one or more RF-based analysis and/or modeling techniques for use in prediction of high-dose PET images. Exemplary RF techniques or models described above may be used, executed, and/or implemented by HDPPM 106. For example, HDPPM 106 may execute one or more tissue specific models using, as inputs, appearance features extracted from at least one MR image (anatomical imaging features) and at least one low-dose PET image (molecular imaging features). HDPPM 106 may be configured to initially predict high-dose PET values and/or a high-dose PET image using tissue- specific RF modeling. HDPPM 106 may then incrementally refine the predicted values and predicted high-dose PET image via machine estimated image differences. The estimated differences may be applied to the previously predicted and generated standard-dose PET values and image, respectively, for incremental refinement, where desired. HDPPM 106 may be used to predict high-dose PET values for generating estimated high-dose PET images of the brain and/or body.
- node 100 receives extracted appearance features from at least one low-dose PET image and at least one MR image of a subject brain or body portion.
- node 100 is configured to receive the images as input, and extract the appearance features from at least one low-dose PET image and at least one corresponding MR image via HDPPM 106.
- the corresponding MR/low-dose PET images may be received and aligned onto a common space, and appearance features (e.g., local intensity patches at a same location) may be extracted by HDPPM 106.
- the MR image may also be segmented by tissue at each location (e.g., into GM, WM, CSF in the brain) by HDPPM 106.
- tissue-specific RF is used to predict a high-dose PET value for each voxel.
- a high-dose PET image can be predicted and generated at HDPPM 106.
- HDPPM 106 may further refine the predicted image by applying tissue- specific regression forests (e.g., models) to extracted appearance features to get the predicted difference values for each voxel.
- HDPPM 106 may be configured to work in parallel with a plurality of processors (e.g., processors 102) and/or other computing platforms or nodes.
- a plurality of processor cores may each be associated with a tissue specific model and/or imaging technique (e.g., receiving MR or low-dose PET features).
- Figure 13 is for illustrative purposes and that various nodes, their locations, and/or their functions may be changed, altered, added, or removed. For example, some nodes and/or functions may be combined into a single entity. In a second example, a node and/or function may be located at or implemented by two or more nodes.
- the methods, systems, and computer readable media described herein can improve imaging during an uptake interval or time, by improving image quality without further increasing tracer dosage.
- multiple scans may be taken over an uptake time, which is the time over which a tracer is metabolized, until reaching a steady state Tss.
- the shorter, almost dynamically obtained scans (i.e., obtained at each b…)
- Figure 15 is a block diagram illustrating an exemplary method of predicting and generating an estimated high-dose PET image without actually performing a high-dose PET scan. The method may be performed at a computing node (e.g., 100, Figure 13) having a processor and executable instructions stored thereon, such that when executed by the processor of a computer control the computer to perform steps, such as those in Figure 15.
- appearance features may be extracted from at least one MR image.
- Appearance features include information or data regarding a tissue structure, anatomical information, molecular information, or functional information (e.g., tissue perfusion, diffusion, vascular permeability, or the like) and/or information per image location, as indicated by a local intensity.
- the MR image may also be segmented or categorized (e.g., by location) upon a tissue type (e.g., GM, WM, CSF in the brain), where needed (e.g., brain imaging).
- appearance features may be extracted from at least one low-dose PET image.
- Appearance features obtained from low-dose PET imaging may include a local intensity that is indicative of metabolic information derived from impingement of gamma rays to tissue injected with a biologically active radioactive tracer.
- the information obtained from low- dose PET image is associated with tissue metabolic activity.
- the appearance features of MR/low-dose PET images can be aligned, classified per tissue type, and input into tissue-specific RF (e.g., models) for predicting high-dose PET values per voxel, from which a high-dose PET image is generated.
- the predicted values may be iteratively refined as described above (see, e.g., Table 2).
- a standard, estimated high-dose PET image may be generated using the appearance features of the MR image and the low-dose PET image, and the high-dose PET values predicted therefrom.
- the appearance features include local intensity patches, to which tissue specific RF is applied in predicting high-dose PET values, per voxel. By iterating all image voxels, a high-dose PET image can be predicted and generated, without subjecting a patient to a high-dose PET scan.
- Figure 16 is a schematic block diagram illustrating an overview of training a model M for high-dose PET prediction and image generation via model learning (e.g., machine learning).
- the methods, systems, and computer readable media include machine learning-based methodology, for example, a computing machine having a decision tree-based (e.g., RF) prediction of a high-dose PET image using MR and low-dose PET images.
- the machine learning methodology includes two main stages, i.e., a training stage and an application stage.
- estimated high-dose PET images may be used for at least one of diagnosis, treatment, and/or treatment planning of one or more patients.
- one task includes learning decision trees for generating an RF model.
- Multiple trees can be grouped to form a forest, and in the case of regression, the random forest is often called a regression forest (RF).
- RF regression forest
- learned parameters of the tree are stored at each node (i.e., a split node or leaf/terminal node).
- the input is a set of voxels, and the corresponding high-dose PET intensities, as shown in Figure 16.
- Each voxel is represented by a pair of MR and low-dose PET patches P1 and P2.
- the goal of training is to learn multiple trees (a forest, as shown in Figure 17) for best predicting the high-dose PET intensity P3 from a pair of MR and low-dose PET patches P1 and P2.
- a high-dose PET image is predicted voxel by voxel.
- a pair of MR and low-dose PET patches is extracted, centered at that respective voxel (i.e., as shown in Figure 18).
- the high-dose PET intensity at that voxel can be calculated or generated.
- a split decision or node (i.e., one node of a tree)
- each split node (shown as a solid circle) stores a split function's parameters, including one selected feature index and its corresponding threshold; each split node has at least one "leaf" (shown as a broken circle).
- a feature vector f is computed as the concatenated vector from a pair of MRI and low-dose PET patches.
- the parameters stored at the j-th split node include one selected feature index φ(j) and the corresponding threshold θ(j).
- each leaf node (indicated by a broken circle, labeled 4, 6, 8, 9, 10, and 11) stores the mean high-dose PET intensity (e.g., I_mean(j)) of the voxels reaching that j-th node; split nodes are indicated by solid circles labeled 1, 2, 3, 5, and 7.
- each tree T1 to TN is a prediction model (e.g., a regression model) or prediction result.
- the input to each tree is an MRI patch and its corresponding low-dose PET patch.
- the output from each tree is the predicted high-dose PET intensity at the center location of the given MRI patch.
- RF models consist of multiple trees, and the final prediction of a RF model is the average of all predicted values from all individual trees.
- Each tree, also referred to as a binary decision tree, is a prediction model similar to a linear regression model, but it is used for non-linear regression problems.
- the output from each tree is a high-dose PET intensity value predicted at a center location of a given MR/low-dose PET patch.
- The average of all trees in the forest is the final prediction result.
- the prediction process of Figure 17 is repeated patch by patch. Specifically, for each location, MRI and low-dose PET patches are extracted to predict the high-dose PET intensity at a given location.
- the split function shown in Figure 17 includes a specific feature and a threshold.
- the type of feature and the value of threshold are automatically learned according to the training data.
- the best combinations of feature and threshold will be learned to predict the high- dose PET from features of MR and low-dose PET patches.
- the split functions can be fixed and applied to a new subject.
- Figure 18 is a specific example associated with the application stage, where an RF model has been learned and can now be applied to the extracted MR/low-dose PET features or intensity values for predicting high- dose PET intensities.
- a 2D case is shown.
- the 3D case can be readily derived using the 2D case as an example.
- Figure 18 illustrates MR and low-dose PET images for a new subject. That is, each new subject will have at least one MR image and at least one low-dose PET image generated for a brain or non-brain body part.
- two 3 x 3 patches are extracted (i.e., one MR patch and one corresponding low-dose PET patch).
- the feature vector f is passed through each learned decision tree
- the prediction of each tree is the mean high-dose PET intensity stored at the leaf node into which this voxel falls.
- the routing of the voxel in one tree is as follows.
- Figure 18 illustrates a specific example of a voxel-wise prediction procedure for one tree.
- the input is the feature vector f (i.e., extracted for MR/low-dose PET patches).
- the output is the final predicted intensity value of the voxel in a predicted high-dose PET image F P .
- the feature vector f is fed to the tree T1, and first f reaches node 1 (i.e., the double circle labeled "1"), a split node.
- a learned split function (i.e., as listed in Table 1 of Figure 18) is applied to f.
- the value of the inequality is a decision in the RF decision tree; here f_π(1) > θ(1) is true, so f goes to the right child, i.e., node 3 (see decision tree T1 in Figure 18, where nodes are labeled 1-11).
- an RF-based framework for high-dose PET image prediction and generation is proposed, for effectively predicting and generating a standard, high-dose PET image by using the combination of MRI and low-dose PET, thus reducing the radionuclide dose.
- tissue-specific models are built via RF framework for separately predicting standard-dose PET values and images in different tissue types, such as GM, WM, and CSF type brain tissue.
- an incremental refinement strategy is also employed for estimating an image difference for refining the predicted high-dose PET values and image. Results described herein illustrate that this method can achieve very promising, accurate, machine learned prediction for generation of standard-dose PET images.
- the proposed method outperforms the SR technique under various comparisons.
- aspects as disclosed herein can provide, for example and without limitation, one or more of the following beneficial technical effects: minimized exposure of patients to radiation; improved imaging at lower dosages of radionuclides; improved accuracy/refinement of predicted image; obtaining faster results (e.g., via simultaneous MR/PET imaging).
- the methods, systems, and computer readable media herein are performed at a prediction node (e.g., Figure 13).
- the prediction node and/or functionality associated with the prediction node as described herein constitute a special purpose computer. It will be appreciated that the prediction node and/or functionality described herein improve the technological field pertaining to brain and/or body imaging occurring at a special MR and/or PET imaging machine, which may be combined or separated. Predicting high-dose PET imaging via a prediction node is necessarily rooted in computer technology, as it overcomes a problem specifically arising in that realm, for example, obtaining a high-dose PET image without having to actually perform a high-dose PET scan.
- Some embodiments of the present subject matter can utilize devices, systems, methods, and/or computer readable media such as described in any of the following publications, each of which is hereby incorporated by reference as if set forth fully herein:
- Bai, W., Brady, M., 2011. Motion correction and attenuation correction for respiratory gated PET images. IEEE Trans. Med. Imaging 30, 351-365.
Abstract
Methods, systems, and computer readable media for predicting high-dose positron emission tomography (PET) values and/or images are disclosed. A method for predicting and generating a high-dose PET image is performed at a prediction node including at least one computing processor, and includes extracting appearance features from at least one magnetic resonance (MR) image, extracting appearance features from at least one low-dose PET image, and generating a high-dose PET image using the appearance features of the at least one MR image and the at least one low-dose PET image.
Description
DESCRIPTION
RESTORING IMAGE QUALITY OF REDUCED RADIOTRACER DOSE POSITRON EMISSION TOMOGRAPHY (PET) IMAGES USING COMBINED PET AND MAGNETIC RESONANCE (MR) IMAGES
PRIORITY CLAIM
This application claims the benefit of U.S. Provisional Patent Application Serial No. 62/044,154, filed August 29, 2014; the disclosure of which is incorporated herein by reference in its entirety.
GOVERNMENT INTEREST
This invention was made with government support under Grant Nos. CA140413, MH100217, AG042599, MH070890, EB006733, EB008374, EB009634, NS055754, MH064065, and HD053000, awarded by the National Institutes of Health. The government has certain rights in the invention.
TECHNICAL FIELD
The subject matter herein generally relates to positron emission tomography (PET) images, and more particularly, to methods, systems, and computer readable media for predicting, estimating, and/or generating high (diagnostic) quality PET images using PET images acquired with a dose substantially lower than the widely used clinical dose (low-dose PET) and magnetic resonance imaging (MRI) acquired from the same subject.
BACKGROUND
Positron emission tomography (PET) is a molecular imaging technique that produces 3D images reflecting tissue metabolic activity in the human body. Since it was developed in the early 1970s, PET imaging has been widely used in oncology for diagnosing a variety of cancers [29]. Moreover, it is also widely used for clinically diagnosing and evaluating neurological disorders [11], cardiovascular diseases [33], monitoring therapy response, and guiding the treatment planning in radiation therapy [26].
Obtaining a high-quality PET image is crucial in diagnosing diseases and/or disorders, as well as in assessing a patient's response to therapy. The quality of a PET image can be measured using peak signal-to-noise ratio (PSNR) [16], and high quality PET images are needed in clinical practice to ensure accurate diagnosis and assessment. The image quality of PET is largely determined by two factors: the dosage of radionuclide (tracer) injected within a patient and imaging acquisition time. Although the latter factor could be easily increased, a long data acquisition time could lead to more motion-related artifacts and is not applicable for radiotracers with a short half-life. The former factor can be easily understood, as a higher dose generates more detected events and thus yields images with higher PSNR. However, due to concerns about internal radiation exposure in patients, efforts to reduce the currently used clinical dose while preserving PET image quality, without compromising the ability to make an accurate diagnosis, have been actively pursued by scientists.
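As a concrete illustration of the PSNR measure mentioned above, the following is a minimal sketch; the exact convention of [16] (e.g., the choice of peak value) is an assumption here, with the peak defaulting to the reference image's maximum intensity.

```python
import numpy as np

def psnr(reference, test, peak=None):
    """Peak signal-to-noise ratio in dB; `peak` defaults to the
    reference image's maximum intensity (an assumed convention)."""
    reference = np.asarray(reference, dtype=np.float64)
    test = np.asarray(test, dtype=np.float64)
    mse = np.mean((reference - test) ** 2)
    if mse == 0:
        return float("inf")          # identical images
    if peak is None:
        peak = reference.max()
    return 10.0 * np.log10(peak ** 2 / mse)

ref = np.array([[0.0, 100.0], [200.0, 255.0]])
noisy = ref + 1.0                    # uniform error of one intensity unit
print(round(psnr(ref, noisy), 2))    # 48.13
```

A higher-dose acquisition yields more detected events and hence smaller error against the noise-free signal, which raises the PSNR.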
In Figure 1, the image (A) on the left is an example of a low-dose PET image, and a corresponding standard clinical dose PET image (B) is on the right. As Figure 1 illustrates, both the quality and PSNR of a low-dose PET image are inferior to those of a standard clinical dose PET image. The difference is visibly noticeable, for example, in the contrast between image (A) and image (B). The quality of the low-dose PET image is further decreased due to various factors during the process of acquisition and transmission. Consequently, the tracer dosage and process variability affect the accurate diagnosis of diseases/disorders. In practice, in order to obtain a high quality PET image, a higher dosage of radionuclide (tracer) needs to be injected into the patient's body. As a result, it will inevitably increase the patient's exposure to radiation. Indeed, as with computed tomography (CT) imaging, the total exposure dose to a patient should be considered [25], particularly when repeated examinations are required for therapeutic monitoring. Therefore, reducing and/or limiting the amount of radiation exposure is very important to patients and caregivers, especially for children and younger patients [31].
In the process of acquiring PET (e.g., via scanning), image reconstruction benefits from the use of anatomical information obtained with other imaging modalities, such as MRI [41]. Recently, one combined PET/MRI imaging system has been developed as an alternative to PET/CT, which has matured into an important diagnostic tool [31]. One advantage of this dual-modality imaging system over the stand-alone PET is that, in a PET/MRI imaging system, the excellent soft-tissue contrast of MRI complements the molecular information of PET. This method, however, is not used in prediction or modeling of standard, high-dose PET images.
Although many methods have been proposed for improving the PET image quality, most of them focused on low-dose/high-dose PET itself, such as partial volume correction [5, 21], motion correction [12, 28], and attenuation correction [1 , 3, 19].
Accordingly, a need exists for predicting and generating an estimated standard clinical dose PET image, for example, using a low-dose PET image or the combination of low-dose PET and magnetic resonance (MR) images (e.g., MR imaging (MRI)). In some aspects, a combined PET/MRI system provides the benefits (i.e., by scanning both low-dose PET and MRI images simultaneously) for generating standard clinical dose PET images using a combination of low-dose PET and MRI images to predict clinical dose PET values.
SUMMARY
The subject matter described herein discloses methods, systems, and computer readable media for predicting high-dose positron emission tomography (PET) values for generating estimated standard, high-dose PET images.
An exemplary method for predicting and/or generating an estimated high-dose PET image without injecting a high-dose radiotracer into a patient for a PET scan is provided. The method includes extracting appearance features from at least one magnetic resonance (MR) image, extracting appearance features from at least one low-dose PET image, and generating a predicted (estimated) high-dose PET image using the appearance features of the at least one MR image and the at least one low-dose PET image.
An exemplary system for predicting and generating an estimated high-dose PET image without performing a high-dose PET scan or without injecting a high-dose radiotracer into a patient is provided. The system includes a hardware computing processor and a high-dose PET Prediction Module (HDPPM) implemented using the processor. The HDPPM is configured to extract appearance features from each of an MR image and at least one corresponding low-dose PET image and generate a high-dose PET image using the appearance features of the at least one MR image and the at least one low-dose PET image.
A non-transitory computer readable medium is also provided. The non-transitory computer readable medium has stored thereon executable instructions that when executed by a processor of a computer control the computer to perform steps. The steps include extracting appearance features from at least one MR image, extracting appearance features from at least one low-dose PET image, and generating an estimated high-dose PET image using the appearance features of the at least one MR image and the at least one low-dose PET image.
The subject matter described herein can be implemented in software in combination with hardware and/or firmware. For example, the subject matter described herein can be implemented in software executed by one or more processors. In one exemplary implementation, the subject matter described herein may be implemented using a non-transitory computer readable medium having stored thereon computer executable instructions that when executed by the processor of a computer control the computer to perform steps.
Exemplary computer readable media suitable for implementing the subject matter described herein include non-transitory devices, such as disk memory devices, chip memory devices, programmable logic devices, and application specific integrated circuits. In addition, a computer readable medium that implements the subject matter described herein may be located on a single device or computing platform or may be distributed across multiple devices or computing platforms.
As used herein, the term "node" refers to a physical computing platform or device including one or more processors and memory.
As used herein, the term "module" refers to hardware, firmware, or software in combination with hardware and/or firmware for implementing features described herein.
The terms "low-dose" and "high-dose" as used herein refer to the dosage (e.g., quantity, amount, or measurement) of a radionuclide or radioactive tracer, injected into a patient prior to PET imaging. For brain [18F]-fluoro-2-deoxy-D-glucose (18F-FDG) imaging, the standard, high-dose is approximately 5 millicuries (mCi). Similarly, for body (i.e., non-brain) 18F-FDG imaging, the standard, high-dose is approximately 10 mCi. For brain imaging, a target or low-dose is any amount less than approximately 2.5 mCi, which is less than approximately 1/2 of the standard, high-dose. For non-brain, body imaging, a target or low-dose is any amount less than approximately 5 mCi, which is also less than 1/2 of the standard, high-dose. Note that while the clinical or standard dose for different tracers may vary, the above quantitative statements remain valid. The terms "target-dose" and "low-dose" are synonymous, and the terms "standard-dose" and "high-dose" are synonymous.
The terms "standard-dose", "clinical dose", and "high-dose" as used herein are synonymous, and refer to the conventional dosage amounts clinically accepted by the medical community, which generate more detected events and obtain PET images having a higher PSNR.
BRIEF DESCRIPTION OF THE DRAWINGS
Preferred embodiments of the subject matter described herein will now be explained with reference to the accompanying drawings, wherein like reference numerals represent like parts, of which:
Figure 1 illustrates exemplary low-dose and high-dose positron emission tomography (PET) images;
Figure 2 is a schematic diagram illustrating methods, systems, and computer readable media for predicting high-dose PET values for generating high-dose PET images according to an embodiment of the subject matter described herein;
Figure 3 is a schematic diagram illustrating an exemplary method for predicting high-dose PET values for generating high-dose PET images according to an embodiment of the subject matter described herein;
Figure 4 is another exemplary method for predicting high-dose PET values for generating high-dose PET images according to an embodiment of the subject matter described herein;
Figures 5 to 9 illustrate graphical information regarding prediction of high-dose PET images according to embodiments of the subject matter described herein;
Figure 10 is an illustration of prediction results regarding prediction of high-dose PET images according to embodiments of the subject matter described herein;
Figures 1 1 and 12 illustrate graphical information regarding prediction of high-dose PET images according to embodiments of the subject matter described herein;
Figure 13 is a block diagram of an exemplary system for predicting high-dose PET values for generating an estimated high-dose PET image according to an embodiment of the subject matter described herein;
Figure 14 is an example of dynamic image acquisition acquired via methods, systems, and computer readable media described herein;
Figure 15 is a block diagram illustrating an exemplary method for predicting high-dose PET values for generating an estimated high-dose PET image according to an embodiment of the subject matter described herein;
Figure 16 is a schematic block diagram illustrating an overview of a model learning methodology;
Figure 17 is an overview of a constructed regression forest (RF); and Figure 18 is a schematic block diagram and specific example of a decision tree for the regression model used in predicting and generating estimated high-dose PET images.
DETAILED DESCRIPTION
The subject matter described herein discloses methods, systems, and computer readable media for predicting high-dose positron emission tomography (PET) values for generating an estimate of standard, high-dose PET images. In some aspects, a regression forest (RF) based framework is used in predicting and generating an estimate of a standard, high-dose PET image by using both low-dose PET and MRI images. The prediction method includes two approaches. One approach includes prediction of a standard, high-dose PET image by tissue-specific regression forest (RF) models with the image appearance features extracted from both low-dose PET and MRI images. Another approach includes incremental refinement of a predicted standard-dose PET image by iteratively estimating the image difference between the current prediction and the target standard-dose PET. By incrementally adding the estimated image difference towards the target standard-dose PET, methods and systems described herein are able to gradually improve the quality of predicted standard, high-dose PET.
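The incremental refinement loop described above can be sketched as follows; the residual estimators are hypothetical stand-ins for the trained difference models ("Model 2" to "Model N" in Figure 2), which in the actual approach are learned regression forests.

```python
def refine_prediction(initial_prediction, difference_models):
    """Iteratively add an estimated image difference (residual) to the
    current prediction, moving it toward the target standard-dose PET.
    `difference_models` are hypothetical regressors mapping the current
    prediction to an estimated residual."""
    prediction = initial_prediction
    for model in difference_models:
        prediction = prediction + model(prediction)
    return prediction

# Toy scalar example: each stand-in model estimates half the remaining
# gap to a target intensity of 100, so predictions approach the target.
target = 100.0
halve_gap = lambda p: 0.5 * (target - p)
print(refine_prediction(80.0, [halve_gap, halve_gap, halve_gap]))  # 97.5
```

Each pass narrows the gap between the current prediction and the target, which is the sense in which the quality of the predicted standard, high-dose PET is gradually improved.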
The terms "low-dose" and "high-dose" as used herein refer to the dosage (e.g., quantity, amount, or measurement) of radionuclide or radioactive tracer, injected into a patient prior to and/or during PET imaging. Methods, systems, and computer readable media herein can minimize radiation exposure in patients, by predicting higher quality high-dose PET image values and, thereby, generating an estimated high-dose PET image from a combination of low-dose PET and MR images.
The terms "standard-dose" and "high-dose" as used herein are synonymous, and refer to the conventional dosage amounts clinically accepted by the medical community, which generate sufficient detected events and obtain clinically acceptable PET images having a sufficiently high PSNR. Standard, high-dose images are obtained as a result of injecting a patient with the standard, high-dose quantity or medically accepted amount of tracers. For brain [18F]-fluoro-2-deoxy-D-glucose (18F-FDG) imaging, the standard, high-dose is approximately 5 millicuries (mCi). Similarly, for body (i.e., non-brain) 18F-FDG imaging, the standard, high-dose is approximately 10 mCi. Such amounts may also refer to any other clinically accepted dose
calculated in view of a patient's body weight and/or a body mass index (BMI). Furthermore, while clinical or standard dose for different tracers may vary, the above quantitative statements remain valid.
The terms "target-dose" and "low-dose" as used herein are synonymous, and refer to a minimized dose of tracer injected into a patient for PET imaging. The low-dose image is then used, in part, to predict a high-dose PET image. The target, low-dose amount of tracer injected into a patient is advantageous in minimizing radiation exposure. For brain imaging, the target, low-dose or dosage amount of tracer injected into a patient is anywhere from approximately 1/2 to 1/10, or less, of the standard or high-dose (i.e., at least less than 50% of the standard, high-dose). That is, for brain imaging, the target, low-dose is approximately 2.5 mCi or less, approximately 1.25 mCi or less, or approximately 0.5 mCi or less. For non-brain (i.e., body) imaging, the target, low-dose or dosage amount of tracer injected into a patient is also at least less than 50% of the standard, high-dose. Thus, for body imaging, the target, low-dose is approximately 5 mCi or less, approximately 2.5 mCi or less, or approximately 1 mCi or less. While the above examples focus on the use of FDG, it should be noted that the approach may be generalized to any other radiotracers.
The term "voxel" is defined as a value or position on a regular grid in three-dimensional (3D) space. Voxel is a combination of the terms "volume" and "pixel" where pixel is a combination of "picture" and "element". A voxel is analogous to a texel, which represents 2D image data in a bitmap (which is sometimes referred to as a pixmap). The position of a voxel is inferred based upon its position relative to other voxels (i.e., its position in the data structure that makes up a single volumetric image) in an image.
Referring now to Figure 2, a schematic diagram is shown for illustrating the approaches for predicting standard, high-dose PET images as described herein. In one approach, a model, generally designated "Model 1 " is generated. Model 1 may be determined or generated from data obtained from a plurality of MRI, low-dose PET images, and high, standard-dose PET images. Upon obtaining information from the plurality of MRI, low-dose PET, and high-dose PET images, a prediction model can be calculated such that
the high, standard-dose PET images can be predicted using the model built or trained from the data set.
As Figure 2 further illustrates, tissue-specific models can be built using low-dose PET and MRI images. In a further approach, the first model or "Model 1" can be refined, and iteratively updated into a refined model designated "Model 2" to "Model N" (e.g., where N is a whole number integer > 2), via estimating the image difference between the predicted and actual high-dose PET image.
1. Introduction
Positron emission tomography (PET) is a nuclear medical imaging technology that produces 3D images reflecting tissue metabolic activity in a human body. PET has been widely used in various clinical applications, such as diagnosis of tumors, diseases, and diffuse brain disorders. High quality PET images play an essential role in diagnosing diseases/disorders and assessing the response to therapy. In practice, in order to obtain high quality PET images, a standard or high-dose radionuclide (tracer) needs to be injected into the patient's living body. As a result, the risk of radiation exposure increases. Recently, researchers have attempted to acquire low-dose PET images, as opposed to high-dose images, to minimize the radiation risk, at the cost of reduced image quality or lengthened imaging acquisition time.
As described herein, a regression forest (RF) based framework is used for generating an estimated standard or high-dose PET image by using values predicted from a low-dose PET image and its corresponding magnetic resonance imaging (MRI) image. Exemplary embodiments herein include prediction of standard-dose PET images of brain tissue using simultaneously acquired low-dose PET/MRI images. Prediction of standard-dose PET images for any non-brain tissue (e.g., body tissue) can also be provided. Systems and methods herein are not limited to predicting standard-dose PET images of the brain; rather, systems and methods herein can be used to predict standard-dose PET images of any anatomical member of a patient's body, or tissue thereof, such as of the foot, knee,
back, shoulder, stomach, lung, neck, etc. In some embodiments, any standard-dose PET scan (even whole body scans) can be predicted using systems and methods described herein. In some aspects, prediction methods, systems, and computer readable media herein are used to transform MR and low-dose PET images, or data obtained therefrom, into a high-dose PET image.
The proposed method includes two steps. First, based on the segmented tissues (i.e., cerebrospinal fluid (CSF), gray matter (GM), and white matter (WM) in the example) in the MRI image, the appearance features for each patch in the brain image from both low-dose PET and MRI images are extracted to build tissue-specific models that can be used to predict standard, high-dose PET images. Second, a refinement strategy via estimating the predicted image difference is used to further improve the prediction accuracy. In one exemplary embodiment, the proposed approach has been evaluated on a dataset consisting of eight (8) subjects with MRI, low-dose PET, and high-dose PET images, using leave-one-out cross-validation. The proposed method is also compared with the sparse representation (SR) based method. Both qualitative and quantitative results indicate better performance using methods, systems, and computer readable media described herein.
2. Regression Forest (RF)
Random forest, often called a regression forest (RF) when applied to non-linear regression tasks, was originally proposed by Breiman [4]. It consists of multiple binary decision trees, with each tree trained independently with random features and thresholds. The final prediction of a random forest is the average over the predictions of all its individual trees. As an ensemble method, it has proved to be a powerful tool (e.g., training tool) in the machine learning field, and has recently gained much popularity on both classification and regression problems, such as remote sensing image classification [29, 15], medical image segmentation [22, 27, 46], human diseases/disorders diagnosis [2, 13, 39], facial analysis [6] and so on.
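The ensemble averaging described above can be sketched as follows; the per-tree predictors are toy stand-ins for independently trained decision trees, each mapping a feature vector to the value stored at the leaf the vector reaches.

```python
import numpy as np

def forest_predict(trees, feature_vector):
    """The forest's output is the average of all per-tree predictions;
    each tree here is any callable mapping features to a value."""
    return float(np.mean([tree(feature_vector) for tree in trees]))

# Toy stand-ins for three independently trained decision trees.
trees = [lambda f: 100.0, lambda f: 110.0, lambda f: 120.0]
print(forest_predict(trees, np.zeros(8)))  # 110.0
```

Because each tree is trained with random features and thresholds, the individual predictions differ; averaging them reduces the variance of the final estimate, which is what makes the ensemble effective.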
Similar to other supervised models, the use of regression forest involves both training and testing stages. In the training stage, given a set of training data {(f_i, t_i) | i = 1, ..., N}, where f_i and t_i indicate the feature vector and regression target of the i-th training sample, regression forest aims to learn a non-linear model for predicting the target t based on the input features f. In the regression forest, each binary decision tree is trained independently. A binary decision tree consists of two types of nodes, namely split nodes (non-leaf nodes) and leaf nodes. The split node is often associated with a decision stump function g(f; j, θ) = π_j(f) < θ, where π_j(f) indicates the response of the j-th feature in the input feature vector f, and θ is a threshold.
The optimal combination of j and θ is learned by maximizing the average variance decrease in each dimension of the regression target after splitting. The leaf node stores the average regression target of training samples falling into this node. The training of a binary decision tree starts with finding the optimal split at the root node, and recursively proceeds on child nodes until either the maximum tree depth is reached or the number of training samples is too small to split. In the testing stage, a new testing sample is pushed through each learned decision tree, starting at the root node. At each split node, the associated decision stump function g(f; j, θ) is applied to the testing sample. If the result is false, then this testing sample is sent to the left child; otherwise, it is sent to the right child. Once the testing sample reaches a leaf node, the average regression target stored in that leaf node will be taken as the output of this binary decision tree. The final prediction value of the entire forest is the average of the outputs from all binary decision trees.
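A minimal sketch of the split learning and routing just described, reduced to a single depth-1 tree (a decision stump) and an exhaustive search; a real forest recurses to deeper trees and samples random features and thresholds per tree. The "go right when the feature is at or above the threshold" routing convention follows the Figure 18 example rather than a definitive implementation.

```python
import numpy as np

def best_stump(F, t):
    """Learn one split (feature index j, threshold theta) by maximizing
    the variance decrease of the regression target t after splitting.
    F: (N, D) feature matrix; t: (N,) targets. Returns
    (j, theta, left_mean, right_mean)."""
    _, D = F.shape
    best, best_score = None, -np.inf
    for j in range(D):
        for theta in np.unique(F[:, j]):
            right = F[:, j] >= theta
            left = ~right
            if not left.any() or not right.any():
                continue
            # variance decrease = parent variance - weighted child variance
            score = t.var() - (left.mean() * t[left].var()
                               + right.mean() * t[right].var())
            if score > best_score:
                best_score = score
                best = (j, theta, t[left].mean(), t[right].mean())
    return best

def stump_predict(stump, f):
    """Route a sample through the single split and return the leaf mean."""
    j, theta, left_mean, right_mean = stump
    return right_mean if f[j] >= theta else left_mean

F = np.array([[0.0], [1.0], [2.0], [3.0]])
t = np.array([10.0, 10.0, 20.0, 20.0])
stump = best_stump(F, t)       # splits at feature 0, threshold 2.0
print(stump_predict(stump, np.array([2.5])))  # 20.0
```

The split at threshold 2.0 separates the two target groups perfectly, so each leaf stores a pure mean; in a full tree the same criterion is applied recursively to each child.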
3. Methods
The goal of the RF-based approach is to predict the intensity of each voxel v in a new subject. For this purpose, as shown in Figure 2, the approach consists of two major steps, the initial standard-dose PET prediction step and the incremental refinement step via estimating image differences. Both steps adopt regression forest as the non-linear prediction model. Each step is discussed in detail below.
3.1. Prediction of an initial standard-dose PET image by tissue-specific regression forests
Due to the large volume of a human brain (e.g., usually with millions of image voxels), it is intractable to learn a global regression model for predicting the high-dose PET image over the entire brain. Many studies [9, 36] have shown that learning multiple local models improves the prediction performance, compared with a single global model. Thus, one RF can be learned for each type of tissue. In one exemplary embodiment, brain tissue models are learned where one model corresponds to white matter (WM), one corresponds to gray matter (GM), and/or one corresponds to cerebrospinal fluid (CSF). Since the appearance variation within each brain tissue is much less than that across different brain tissues, tissue-specific regression forest models yield more accurate predictions than a global regression forest model (trained for the entire brain). Similar to most learning-based methods, the proposed method consists of a training stage and a testing stage, as follows. Non-brain (e.g., body) tissue-specific models can also be learned, trained, and/or provided.
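Dispatching each voxel to the model trained for its tissue type can be sketched as below; the label names and per-tissue regressors are illustrative stand-ins for the trained tissue-specific forests.

```python
import numpy as np

def predict_voxel(features, tissue_label, tissue_models):
    """Apply the regression model trained for this voxel's tissue type.
    `tissue_models` maps a segmentation label to a trained regressor
    (any callable here); labels and models are illustrative."""
    return tissue_models[tissue_label](features)

# Toy per-tissue stand-ins for trained regression forests.
models = {"WM": lambda f: 1.2 * f.mean(),
          "GM": lambda f: 1.5 * f.mean(),
          "CSF": lambda f: 0.5 * f.mean()}
patch_features = np.full(27, 10.0)   # e.g., a flattened 3x3x3 patch
print(predict_voxel(patch_features, "GM", models))  # 15.0
```

Keeping one model per tissue type mirrors the observation above: within-tissue appearance varies less than across-tissue appearance, so each local model faces an easier regression problem.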
In the training stage, training data consists of MRI, low-dose PET, and standard-dose PET from different training patients (i.e., "training subjects"). Each training subject has one set of MRI images (for example, T1-weighted images), two sets of low-dose PET images (that are scanned separately, one after the other, with details explained in Section 4.1 below, Datasets and preprocessing), and one set of corresponding standard-dose PET images. Before learning the tissue-specific regression forests, the four images of all training subjects (i.e., one MR image, two low-dose PET images, and one high-dose PET image) are linearly aligned onto a common space by a flexible image registration toolbox ("FLIRT") [8]. Then, a brain segmentation method [47] is adopted to segment the entire brain region into WM, GM, and CSF for each training subject, based on the respective MRI image. To train the regression forest for each tissue type, a set of training samples/points
{(fi,fi) |i = !, ·■■, N] is randomly sampled (e.g., where it and tt indicate the feature vector and regression target, respectively) within this tissue region for every training subject. Figure 3 illustrates extracting training data (features and response) to train the tissue-specific regression forests for predicting the initial standard-dose PET image.
As shown in Figure 3, for the -th sample/point at positions e E3 , extraction of the local intensity patches Pi, P2, and P3 from the MRI and low- dose PET images centered at position v for serving as the input features ff (ΐ = 1,■·· , #) in the regression forest. The voxel intensity of the corresponding standard-dose PET image at the position v P4 is taken as the regression target tj(i = 1, ··· , Ν). In this way, a prediction node (e.g., 100, Figure 13) can execute a model M stored in a memory element for learning three tissue-specific regression forests in the training stage, which will be in charge of predicting and generating an estimated high-dose PET image within the respective tissue regions. Prediction node(s) described herein include a computing platform having a hardware processor and memory element configured to execute steps for predicting and generating an estimated high-dose PET image without actually having to perform a high- dose PET scan.
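The patch-sampling and per-tissue training procedure above can be sketched as follows. This is an illustrative reconstruction using scikit-learn's random forest; the names `train_tissue_rf` and `extract_patch`, and the use of `RandomForestRegressor`, are assumptions rather than the patent's actual implementation:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

def extract_patch(img, v, r=4):
    """Flatten the (2r+1)^3 patch of `img` centered at voxel v (9x9x9 for r=4)."""
    x, y, z = v
    return img[x - r:x + r + 1, y - r:y + r + 1, z - r:z + r + 1].ravel()

def train_tissue_rf(mri, low_pet, high_pet, tissue_mask, n_samples=2000, r=4, seed=0):
    """Train one regression forest for a single tissue type (WM, GM, or CSF)."""
    rng = np.random.default_rng(seed)
    # Sample voxel positions inside the tissue, away from the volume border.
    xs, ys, zs = np.nonzero(tissue_mask)
    keep = ((xs >= r) & (xs < mri.shape[0] - r) &
            (ys >= r) & (ys < mri.shape[1] - r) &
            (zs >= r) & (zs < mri.shape[2] - r))
    idx = rng.choice(np.flatnonzero(keep), size=n_samples, replace=True)
    feats, targets = [], []
    for i in idx:
        v = (xs[i], ys[i], zs[i])
        # Features: concatenated MRI and low-dose PET patches (P1..P3 in Fig. 3).
        feats.append(np.concatenate([extract_patch(mri, v, r),
                                     extract_patch(low_pet, v, r)]))
        # Target: standard-dose PET intensity at the same voxel (P4 in Fig. 3).
        targets.append(high_pet[v])
    rf = RandomForestRegressor(n_estimators=10, max_depth=15,
                               min_samples_leaf=5, random_state=seed)
    rf.fit(np.asarray(feats), np.asarray(targets))
    return rf
```

At test time, the corresponding tissue's forest is applied voxel-by-voxel to the same concatenated-patch features.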
In the testing stage, given a testing subject with both MRI and low-dose PET images, the MRI and low-dose PET images are first linearly aligned onto a common space (as defined in the training stage) by using FLIRT [8], and the MRI image is automatically segmented into the three brain tissues [47]. Then, the high-dose PET image can be predicted in a voxel-wise manner by using the local image appearance information from the aligned MRI and low-dose PET images. Specifically, for each voxel in the unknown standard-dose PET image, similar to the training stage as shown in Figure 3, the prediction node can extract the local intensity patches at the same location from both MRI and low-dose PET images. Based on the extracted intensity patches and the tissue label at this location, the prediction node can apply the corresponding tissue-specific regression forest to predict the standard-dose PET value for this voxel. By iterating over all image voxels, a standard-dose PET image can be
predicted. The initial prediction framework, with both training and testing (prediction) stages, is summarized in Table 1 as follows:
TABLE 1
Sub-framework 1: Initially predicting high-dose PET image by using tissue-specific regression forests (RF).
Given: MRI, low-dose PET, and high-dose PET images.
Training:
1 ) For all given training subjects, segment the respective MRI images into white matter (WM), gray matter (GM), and cerebrospinal fluid (CSF).
2) Based on the tissue segmentation results, sample the training samples {(f_i, t_i) | i = 1, ···, N} for each tissue label. Specifically, for each tissue, extract the patch-based appearance features f_i (i = 1, ···, N) from the low-dose PET and MRI images, and extract the voxel intensity t_i (i = 1, ···, N) from the high-dose PET image.
3) Take f_i (i = 1, ···, N) as input features and t_i (i = 1, ···, N) as the regression response, to independently construct the three kinds of tissue-specific RF (models).
Testing:
1) For a given new subject, segment the respective MRI image into WM, GM, and CSF.
2) For every voxel, extract the patch-based appearance features fnew from both low-dose PET and MRI images.
3) Based on the segmented tissue label, apply the corresponding tissue-specific RF (models) on f_new to get the predicted high-dose PET value t̂ for each voxel, from which a high-dose PET image can be generated.
3.2. Incremental refinement by estimating image difference
Motivated by the success of auto-context models [10, 35, 40], an incremental refinement framework is also set forth herein, for iteratively improving the quality of the predicted standard-dose PET image. To accomplish this, a prediction node learns a sequence of tissue-specific RFs for gradually (i.e., incrementally or iteratively) minimizing the image
difference between the predicted image and the target (i.e., actual) standard-dose PET image obtained during the training stage. In particular, the tissue-specific RFs at iteration k aim to estimate the image difference between the standard-dose PET image predicted by the previous k iterations and the target (actual) standard-dose PET image. Specifically, similar to training the tissue-specific RFs for predicting the initial standard-dose PET image (as described in Subsection 3.1 above), first randomly sample a set of training samples/points {(f_i, t_i^diff) | i = 1, ···, N} within each tissue region for every training subject.
Figure 4 illustrates extracting training data (features and response) to train the tissue-specific RFs for predicting (estimating) the image difference. As shown in Figure 4, for the i-th sample/point at position v ∈ R³, the local intensity patches P1, P2, and P3 are extracted from both MRI and low-dose PET images centered at position v to serve as the input features f_i (i = 1, ···, N) in the RF. The voxel value of the real difference map at position v is taken as the regression target t_i^diff (i = 1, ···, N). In this way, a prediction node of a special purpose computing platform as described in detail below can learn three tissue-specific regression forests during the training stage, which are configured to predict (estimate) the image difference within a respective tissue region. By adding the estimated image difference on top of the previously predicted standard-dose PET image, the new updated prediction may be closer to the target standard-dose PET image, thus improving the prediction accuracy.
In the testing stage, given a new subject with MRI and low-dose PET images, the learned tissue-specific RFs (with models shown in Figure 2) can be applied sequentially to obtain a final predicted standard-dose PET image. Specifically, in the first iteration (e.g., to obtain "Model 1 ", Figure 2), only the technique described in Subsection 3.1 is adopted to predict an initial standard-dose PET image. Then, the tissue-specific regression forests in the next iterations (e.g., Model 2 to Model N, Figure 2) will be used to sequentially estimate the image difference between the current prediction and the target standard-dose PET image. The estimated image differences by the later regression forests will be sequentially added onto the initially
predicted high-dose PET image for incremental refinement. The incremental refinement framework can further boost the prediction accuracy of tissue- specific regression forests.
The incremental refinement framework with both training and testing (prediction) stages is summarized in Table 2 below.
TABLE 2
Sub-framework 2: Incremental refinement via estimating the image difference
Given: MRI, low-dose PET, and previously predicted standard-dose PET images.
Training:
1) For all given training subjects, segment the respective MRI images into WM, GM, and CSF.
2) Compute the difference t_i^diff (i = 1, ···, N) between the original (i.e., actual) standard-dose PET image (ground truth) and the standard-dose PET image predicted in the previous step.
3) Based on the tissue segmentation results, sample the training samples {(f_i, t_i^diff) | i = 1, ···, N} for each brain tissue. Specifically, for each brain tissue, extract the patch-based appearance features f_i (i = 1, ···, N) from the low-dose PET and MRI images, and extract the voxel value t_i^diff (i = 1, ···, N) from the difference map computed in 2).
4) Take f_i (i = 1, ···, N) as input features and t_i^diff (i = 1, ···, N) as the regression response, to independently construct the three kinds of tissue-specific regression forests (models).
Testing:
1 ) For a given new subject, segment the corresponding MRI image into WM, GM, and CSF.
2) For every voxel, extract the patch-based appearance features fnew from both low-dose PET and MRI images.
3) Based on the segmented tissue labels, apply the corresponding tissue- specific regression forests (models) on fnew to get the predicted difference value fdi Zfor each voxel.
Once the image difference is predicted, it can be added onto the previously predicted standard-dose PET image for incremental refinement. The procedure can be repeated until either the incremental refinement is negligible or a specified iteration number is reached.
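The stopping rule described above (repeat until the incremental refinement is negligible or an iteration limit is reached) can be sketched as follows; `predict_diff` and `difference_models` are hypothetical placeholders for the learned tissue-specific RFs of each refinement layer:

```python
import numpy as np

def refine_prediction(initial_pred, difference_models, predict_diff,
                      max_iters=2, tol=1e-4):
    """Iteratively add estimated difference maps to the initial prediction.

    `difference_models` holds the per-layer models (Model 2 ... Model N);
    `predict_diff(model, image)` is assumed to return the voxel-wise
    difference map estimated from the current predicted image.
    """
    pred = initial_pred.copy()
    for k in range(max_iters):
        diff = predict_diff(difference_models[k], pred)
        pred += diff
        # Stop early once the incremental refinement becomes negligible.
        if np.abs(diff).mean() < tol:
            break
    return pred
```

The text uses two refinement iterations (see the parameter list in Section 4), matching `max_iters=2` here.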
4. Experimental results
In this section, the performance of the approach using a dataset consisting of eight (8) subjects is discussed. Leave-one-out cross-validation, which has been adopted in numerous papers [20, 37, 43], can be used to evaluate the performance of the outlined approach. Specifically, at each leave-one-out case, seven (7) subjects are selected as training images, and the remaining one is used as a testing image. This process is repeated until each image is taken as the testing image once. In both the training and testing stages, all images from each subject are linearly aligned onto a common space via FLIRT [8]. The dataset and preprocessing steps are described in detail in the following subsection. The contribution of each element is also investigated, i.e., the effect of MRI to help low-dose PET for predicting standard-dose PET image, the effect of tissue-specific models, the effect of image difference estimation for incremental refinement, and the effect of combining more low-dose PETs with MRI.
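The leave-one-out protocol described above can be sketched with scikit-learn's `LeaveOneOut` splitter (an assumed stand-in; the original work does not specify its tooling):

```python
from sklearn.model_selection import LeaveOneOut

def loo_folds(n_subjects):
    """Enumerate leave-one-out folds so each subject is the test image once."""
    folds = []
    for train_idx, test_idx in LeaveOneOut().split(list(range(n_subjects))):
        folds.append((list(train_idx), list(test_idx)))
    return folds
```

For the eight-subject dataset this yields eight folds, each training on seven subjects and testing on the remaining one.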
All experiments include the following parameters: patch size: 9x9x9; the number of trees in a forest: 10; the number of randomly selected features: 1000; the maximum tree depth: 15; the minimum number of samples at each leaf: 5; and the number of iterations in incremental refinement is 2.
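As a hedged illustration, the listed parameters might map onto scikit-learn's `RandomForestRegressor` as below. sklearn is an assumed stand-in for the patent's own forest implementation; note that `max_features=1000` is valid here only because two 9×9×9 patches give 2 × 729 = 1458 candidate features per sample:

```python
from sklearn.ensemble import RandomForestRegressor

# Experimental parameters from the text, mapped onto sklearn's RF.
PATCH_SIZE = (9, 9, 9)        # local intensity patch
N_REFINEMENT_ITERS = 2        # iterations of incremental refinement

rf = RandomForestRegressor(
    n_estimators=10,          # number of trees in a forest
    max_features=1000,        # number of randomly selected features per split
    max_depth=15,             # maximum tree depth
    min_samples_leaf=5,       # minimum number of samples at each leaf
)
```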
4.1. Datasets and preprocessing
The method was evaluated on a dataset consisting of eight (8) patients. Patients were chosen from a group referred for PET scans for clinical indications. In each case, the diagnosis was unknown and not used in the analysis. Patients were administered an average of 203 megabecquerel (MBq) (range: 191 MBq to 229 MBq) of an exemplary radiotracer, such as 18F-fluorodeoxyglucose (18F-FDG). The first PET scan
(the "standard-dose", a.k.a. the "high-dose" scan) was performed for a full 12 minutes within sixty minutes of injection, in accordance with standard protocols. Immediately after, a second PET dataset was acquired in list-mode for 12 minutes, which was broken up into separate three-minute sets (the "low-dose" scans). Note that the reduced acquisition time at standard dose serves as a surrogate for standard acquisition time at reduced dose. In this case, the "low-dose" is approximately 25% of the standard dose.
In processing, four images for each subject are used: one MRI, two low-dose PETs, and one standard-dose PET. All data was acquired on a Siemens Biograph mMR (a hybrid MR-PET or PET-MR system). Of note, for all subjects, the low-dose PET image sets are completely separate acquisitions from the standard-dose PET image sets. Moreover, each of the low-dose PET images is a separate acquisition (for simulating image acquisition at different time points). Meanwhile, a T1-weighted MR image was also scanned; the T1-weighted MR image was affine-aligned to the PET image space. All PET datasets, i.e., standard-dose PET and low-dose PETs, were reconstructed using standard methods from the manufacturer, including MRI-based attenuation correction using the Dixon sequence and scatter correction. Iterative reconstruction was performed with the OSEM algorithm [18].
All images were preprocessed according to the following. 1) Linear image alignment: four images (MRI, two low-dose PETs, and standard-dose PET) of each subject were linearly aligned onto a common space by FLIRT [8]; 2) Skull stripping: non-brain tissue (e.g., CSF) parts were removed from the aligned images [38]; 3) Intensity normalization: each modality image was normalized via histogram matching; 4) Tissue segmentation: WM, GM and CSF were segmented from each skull-stripped MRI brain image [47].
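Step 3 above (intensity normalization via histogram matching) can be sketched in a few lines of NumPy. This is an illustrative stand-in, not the toolchain actually used in the study:

```python
import numpy as np

def histogram_match(source, reference):
    """Map `source` intensities so its histogram matches `reference`'s.

    A minimal sketch of the intensity-normalization step: each source voxel
    is ranked, then replaced by the reference value at the same quantile.
    """
    src_flat = source.ravel()
    src_sorted = np.sort(src_flat)
    ref_sorted = np.sort(reference.ravel())
    # Quantile of each source voxel within its own distribution.
    quantiles = np.searchsorted(src_sorted, src_flat, side="left") / src_flat.size
    # Look up the reference intensity at the same quantile.
    matched = np.interp(quantiles, np.linspace(0, 1, ref_sorted.size), ref_sorted)
    return matched.reshape(source.shape)
```

The output image takes its intensity distribution from the reference while preserving the spatial ordering of the source.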
4.2. Quantitative evaluation of predicted standard-dose PET image
For quantitatively evaluating the performance of the method, two widely used metrics, i.e., normalized mean squared error (NMSE) [7] and peak signal-to-noise ratio (PSNR) [49], are employed to measure the quality of predicted standard-dose PET image, with respect to the ground truth (i.e.,
original, standard-dose PET image). The NMSE and PSNR are defined according to Equations 1 and 2 as follows:
Equation (1): NMSE = ||H − Ĥ||₂² / ||H||₂²
Equation (2): PSNR = 10 log₁₀( L²·M / ||H − Ĥ||₂² )
In the above equations, H is the ground-truth standard-dose PET image, Ĥ is the predicted high-dose PET image, L is the maximal intensity range of images H and Ĥ, and M is the total number of voxels in the image. In general, a good algorithm provides lower NMSE and higher PSNR.
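Under the definitions above, the two metrics can be computed as follows (interpreting L as the combined intensity range of H and Ĥ is an assumption about the exact convention used):

```python
import numpy as np

def nmse(h_true, h_pred):
    """Normalized mean squared error (Eq. 1): ||H - H_hat||^2 / ||H||^2."""
    return np.sum((h_true - h_pred) ** 2) / np.sum(h_true ** 2)

def psnr(h_true, h_pred):
    """Peak signal-to-noise ratio (Eq. 2): 10 log10(L^2 * M / ||H - H_hat||^2),
    where L is the maximal intensity range and M the number of voxels."""
    L = max(h_true.max(), h_pred.max()) - min(h_true.min(), h_pred.min())
    M = h_true.size
    return 10 * np.log10(L ** 2 * M / np.sum((h_true - h_pred) ** 2))
```

A better prediction yields a lower NMSE and a higher PSNR, which is the direction used throughout the comparisons below.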
4.3. Effect of using MRI to help low-dose PET for predicting standard-dose PET image
Generally, the anatomical information provided by the MRI image can compensate for the molecular information of PET in the PET/MRI imaging system [41]. In this subsection, the effect of combining MRI with the low-dose PET image for predicting standard-dose PET images is investigated. For this purpose, 1) MRI alone, 2) one of the low-dose PETs alone, and 3) the combination of MRI and low-dose PET are respectively used to build global models for predicting the standard-dose PET image of the entire brain, without separating the model into tissue-specific components. Table 3 lists the prediction performances, in terms of NMSE and PSNR.
Table 3 below compares the prediction performances yielded by three models built, respectively, using MRI only, low-dose PET 1 only, and MRI plus low-dose PET 1. Here, low-dose PET 1 denotes one of the two low-dose PETs available.
TABLE 3
Subject #   MRI                 Low-dose PET 1      MRI + Low-dose PET 1
            NMSE      PSNR      NMSE      PSNR      NMSE      PSNR
1           0.0411    18.9908   0.0141    23.6387   0.0133    23.8907
2           0.0238    20.6193   0.0076    25.4967   0.0072    25.6990
3           0.0243    20.8297   0.0140    23.2306   0.0116    24.0132
4           0.0298    20.2742   0.0225    21.5084   0.0193    22.1748
5           0.0788    16.7868   0.0122    24.8828   0.0117    25.0385
6           0.0442    18.6904   0.0269    20.6610   0.0247    21.0634
7           0.0536    18.3999   0.0218    22.1492   0.0198    22.5743
8           0.0308    19.7942   0.0093    25.2008   0.0078    25.9630
Mean        0.0408    19.2982   0.0161    23.3460   0.0144    23.8021
In order to visually observe the better performance obtained by using the combination of MRI and low-dose PET 1 , Figure 5 illustrates a further comparison between the model built using low-dose PET 1 and the model built using the combination of MRI and low-dose PET 1. For example, Figure 5 is a graphical illustration of performance comparison, in terms of NMSE and PSNR, yielded by two global models built by 1) low-dose PET 1 and 2) MRI + low-dose PET 1.
From both Table 3 and Figure 5, it is apparent that the model built using the combination of MRI and low-dose PET 1 achieves better prediction performance (as indicated by lower NMSE and higher PSNR values) across all subjects. Thus, a model combining structural and functional imaging modalities (i.e., a multi- or dual-modality model) achieves higher prediction accuracy than a model using only a single imaging modality, i.e., low-dose PET alone.
4.4. Effect of using tissue-specific models for predicting standard-dose PET image
It is worth noting the difficulty of learning a global regression model to accurately predict the standard-dose PET image over the entire brain, due to the large volume of the human brain (often with different tissue properties). Learning multiple local models can improve the prediction performance compared with a single global model [9, 36]. In this subsection, the advantage of building tissue-specific models, compared to a global model, in predicting the standard-dose PET image is explored. Note that the tissue-specific models are built for each type of brain tissue (WM, GM, and CSF) and used for predicting the standard-dose PET of the respective brain tissue, while the global model is built for the entire brain and used for predicting the whole-brain standard-dose PET. Both the tissue-specific models and the global model are built using the same MRI plus low-dose PET 1. Table 4 below lists prediction performances, in terms of NMSE and PSNR.
TABLE 4
Subject #   Global Model        Tissue-specific Models
            NMSE      PSNR      NMSE      PSNR
1           0.0133    23.8907   0.0153    23.2600
2           0.0072    25.6990   0.0069    25.8490
3           0.0116    24.0132   0.0105    24.4422
4           0.0193    22.1748   0.0175    22.5863
5           0.0117    25.0385   0.0121    24.9220
6           0.0247    21.0634   0.0244    21.1584
7           0.0198    22.5743   0.0165    23.3614
8           0.0078    25.9630   0.0061    26.9196
Mean        0.0144    23.8021   0.0137    24.0624
Figure 6 is a graphical illustration of performance comparison, in terms of NMSE and PSNR, yielded by a global model and tissue-specific models, respectively. As Table 4 and Figure 6 collectively illustrate, the tissue-specific models yield better overall performance than the global model, i.e., lower NMSE and higher PSNR.
4.5. Effect of estimating image difference in incremental refinement
The prediction performance can be further improved by auto-context models [9, 35, 40]. In this subsection, the performance improvement obtained by estimating the image difference between the previously predicted standard-dose PET and the original standard-dose PET (ground truth) is examined. In the following, the "one-layer model" refers to the above model that directly estimates the high-dose PET, and the "two-layer model" refers to the above model plus image difference estimation. Note that both methods use tissue-specific models, built using the MRI plus low-dose PET 1. Table 5 compares the prediction performances of the one-layer model and the two-layer model, in terms of NMSE and PSNR, for all subjects.
TABLE 5
Subject #   One-layer Model     Two-layer Model
            NMSE      PSNR      NMSE      PSNR
1           0.0153    23.2600   0.0120    24.0222
2           0.0069    25.8490   0.0055    26.5882
3           0.0105    24.4422   0.0091    24.8222
4           0.0175    22.5863   0.0149    22.9422
5           0.0121    24.9220   0.0103    25.4121
6           0.0244    21.1584   0.0222    21.2546
7           0.0165    23.3614   0.0131    23.9744
8           0.0061    26.9196   0.0062    26.8942
Mean        0.0137    24.0624   0.0117    24.4888
For visual purposes, Figure 7 is a graphical illustration of the performance comparison, in terms of NMSE and PSNR, between the one-layer tissue-specific model and the two-layer tissue-specific model. From both Table 5 and Figure 7, it can be seen that, compared with the one-layer model, the two-layer model (ensemble model) achieves better prediction performance, indicated by lower NMSE and higher PSNR values. Here, both the one-layer and two-layer models are built using the combination of MRI and low-dose PET 1.
To further illustrate the performance improvement from the second-layer model (Model 2) for the case of combining MRI and low-dose PET 1, Figure 8 is a graphical comparison, in terms of NMSE and PSNR, between the one-layer model (first-layer model, "Model 1") and the two-layer model ("Model 1 + 2") on a sequence of voxels with maximal prediction errors under Model 1. From Table 5 and Figure 7, the overall performance for the entire brain is improved only slightly by additionally using Model 2. However, as shown in Figure 8, for the voxels with maximal prediction errors under Model 1, the performance improvement from further using Model 2 is visibly apparent, especially for some subjects, as shown by the "one subject" lines in Figure 8. The main reason for this phenomenon is that, for most voxels in the brain, Model 1 already achieves very good performance, which dilutes the overall improvement attributable to Model 2. The "overall" lines in Figure 8 denote the results from all subjects, while the "one subject" lines denote the results from a selected subject. Here, both Model 1 and Model 1 + 2 use the tissue-specific models built with the combination of MRI and low-dose PET 1.
4.6. Effect of combining more low-dose PETs with MRI in predicting standard-dose PET
The regression forest is aimed at capturing the intrinsic relationship among different modalities/images. Combining more low-dose PETs with MRI can provide richer feature information for constructing the regression forest, in which each decision tree is independently built by randomly selecting features and thresholds, as mentioned above [4]. Thus, the problem of overfitting can be addressed to some extent.
In this subsection, the performance improvement from additionally using low-dose PET 2 combined with MRI and low-dose PET 1 is described, i.e., using two low-dose PET images and one MR image. Both the one-layer model and the two-layer model discussed here use the tissue-specific models, built using 1) the combination of MRI and low-dose PET 1, 2) the combination of MRI and low-dose PET 2, and 3) the combination of MRI, low-dose PET 1, and low-dose PET 2. Table 6 below lists the prediction performances, in terms of NMSE and PSNR.
TABLE 6
             MRI + Low-dose PET 1   MRI + Low-dose PET 2   MRI + Low-dose PET 1 and 2
             NMSE      PSNR         NMSE      PSNR         NMSE      PSNR
Model 1      0.0137    24.0624      0.0144    23.7830      0.0133    24.1978
Model 1 + 2  0.0117    24.4888      0.0122    24.2393      0.0113    24.6228
For illustration purposes, Figure 9 shows the comparison of prediction performances yielded by the models constructed with the different combinations of modalities described above. Table 6 compares the prediction performances yielded by the models using different combinations of modalities. In Table 6, "Model 1" refers to a one-layer model (the first-layer model), and "Model 1 + 2" refers to a two-layer model in which the first-layer model is used to predict the initial high-dose PET image and the second-layer model is used to estimate the image difference. Both Table 6
and Figure 9 demonstrate that the best performance is yielded by the model built using the combination of MRI, low-dose PET 1 , and low-dose PET 2.
Figure 9 is a comparison of prediction performances, in terms of NMSE and PSNR, yielded by the models built with different combinations of modalities. Here, all models (the first-layer models (Model 1) and the second-layer models (Model 2) for the three kinds of combinations) use tissue-specific models.
4.7. Comparison with sparse representation (SR) method
Recently, patch-based SR has attracted significant interest. This method assumes that image patches can be represented by a sparse linear combination of image patches in an over-complete dictionary [44, 45]. This strategy has been widely used in a great number of image processing problems, such as image super-resolution [30, 45] and medical image processing and analysis [9, 17, 42], and has achieved state-of-the-art performance. SR techniques can also be adopted for voxel-wise prediction of the standard-dose PET by utilizing information from both MRI and low-dose PET. Specifically, in the SR technique, to estimate the high-dose PET intensity t̂_v for a voxel v ∈ R³, a sparse coefficient α_v first needs to be calculated by solving the following Elastic-Net problem, in Equation (3) below:
Equation (3): min over α_v of ||D_v α_v − f(v)||₂² + λ₁||α_v||₁ + λ₂||α_v||₂²
In Equation (3) above, f(v) is the feature vector of voxel v, defined as the vector of concatenated intensities of local patches from both MRI and low-dose PET; α_v is the sparse coefficient of voxel v to be estimated; D_v is the dictionary of voxel v, consisting of feature vectors of voxels within a small neighborhood of voxel v from all training subjects; λ₁ and λ₂ control the sparsity and smoothness of the estimated sparse coefficient α_v. Once α_v is obtained, the image-patch vector of voxel v in the predicted standard-dose PET, P̂(v), can be estimated according to Equation (4) below:
Equation (4): P̂(v) = D̃_v α_v
In Equation (4) above, D̃_v is the dictionary that contains intensity patches from the high-dose PET images corresponding to the elements in the overall dictionary D_v. Then, by taking the center value from P̂(v), the intensity of voxel v in the newly predicted standard-dose PET is obtained as in Equation (5) below:
Equation (5): t̂_v = C(P̂(v))
In Equation (5) above, C(·) is the operation of taking the center value from a column vector.
For comparison, in the following SR-based experiments, the parameters are optimized via cross-validation. Finally, λ₁ and λ₂ are set to 0.1 and 0.01, respectively; the neighborhood (searching window) size is set to 5 × 5 × 5; and the patch size is set to 5 × 5 × 5. SR-based prediction of standard-dose PET also uses the combination of MRI, low-dose PET 1, and low-dose PET 2 as input to calculate the sparse coefficients between the training and testing images. In the instant implementation, the SLEP toolbox [23] is used to solve the Elastic-Net problem.
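A minimal sketch of the SR prediction pipeline (Equations 3 to 5), using scikit-learn's `ElasticNet` in place of the SLEP solver. The mapping from (λ₁, λ₂) to sklearn's (alpha, l1_ratio) is only an approximate correspondence, and all function and variable names are illustrative:

```python
import numpy as np
from sklearn.linear_model import ElasticNet

def sr_predict_voxel(D_v, D_tilde_v, f_v, lam1=0.1, lam2=0.01):
    """Predict a standard-dose patch for one voxel via sparse representation.

    D_v:       dictionary of MRI + low-dose PET feature vectors (columns),
               drawn from a neighborhood of v across training subjects.
    D_tilde_v: corresponding standard-dose PET patches (columns).
    f_v:       feature vector of the test voxel.

    sklearn's ElasticNet minimizes
        1/(2n) ||f - D a||^2 + alpha*l1_ratio*||a||_1
                             + alpha*(1 - l1_ratio)/2 * ||a||^2,
    so (alpha, l1_ratio) below only approximate (lam1, lam2) in Eq. 3;
    this is not the patent's exact solver.
    """
    alpha = lam1 + lam2
    enet = ElasticNet(alpha=alpha, l1_ratio=lam1 / alpha,
                      fit_intercept=False, max_iter=5000)
    enet.fit(D_v, f_v)                 # solve for sparse coefficients a_v
    a_v = enet.coef_
    patch = D_tilde_v @ a_v            # Eq. 4: P_hat(v) = D_tilde_v a_v
    return patch[patch.size // 2]      # Eq. 5: take the center value
```

Iterating this over all voxels yields the SR-predicted standard-dose PET image used as the baseline below.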
Figure 10 is an illustration of prediction results between SR and the methods above, (i.e., RF, Model 1 + 2) on the different subjects as shown in the first and second rows, from columns (A) to (H) respectively. Here, the lower contrast "difference" maps (shown in column (E) and column (G)) are computed between the predicted high-dose PET and the original high-dose PET (ground truth). Figure 10 illustrates the qualitative results of predicted standard-dose PET using, respectively, 1 ) the proposed method (regression forest (RF) based method (two-layer model (Model 1 + 2))), and 2), SR based method, on the two randomly selected subjects.
Table 7 below and Figure 11 show the quantitative comparison between the proposed RF method (including one-layer model (Model 1) and
two-layer model (Model 1 + 2)) and an SR-based method, in terms of NMSE and PSNR. In addition, as shown in Figure 12, in order to demonstrate the quality improvement of the RF-predicted standard-dose PET image over the original low-dose PET image, both NMSE and PSNR for the low-dose PET with respect to the ground truth (original standard-dose PET) are also calculated. Here, the parameter settings for the instant RF method (and RF-based models) are the same as the settings described in Subsection 4.6 above. Table 7 compares prediction performances, in terms of NMSE and PSNR.
In Table 7 above, the term "Low-dose PET 1 and 2 (Mean)" is indicative of the average (NMSE or PSNR) of low-dose PET 1 and low-dose PET 2 with respect to the ground truth. Here, the RF-based models, i.e., RF (Model 1) and RF (Model 1 + 2), use tissue-specific models. All methods, including SR, are built by using the combination of MRI, low-dose PET 1 , and low-dose PET 2.
From Figure 10, it is apparent that the RF models (e.g., RF (Model 1 + 2)) improve over the SR technique. The RF models achieve more desirable predictions than the SR technique, with much smaller difference magnitudes (see, e.g., column (G) in Figure 10), and are more similar in image appearance to the ground truth.
Further, both Table 7 and Figure 11 illustrate the RF method achieving much better prediction accuracy (with much lower NMSE and higher PSNR) than the SR technique. Figure 11 is a graphical plot comparing prediction performances, in terms of NMSE and PSNR, with respect to the ground truth (original high-dose PET). Here, all models are built using the combination of MRI, low-dose PET 1, and low-dose PET 2. Note that the regression forest (RF) based models, i.e., RF (Model 1) and RF (Model 1 + 2), use tissue-specific models and not a global model.
Figure 12 is a comparison of image quality, in terms of NMSE and PSNR, with respect to the ground truth (i.e., the original high-dose PET). "Low-dose PET 1 and 2" is the average (NMSE or PSNR) of low-dose PET 1 and low-dose PET 2 with respect to the ground truth. "RF(Model 1 + 2)" stands for the (NMSE or PSNR) value of predicted high-dose PET image with respect to the ground truth, by using the regression forest based method, i.e., RF(Model 1 + 2). In Figure 12, RF(Model 1 + 2) also uses tissue-specific models, not a global model.
The limited prediction accuracy of SR may be due to two or more reasons: first, both MRI and low-dose PET modalities are treated equally in the sparse representation; second, only linear prediction models are adopted, which might be insufficient to capture the complex relationship among MRI, low-dose PET, and high-dose PET. By contrast, the instant RF-based method adopts RF to simultaneously identify informative features from MRI and low-dose PET for predicting and generating an estimated standard-dose PET image, and to further learn the intrinsic relationship among MRI, low-dose PET, and standard-dose PET. Consequently, by addressing the limitations of SR, the proposed method (e.g., the RF method discussed above) achieves much higher prediction accuracy.
In addition, as shown in Figure 1, the quality of the low-dose PET image is obviously lower than that of the standard-dose PET image. Fortunately, as expected from Figure 12, by using RF methods the quality (reflected by NMSE and PSNR) of the predicted standard-dose PET is better than the quality of the low-dose PET, i.e., the predicted standard-dose PET has a much lower NMSE and higher PSNR with respect to the ground truth (original standard-dose PET).
5. Discussion
As described herein, novel methods, systems, and computer readable media are disclosed in which a standard, high-dose PET image is predicted using a machine (e.g., computing platform) learning based framework. Different from the traditional technique of acquiring the standard-dose PET, the proposed method utilizes low-dose PET, combined with an MR structural image, to predict standard-dose PET. Results described herein illustrate that the instant method substantially improves the quality of low-dose PET. The prediction performance obtained also indicates good practicability of the proposed framework.
As an attempt to explore the prediction of standard-dose PET using low-dose PET, each element in the methods discussed above (e.g., RF-based methods) has its own contribution to improving the prediction performance. Specifically, the high-resolution brain anatomical information provided by MRI helps low-dose PET to predict standard-dose PET. As derived from Table 3 and Figure 5 discussed above, the complementary information from different modalities significantly improves the prediction results. In addition, the tissue-specific model achieves better prediction performance than the global model. The main reason is that, due to the large volume of the human brain (often with different tissue properties), it is difficult to learn a global regression model for accurate prediction of standard-dose PET over the entire brain. In contrast, learning multiple tissue-specific models improved the prediction performance, as indicated in both Table 4 and Figure 6 discussed above. In addition, different from learning a global model for the entire brain, the tissue-specific models can be trained simultaneously, so the training time can also be reduced significantly. Furthermore, by estimating image differences between the previously predicted standard-dose PET and the original standard-dose PET, the prediction accuracy can be further improved, especially for the voxels with maximal prediction errors under the previous-layer model, as shown in Figure 8 discussed above.
Figure 13 is a block diagram illustrating an exemplary system or node 100 (e.g., a single or multiple processing core computing device or computing platform) for predicting standard, high-dose PET values for generating an estimated high-dose PET image according to embodiments of the subject matter described herein. Node 100 may include any suitable entity, such as a computing device or computing platform for performing one or more aspects of the present subject matter described herein or in the manuscript entitled "Prediction of High-dose PET Image with MRI and Low-dose PET images," the disclosure of which is incorporated herein by reference in its entirety. In accordance with embodiments of the subject matter described herein, components, computing modules, and/or portions of node 100 may be implemented or distributed across one or more (e.g., multiple) devices or computing platforms. For example, a cluster of nodes 100' may be used to perform various portions of high-dose PET image prediction, refinement, and/or application.
It should be noted that node 100 and its components and functionality described herein constitute a special purpose test node, special purpose computing device, or machine that improves the technological field of brain and/or body imaging (e.g., MR and/or PET imaging) by allowing prediction and generation of a high-dose PET image without performing a high-dose PET scan, thus advantageously reducing a patient's exposure to radiation. Notably, MR and PET imaging, and improvements thereto, are necessarily rooted in computer technology in order to overcome a problem specifically arising in the realm of computerized imaging (i.e., the need to predict a high-dose PET scan from MR and low-dose PET scans). The methods, systems, and computer readable media described herein cannot be performed manually by a human being; as such, they are achieved upon utilization of physical computing hardware components, devices, and/or machines necessarily rooted in computer technology.
In some embodiments, node 100 includes a computing platform that includes one or more processors 102. In some embodiments, processor 102 includes a hardware processor or microprocessor, such as a multi-core processor, or any other suitable processing core, including a processor for a virtual machine, adapted to execute or implement instructions stored in memory 104.
Memory 104 may include any non-transitory computer readable medium and may be operative to communicate with one or more of processors 102. Memory 104 may include and/or have stored therein a standard or high-dose PET Prediction Module (HDPPM) 106 for execution by processor 102. In accordance with embodiments of the subject matter described herein, HDPPM 106 may be configured to extract appearance information or features from corresponding MR and low-dose PET image locations or patches, segment the MR image based upon tissue type (e.g., GM, WM, or CSF), classify the MR and low-dose PET image locations or patches per tissue type, and execute and apply a tissue-specific model to the extracted information for predicting a high-dose PET image, per voxel, and for generating an estimated (e.g., not actual) high-dose PET image upon compilation of all predicted voxels. Processor 102 may predict, and transmit as output, a high-dose PET image generated from a plurality of predicted high-dose PET voxels via HDPPM 106.
In some embodiments, node 100 obviates the need for performing a high-dose PET scan, and instead predicts and generates a high-dose PET image from at least one low-dose PET image and at least one MR image. This is advantageous, as a patient's exposure to radiation is reduced, in some aspects to approximately 1/2, 1/4, or 1/10 of a standard dose, or less. The low-dose PET image and MR image may be obtained simultaneously (e.g., via a PET/MRI scanning system) for faster prediction/generation of a high-dose PET image. In other embodiments, the low-dose PET and MR images may be obtained separately (i.e., non-simultaneously) from separate PET and MR scanning machines or imaging systems.
In some embodiments, HDPPM 106 is configured to implement one or more RF-based analysis and/or modeling techniques for use in prediction of high-dose PET images. Exemplary RF techniques or models described above may be used, executed, and/or implemented by HDPPM 106. For example, HDPPM 106 may execute one or more tissue-specific models using, as inputs, appearance features extracted from at least one MR image (anatomical imaging features) and at least one low-dose PET image (molecular imaging features). HDPPM 106 may be configured to initially predict high-dose PET values and/or a high-dose PET image using tissue-specific RF modeling. HDPPM 106 may then incrementally refine the predicted values and predicted high-dose PET image via machine-estimated image differences. The estimated differences may be applied to the previously predicted and generated standard-dose PET values and image, respectively, for incremental refinement, where desired. HDPPM 106 may be used to predict high-dose PET values for generating estimated high-dose PET images of the brain and/or body.
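By way of non-limiting illustration, the incremental refinement described above may be sketched in Python as follows. This is a minimal sketch, not the disclosed HDPPM 106 implementation: the stand-in MeanPredictor class and all function names are hypothetical, and an actual system would train tissue-specific regression forests on the residuals rather than a single mean value.

```python
import numpy as np

class MeanPredictor:
    """Stand-in regressor that predicts the mean training residual.
    A real implementation would use a regression forest here; this
    class only illustrates the predict() interface."""
    def __init__(self, value):
        self.value = value

    def predict(self, feature_vector):
        return self.value

def train_refinement_layer(features, targets, initial_preds):
    """Learn a difference-predicting layer (sketch): the layer is
    trained on residuals between the original standard-dose values
    and the previously predicted values."""
    residuals = targets - initial_preds
    return MeanPredictor(residuals.mean())

def apply_refinement(initial_preds, features, diff_model):
    """Add the machine-estimated image differences back onto the
    previously predicted standard-dose PET values."""
    corrections = np.array([diff_model.predict(f) for f in features])
    return initial_preds + corrections
```

In this sketch, refinement simply adds a predicted per-voxel correction to the first-pass estimate; further layers could be stacked in the same way.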
In one embodiment, node 100 receives extracted appearance features from at least one low-dose PET image and at least one MR image of a subject brain or body portion. In other embodiments, node 100 is configured to receive the images as input, and extract the appearance features from at least one low-dose PET image and at least one corresponding MR image via HDPPM 106. The corresponding MR/low-dose PET images may be received and aligned onto a common space, and appearance features (e.g., local intensity patches at a same location) may be extracted by HDPPM 106. The MR image may also be segmented by tissue at each location (e.g., into GM, WM, CSF in the brain) by HDPPM 106. Based on the extracted intensity patches and the tissue label at each location, tissue-specific RF is used to predict a high-dose PET value for each voxel. By iterating over all image voxels, a high-dose PET image can be predicted and generated at HDPPM 106. HDPPM 106 may further refine the predicted image by applying tissue-specific regression forests (e.g., models) to extracted appearance features to obtain a predicted difference value for each voxel.
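The per-voxel prediction flow described above may be sketched as follows, in an illustrative 2D Python form under assumptions: the function names (extract_patch, predict_high_dose) are hypothetical, and the models argument stands in for trained tissue-specific regression forests keyed by tissue label.

```python
import numpy as np

def extract_patch(img, x, y, size=3):
    """Extract a size x size patch centered at (x, y); assumes the
    center lies far enough from the image border."""
    r = size // 2
    return img[x - r:x + r + 1, y - r:y + r + 1]

def predict_high_dose(mr, ldpet, labels, models, patch=3):
    """Voxel-wise high-dose PET prediction (2D sketch).

    mr, ldpet : aligned MR and low-dose PET images (same shape)
    labels    : tissue label per voxel (e.g., 0=GM, 1=WM, 2=CSF)
    models    : dict mapping tissue label -> regressor exposing
                predict(feature_vector) -> intensity
    """
    r = patch // 2
    out = np.zeros_like(ldpet, dtype=float)
    for x in range(r, mr.shape[0] - r):
        for y in range(r, mr.shape[1] - r):
            # Concatenate the MR and low-dose PET patches into one
            # feature vector for this voxel
            f = np.concatenate([extract_patch(mr, x, y, patch).ravel(),
                                extract_patch(ldpet, x, y, patch).ravel()])
            # Route the voxel to its tissue-specific model
            out[x, y] = models[labels[x, y]].predict(f)
    return out
```

The tissue label only selects which model is applied; the models themselves are trained separately per tissue type, as described above.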
In accordance with embodiments of the subject matter described herein, HDPPM 106 may be configured to work in parallel with a plurality of processors (e.g., processors 102) and/or other computing platforms or nodes. For example, a plurality of processor cores may each be associated with a tissue specific model and/or imaging technique (e.g., receiving MR or low-dose PET features).
It will be appreciated that Figure 13 is for illustrative purposes and that various nodes, their locations, and/or their functions may be changed, altered, added, or removed. For example, some nodes and/or functions may be combined into a single entity. In a second example, a node and/or function may be located at or implemented by two or more nodes.
HDPPM 106 of node 100 may also be used for dynamic image acquisition and dynamic monitoring of tracer uptake versus time, as illustrated in Figure 14. As different lesions (e.g., tumors) can take up (e.g., metabolize) biomarkers (i.e., radionuclides or tracers) at different rates, a need exists for dynamically monitoring a tracer uptake period or time interval for assessing the presence of different types of lesions/tumors. For example, Figure 14 illustrates different curves associated with tracer uptake. The solid line is indicative of tracer uptake in normal tissue (e.g., normal tissue uptake), and the broken line is indicative of tracer uptake in an abnormal tumor or lesion. Normally, it would be difficult to assess or differentiate the difference in uptake times, as the counts during ΔT have a low signal to noise ratio (SNR). Notably, the methods, systems, and computer readable media described herein can improve imaging during an uptake interval or time ΔT by improving image quality without further increasing tracer dosage.
In some embodiments, multiple scans (i.e., designated with an asterisk (*) in Figure 14) may be taken over an uptake time spanning ΔT, which is the time over which a tracer is metabolized, until reaching a steady state Tss. The shorter, almost dynamically obtained scans (i.e., obtained at each b) may be modeled by tissue-specific RF as described herein for improving the SNR, and thus the image quality, of each scan.
Figure 15 is a block diagram illustrating an exemplary method of predicting and generating an estimated high-dose PET image without actually performing a high-dose PET scan. The method may be performed at a computing node (e.g., 100, Figure 13) having a processor and executable instructions stored thereon that, when executed by the processor, control the node to perform steps such as those in Figure 15.
At block 110, appearance features may be extracted from at least one MR image. Appearance features include information or data regarding tissue structure, anatomical information, molecular information, or functional information (e.g., tissue perfusion, diffusion, vascular permeability, or the like) and/or information per image location, as indicated by a local intensity. The MR image may also be segmented or categorized (e.g., by location) based upon tissue type (e.g., GM, WM, CSF in the brain), where needed (e.g., brain imaging).
At block 112, appearance features may be extracted from at least one low-dose PET image. Appearance features obtained from low-dose PET imaging may include a local intensity that is indicative of metabolic information derived from gamma rays emitted from tissue injected with a biologically active radioactive tracer. The information obtained from the low-dose PET image is associated with tissue metabolic activity. The appearance features of the MR/low-dose PET images can be aligned, classified per tissue type, and input into tissue-specific RF (e.g., models) for predicting high-dose PET values per voxel, from which a high-dose PET image is generated. The predicted values may be iteratively refined as described above (see, e.g., Table 2).
At block 114, a standard, estimated high-dose PET image may be generated using the appearance features of the MR image and the low-dose PET image, and the high-dose PET values predicted therefrom. The appearance features include local intensity patches, to which tissue specific RF is applied in predicting high-dose PET values, per voxel. By iterating all image voxels, a high-dose PET image can be predicted and generated, without subjecting a patient to a high-dose PET scan.
Figure 16 is a schematic block diagram illustrating an overview of training a model M for high-dose PET prediction and image generation via model learning (e.g., machine learning). The methods, systems, and computer readable media include machine learning-based methodology, for example, a computing machine having a decision tree-based (e.g., RF) prediction of a high-dose PET image using MR and low-dose PET images. The machine learning methodology includes two main stages, i.e., a training stage and an application stage. Notably, estimated high-dose PET images may be used for at least one of diagnosis, treatment, and/or treatment planning of one or more patients.
During the training stage, one task includes learning decision trees for generating an RF model. Multiple trees can be grouped to form a forest, and in the case of regression, the random forest is often called a regression forest (RF).
For each tree in the constructed forest, the learned parameters of the tree are stored at each node (i.e., a split node or leaf/terminal node). During the training stage, the input is a set of voxels and the corresponding high-dose PET intensities, as shown in Figure 16. Each voxel is represented by a pair of MR and low-dose PET patches P1 and P2. The goal of training is to learn multiple trees (a forest, as shown in Figure 17) for best predicting the high-dose PET intensity P3 from a pair of MR and low-dose PET patches P1 and P2.
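The assembly of training samples described above (patch pairs P1, P2 as input, center intensity P3 as target) may be sketched as follows, in an illustrative 2D Python form; the function name and array layout are assumptions for illustration, not the disclosed implementation.

```python
import numpy as np

def build_training_set(mr, ldpet, sdpet, coords, patch=3):
    """Assemble RF training samples (2D sketch).

    For each training voxel coordinate, the sample is the pair of MR
    and low-dose PET patches (P1, P2) concatenated into one feature
    vector, and the target is the standard-dose PET intensity P3 at
    the center voxel."""
    r = patch // 2
    X, y = [], []
    for (x, yy) in coords:
        p1 = mr[x - r:x + r + 1, yy - r:yy + r + 1].ravel()
        p2 = ldpet[x - r:x + r + 1, yy - r:yy + r + 1].ravel()
        X.append(np.concatenate([p1, p2]))  # length 2 * patch**2
        y.append(sdpet[x, yy])              # center standard-dose value
    return np.array(X), np.array(y)
```

With 3 x 3 patches, each feature vector has 18 entries, matching the concatenated vector f described with reference to Figures 17 and 18.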
During the application stage, a high-dose PET image is predicted voxel by voxel. For each voxel, a pair of MR and low-dose PET patches is extracted, centered at that respective voxel (i.e., as shown in Figure 18). By applying the learned regression forest, the high-dose PET intensity at that voxel can be calculated or generated.
Referring to Figure 17, a split node (i.e., one node of a tree Tj; the root is designated as a double-ringed circle, split nodes as solid circles, and leaf nodes as broken circles, each split node having at least one "leaf") stores a split function's parameters, including one selected feature index and its corresponding threshold. A feature vector f is computed as the concatenated vector from a pair of MRI and low-dose PET patches.
As Figure 17 further illustrates, the parameters stored at the i-th split node include one selected feature index π(i) and the corresponding threshold θ(i). In contrast, a leaf node (i.e., where each leaf is indicated as a broken circle labeled 4, 6, 8, 9, 10, or 11) stores the mean high-dose PET intensity (e.g., Imean(j)) of the voxels reaching that j-th node (e.g., where split nodes are indicated as solid circles labeled 1, 2, 3, 5, and 7).
Still referring to Figure 17, each tree T1 to TT is a prediction model (e.g., an RF or regression model) or prediction result. The input to each tree is an MRI patch and its corresponding low-dose PET patch. The output from each tree is the predicted high-dose PET intensity at the center location of the given MRI patch. RF models consist of multiple trees, and the final prediction of an RF model is the average of the predicted values from all individual trees. Each tree, also referred to as a binary decision tree, is a prediction model similar to a linear regression model, but used for non-linear regression problems.
As Figure 17 illustrates, the output from each tree is a high-dose PET intensity value predicted at the center location of a given MR/low-dose PET patch. The average over all trees in the forest is the final prediction result. To predict the entire high-dose PET image, the prediction process of Figure 17 is repeated patch by patch. Specifically, for each location, MRI and low-dose PET patches are extracted to predict the high-dose PET intensity at that location.
The split function shown in Figure 17 includes a specific feature and a threshold. The type of feature and the value of the threshold are automatically learned from the training data. In the offline training stage, the best combinations of feature and threshold are learned to predict the high-dose PET from features of MR and low-dose PET patches. In the application stage, the split functions can be fixed and applied to a new subject.
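Learning a split function's feature index and threshold from training data may be sketched as follows. This is a hedged illustration: the disclosure does not fix the impurity criterion, so weighted child variance, a common regression-tree choice, is assumed here, and the exhaustive search over candidate thresholds is for clarity rather than efficiency.

```python
import numpy as np

def learn_split(X, y):
    """Search for split-function parameters (feature index pi,
    threshold theta) that best separate the training targets,
    scored by the weighted variance of the two child sets
    (an assumed, commonly used regression-tree criterion)."""
    best = (None, None, np.inf)
    n, d = X.shape
    for pi in range(d):                       # candidate feature index
        for theta in np.unique(X[:, pi]):     # candidate thresholds
            right = X[:, pi] > theta
            left = ~right
            if not right.any() or not left.any():
                continue                      # degenerate split
            score = (left.sum() * y[left].var()
                     + right.sum() * y[right].var())
            if score < best[2]:
                best = (pi, theta, score)
    return best[0], best[1]
```

In a forest, this search would be repeated recursively at each split node, typically over a random subset of features per node.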
Figure 18 is a specific example associated with the application stage, where an RF model has been learned and can now be applied to the extracted MR/low-dose PET features or intensity values for predicting high-dose PET intensities. For simplicity, a 2D case is shown. The 3D case can be readily derived using the 2D case as an example.
Figure 18 illustrates MR and low-dose PET images for a new subject. That is, each new subject will have at least one MR image and at least one low-dose PET image generated for a brain or non-brain body part. To predict the high-dose PET intensity at one voxel, two 3 x 3 patches centered at this voxel are extracted (i.e., see the top/bottom patches at the left hand side of Figure 18): one patch is from the MR image, and the other patch is from the low-dose PET image. These two patches are concatenated and stacked together to form a feature vector for this voxel (see the long vector in the table at the top of Figure 18).
The feature vector f is passed through each learned decision tree (from T1 to TT) to get a prediction (high-dose PET intensity). The prediction of each tree is the mean high-dose PET intensity stored at the leaf node into which this voxel falls. The routing of the voxel in one tree is as follows.
At the i-th split node, the inequality fπ(i) > θ(i) is evaluated, where fπ(i) is the value at the π(i)-th entry of the feature vector f, and θ(i) is the threshold at the i-th split node. Both π(i) and θ(i) are learned in the training stage. If the inequality is true, the voxel goes to the right child. Otherwise, it goes to the left child. This evaluation continues until the voxel reaches a leaf node of the tree. Then, the mean high-dose PET intensity stored in the leaf node is used as the prediction of this tree. The final prediction is the average over the predictions of all trees.
Figure 18 illustrates a specific example of the voxel-wise prediction procedure for one tree. The input is the feature vector f (i.e., extracted from the MR/low-dose PET patches). The output is the final predicted intensity value of the voxel in a predicted high-dose PET image FP. In a first step, the feature vector f is fed to the tree Tj, and f first reaches node 1 (i.e., the double-ringed circle labeled "1"), a split node.
In a second step, a learned split function (i.e., as listed in Table 1 of Figure 18) is applied to f. In this case, the learned/stored feature index at node 1 is π(1)=3, and the threshold is θ(1)=1000. From the extracted feature vector (shown at the top), it can be seen that the value of the third entry of f is fπ(1)=1001>1000=θ(1). The inequality is the decision in the RF decision tree; here, fπ(1)>θ(1) is true, so f goes to the right child, i.e., node 3. (See decision tree Tj in Figure 18, where nodes are labeled 1-11.)
In a third step, similar to node 1, at node 3, π(3)=6 and θ(3)=1000. The inequality is again true, so f goes to the right child, i.e., node 7.
In a fourth step, similar to nodes 1 and 3, at node 7, π(7)=9 and θ(7)=1000. Here, the inequality is false, so f goes to the left child, i.e., node 10.
In a fifth step, node 10 is a leaf/terminal node, so from Table 1 it can be seen that the learned parameter Imean stored at node 10 is Imean(10)=6321. Thus, the final predicted FP value for f in the tree Tj is 6321.
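The five-step walk-through above can be reproduced in a short Python sketch. The split parameters and leaf mean follow the text (π(1)=3, θ(1)=1000; π(3)=6, θ(3)=1000; π(7)=9, θ(7)=1000; Imean(10)=6321); the child-node mapping is taken from the node labeling of Figure 17, and the entries of f other than the third (1001) are illustrative values chosen only so that the routing matches the described path (right, right, left).

```python
# Worked routing example for one tree. Split parameters follow the
# Table 1 values recited in the text; other details are assumptions.
split = {           # node -> (feature index pi, threshold theta)
    1: (3, 1000),
    3: (6, 1000),
    7: (9, 1000),
}
left_child = {1: 2, 2: 4, 3: 6, 5: 8, 7: 10}
right_child = {1: 3, 2: 5, 3: 7, 5: 9, 7: 11}
leaf_mean = {10: 6321}   # only the leaf reached here is specified

def route(f):
    """Route feature vector f (entries 1-indexed, as in the text)
    from the root to a leaf and return the stored mean intensity."""
    node = 1
    while node in split:
        pi, theta = split[node]
        # true -> right child, false -> left child
        node = right_child[node] if f[pi - 1] > theta else left_child[node]
    return leaf_mean[node]

# Entry 3 is 1001, as in the text; entries 6 and 9 are assumed values
# so that f goes right at nodes 1 and 3 and left at node 7 (leaf 10).
f = [0, 0, 1001, 0, 0, 1500, 0, 0, 900]
```

Calling route(f) follows the path 1 → 3 → 7 → 10 and returns the leaf mean, matching the worked example.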
In summary, an RF-based framework for high-dose PET image prediction and generation is proposed, for effectively predicting and generating standard, high-dose PET by using the combination of MRI and low-dose PET, thus reducing the radionuclide dose. Specifically, tissue-specific models are built via the RF framework for separately predicting standard-dose PET values and images in different tissue types, such as GM, WM, and CSF brain tissue. In addition, an incremental refinement strategy is employed for estimating an image difference for refining the predicted high-dose PET values and image. Results described herein illustrate that this method can achieve very promising, accurate, machine-learned prediction for generation of standard-dose PET images. Moreover, the proposed method outperforms the SR technique under various comparisons.
Aspects as disclosed herein can provide, for example and without limitation, one or more of the following beneficial technical effects: minimized exposure of patients to radiation; improved imaging at lower dosages of radionuclides; improved accuracy/refinement of predicted image; obtaining faster results (e.g., via simultaneous MR/PET imaging).
As noted above, the methods, systems, and computer readable media herein are performed at a prediction node (e.g., Figure 13). The prediction node and/or functionality associated with the prediction node as described herein constitute a special purpose computer. It will be appreciated that the prediction node and/or functionality described herein improve the technological field pertaining to brain and/or body imaging occurring at a special MR and/or PET imaging machine, which may be combined or separate. Predicting high-dose PET imaging via a prediction node is necessarily rooted in computer technology, as it overcomes a problem specifically arising in the realm of computerized imaging, for example, obtaining a high-dose PET image without having to actually perform a high-dose PET scan.
While the subject matter has been described herein in reference to specific aspects, embodiments, features, and illustrative embodiments, it will be appreciated that the utility of the subject matter is not thus limited, but rather extends to and encompasses numerous other variations, modifications and alternative embodiments, as will suggest themselves to those of ordinary skill in the field of the present subject matter, based on the disclosure herein.
Some embodiments of the present subject matter can utilize devices, systems, methods, and/or computer readable media such as described in any of the following publications, each of which is hereby incorporated by reference as if set forth fully herein:
[1] Andersen, F.L., Ladefoged, C.N., Beyer, T., Keller, S.H., Hansen, A.E., Højgaard, L., Kjær, A., Law, I., Holm, S., 2014. Combined PET/MR imaging in neurology: MR-based attenuation correction implies a strong spatial bias when ignoring bone. NeuroImage 84, 206-216.
[2] Azar, A.T., Elshazly, H.I., Hassanien, A.E., Elkorany, A.M., 2014. A random forest classifier for lymph diseases. Computer Methods and Programs in Biomedicine 113, 465-473.
[3] Bai, W., Brady, M., 2011. Motion correction and attenuation correction for respiratory gated PET images. IEEE Trans. Med. Imaging 30, 351-365.
[4] Breiman, L., 2001. Random forests. Machine Learning 45, 5-32.
[5] Coello, C., Willoch, F., Selnes, P., Gjerstad, L., Fladby, T., Skretting, A., 2013. Correction of partial volume effect in 18F-FDG PET brain studies using coregistered MR volumes: Voxel based analysis of tracer uptake in the white matter. NeuroImage 72, 183-192.
[6] Fanelli, G., Dantone, M., Gall, J., Fossati, A., Van Gool, L., 2013. Random forests for real time 3D face analysis. Int. J. Comput. Vis. 101, 437-458.
[7] Faramarzi, E., Rajan, D., Christensen, M.P., 2013. Unified blind method for multi-image super-resolution and single/multi-image blur deconvolution. IEEE Trans. Image Process. 22, 2101-2114.
[8] Fischer, B., Modersitzki, J., 2003. FLIRT: A flexible image registration toolbox. Biomedical Image Registration 2717, 261-270.
[9] Gao, Y., Liao, S., Shen, D., 2012. Prostate segmentation by sparse representation based classification. Med. Phys. 39, 6372-6387.
[10] Gao, Y., Zhan, Y., Shen, D., 2014. Incremental learning with selective memory (ILSM): towards fast prostate localization for image guided radiotherapy. IEEE Trans. Med. Imaging 33, 518-534.
[11] Garraux, G., Phillips, C., Schrouff, J., Kreisler, A., Lemaire, C., Degueldre, C., Delcour, C., Hustinx, R., Luxen, A., Destee, A., Salmon, E., 2013. Multiclass classification of FDG PET scans for the distinction between Parkinson's disease and atypical parkinsonian syndromes. NeuroImage: Clinical 2, 883-893.
[12] Gigengack, F., Ruthotto, L., Burger, M., Wolters, C.H., Jiang, X., Schafers, K.P., 2012. Motion correction in dual gated cardiac PET using mass-preserving image registration. IEEE Trans. Med. Imaging 31, 698-712.
[13] Gray, K.R., Aljabar, P., Heckemann, R.A., Hammers, A., Rueckert, D., Alzheimer's Disease Neuroimaging Initiative, 2013. Random forest-based similarity measures for multi-modal classification of Alzheimer's disease. NeuroImage 65, 167-175.
[14] Guo, L., Chehata, N., Mallet, C., Boukir, S., 2011. Relevance of airborne Lidar and multispectral image data for urban scene classification using random forests. ISPRS Journal of Photogrammetry and Remote Sensing 66, 56-66.
[15] Ham, J., Chen, Y., Crawford, M.M., Ghosh, J., 2005. Investigation of the random forest framework for classification of hyperspectral data. IEEE Transactions on Geoscience and Remote Sensing 43, 492-501.
[16] Hanif, A., Mansoor, A.B., Ejaz, T., 2010. Iterative tomographic image reconstruction by compressive sampling. 17th IEEE International Conference on Image Processing (ICIP), 4313-4316.
[17] Huang, X., Dione, D., Compas, C.B., Papademetris, X., Lin, B.A., Bregasi, A., Sinusas, A.J., Staib, L.H., Duncan, J.S., 2014. Contour tracking in echocardiographic sequences via sparse representation and dictionary learning. Med. Image Anal. 18, 253-271.
[18] Hudson, H.M., Larkin, R.S., 1994. Accelerated image reconstruction using ordered subsets of projection data. IEEE Trans. Med. Imaging 13, 601-609.
[19] Keller, S.H., Svarer, C., Sibomana, M., 2013. Attenuation correction for the HRRT PET-scanner using transmission scatter correction and total variation regularization. IEEE Trans. Med. Imaging 32, 1611-1621.
[20] Kim, M., Wu, G., Li, W., Wang, L., Son, Y., Cho, Z., Shen, D., 2013. Automatic hippocampus segmentation of 7.0 Tesla MR images by combining multiple atlases and auto-context models. NeuroImage 83, 335-345.
[21] Lehnert, W., Gregoire, M.C., Reilhac, A., Meikle, S.R., 2012. Characterization of partial volume effect and region-based correction in small animal positron emission tomography (PET) of the rat brain. NeuroImage 60, 2144-2157.
[22] Lindner, C., Thiagarajah, S., Wilkinson, J.M., arcOGEN Consortium, Wallis, G.A., Cootes, T.F., 2013. Fully automatic segmentation of the proximal femur using random forest regression voting. IEEE Trans. Med. Imaging 32, 1462-1472.
[23] Liu, J., Ji, S., Ye, J., 2009. SLEP: Sparse learning with efficient projections. SLEP manual, Arizona State University, http://www.public.asu.edu/~ive02/Software/SLEP.
[24] Liu, F., Wee, C.Y., Chen, H., Shen, D., 2014. Inter-modality relationship constrained multi-modality multi-task feature selection for Alzheimer's disease and mild cognitive impairment identification. NeuroImage 84, 466-475.
[25] Ma, J., Huang, J., Feng, Q., Zhang, H., Lu, H., Liang, Z., Chen, W., 2011. Low-dose computed tomography image restoration using previous normal-dose scan. Med. Phys. 38, 5713-5731.
[26] MacManus, M., Nestle, U., Rosenzweig, K.E., Carrio, I., Messa, C., Belohlavek, O., Danna, M., Inoue, T., Elizabeth, D.A., Schipani, S., Watanabe, N., Dondi, M., Jeremic, B., 2009. Use of PET and PET/CT for radiation therapy planning: IAEA expert report 2006-2007. Radiotherapy and Oncology 91, 85-94.
[27] McIntosh, C., Svistoun, I., Purdie, T.G., 2013. Groupwise conditional random forests for automatic shape classification and contour quality assessment in radiotherapy planning. IEEE Trans. Med. Imaging 32, 1043-1057.
[28] Olesen, O.V., Sullivan, J.M., Mulnix, T., Paulsen, R.R., Højgaard, L., Roed, B., Carson, R.E., Morris, E.D., Larsen, R., 2013. List-mode PET motion correction using markerless head tracking: proof-of-concept with scans of human subject. IEEE Trans. Med. Imaging 32, 200-209.
[29] Pace, L., Nicolai, E., Luongo, A., Aiello, M., Catalano, O.A., Soricelli, A., Salvatore, M., 2014. Comparison of whole-body PET/CT and PET/MRI in breast cancer patients: Lesion detection and quantitation of 18F-deoxyglucose uptake in lesions and in normal organ tissues. European Journal of Radiology 83, 289-296.
[30] Peleg, T., Elad, M., 2014. A statistical prediction model based on sparse representations for single image super-resolution. IEEE Trans. Image Process. 23, 2569-2582.
[31] Pichler, B.J., Kolb, A., Nagele, T., Schlemmer, H.P., 2010. PET/MRI: paving the way for the next generation of clinical multimodality imaging applications. Journal of Nuclear Medicine 51, 333-336.
[32] Rohren, E.M., Turkington, T.G., Coleman, R.E., 2004. Clinical applications of PET in oncology. Radiology 231, 305-332.
[33] Saboury, B., Ziai, P., Alavi, A., 2011. Role of global disease assessment by combined PET-CT-MR imaging in examining cardiovascular disease. PET Clinics 6, 421-429.
[34] Schindler, T.H., Schelbert, H.R., Quercioli, A., Dilsizian, V., 2010. Cardiac PET imaging for the detection and monitoring of coronary artery disease and microvascular health. Journal of the American College of Cardiology 3, 623-640.
[35] Seyedhosseini, M., Tasdizen, T., 2013. Multi-class multi-scale series contextual model for image segmentation. IEEE Trans. Image Process. 22, 4486-4496.
[36] Shao, Y., Gao, Y., Guo, Y., Shi, Y., Yang, X., Shen, D., 2014. Hierarchical lung field segmentation with joint shape and appearance sparse learning. IEEE Trans. Med. Imaging (available online).
[37] Shi, F., Fan, Y., Tang, S., Gilmore, J.H., Lin, W., Shen, D., 2009. Neonatal brain image segmentation in longitudinal MRI studies. NeuroImage 49, 391-400.
[38] Shi, F., Wang, L., Dai, Y., Gilmore, J.H., Lin, W., Shen, D., 2012. LABEL: Pediatric brain extraction using learning-based meta-algorithm. NeuroImage 62, 1975-1986.
[39] Tripoliti, E.E., Fotiadis, D.I., Manis, G., 2012. Automated diagnosis of diseases based on classification: dynamic determination of the number of trees in random forests algorithm. IEEE Trans. on Information Technology in Biomedicine 16, 615-622.
[40] Tu, Z., Bai, X., 2010. Auto-context and its application to high-level vision tasks and 3D brain image segmentation. IEEE Trans. Pattern Anal. Mach. Intell. 32, 1744-1757.
[41] Vunckx, K., Atre, A., Baete, K., Reilhac, A., Deroose, C.M., Van Laere, K., Nuyts, J., 2012. Evaluation of three MRI-based anatomical priors for quantitative PET brain imaging. IEEE Trans. Med. Imaging 31, 599-612.
[42] Wang, L., Shi, F., Gao, Y., Li, G., Gilmore, J., Lin, W., Shen, D., 2014a. Integration of sparse multi-modality representation and anatomical constraint for isointense infant brain MR image segmentation. NeuroImage 89, 152-164.
[43] Wang, L., Shi, F., Li, G., Gao, Y., Lin, W., Gilmore, J.H., Shen, D., 2014b. Segmentation of neonatal brain MR images using patch-driven level sets. NeuroImage 84, 141-158.
[44] Wright, J., Yang, A.Y., Ganesh, A., Sastry, S.S., Ma, Y., 2009. Robust face recognition via sparse representation. IEEE Trans. Pattern Anal. Mach. Intell. 31, 210-227.
[45] Yang, J., Wright, J., Huang, T.S., Ma, Y., 2010. Image super-resolution via sparse representation. IEEE Trans. Image Process. 19, 2861-2873.
[46] Yaqub, M., Javaid, M.K., Cooper, C., Noble, J.A., 2014. Investigation of the role of feature selection and weighted voting in random forests for 3-D volumetric segmentation. IEEE Trans. Med. Imaging 33, 258-271.
[47] Zhang, Y., Brady, M., Smith, S., 2001. Segmentation of brain MR images through a hidden Markov random field model and the expectation maximization algorithm. IEEE Trans. Med. Imaging 20, 45-57.
[48] Zhang, D., Wang, Y., Zhou, L., Yuan, H., Shen, D., 2011. Multimodal classification of Alzheimer's disease and mild cognitive impairment. NeuroImage 55, 856-867.
[49] Zhang, K., Gao, X., Tao, D., Li, X., 2013. Single image super-resolution with multiscale similarity learning. IEEE Trans. on Neural Networks and Learning Systems 24, 1648-1659.
The disclosures of the foregoing publications (i.e., [1] to [49]) are hereby incorporated by reference as if set forth fully herein.
Various combinations and sub-combinations of the structures, machines, functionality, and/or features described herein are contemplated and will be apparent to a skilled person having knowledge of this disclosure. Any of the various features and elements as disclosed herein can be combined with one or more other disclosed features and elements unless indicated to the contrary herein. Correspondingly, the subject matter as hereinafter claimed is intended to be broadly construed and interpreted, as including all such variations, modifications and alternative embodiments, within its scope and including equivalents of the claims.
Claims
1. A method for predicting and generating a high-dose positron emission tomography (PET) image without performing a high-dose PET scan, the method comprising:
at a PET prediction node including at least one processor:
extracting appearance features from at least one magnetic resonance (MR) image;
extracting appearance features from at least one low-dose positron emission tomography (PET) image; and
generating an estimated high-dose PET image, voxel by voxel, using the appearance features of the at least one MR image and the at least one low-dose PET image.
2. The method of claim 1, wherein the estimated high-dose PET image is of a brain.
3. The method of claim 1, wherein the estimated high-dose PET image is of a non-brain portion of the body.
4. The method of claim 1, wherein the at least one MR image and at least one low-dose PET image are acquired simultaneously via a combined PET/MRI imaging system.
5. The method of claim 1, wherein the at least one MR image and at least one low-dose PET image are acquired via separate, non-simultaneous PET and MRI imaging systems.
6. The method of claim 1, further comprising extracting appearance features from a plurality of low-dose PET images.
7. The method of claim 1, wherein the estimated high-dose PET image is a brain image that is substantially equivalent to an actual brain image obtained during a high-dose PET scan in which 18F-FDG is administered at a dosage of approximately 5 millicuries (mCi).
8. The method of claim 1, wherein the low-dose PET image is a brain image obtained by administering 18F-FDG at a dosage of less than approximately 5 millicuries (mCi).
9. The method of claim 1, wherein the low-dose PET image is a brain image obtained by administering 18F-FDG at a dosage of less than approximately 1.25 millicuries (mCi).
10. The method of claim 1, wherein the low-dose PET image is a brain image obtained by administering 18F-FDG at a dosage of less than approximately 0.5 millicuries (mCi).
11. The method of claim 1, wherein the low-dose PET image is a non-brain, body image obtained by administering 18F-FDG at a dosage of less than approximately 10 millicuries (mCi).
12. The method of claim 1, wherein the low-dose PET image is a non-brain, body image obtained by administering 18F-FDG at a dosage of less than approximately 1 millicurie (mCi).
13. The method of claim 1, wherein the appearance features include local intensity patches at corresponding locations of the at least one MR image and the at least one low-dose PET image.
14. The method of claim 1, wherein the appearance features serve as input features in a tissue specific regression forest (RF) model.
15. The method of claim 1, further comprising linearly aligning the MR image and the low-dose PET image on a common space, segmenting the at least one MR image based upon tissue type, and applying a corresponding tissue specific RF to predict a high-dose PET value for each voxel in the common space.
16. The method of claim 15, wherein the tissue type includes gray matter (GM), white matter (WM), or cerebrospinal fluid (CSF).
17. The method of claim 1, further comprising using the estimated high-dose PET image for one of diagnosis, treatment, and treatment planning.
18. A system for predicting and generating a high-dose positron emission tomography (PET) image without performing a high-dose PET scan, the system comprising:
a processor; and
a high-dose PET Prediction Module (HDPPM) implemented by the processor, wherein the HDPPM is configured to extract appearance features from each of a magnetic resonance (MR) image and at least one corresponding low-dose positron emission tomography (PET) image and generate an estimated high-dose PET image, voxel by voxel, using the appearance features of the at least one MR image and the at least one low-dose PET image.
19. The system of claim 18, wherein the estimated high-dose PET image is of a brain.
20. The system of claim 18, wherein the estimated high-dose PET image is of a non-brain portion of the body.
21. The system of claim 18, wherein the at least one MR image and at least one low-dose PET image are acquired simultaneously via a combined PET/MRI imaging system.
22. The system of claim 18, wherein the at least one MR image and at least one low-dose PET image are acquired via separate, non-simultaneous PET and MRI imaging systems.
23. The system of claim 18, wherein the HDPPM extracts features from each of an MR image and a plurality of corresponding low-dose PET images.
24. The system of claim 18, wherein the estimated high-dose PET image is a brain image that is substantially equivalent to an actual brain image obtained during a high-dose PET scan in which 18F-FDG is administered at a dosage of approximately 5 millicuries (mCi).
25. The system of claim 18, wherein the low-dose PET image is a brain image obtained by administering 18F-FDG at a dosage of less than approximately 5 millicuries (mCi).
26. The system of claim 18, wherein the low-dose PET image is a brain image obtained by administering 18F-FDG at a dosage of less than approximately 1.25 millicuries (mCi).
27. The system of claim 18, wherein the low-dose PET image is a brain image obtained by administering 18F-FDG at a dosage of less than approximately 0.5 millicuries (mCi).
28. The system of claim 18, wherein the low-dose PET image is a non-brain, body image obtained by administering 18F-FDG at a dosage of less than approximately 10 millicuries (mCi).
29. The system of claim 18, wherein the low-dose PET image is a non-brain, body image obtained by administering 18F-FDG at a dosage of less than approximately 1 millicurie (mCi).
30. The system of claim 18, wherein the appearance features include local intensity patches at corresponding locations of the at least one MR image and the at least one low-dose PET image.
31. The system of claim 18, wherein the appearance features serve as input features in a tissue specific regression forest (RF) model.
32. The system of claim 18, wherein the HDPPM is configured to linearly align the MR image and the low-dose PET image on a common space, segment the at least one MR image based upon tissue type, and apply a corresponding tissue specific RF to predict a high-dose PET value for each voxel in the common space.
33. The system of claim 32, wherein the tissue type includes gray matter (GM), white matter (WM), or cerebrospinal fluid (CSF).
34. A non-transitory computer readable medium having stored thereon executable instructions that when executed by a processor of a computer control the computer to perform steps comprising:
extracting appearance features from at least one magnetic resonance (MR) image;
extracting appearance features from at least one low-dose positron emission tomography (PET) image; and
generating an estimated high-dose PET image, voxel by voxel, using the appearance features of the at least one MR image and the at least one low-dose PET image.
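Claims 1, 13, 14, and 15 together describe one pipeline: local intensity patches are extracted at corresponding locations of co-registered MR and low-dose PET volumes, the MR image is segmented by tissue type, and a tissue-specific regression forest predicts the high-dose PET value at each voxel. A minimal sketch of that idea, assuming scikit-learn's `RandomForestRegressor` and pre-aligned, pre-segmented NumPy volumes (all function names here are illustrative, not taken from the patent):

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

PATCH = 1  # half-width of the cubic intensity patch (3x3x3 neighborhood)

def extract_patch(vol, x, y, z, r=PATCH):
    """Flatten the local intensity patch around voxel (x, y, z) into a feature vector."""
    return vol[x - r:x + r + 1, y - r:y + r + 1, z - r:z + r + 1].ravel()

def _interior(shape, x, y, z, r=PATCH):
    """True if the full patch around (x, y, z) lies inside the volume."""
    return all(r <= c < s - r for c, s in zip((x, y, z), shape))

def train_tissue_forests(mr, low_pet, high_pet, tissue_map, n_trees=20):
    """Train one regression forest per tissue label (e.g., GM/WM/CSF).

    mr, low_pet, high_pet are co-registered 3-D arrays; tissue_map holds
    integer tissue labels from an MR segmentation."""
    forests = {}
    for label in np.unique(tissue_map):
        feats, targets = [], []
        for x, y, z in zip(*np.nonzero(tissue_map == label)):
            if _interior(mr.shape, x, y, z):
                # Appearance features: MR patch concatenated with low-dose PET patch
                feats.append(np.concatenate([extract_patch(mr, x, y, z),
                                             extract_patch(low_pet, x, y, z)]))
                targets.append(high_pet[x, y, z])
        rf = RandomForestRegressor(n_estimators=n_trees, random_state=0)
        rf.fit(np.array(feats), np.array(targets))
        forests[label] = rf
    return forests

def predict_high_dose(mr, low_pet, tissue_map, forests):
    """Predict a high-dose PET volume voxel by voxel with the tissue-specific forests."""
    out = np.zeros_like(low_pet, dtype=float)
    for x, y, z in zip(*np.nonzero(tissue_map > 0)):
        if _interior(mr.shape, x, y, z):
            feat = np.concatenate([extract_patch(mr, x, y, z),
                                   extract_patch(low_pet, x, y, z)])
            out[x, y, z] = forests[tissue_map[x, y, z]].predict(feat[None, :])[0]
    return out
```

In practice the training stage would use paired low-dose and standard-dose PET scans as inputs and targets; the sketch simply assumes such a `high_pet` training volume is available.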
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US201462044154P | 2014-08-29 | 2014-08-29 | |
US62/044,154 | 2014-08-29 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2016033458A1 true WO2016033458A1 (en) | 2016-03-03 |
Family
ID=55400657
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/US2015/047425 WO2016033458A1 (en) | 2014-08-29 | 2015-08-28 | Restoring image quality of reduced radiotracer dose positron emission tomography (pet) images using combined pet and magnetic resonance (mr) |
Country Status (1)
Country | Link |
---|---|
WO (1) | WO2016033458A1 (en) |
Non-Patent Citations (4)
Title |
---|
BERND J. PICHLER ET AL.: "PET/MRI: paving the way for the next generation of clinical multimodality imaging applications", J NUCL MED, vol. 51, 11 February 2010 (2010-02-11), pages 333 - 336 * |
CHRISTOPHER COELLO ET AL.: "Correction of partial volume effect in 18F-FDG PET brain studies using coregistered MR volumes: voxel based analysis of tracer uptake in the white matter", NEUROIMAGE, vol. 72, 28 January 2013 (2013-01-28), pages 183 - 192 * |
FLEMMING LITTRUP ANDERSEN ET AL.: "Combined PET/MR imaging in neurology: MR-based attenuation correction implies a strong spatial bias when ignoring bone", NEUROIMAGE, vol. 84, 29 August 2013 (2013-08-29), pages 206 - 216 * |
JIAYIN KANG ET AL.: "Prediction of standard-dose brain PET image by using MRI and low-dose brain [18F]FDG PET images", MED. PHYS., vol. 42, no. 9, 18 August 2015 (2015-08-18), pages 5301 - 5309 * |
Cited By (23)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110753935A (en) * | 2017-04-25 | 2020-02-04 | 小利兰·斯坦福大学托管委员会 | Dose reduction using deep convolutional neural networks for medical imaging |
WO2019081256A1 (en) * | 2017-10-23 | 2019-05-02 | Koninklijke Philips N.V. | Positron emission tomography (pet) system design optimization using deep imaging |
US11748598B2 (en) | 2017-10-23 | 2023-09-05 | Koninklijke Philips N.V. | Positron emission tomography (PET) system design optimization using deep imaging |
WO2019204146A1 (en) * | 2018-04-18 | 2019-10-24 | Sony Interactive Entertainment Inc. | Context embedding for capturing image dynamics |
US11967127B2 (en) | 2018-04-18 | 2024-04-23 | Sony Interactive Entertainment Inc. | Context embedding for capturing image dynamics |
CN112384279A (en) * | 2018-06-18 | 2021-02-19 | 皇家飞利浦有限公司 | Treatment planning apparatus |
CN112384279B (en) * | 2018-06-18 | 2023-08-22 | 皇家飞利浦有限公司 | Treatment Planning Equipment |
CN109215093B (en) * | 2018-07-27 | 2022-12-23 | 深圳先进技术研究院 | Low-dose PET image reconstruction method, device, equipment and storage medium |
CN109215093A (en) * | 2018-07-27 | 2019-01-15 | 深圳先进技术研究院 | Low dosage PET image reconstruction method, device, equipment and storage medium |
CN109949318A (en) * | 2019-03-07 | 2019-06-28 | 西安电子科技大学 | A fully convolutional neural network segmentation method for epilepsy lesions based on multimodal images |
CN109949318B (en) * | 2019-03-07 | 2023-11-14 | 西安电子科技大学 | Full convolution neural network epileptic focus segmentation method based on multi-modal image |
US11624795B2 (en) * | 2019-09-25 | 2023-04-11 | Subtle Medical, Inc. | Systems and methods for improving low dose volumetric contrast-enhanced MRI |
JP2022550688A (en) * | 2019-09-25 | 2022-12-05 | サトゥル メディカル,インコーポレイテッド | Systems and methods for improving low-dose volume-enhanced MRI |
US20220334208A1 (en) * | 2019-09-25 | 2022-10-20 | Subtle Medical, Inc. | Systems and methods for improving low dose volumetric contrast-enhanced mri |
US20230296709A1 (en) * | 2019-09-25 | 2023-09-21 | Subtle Medical, Inc. | Systems and methods for improving low dose volumetric contrast-enhanced mri |
WO2021061710A1 (en) * | 2019-09-25 | 2021-04-01 | Subtle Medical, Inc. | Systems and methods for improving low dose volumetric contrast-enhanced mri |
US11179128B2 (en) * | 2019-12-31 | 2021-11-23 | GE Precision Healthcare LLC | Methods and systems for motion detection in positron emission tomography |
US20210196219A1 (en) * | 2019-12-31 | 2021-07-01 | GE Precision Healthcare LLC | Methods and systems for motion detection in positron emission tomography |
US11918390B2 (en) | 2019-12-31 | 2024-03-05 | GE Precision Healthcare LLC | Methods and systems for motion detection in positron emission tomography |
WO2021182103A1 (en) * | 2020-03-11 | 2021-09-16 | 国立大学法人筑波大学 | Trained model generation program, image generation program, trained model generation device, image generation device, trained model generation method, and image generation method |
JP7527675B2 (en) | 2020-03-11 | 2024-08-05 | 国立大学法人 筑波大学 | Trained model generation program, image generation program, trained model generation device, image generation device, trained model generation method, and image generation method |
WO2022120588A1 (en) * | 2020-12-08 | 2022-06-16 | 深圳先进技术研究院 | Low-dose pet image restoration method and system, device, and medium |
WO2023272491A1 (en) * | 2021-06-29 | 2023-01-05 | 深圳高性能医疗器械国家研究院有限公司 | Pet image reconstruction method based on joint dictionary learning and deep network |
Legal Events
Date | Code | Title | Description
---|---|---|---
| 121 | Ep: the epo has been informed by wipo that ep was designated in this application | Ref document number: 15835060; Country of ref document: EP; Kind code of ref document: A1 |
| NENP | Non-entry into the national phase | Ref country code: DE |
| 122 | Ep: pct application non-entry in european phase | Ref document number: 15835060; Country of ref document: EP; Kind code of ref document: A1 |