Article

CT-Free Attenuation Correction in Paediatric Long Axial Field-of-View Positron Emission Tomography Using Synthetic CT from Emission Data

by Maria Elkjær Montgomery 1, Flemming Littrup Andersen 1,2, René Mathiasen 2,3, Lise Borgwardt 1, Kim Francis Andersen 1 and Claes Nøhr Ladefoged 1,4,*

1 Department of Clinical Physiology and Nuclear Medicine, Copenhagen University Hospital, Rigshospitalet, 2100 Copenhagen, Denmark
2 Department of Clinical Medicine, University of Copenhagen, 2200 Copenhagen, Denmark
3 Department of Paediatrics and Adolescent Medicine, Copenhagen University Hospital, Rigshospitalet, 2100 Copenhagen, Denmark
4 Department of Applied Mathematics and Computer Science, Technical University of Denmark, 2800 Kgs. Lyngby, Denmark
* Author to whom correspondence should be addressed.
Diagnostics 2024, 14(24), 2788; https://doi.org/10.3390/diagnostics14242788
Submission received: 30 October 2024 / Revised: 3 December 2024 / Accepted: 10 December 2024 / Published: 12 December 2024

Abstract

Background/Objectives: Paediatric PET/CT imaging is crucial in oncology but poses significant radiation risks due to children’s higher radiosensitivity and longer post-exposure life expectancy. This study aims to minimise radiation exposure by generating synthetic CT (sCT) images from emission PET data, eliminating the need for attenuation correction (AC) CT scans in paediatric patients. Methods: We utilised a cohort of 128 paediatric patients, resulting in 195 paired PET and CT images. Data were acquired using Siemens Biograph Vision 600 and Long Axial Field-of-View (LAFOV) Siemens Vision Quadra PET/CT scanners. A 3D parameter-transferred conditional GAN (PT-cGAN) architecture, pre-trained on adult data, was adapted and trained on the paediatric cohort. The model’s performance was evaluated qualitatively by a nuclear medicine specialist and quantitatively by comparing sCT-derived PET (sPET) with standard PET images. Results: The model demonstrated high qualitative and quantitative performance. Visual inspection showed either no differences (19/23) or only minor, clinically insignificant differences (4/23) in image quality between PET and sPET. Quantitative analysis revealed a mean SUV relative difference of −2.6 ± 5.8% across organs, with a high agreement in lesion overlap (Dice coefficient of 0.92 ± 0.08). The model also performed robustly in low-count settings, maintaining performance with reduced acquisition times. Conclusions: The proposed method effectively reduces radiation exposure in paediatric PET/CT imaging by eliminating the need for AC CT scans. It maintains high diagnostic accuracy and minimises motion-induced artefacts, making it a valuable alternative for clinical application. Further testing in clinical settings is warranted to confirm these findings and enhance patient safety.

1. Introduction

PET/CT imaging plays a key role in many fields of medicine, notably in oncology. While its application is crucial for both adults and children, PET/CT in paediatric imaging comes with added challenges, as children are at a higher risk of the adverse effects of radiation than adults [1]. This is both because children’s tissues and organs are not fully mature and therefore more radiosensitive, and because children have a longer post-exposure life expectancy during which the negative effects of radiation can manifest [2,3,4]. When optimising PET/CT imaging for paediatric patients, it is therefore particularly relevant to focus on minimising radiation. Possible strategies include eliminating unnecessary diagnostic examinations, decreasing the tracer dose needed for PET imaging [5,6], and reducing the CT dose [7].
In PET, artificial intelligence (AI) has been proposed to remove the noise associated with reducing the radiation dose [8]. For paediatric imaging, the focus has primarily been on PET/MRI as the scanner allows for CT-free examination of the patients, thus removing the dose exposure from CT altogether. Theruvath et al. demonstrated retained clinical quality with a 50% dose reduction in whole-body PET examinations of children and young adults with lymphoma using a commercially available method [9]. Wang et al. proposed a convolutional neural network (CNN) trained with an attention mask that allowed for a reduction of dose to one-sixteenth of the original dose when given both the ultra-low-dose PET and MRI as input. The authors showed retained clinical accuracy and lesion quantifiability in a cohort of 23 paediatric patients with lymphoma [10].
With the advent of Long Axial Field-of-View (LAFOV) PET/CT scanners, such as the Biograph Vision Quadra (Siemens Healthineers) [11] and the uEXPLORER (United Imaging) [12], it is now possible to reduce the administered radiotracer activity by a factor of ten or more without loss of diagnostic quality, primarily owing to the larger detector coverage compared with traditional PET/CT systems [13]. These findings have been reproduced in paediatric cohorts [14,15,16]. Therefore, especially in combination with low-dose AI-based methods, the main contributor to the radiation dose of a PET/CT examination is now the attenuation correction (AC) CT required for accurate quantification of the PET images. To further reduce the amount of ionising radiation that the patient is exposed to, it thus makes sense to focus on minimising the radiation associated with the CT scan.
One approach is to synthesise high-quality CT images from lower-dose CT images [17]. Several studies have achieved this using deep learning architectures such as CNN [18,19,20] and generative adversarial networks (GAN) [21,22]. Initially, these studies were mostly focused on adults, but more recently, studies focusing on paediatric patients have also been proposed [23,24,25,26,27,28,29].
An alternative strategy is to eliminate the CT scan altogether. One approach is to use a separate MRI for attenuation correction [30]. However, MRI-based attenuation correction with traditional segmentation- or atlas-based methods is challenging in paediatric patients [31,32]. While AI-based MRI-to-CT methods have been proposed [33,34,35], only a few of these have been applied to paediatric patients [36], in part due to the scarcity of paired training data. Furthermore, MRI is not always available for LAFOV PET imaging, and when it is, registration is challenged by patient movement during scanning [37].
Finally, another approach is a PET-derived solution for attenuation correction. Here, the non-attenuation-corrected (NAC) PET can be used either to synthesise the attenuation- and scatter-corrected PET image directly or to synthesise the AC CT image. While the direct NAC PET-to-PET methods have demonstrated promising results [38,39,40,41], they have the disadvantage of being hard to debug, as it is difficult to determine whether errors such as hallucinations have occurred during synthesis. In contrast, the NAC PET-to-CT methods may fit the clinical workflow better, as artefacts in the synthetic CT are easily identifiable, making it easier to visually evaluate the output of these models. Several studies have achieved this with positive results [42,43,44,45,46,47,48]. These studies all include datasets from adult patients, but as radiation minimisation is especially relevant for paediatric patients, a model designed specifically for children would be of value.
Inspired by the studies above, the purpose of this project is to generate synthetic CT images from NAC PET images of paediatric patients scanned on the LAFOV Biograph Vision Quadra PET/CT, thereby eliminating the need for an AC CT acquisition. The developed model builds on our previously proposed model for the adult cohort [47]. We furthermore explore the need for separate models for lower count rates to support ongoing efforts to reduce scan time or the PET radiotracer dose.

2. Materials and Methods

2.1. Patient Cohort

The cohort consisted of 195 paired NAC PET and CT images from 128 consecutively included paediatric patients injected with [18F]FDG acquired from August 2021 to June 2023 from Rigshospitalet (Copenhagen, Denmark). From this cohort, 81 of the images were obtained on a Siemens Biograph Vision 600 PET/CT, and the remaining 114 were obtained from a LAFOV Siemens Vision Quadra PET/CT scanner, Siemens Healthineers, Knoxville, TN, USA. All patients examined with the Vision scanner were used for training, while the patients examined on the Quadra scanner were split into training and test sets. We used stratified sampling, where we first divided the patients based on age and weight (<6 years old or <40 kg) and sampled 25% in each group for the test set. The split was performed on the subject level, ensuring repeat scans of a patient were in the same set. The patient information is shown in Table 1.
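As an illustration of the subject-level stratified split described above, the following Python sketch assigns roughly 25% of the patients in each stratum (<6 years old or <40 kg versus the rest) to the test set while keeping repeat scans of a patient together. The table layout, column names, and the use of pandas/NumPy are illustrative assumptions, not the code used in the study.

```python
import numpy as np
import pandas as pd

def subject_level_stratified_split(exams: pd.DataFrame, test_frac: float = 0.25, seed: int = 0):
    """Split examinations into train/test at the patient level.

    Patients are stratified into 'young/small' (<6 years or <40 kg, based on
    their first examination) and 'other'; test_frac of the patients in each
    stratum go to the test set, so repeat scans never cross the split.
    """
    rng = np.random.default_rng(seed)
    per_patient = exams.groupby("patient_id").first()
    young_small = (per_patient["age_years"] < 6) | (per_patient["weight_kg"] < 40)
    test_patients = []
    for in_stratum in (True, False):
        ids = per_patient.index[young_small == in_stratum].to_numpy()
        n_test = int(round(test_frac * len(ids)))
        test_patients.extend(rng.choice(ids, size=n_test, replace=False))
    is_test = exams["patient_id"].isin(test_patients)
    return exams[~is_test], exams[is_test]

# Tiny synthetic example (not the study data).
exams = pd.DataFrame({
    "patient_id": ["p1", "p1", "p2", "p3", "p4", "p5", "p6", "p7"],
    "age_years":  [1.0, 1.2, 5.0, 8.0, 12.0, 3.0, 15.0, 17.0],
    "weight_kg":  [9.0, 9.5, 18.0, 30.0, 45.0, 14.0, 55.0, 62.0],
})
train_df, test_df = subject_level_stratified_split(exams)
print(len(train_df), len(test_df))
```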

2.2. Data Acquisition and Pre-Processing

The dataset included in this study consists of patients scanned on either a Siemens Biograph Vision 600 or a LAFOV Siemens Vision Quadra PET/CT scanner. An ultra-low-dose CT (ULDCT) was acquired (reference mAs 7.0); all scans were performed without IV contrast. [18F]FDG images were acquired according to the European guidelines [49], with [18F]FDG administered at 1.5 MBq/kg body weight (Quadra data) or 3 MBq/kg body weight (Vision data) 60 min prior to scanning; the acquisition time was 5 min (n = 19 Quadra) or 10 min (all Vision and n = 95 Quadra). The dataset includes patients in both arms-up and arms-down positioning.
All CT images have a matrix size of 512 × 512 and a voxel size of 1.52 × 1.52 × 2 mm³. PET data were reconstructed using e7tools (Siemens Healthineers, Knoxville, TN, USA) with parameters identical to the clinical setting. For the purpose of this study, we reconstructed the NAC PET images without attenuation correction using 3D ordinary Poisson OSEM (3D-OP-OSEM) with four iterations, five subsets, and point spread function (PSF) modelling. Post-filtering was set at 2 mm (Quadra data) or 4 mm (Vision data). All PET images had a voxel size of 1.65 × 1.65 × 2 mm³ (440 × 440 matrices).
Additionally, we simulated the effect of reduced scanning time or injected dose by reconstructing the images from the LAFOV scanner in the time frame of 60 s and 90 s.
Image pre-processing was performed identically to that of the adult cohort used to train the original model [47]. In short, ULDCT images were first resampled to 2 mm isotropic voxels, cropped to 512 × 512 to exclude background, and then normalised using a linear scaling of the HU values. The NAC PET images were resampled to the grid of the pre-processed ULDCT images and normalised using the 0.5th and 99.5th percentiles.
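To make the pre-processing steps concrete, the sketch below shows resampling and the two normalisation steps with NumPy and SciPy. The HU scaling bounds, interpolation order, and clipping of the PET values are assumptions for illustration; the actual implementation follows [47].

```python
import numpy as np
from scipy.ndimage import zoom

def resample_isotropic(volume: np.ndarray, spacing_mm, new_spacing_mm: float = 2.0, order: int = 1):
    """Resample a 3D volume to isotropic voxels via spline interpolation."""
    factors = [s / new_spacing_mm for s in spacing_mm]
    return zoom(volume, zoom=factors, order=order)

def normalise_ct(hu: np.ndarray, hu_min: float = -1024.0, hu_max: float = 1500.0):
    """Linearly scale HU values to [0, 1]; the bounds are illustrative assumptions."""
    hu = np.clip(hu, hu_min, hu_max)
    return (hu - hu_min) / (hu_max - hu_min)

def normalise_nac_pet(pet: np.ndarray, low_pct: float = 0.5, high_pct: float = 99.5):
    """Scale NAC PET intensities using the 0.5th and 99.5th percentiles."""
    lo, hi = np.percentile(pet, [low_pct, high_pct])
    return np.clip((pet - lo) / (hi - lo + 1e-8), 0.0, 1.0)

# Example on synthetic volumes (not patient data).
ct = np.random.uniform(-1000.0, 1000.0, size=(64, 64, 32)).astype(np.float32)
ct_iso = resample_isotropic(ct, spacing_mm=(1.52, 1.52, 2.0))
pet = np.random.exponential(scale=1.0, size=ct_iso.shape).astype(np.float32)
print(normalise_ct(ct_iso).shape, normalise_nac_pet(pet).max())
```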

2.3. Model Architecture and Training

In this paper, we utilised a 3D parameter transferred conditional GAN (cGAN) architecture, which is described in detail in our previous work [47]. In short, in the original paper, we proposed to pre-train the generator using a large cohort of 858 patients, followed by training the entire GAN model with a selected subset of the patients for each of the included tracers. In this study, we used an identical architecture and setup with pre-training using the adult cohort but trained the GAN using our paediatric cohort. See the flowchart in Appendix A Figure A1.
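The parameter-transfer step itself amounts to initialising the paediatric cGAN’s generator with the weights obtained from the adult pre-training before adversarial training continues. A minimal, self-contained PyTorch sketch is shown below; the stand-in network and file name are hypothetical, standing in for the actual 3D U-Net and its checkpoint.

```python
import torch
import torch.nn as nn

def build_generator() -> nn.Module:
    """Stand-in generator; the study uses a 3D U-Net (see Section 2.3)."""
    return nn.Sequential(nn.Conv3d(1, 8, 3, padding=1), nn.ReLU(),
                         nn.Conv3d(8, 1, 3, padding=1))

# Step 1: pre-training on the adult data would produce a weights file (simulated here).
pretrained = build_generator()
torch.save(pretrained.state_dict(), "adult_pretrained_generator.pt")

# Step 2: the paediatric cGAN's generator is initialised with those weights,
# after which adversarial training continues on the paediatric examinations.
generator = build_generator()
generator.load_state_dict(torch.load("adult_pretrained_generator.pt", map_location="cpu"))
```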
The generator consisted of a 3D U-Net with filters [64, 128, 256, 512] that takes NAC PET patches of size 128 × 128 × 32 as input and outputs sCT patches of the same size. The discriminator consisted of a binary classifier with five convolutional layers. It takes paired 3D patches from the NAC PET image and either the real or synthetic CT image as input and outputs a value indicating whether the CT image is real or synthetic.
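As a concrete, hedged example of such a discriminator, the PyTorch sketch below implements a binary classifier with five 3D convolutional layers over a paired two-channel (NAC PET, CT) patch. The channel widths, kernel sizes, and normalisation layers are illustrative assumptions and not the exact configuration from [47].

```python
import torch
import torch.nn as nn

class PatchDiscriminator3D(nn.Module):
    """Binary classifier over paired (NAC PET, CT) 3D patches.

    Five Conv3d layers; the final layer maps to a single logit per patch.
    """
    def __init__(self, in_channels: int = 2, base_filters: int = 32):
        super().__init__()
        f = base_filters
        layers = []
        channels = [in_channels, f, 2 * f, 4 * f, 8 * f]
        for c_in, c_out in zip(channels[:-1], channels[1:]):
            layers += [
                nn.Conv3d(c_in, c_out, kernel_size=4, stride=2, padding=1),
                nn.InstanceNorm3d(c_out),
                nn.LeakyReLU(0.2, inplace=True),
            ]
        layers.append(nn.Conv3d(8 * f, 1, kernel_size=3, padding=1))  # fifth conv: logit map
        self.net = nn.Sequential(*layers)

    def forward(self, nac_pet: torch.Tensor, ct: torch.Tensor) -> torch.Tensor:
        x = torch.cat([nac_pet, ct], dim=1)          # pair the two patches channel-wise
        return self.net(x).mean(dim=[1, 2, 3, 4])    # one real-vs-synthetic score per patch

# Example on a random 128 x 128 x 32 patch (batch of 1).
disc = PatchDiscriminator3D()
pet_patch = torch.randn(1, 1, 128, 128, 32)
ct_patch = torch.randn(1, 1, 128, 128, 32)
print(disc(pet_patch, ct_patch).shape)  # torch.Size([1])
```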
During each epoch of training, 12 patches were randomly sampled from each subject and subjected to random augmentation (5° rotation, 5 mm translation, and 0.9–1.2 scaling) using TorchIO [50]. The generator was trained using a combination loss that evaluated its performance against the discriminator, the quality of the output images, and a Dice loss on the bones. The discriminator was trained with a binary cross-entropy loss. To balance the performance of the pre-trained generator and the discriminator, the discriminator was first trained separately for 50 epochs. Afterwards, the generator and discriminator were trained in an adversarial manner for 1450 epochs. The final model was chosen based on visual inspection and the relative mean difference between the generated sCTs and real CTs within organ masks derived from the real CTs of the validation patients.
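The augmentation and the combination loss can be sketched as follows with TorchIO and PyTorch. The loss weights, the L1 image term, and the soft-Dice formulation are assumptions chosen for illustration; only the overall structure (adversarial term + image term + bone Dice term) mirrors the description above.

```python
import torch
import torch.nn.functional as F
import torchio as tio

# Random affine roughly matching the stated ranges:
# 5 degrees rotation, 5 mm translation, and 0.9-1.2 scaling.
augment = tio.RandomAffine(scales=(0.9, 1.2), degrees=5, translation=5)

def soft_dice_loss(pred_bone: torch.Tensor, true_bone: torch.Tensor, eps: float = 1e-6):
    """Soft Dice loss on (probabilistic) bone masks."""
    inter = (pred_bone * true_bone).sum()
    return 1.0 - (2.0 * inter + eps) / (pred_bone.sum() + true_bone.sum() + eps)

def generator_loss(d_fake_logits, sct, ct, pred_bone, true_bone,
                   w_adv=1.0, w_img=100.0, w_dice=1.0):
    """Adversarial term + image (L1) term + Dice term on bones; the weights are hypothetical."""
    adv = F.binary_cross_entropy_with_logits(d_fake_logits, torch.ones_like(d_fake_logits))
    img = F.l1_loss(sct, ct)
    return w_adv * adv + w_img * img + w_dice * soft_dice_loss(pred_bone, true_bone)

# Applying the augmentation to a paired (NAC PET, CT) patch kept in a TorchIO Subject,
# so both images receive the same random transform.
subject = tio.Subject(pet=tio.ScalarImage(tensor=torch.rand(1, 128, 128, 32)),
                      ct=tio.ScalarImage(tensor=torch.rand(1, 128, 128, 32)))
augmented = augment(subject)
print(augmented["pet"].shape)
```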

2.4. PET Reconstruction

For PET reconstruction, two reconstructions were performed for each patient in the test set. The sCT was used for attenuation correction to generate the sPET, and the ULDCT was used to generate a standard PET image for reference. Reconstruction parameters remained identical to the NAC PET reconstruction, except for the application of attenuation correction.
The sCTs were generated following the same procedure as described in the original paper. Overlapping NAC PET patches were sampled and input into the trained generator. The resulting sCT patches were then combined to form the complete sCT image. Finally, the bed from the original CT image was superimposed onto the sCT image.
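Overlapping-patch inference of this kind is commonly implemented with TorchIO’s grid sampler and aggregator; the sketch below shows the general pattern with an identity network standing in for the trained generator. The patch size matches the description above, while the overlap and averaging mode are assumptions.

```python
import torch
import torchio as tio

def predict_sct(nac_pet: tio.ScalarImage, generator: torch.nn.Module,
                patch_size=(128, 128, 32), patch_overlap=(32, 32, 8)) -> torch.Tensor:
    """Run the generator over overlapping NAC PET patches and stitch the sCT volume."""
    subject = tio.Subject(pet=nac_pet)
    sampler = tio.inference.GridSampler(subject, patch_size, patch_overlap)
    aggregator = tio.inference.GridAggregator(sampler, overlap_mode="average")
    loader = torch.utils.data.DataLoader(sampler, batch_size=2)
    generator.eval()
    with torch.no_grad():
        for batch in loader:
            patches = batch["pet"][tio.DATA]      # (B, 1, X, Y, Z) NAC PET patches
            locations = batch[tio.LOCATION]       # where each patch sits in the volume
            sct_patches = generator(patches)
            aggregator.add_batch(sct_patches, locations)
    return aggregator.get_output_tensor()         # (1, X, Y, Z) stitched sCT

# Example with a random volume and an identity network as a stand-in generator.
pet_image = tio.ScalarImage(tensor=torch.rand(1, 256, 256, 64))
sct = predict_sct(pet_image, torch.nn.Identity())
print(sct.shape)
```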

2.5. Data Analysis

2.5.1. Qualitative Evaluation

For qualitative evaluation, a paediatric nuclear medicine specialist with >10 years of experience visually inspected the PET and sPET images, blinded to the underlying AC method, with the images presented side by side in syngo.via (Siemens Healthineers). Artefacts were rated using a Likert scale (0 = none, 1 = minor, 2 = medium, 3 = major). Additionally, any observed differences in image quality were noted, with preference given to the superior image, based on a scale of 0–2 (0 = same quality, 1 = insignificant difference, 2 = significant difference).

2.5.2. Quantitative Evaluation

Quantitative evaluation was performed by computing the relative mean difference between the sPET and the reference PET across various organs, utilising organ masks generated by TotalSegmentator [51] from the corresponding ULDCT images. The chosen organs were liver, lungs, kidney, heart, aorta, spleen, brain, bones, colon, oesophagus, and pancreas. Visual inspections were performed to ensure the quality of the segmentation. Furthermore, the nuclear medicine specialist delineated up to five FDG-avid lesions in each PET image individually. The delineation was done using the iso-contouring tool in Mirada (Mirada Medical Ltd., Oxford, UK). We measured the Dice coefficient for delineation overlap and relative percentage difference of SUV mean/max between PET and sPET.
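The quantitative metrics reduce to simple voxel-wise computations. The NumPy sketch below, using synthetic arrays rather than study data, shows how the Dice coefficient and the relative percentage differences of the mean and maximum SUV can be computed within a mask.

```python
import numpy as np

def dice_coefficient(mask_a: np.ndarray, mask_b: np.ndarray) -> float:
    """Dice overlap between two binary masks."""
    a, b = mask_a.astype(bool), mask_b.astype(bool)
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

def relative_percent_difference(spet: np.ndarray, pet: np.ndarray,
                                mask: np.ndarray, statistic=np.mean) -> float:
    """Relative % difference of an SUV statistic (mean or max) inside an organ/lesion mask."""
    ref = statistic(pet[mask])
    return 100.0 * (statistic(spet[mask]) - ref) / ref

# Example with synthetic data (not study data).
rng = np.random.default_rng(0)
pet = rng.gamma(shape=2.0, scale=1.0, size=(64, 64, 64))
spet = pet * rng.normal(1.0, 0.02, size=pet.shape)   # small multiplicative error
mask = np.zeros_like(pet, dtype=bool)
mask[20:30, 20:30, 20:30] = True
print(relative_percent_difference(spet, pet, mask),          # SUVmean difference (%)
      relative_percent_difference(spet, pet, mask, np.max))  # SUVmax difference (%)
```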

2.5.3. Evaluation of Reduced Counts

To evaluate the robustness of the model towards reduced acquisition times, we used the model trained on full-acquisition-time data to predict sCT images for the 60 s and 90 s NAC PET images and subsequently reconstructed the sPET, denoted sPET60 and sPET90, respectively. In addition, anticipating reduced performance on the 60 s data, we trained a second model. The training setup was the same as for the default model; however, this model was trained using only the training patients from the Quadra LAFOV scanner. The sCT predicted from the 60 s NAC PET using this low-count model was also used to reconstruct the low-count PET data for the test patients, denoted sPET60LC, where LC indicates that the model was fine-tuned on the low-count data. We compared the sPET images to the reference images PET60 and PET90 reconstructed using the ULDCT for AC. Evaluation was performed by calculating the relative difference within the organ and lesion masks.

3. Results

The visual inspection of the images showed no differences in image quality (score 0) between PET and sPET in 19 out of 23 cases. Four cases showed minor, clinically insignificant differences (score 1), primarily related to respiration artefacts around the liver or spleen, which were given an artefact score of 1 (minor artefact) by the nuclear medicine specialist. The PET image was deemed the better of the two in three of these cases. No cases showed a significant difference.
Figure 1 illustrates one of the cases in which minor, clinically insignificant differences between the PET and sPET images were found, with the PET image rated as superior. The observed artefact is a respiration artefact around the liver (banana artefact). Despite this artefact, the two PET images appear similar in the figure.
In line with the findings in the original paper, the synthetic CT images appear to be free of the streaking artefacts caused by metal implants, unlike the CT images. Figure 2 highlights this observation, depicting a patient with a spinal implant resulting in streaking artefacts that are evident in the CT but not in the sCT.
In Figure 3, the results of the method on a 1-year-old patient are depicted. The produced sPET closely resembles the PET image, although differences are observed in the patient’s left arm, as illustrated in the difference map; these differences are due to motion between the CT and PET acquisitions.
A total of 55 FDG-avid lesions were found across 21 of the 23 examinations. All lesions were found in both PET and sPET, with an average Dice coefficient of 0.92 ± 0.08. The average relative difference was 0.3 ± 4.2% for the mean SUV and 1.4 ± 4.9% for the maximum SUV. The relative difference was within ±10% in nearly all lesions; see Figure 4. Two lesions were in lung tissue (relative mean SUV −3.7% and 4.4%, respectively), and 16 lesions were in bone areas (relative mean SUV 1.6 ± 3.4%, range −6.2% to 5.1%).
The quantitative ROI evaluation also showed a mean SUV relative difference of −2.6 ± 5.8% across all organs (Figure 5). The lung and colon were the organs with the largest difference between PET and sPET, likely due to respiratory motion between the PET and CT acquisitions, whereas the sPET is based purely on the PET signal.
The low-count images sPET60 and sPET90 had an overall average relative PET difference within the selected organs that was comparable to the results obtained with the full-count sPET images, albeit with an increased standard deviation, which was largest for the sPET60 image. The fine-tuned model (sPET60LC) yielded quantitative results identical to those of the full-count model (Figure 6).

4. Discussion

In this study, we extended a previously introduced deep learning method for generating synthetic CT images from PET emission data for use in paediatric patients. Our primary goal was to minimise the radiation exposure to paediatric patients during PET/CT scans. Furthermore, by directly deriving the attenuation map from the PET data, we ensure excellent alignment between the PET and attenuation images, thus addressing concerns regarding motion-induced artefacts.
The quantitative analysis revealed a mean SUV relative difference of −2.6 ± 5.8% across all organs (Figure 5), with the largest discrepancies occurring in the lung and colon, likely due to respiratory motion. This overall error rate is comparable to the error reported in similar studies by Dong et al. and Armanious et al. of 0.1% and −0.8%, respectively [43,44]. Direct comparison between the results is challenged by a difference in the cohort, as both of these works include only adult patients. The observed discrepancies around the lung and diaphragm highlight the potential of this method to reduce motion-related artefacts, including gross motion, as illustrated in Figure 3, consistent with findings from previous studies. This capability is particularly important in paediatric imaging, as children are more susceptible to motion artefacts due to difficulty in remaining still and the more frequent need for sedation. Mitigating these artefacts effectively is, therefore, crucial for enhancing diagnostic accuracy in this patient population, making this method a valuable alternative to other radiation-lowering techniques such as ULDCTs.
Furthermore, 55 FDG-avid lesions were identified across 21 of the 23 examinations, detected in all cases in both PET and sPET images. We achieved a high agreement both regarding delineation (average Dice coefficient of 0.92 ± 0.08) and quantitatively (relative mean SUV 0.3 ± 4.2%, relative max SUV 1.4 ± 4.9%). These results align with those of Dong et al. [43], who reported a mean difference of 1.1% in lesions, and Armanious et al. [44], who noted a mean deviation of 0.9%. We did not observe any decrease in performance for the lesions that were located within bone or lung tissue, areas that are otherwise known to be especially challenging for NAC-PET-based methods [39], mainly attributed to an overall low PET uptake and high tissue heterogeneity.
Qualitative results revealed no differences in image quality between PET and sPET in 19 out of 23 cases. In the remaining four cases, minor insignificant differences with no clinical impact were observed, primarily attributed to minor respiration artefacts around the liver or spleen. No cases exhibited significant differences. Moreover, the method successfully eliminated streaking artefacts caused by metal implants in the observed test patients and yielded promising results for patients as young as 1-year-old. These results indicate that the proposed method is ready for testing in clinical routine.
Finally, since the LAFOV scanners allow for short acquisition times, we also evaluated the generalisability of our model in such a low-count setting. Here, we demonstrated retained quantitative performance down to a 60 s PET acquisition, albeit with the best performance achieved when fine-tuning the model on the low-count data. These results motivate the use of our model in such fast-acquisition setups.
This study has certain limitations. Firstly, due to the limited number of very young patients, this demographic has only been included to a limited degree in the training and evaluation of the model. As a result, the performance of the model on very young or small patients remains somewhat uncertain. Secondly, the model was trained using ULDCTs, meaning it was designed to output ultra-low-dose CT images, which may be of lower quality than traditional low-dose or standard-dose AC CT images.
Despite these limitations, the model demonstrated promising results, both in its potential to generate attenuation-corrected PET images without the need for an AC CT, thereby reducing overall radiation exposure, and in its ability to mitigate motion artefacts. In the future, we plan to expand the implementation of the model to include more patients, which will provide a better understanding of its performance on very young patients. Additionally, we aim to extend the approach to additional radiopharmaceuticals used to examine the paediatric cohort.

5. Conclusions

In this study, we extended our deep learning method to generate synthetic CT images from non-attenuation-corrected emission data for paediatric patients, successfully reducing radiation exposure by eliminating the need for a CT scan. The method showed excellent qualitative and quantitative performance and effectively mitigated motion-induced artefacts. The high agreement in lesion detection between PET and sPET images supports its clinical applicability. Despite limitations, such as the limited inclusion of very young patients, the model demonstrated robust performance even in low-count settings. Our findings suggest this approach is a valuable alternative to traditional methods, ready for further clinical testing to enhance diagnostic accuracy and patient safety in the younger population and reduce the occurrence of late sequelae in children.

Author Contributions

Conceptualization, F.L.A. and C.N.L.; methodology, M.E.M., F.L.A. and C.N.L.; validation, M.E.M., L.B., R.M., K.F.A. and C.N.L.; formal analysis, M.E.M., L.B., R.M., K.F.A. and C.N.L.; investigation, M.E.M., L.B., R.M., K.F.A. and C.N.L.; resources, C.N.L.; data curation, L.B., K.F.A. and C.N.L.; writing—original draft preparation, M.E.M. and C.N.L.; writing—review and editing, all authors; visualization, M.E.M. and C.N.L.; supervision, C.N.L.; project administration, C.N.L.; funding acquisition, C.N.L. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the Danish Childhood Cancer Foundation, 1606 Copenhagen, Denmark, grant number 2022-8148.

Institutional Review Board Statement

The study was conducted in accordance with the Departmental Review Board at the Department of Clinical Physiology and Nuclear Medicine, Rigshospitalet, Copenhagen, Denmark (protocol no. 476_21 on 1 September 2021).

Informed Consent Statement

Patient consent was waived due to a retrospective study design and in agreement with the Departmental Review Board. All subjects were anonymised.

Data Availability Statement

Data supporting the reported results can be obtained from the corresponding author upon reasonable request and legal approval. The data are not publicly available due to the absence of a public data-sharing agreement.

Acknowledgments

We would like to thank the staff operating the Siemens Vision Quadra scanner for their help with data acquisition.

Conflicts of Interest

The department has an ongoing research collaboration with Siemens Healthineers, including research and funding related to the topics of this article. Siemens Healthineers had no role in the design of the study, in the collection, analyses, or interpretation of data, in the writing of the manuscript, or in the decision to publish the results.

Appendix A

Figure A1. Flowchart for the proposed method. In the first step, we pretrain just the generator using paired NAC-PET and CT data from adult patients. This resulting generator is identical to the pre-trained generator from [47]. Next, we train a cGAN where the generator is initialised with weights from the pretraining step. The cGAN is optimized with data from n = 172 paediatric examinations. Finally, in the test phase, we use the trained generator to predict synthetic CT (sCT) images from NAC-PET patches, which are combined into full volumes.

References

  1. El-Nachef, L.; Al-Choboq, J.; Restier-Verlet, J.; Granzotto, A.; Berthel, E.; Sonzogni, L.; Ferlazzo, M.L.; Bouchet, A.; Leblond, P.; Combemale, P.; et al. Human Radiosensitivity and Radiosusceptibility: What Are the Differences? Int. J. Mol. Sci. 2021, 22, 7158. [Google Scholar] [CrossRef] [PubMed]
  2. Pierce, D.A.; Shimizu, Y.; Preston, D.L.; Vaeth, M.; Mabuchi, K. Studies of the Mortality of Atomic Bomb Survivors. Report 12, Part I. Cancer: 1950–1990. Radiat. Res. 1996, 146, 1–27. [Google Scholar] [CrossRef] [PubMed]
  3. Brody, A.S.; Frush, D.P.; Huda, W.; Brent, R.L. Radiation Risk to Children From Computed Tomography. Pediatrics 2007, 120, 677–682. [Google Scholar] [CrossRef]
  4. Bosch de Basea Gomez, M.; Thierry-Chef, I.; Harbron, R.; Hauptmann, M.; Byrnes, G.; Bernier, M.-O.; Le Cornet, L.; Dabin, J.; Ferro, G.; Istad, T.S.; et al. Risk of Hematological Malignancies from CT Radiation Exposure in Children, Adolescents and Young Adults. Nat. Med. 2023, 29, 3111–3119. [Google Scholar] [CrossRef]
  5. Kertész, H.; Beyer, T.; London, K.; Saleh, H.; Chung, D.; Rausch, I.; Cal-Gonzalez, J.; Kitsos, T.; Kench, P.L. Reducing Radiation Exposure to Paediatric Patients Undergoing [18F]FDG-PET/CT Imaging. Mol. Imaging Biol. 2021, 23, 775–786. [Google Scholar] [CrossRef]
  6. Schmall, J.P.; Surti, S.; Otero, H.J.; Servaes, S.; Karp, J.S.; States, L.J. Investigating Low-Dose Image Quality in Whole-Body Pediatric 18F-FDG Scans Using Time-of-Flight PET/MRI. J. Nucl. Med. 2021, 62, 123–130. [Google Scholar] [CrossRef]
  7. Parisi, M.T.; Bermo, M.S.; Alessio, A.M.; Sharp, S.E.; Gelfand, M.J.; Shulkin, B.L. Optimization of Pediatric PET/CT. Semin. Nucl. Med. 2017, 47, 258–274. [Google Scholar] [CrossRef] [PubMed]
  8. Matsubara, K.; Ibaraki, M.; Nemoto, M.; Watabe, H.; Kimura, Y. A Review on AI in PET Imaging. Ann. Nucl. Med. 2022, 36, 133–143. [Google Scholar] [CrossRef]
  9. Theruvath, A.J.; Siedek, F.; Yerneni, K.; Muehe, A.M.; Spunt, S.L.; Pribnow, A.; Moseley, M.; Lu, Y.; Zhao, Q.; Gulaka, P.; et al. Validation of Deep Learning–Based Augmentation for Reduced 18F-FDG Dose for PET/MRI in Children and Young Adults with Lymphoma. Radiol. Artif. Intell. 2021, 3, e200232. [Google Scholar] [CrossRef]
  10. Wang, Y.-R.; Baratto, L.; Hawk, K.E.; Theruvath, A.J.; Pribnow, A.; Thakor, A.S.; Gatidis, S.; Lu, R.; Gummidipundi, S.E.; Garcia-Diaz, J.; et al. Artificial Intelligence Enables Whole-Body Positron Emission Tomography Scans with Minimal Radiation Exposure. Eur. J. Nucl. Med. Mol. Imaging 2021, 48, 2771–2781. [Google Scholar] [CrossRef]
  11. Prenosil, G.A.; Sari, H.; Fürstner, M.; Afshar-Oromieh, A.; Shi, K.; Rominger, A.; Hentschel, M. Performance Characteristics of the Biograph Vision Quadra PET/CT System with a Long Axial Field of View Using the NEMA NU 2-2018 Standard. J. Nucl. Med. 2022, 63, 476–484. [Google Scholar] [CrossRef] [PubMed]
  12. Spencer, B.A.; Berg, E.; Schmall, J.P.; Omidvari, N.; Leung, E.K.; Abdelhafez, Y.G.; Tang, S.; Deng, Z.; Dong, Y.; Lv, Y.; et al. Performance Evaluation of the UEXPLORER Total-Body PET/CT Scanner Based on NEMA NU 2-2018 with Additional Tests to Characterize PET Scanners with a Long Axial Field of View. J. Nucl. Med. 2021, 62, 861–870. [Google Scholar] [CrossRef] [PubMed]
  13. Alberts, I.; Hünermund, J.N.; Prenosil, G.; Mingels, C.; Bohn, K.P.; Viscione, M.; Sari, H.; Vollnberg, B.; Shi, K.; Afshar-Oromieh, A.; et al. Clinical Performance of Long Axial Field of View PET/CT: A Head-to-Head Intra-Individual Comparison of the Biograph Vision Quadra with the Biograph Vision PET/CT. Eur. J. Nucl. Med. Mol. Imaging 2021, 48, 2395–2404. [Google Scholar] [CrossRef] [PubMed]
  14. Zhao, Y.-M.; Li, Y.-H.; Chen, T.; Zhang, W.-G.; Wang, L.-H.; Feng, J.; Li, C.; Zhang, X.; Fan, W.; Hu, Y.-Y. Image Quality and Lesion Detectability in Low-Dose Pediatric 18F-FDG Scans Using Total-Body PET/CT. Eur. J. Nucl. Med. Mol. Imaging 2021, 48, 3378–3385. [Google Scholar] [CrossRef] [PubMed]
  15. Zhang, Q.; Hu, Y.; Zhou, C.; Zhao, Y.; Zhang, N.; Zhou, Y.; Yang, Y.; Zheng, H.; Fan, W.; Liang, D.; et al. Reducing Pediatric Total-Body PET/CT Imaging Scan Time with Multimodal Artificial Intelligence Technology. EJNMMI Phys. 2024, 11, 1. [Google Scholar] [CrossRef]
  16. Chen, W.; Liu, L.; Li, Y.; Li, S.; Li, Z.; Zhang, W.; Zhang, X.; Wu, R.; Hu, D.; Sun, H.; et al. Evaluation of Pediatric Malignancies Using Total-Body PET/CT with Half-Dose [18F]-FDG. Eur. J. Nucl. Med. Mol. Imaging 2022, 49, 4145–4155. [Google Scholar] [CrossRef] [PubMed]
  17. Immonen, E.; Wong, J.; Nieminen, M.; Kekkonen, L.; Roine, S.; Törnroos, S.; Lanca, L.; Guan, F.; Metsälä, E. The Use of Deep Learning towards Dose Optimization in Low-Dose Computed Tomography: A Scoping Review. Radiography 2022, 28, 208–214. [Google Scholar] [CrossRef]
  18. Kang, E.; Chang, W.; Yoo, J.; Ye, J.C. Deep Convolutional Framelet Denosing for Low-Dose CT via Wavelet Residual Network. IEEE Trans. Med. Imaging 2018, 37, 1358–1369. [Google Scholar] [CrossRef] [PubMed]
  19. Chen, H.; Zhang, Y.; Kalra, M.K.; Lin, F.; Chen, Y.; Liao, P.; Zhou, J.; Wang, G. Low-Dose CT With a Residual Encoder-Decoder Convolutional Neural Network. IEEE Trans. Med. Imaging 2017, 36, 2524–2535. [Google Scholar] [CrossRef]
  20. Geng, M.; Deng, Y.; Zhao, Q.; Xie, Q.; Zeng, D.; Zeng, D.; Zuo, W.; Meng, D. Unsupervised/Semi-Supervised Deep Learning for Low-Dose CT Enhancement. arXiv 2018, arXiv:1808.02603. [Google Scholar]
  21. Park, H.S.; Baek, J.; You, S.K.; Choi, J.K.; Seo, J.K. Unpaired Image Denoising Using a Generative Adversarial Network in X-Ray CT. IEEE Access 2019, 7, 110414–110425. [Google Scholar] [CrossRef]
  22. Yi, X.; Babyn, P. Sharpness-Aware Low-Dose CT Denoising Using Conditional Generative Adversarial Network. J. Digit. Imaging 2018, 31, 655–669. [Google Scholar] [CrossRef] [PubMed]
  23. Ng, C.K.C. Artificial Intelligence for Radiation Dose Optimization in Pediatric Radiology: A Systematic Review. Children 2022, 9, 1044. [Google Scholar] [CrossRef] [PubMed]
  24. Brady, S.L.; Trout, A.T.; Somasundaram, E.; Anton, C.G.; Li, Y.; Dillman, J.R. Improving Image Quality and Reducing Radiation Dose for Pediatric CT by Using Deep Learning Reconstruction. Radiology 2021, 298, 180–188. [Google Scholar] [CrossRef]
  25. Lee, S.; Choi, Y.H.; Cho, Y.J.; Lee, S.B.; Cheon, J.-E.; Kim, W.S.; Ahn, C.K.; Kim, J.H. Noise Reduction Approach in Pediatric Abdominal CT Combining Deep Learning and Dual-Energy Technique. Eur. Radiol. 2021, 31, 2218–2226. [Google Scholar] [CrossRef]
  26. Park, H.S.; Jeon, K.; Lee, J.; You, S.K. Denoising of Pediatric Low Dose Abdominal CT Using Deep Learning Based Algorithm. PLoS ONE 2022, 17, e0260369. [Google Scholar] [CrossRef]
  27. Nagayama, Y.; Goto, M.; Sakabe, D.; Emoto, T.; Shigematsu, S.; Oda, S.; Tanoue, S.; Kidoh, M.; Nakaura, T.; Funama, Y.; et al. Radiation Dose Reduction for 80-KVp Pediatric CT Using Deep Learning–Based Reconstruction: A Clinical and Phantom Study. Am. J. Roentgenol. 2022, 219, 315–324. [Google Scholar] [CrossRef]
  28. Sun, J.; Li, H.; Li, J.; Cao, Y.; Zhou, Z.; Li, M.; Peng, Y. Performance Evaluation of Using Shorter Contrast Injection and 70 KVp with Deep Learning Image Reconstruction for Reduced Contrast Medium Dose and Radiation Dose in Coronary CT Angiography for Children: A Pilot Study. Quant. Imaging Med. Surg. 2021, 11, 4162–4171. [Google Scholar] [CrossRef] [PubMed]
  29. Sun, J.; Li, H.; Wang, B.; Li, J.; Li, M.; Zhou, Z.; Peng, Y. Application of a Deep Learning Image Reconstruction (DLIR) Algorithm in Head CT Imaging for Children to Improve Image Quality and Lesion Detection. BMC Med. Imaging 2021, 21, 108. [Google Scholar] [CrossRef]
  30. Ladefoged, C.N.; Law, I.; Anazodo, U.; Lawrence, K.S.; Izquierdo-Garcia, D.; Catana, C.; Burgos, N.; Cardoso, M.J.; Ourselin, S.; Hutton, B.; et al. A Multi-Centre Evaluation of Eleven Clinically Feasible Brain PET/MRI Attenuation Correction Techniques Using a Large Cohort of Patients. Neuroimage 2016, 147, 346–359. [Google Scholar] [CrossRef]
  31. Sepehrizadeh, T.; Jong, I.; DeVeer, M.; Malhotra, A. PET/MRI in Paediatric Disease. Eur. J. Radiol. 2021, 144, 109987. [Google Scholar] [CrossRef] [PubMed]
  32. Bezrukov, I.; Schmidt, H.; Gatidis, S.; Mantlik, F.; Schäfer, J.F.; Schwenzer, N.; Pichler, B.J. Quantitative Evaluation of Segmentation- and Atlas-Based Attenuation Correction for PET/MR on Pediatric Patients. J. Nucl. Med. 2015, 56, 1067–1074. [Google Scholar] [CrossRef] [PubMed]
  33. Olin, A.B.; Hansen, A.E.; Rasmussen, J.H.; Jakoby, B.; Berthelsen, A.K.; Ladefoged, C.N.; Kjær, A.; Fischer, B.M.; Andersen, F.L. Deep Learning for Dixon MRI-Based Attenuation Correction in PET/MRI of Head and Neck Cancer Patients. EJNMMI Phys. 2022, 9, 20. [Google Scholar] [CrossRef] [PubMed]
  34. Ladefoged, C.N.; Hansen, A.E.; Henriksen, O.M.; Bruun, F.J.; Eikenes, L.; Øen, S.K.; Karlberg, A.; Højgaard, L.; Law, I.; Andersen, F.L. AI-Driven Attenuation Correction for Brain PET/MRI: Clinical Evaluation of a Dementia Cohort and Importance of the Training Group Size. Neuroimage 2020, 222, 117221. [Google Scholar] [CrossRef] [PubMed]
  35. Boulanger, M.; Nunes, J.-C.; Chourak, H.; Largent, A.; Tahri, S.; Acosta, O.; De Crevoisier, R.; Lafond, C.; Barateau, A. Deep Learning Methods to Generate Synthetic CT from MRI in Radiotherapy: A Literature Review. Phys. Medica 2021, 89, 265–281. [Google Scholar] [CrossRef] [PubMed]
  36. Ladefoged, C.N.; Marner, L.; Hindsholm, A.; Law, I.; Højgaard, L.; Andersen, F.L. Deep Learning Based Attenuation Correction of PET/MRI in Pediatric Brain Tumor Patients: Evaluation in a Clinical Setting. Front. Neurosci. 2019, 12, 1005. [Google Scholar] [CrossRef]
  37. Ahangari, S.; Beck Olin, A.; Kinggård Federspiel, M.; Jakoby, B.; Andersen, T.L.; Hansen, A.E.; Fischer, B.M.; Littrup Andersen, F. A Deep Learning-Based Whole-Body Solution for PET/MRI Attenuation Correction. EJNMMI Phys. 2022, 9, 55. [Google Scholar] [CrossRef] [PubMed]
  38. Van Hemmen, H.; Massa, H.; Hurley, S.; Cho, S.; Bradshaw, T.; McMillan, A. A Deep Learning-Based Approach for Direct Whole-Body PET Attenuation Correction. J. Nucl. Med. 2019, 60, 569. [Google Scholar]
  39. Yang, J.; Sohn, J.H.; Behr, S.C.; Gullberg, G.T.; Seo, Y. CT-Less Direct Correction of Attenuation and Scatter in the Image Space Using Deep Learning for Whole-Body FDG PET: Potential Benefits and Pitfalls. Radiol. Artif. Intell. 2021, 3, e200137. [Google Scholar] [CrossRef]
  40. Dong, X.; Lei, Y.; Wang, T.; Higgins, K.; Liu, T.; Curran, W.J.; Mao, H.; Nye, J.A.; Yang, X. Deep Learning-Based Attenuation Correction in the Absence of Structural Information for Whole-Body Positron Emission Tomography Imaging. Phys. Med. Biol. 2020, 65, 055011. [Google Scholar] [CrossRef]
  41. Xue, S.; Bohn, K.P.; Guo, R.; Sari, H.; Viscione, M.; Rominger, A.; Li, B.; Shi, K. Development of a Deep Learning Method for CT-Free Correction for an Ultra-Long Axial Field of View PET Scanner. In Proceedings of the 2021 43rd Annual International Conference of the IEEE Engineering in Medicine & Biology Society (EMBC), Virtual, 1–5 November 2021; pp. 4120–4122. [Google Scholar]
  42. Liu, F.; Jang, H.; Kijowski, R.; Zhao, G.; Bradshaw, T.; McMillan, A.B. A Deep Learning Approach for 18F-FDG PET Attenuation Correction. EJNMMI Phys. 2018, 5, 24. [Google Scholar] [CrossRef]
  43. Dong, X.; Wang, T.; Lei, Y.; Higgins, K.; Liu, T.; Curran, W.J.; Mao, H.; Nye, J.A.; Yang, X. Synthetic CT Generation from Non-Attenuation Corrected PET Images for Whole-Body PET Imaging. Phys. Med. Biol. 2019, 64, 215016. [Google Scholar] [CrossRef]
  44. Armanious, K.; Hepp, T.; Küstner, T.; Dittmann, H.; Nikolaou, K.; La Fougère, C.; Yang, B.; Gatidis, S. Independent Attenuation Correction of Whole Body [18F]FDG-PET Using a Deep Learning Approach with Generative Adversarial Networks. EJNMMI Res. 2020, 10, 53. [Google Scholar] [CrossRef] [PubMed]
  45. Hu, Z.; Li, Y.; Zou, S.; Xue, H.; Sang, Z.; Liu, X.; Yang, Y.; Zhu, X.; Liang, D.; Zheng, H. Obtaining PET/CT Images from Non-Attenuation Corrected PET Images in a Single PET System Using Wasserstein Generative Adversarial Networks. Phys. Med. Biol. 2020, 65, 215010. [Google Scholar] [CrossRef] [PubMed]
  46. Hwang, D.; Kang, S.K.; Kim, K.Y.; Choi, H.; Lee, J.S. Comparison of Deep Learning-Based Emission-Only Attenuation Correction Methods for Positron Emission Tomography. Eur. J. Nucl. Med. Mol. Imaging 2022, 49, 1833–1842. [Google Scholar] [CrossRef]
  47. Montgomery, M.E.; Andersen, F.L.; d’Este, S.H.; Overbeck, N.; Cramon, P.K.; Law, I.; Fischer, B.M.; Ladefoged, C.N. Attenuation Correction of Long Axial Field-of-View Positron Emission Tomography Using Synthetic Computed Tomography Derived from the Emission Data: Application to Low-Count Studies and Multiple Tracers. Diagnostics 2023, 13, 3661. [Google Scholar] [CrossRef]
  48. Li, W.; Huang, Z.; Chen, Z.; Jiang, Y.; Zhou, C.; Zhang, X.; Fan, W.; Zhao, Y.; Zhang, L.; Wan, L.; et al. Learning CT-Free Attenuation-Corrected Total-Body PET Images through Deep Learning. Eur. Radiol. 2024, 34, 5578–5587. [Google Scholar] [CrossRef]
  49. Vali, R.; Alessio, A.; Balza, R.; Borgwardt, L.; Bar-Sever, Z.; Czachowski, M.; Jehanno, N.; Kurch, L.; Pandit-Taskar, N.; Parisi, M.; et al. SNMMI Procedure Standard/EANM Practice Guideline on Pediatric 18F-FDG PET/CT for Oncology 1.0. J. Nucl. Med. 2021, 62, 99–110. [Google Scholar] [CrossRef]
  50. Pérez-García, F.; Sparks, R.; Ourselin, S. TorchIO: A Python Library for Efficient Loading, Preprocessing, Augmentation and Patch-Based Sampling of Medical Images in Deep Learning. Comput. Methods Programs Biomed. 2021, 208, 106236. [Google Scholar] [CrossRef]
  51. Wasserthal, J.; Breit, H.-C.; Meyer, M.T.; Pradella, M.; Hinck, D.; Sauter, A.W.; Heye, T.; Boll, D.T.; Cyriac, J.; Yang, S.; et al. TotalSegmentator: Robust Segmentation of 104 Anatomic Structures in CT Images. Radiol. Artif. Intell. 2023, 5, e230024. [Google Scholar] [CrossRef]
Figure 1. Illustrative sample patient with banana artifact presented. Panels (a,b) show the normal CT and corresponding PET. The synthetic CT (sCT) and corresponding sPET are seen in (c,d). PET is fused on top of the CT scan in (e), illustrating the mismatch between CT and emission data. The blue line represents the superior part of the liver at the time of CT scanning. Panel (f) shows the sPET fused on top of the sCT.
Figure 2. Sample patient with metal implant exhibiting streaking artefacts in the CT image (a), which are absent in the sCT image (b). The corresponding PET images are seen for PET and sPET, respectively (c,d). The zoom panels have been magnified by a factor of 2.3.
Figure 3. Representation of a 1-year-old patient featuring the CT, sCT, and corresponding PET images PET and sPET (a–d). Additionally, a relative percent difference map between the PET and sPET images (e) highlights that the discrepancies in the PET images are localised in the patient’s cranium and left arm.
Figure 4. Relative max difference (a) and mean difference (b) between the PET and sPET for lesions found in the examinations. The colour and size of each point represent the lesion type and size, respectively.
Figure 5. Violin-plot showing the mean relative percent difference between PET and sPET for selected organs. The white dot in each presents the median value, and the solid black box represents the interquartile range, whereas the line extends to 1.5 times the interquartile range.
Figure 6. Violin-plot showing the mean relative difference between PET, PET60, sPET90, and sPET60LC for selected organs.
Table 1. The three cohorts used in this study from the Siemens Biograph Vision 600 and LAFOV Siemens Biograph Vision Quadra scanner, Siemens Healthineers, Knoxville, TN, USA.
| Cohort | Scanner | Inclusion Period | n Total Examinations [M/F] | n ≤ 6 Years | Weight (kg) | Age (Years) |
| --- | --- | --- | --- | --- | --- | --- |
| Train | Siemens Biograph Vision 600 PET/CT | August 2021 to March 2022 | 81 (45/36) | 21/81 | 8.5–78 | 0.7–19 |
| Train | LAFOV Siemens Biograph Vision Quadra PET/CT | November 2021 to June 2023 | 91 (48/43) | 23/91 | 4–92 | 0–18 |
| Test | LAFOV Siemens Biograph Vision Quadra PET/CT | May 2022 to June 2023 | 23 (9/14) | 3/23 | 13–94 | 1–18 |