
Self-supervised deep learning for joint 3D low-dose PET/CT image denoising

Published: 01 October 2023

Abstract

Deep learning (DL)-based denoising of low-dose positron emission tomography (LDPET) and low-dose computed tomography (LDCT) images has been widely explored. However, previous methods have focused only on single-modality denoising, neglecting the possibility of denoising LDPET and LDCT simultaneously with a single neural network, i.e., joint LDPET/LDCT denoising. Moreover, DL-based denoising methods generally require a large number of well-aligned low-dose/normal-dose (LD-ND) sample pairs, which can be difficult to obtain. To this end, we propose a two-stage training framework named MAsk-then-Cycle (MAC) to achieve self-supervised joint LDPET/LDCT denoising. The first stage of MAC is masked autoencoder (MAE)-based pre-training; the second is self-supervised denoising training. Specifically, we propose a self-supervised denoising strategy named cycle self-recombination (CSR), which enables denoising without well-aligned sample pairs. Unlike methods that treat noise as a homogeneous whole, CSR disentangles noise into signal-dependent and signal-independent components. This better reflects the actual imaging process and allows noise and signal to be flexibly recombined into new samples. These new samples carry implicit constraints that improve the network's denoising ability, and we design multiple loss functions based on these constraints to enable self-supervised training. We then design a CSR-based denoising network for joint 3D LDPET/LDCT denoising. Existing self-supervised methods generally lack pixel-level constraints on the network, which can easily introduce additional artifacts; we therefore perform MAE-based pre-training before denoising training to impose pixel-level constraints indirectly. Experiments on an LDPET/LDCT dataset demonstrate the superiority of MAC over existing methods. MAC is the first self-supervised joint LDPET/LDCT denoising method; it does not require any prior assumptions and is therefore more robust.
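To make the CSR idea above concrete, the following is a minimal, hypothetical PyTorch sketch of one self-supervised training step. It assumes the denoiser exposes three heads (clean signal, signal-dependent noise, signal-independent noise), recombines the components into a new pseudo-noisy sample, and ties the re-estimated signal back to the first estimate with a cycle constraint. The CSRDenoiser architecture, the specific recombination rule, and the loss weighting are illustrative assumptions, not the paper's actual design.

import torch
import torch.nn as nn
import torch.nn.functional as F

class CSRDenoiser(nn.Module):
    # Toy stand-in for a 3D denoising network: one shared body and three
    # heads predicting the clean signal, the signal-dependent noise, and
    # the signal-independent noise.
    def __init__(self, ch=16):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv3d(1, ch, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv3d(ch, ch, 3, padding=1), nn.ReLU(inplace=True))
        self.signal_head = nn.Conv3d(ch, 1, 3, padding=1)
        self.dep_head = nn.Conv3d(ch, 1, 3, padding=1)
        self.indep_head = nn.Conv3d(ch, 1, 3, padding=1)

    def forward(self, x):
        h = self.body(x)
        return self.signal_head(h), self.dep_head(h), self.indep_head(h)

def csr_step(model, noisy):
    # Decompose the low-dose input into signal + two noise components.
    sig, n_dep, n_indep = model(noisy)
    # Reconstruction constraint: the components must explain the input.
    recon = F.l1_loss(sig + n_dep + n_indep, noisy)
    # Recombination: shuffle the signal-independent noise across the batch
    # to build new pseudo-noisy samples from the same underlying signal.
    recombined = (sig + n_dep + n_indep[torch.randperm(noisy.shape[0])]).detach()
    # Cycle constraint: denoising the recombined sample should recover the
    # same signal estimate as before.
    sig2, _, _ = model(recombined)
    cycle = F.l1_loss(sig2, sig.detach())
    return recon + cycle

model = CSRDenoiser()
patches = torch.randn(2, 1, 16, 32, 32)   # dummy 3D low-dose PET or CT patches
loss = csr_step(model, patches)
loss.backward()

In the paper itself the recombination rules, network architecture, and full set of loss terms are more elaborate; the sketch only illustrates how recombined samples can provide self-supervised constraints without normal-dose targets.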


Highlights

A two-stage self-supervised framework named MAsk-then-Cycle (MAC) is proposed to enable self-supervised denoising of LDPET/LDCT images. MAC comprises a masked autoencoder-based pre-training stage and a self-supervised denoising training stage (a minimal sketch of the masking step follows this list).
A self-supervised denoising training strategy called cycle self-recombination (CSR) is proposed to adapt to the characteristics of tomographic imaging.
A CSR-based unified denoising network is proposed for joint denoising of 3D LDPET/LDCT images.
A signal-guided channel attention module is designed to better disentangle anatomy-dependent noise from the entangled noise.
Experimental results demonstrate the generalizability of MAC to the joint PET/CT denoising task as well as to single-modality denoising tasks.
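As a companion to the first highlight, below is a minimal, hypothetical sketch of MAE-style pre-training on 3D volumes: random volumetric patches are hidden and the network is trained to reconstruct them, which indirectly imposes the pixel-level constraint mentioned in the abstract. The patch size, the 75% mask ratio (the value used in the original MAE paper), and the helper name mae_pretrain_loss are assumptions for illustration, not the authors' settings.

import torch
import torch.nn.functional as F

def mae_pretrain_loss(model, volume, patch=8, mask_ratio=0.75):
    # Randomly hide whole 3D patches and score the reconstruction of the
    # hidden voxels only, as in masked-autoencoder pre-training.
    b, c, d, h, w = volume.shape
    gd, gh, gw = d // patch, h // patch, w // patch
    # One keep/drop decision per patch, upsampled to voxel resolution.
    keep = (torch.rand(b, 1, gd, gh, gw, device=volume.device) > mask_ratio).float()
    mask = F.interpolate(keep, scale_factor=patch, mode="nearest")
    pred = model(volume * mask)                 # reconstruct from visible voxels
    hidden = 1.0 - mask
    return ((pred - volume) ** 2 * hidden).sum() / hidden.sum().clamp(min=1.0)

# A trivial stand-in backbone keeps the example self-contained; in MAC the
# pre-trained backbone would later be reused for denoising training.
backbone = torch.nn.Conv3d(1, 1, 3, padding=1)
loss = mae_pretrain_loss(backbone, torch.randn(2, 1, 32, 64, 64))
loss.backward()

Reconstructing hidden voxels forces the network to model local anatomy at the pixel level, which is the role the pre-training stage plays before CSR-based denoising training begins.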



Published In

Computers in Biology and Medicine, Volume 165, Issue C, October 2023, 1378 pages

Publisher

Pergamon Press, Inc., United States

Author Tags

1. Low-dose PET/CT image
2. 3D PET/CT denoising
3. Self-supervised learning
4. Deep learning

Qualifiers

• Research-article
