Low-to-High Frequency Progressive K-Space Learning for MRI Reconstruction

  • Conference paper
Machine Learning in Medical Imaging (MLMI 2024)

Part of the book series: Lecture Notes in Computer Science (LNCS, volume 15241)


Abstract

Magnetic Resonance Imaging (MRI) is a crucial non-invasive diagnostic tool. Image quality, however, is often limited by k-space under-sampling and noise, which is exacerbated in low-field systems. K-space learning has the potential to support high-quality MRI reconstruction by exploiting correlations in the raw data domain and recovering noise-corrupted or under-sampled measurements. However, the magnitude of the low-frequency (LF) component of k-space is usually thousands of times larger than that of the high-frequency (HF) component, so network training can be dominated by LF learning while neglecting recovery of the HF component. To support effective recovery of all frequency components of the k-space data, we propose a Low-to-High Frequency Progressive (LHFP) learning framework, consisting of a cascade of k-space learning networks. In the first round, the model focuses on learning the LF component. From the second round onward, we propose a High-Frequency Enhancement (HFE) module to emphasize HF learning based on a predicted patient-specific low-high frequency boundary. To avoid degradation of LF learning during the subsequent HF-focused rounds, we propose a Low-Frequency Compensation (LFC) module that compensates the current LF prediction with the previous round's prediction using an estimated weight. For the reconstruction of fully-sampled and 4X under-sampled low-field brain MRI on the BraTS dataset, our method outperforms existing k-space learning methods and surpasses dual-domain learning methods when combined with a simple image-domain denoiser. The source code will be released upon acceptance.
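
To make the progressive scheme concrete, below is a minimal, illustrative PyTorch sketch of the ideas described in the abstract. It is not the authors' implementation: the functions radial_lowpass_mask and lhfp_round, the fixed HF gain, and the fixed LFC blending weight are assumptions made here for illustration, whereas the paper predicts the low-high frequency boundary per patient and estimates the compensation weight.

```python
import torch


def radial_lowpass_mask(h, w, boundary):
    """Binary mask selecting the central (low-frequency) disc of a centered k-space."""
    ys = torch.arange(h, dtype=torch.float32).view(-1, 1) - h // 2
    xs = torch.arange(w, dtype=torch.float32).view(1, -1) - w // 2
    radius = torch.sqrt(ys ** 2 + xs ** 2)
    return (radius <= boundary).to(torch.complex64)


def lhfp_round(kspace, net, boundary, prev_pred=None, lfc_weight=0.5,
               emphasize_hf=False, hf_gain=10.0):
    """One round of a low-to-high frequency progressive cascade (illustrative only).

    kspace:     complex tensor (H, W), centered (DC component at the middle).
    net:        any k-space-to-k-space network, treated here as a black box.
    boundary:   radius separating LF from HF; the paper predicts this per patient.
    prev_pred:  previous round's prediction, used by the LFC-style blending step.
    lfc_weight: assumed fixed scalar; the paper estimates this weight.
    hf_gain:    assumed fixed scale standing in for the HFE module's emphasis.
    """
    h, w = kspace.shape
    lf_mask = radial_lowpass_mask(h, w, boundary)
    hf_mask = 1 - lf_mask

    x = kspace
    if emphasize_hf:
        # HFE-like step: amplify the HF band so it is not drowned out by the
        # much larger LF magnitudes during learning.
        x = x * lf_mask + x * hf_mask * hf_gain

    pred = net(x)

    if emphasize_hf:
        # Undo the amplification so the prediction is on the original scale.
        pred = pred * lf_mask + pred * hf_mask / hf_gain

    if prev_pred is not None:
        # LFC-like step: blend the current LF prediction with the previous
        # round's LF prediction so LF quality does not degrade in HF rounds.
        lf_blend = lfc_weight * prev_pred + (1.0 - lfc_weight) * pred
        pred = lf_blend * lf_mask + pred * hf_mask
    return pred


if __name__ == "__main__":
    # Toy usage with an identity "network" on random complex k-space.
    k = torch.randn(256, 256, dtype=torch.complex64)
    identity_net = lambda z: z
    round1 = lhfp_round(k, identity_net, boundary=32)                    # LF-focused round
    round2 = lhfp_round(k, identity_net, boundary=32, prev_pred=round1,
                        emphasize_hf=True)                               # HF-emphasized round
    print(round2.shape)
```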


Acknowledgement

The authors would like to acknowledge the support from the National Institutes of Health (NIH) (5R01CA256890 and 1R01CA275772).

Author information

Corresponding author

Correspondence to Lianli Liu.

Copyright information

© 2025 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this paper

Cite this paper

Xing, X., Qiu, L., Yu, L., Zhu, L., Xing, L., Liu, L. (2025). Low-to-High Frequency Progressive K-Space Learning for MRI Reconstruction. In: Xu, X., Cui, Z., Rekik, I., Ouyang, X., Sun, K. (eds) Machine Learning in Medical Imaging. MLMI 2024. Lecture Notes in Computer Science, vol 15241. Springer, Cham. https://doi.org/10.1007/978-3-031-73284-3_18

  • DOI: https://doi.org/10.1007/978-3-031-73284-3_18

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-031-73283-6

  • Online ISBN: 978-3-031-73284-3

  • eBook Packages: Computer Science, Computer Science (R0)
