

Self-supervised learning for automated anatomical tracking in medical image data with minimal human labeling effort

Published: 01 October 2022

Highlights

Self-supervised learning (SSL) enables organ tracking with minimal labeling effort.
SSL-based tracking yields superior results compared to conventional methods.
SSL enables cross-subject anatomical matching.

Abstract

Background and Objective: Tracking of anatomical structures in time-resolved medical image data plays an important role in tasks such as volume change estimation and treatment planning. State-of-the-art deep learning techniques for automated tracking provide accurate results but require large amounts of human-labeled training data, making their widespread use time- and resource-intensive. Our contribution in this work is the implementation and adaptation of a self-supervised learning (SSL) framework that addresses this bottleneck of training data generation.

Methods: To this end, we adapted and implemented an SSL framework that allows for automated anatomical tracking without the need for human-labeled training data. We evaluated this method by comparison to conventional and deep learning optical flow (OF)-based tracking methods. We applied all methods to three different time-resolved medical image datasets (abdominal MRI, cardiac MRI, and echocardiography) and assessed their accuracy in tracking pre-defined anatomical structures within and across individuals.

Results: We found that SSL-based tracking as well as OF-based methods provide accurate results for simple, rigid, and smooth motion patterns. However, for more complex motion, e.g. non-rigid or discontinuous motion patterns in the cardiac region, and for cross-subject anatomical matching, SSL-based tracking showed markedly superior performance.

Conclusion: We conclude that automated tracking of anatomical structures in time-resolved medical image data with minimal human labeling effort is feasible using SSL and can provide superior results compared to conventional and deep learning OF-based methods.
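A common inference step in SSL-based correspondence trackers of this kind is to propagate a label map from a reference frame to a target frame via a softmax affinity over learned per-pixel features. The following is a minimal sketch of that idea only; the function name `propagate_labels`, the random stand-in "embeddings" (which replace a trained encoder), and the temperature value are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def propagate_labels(feat_ref, feat_tgt, labels_ref, temperature=0.07):
    """Propagate a per-pixel label map from a reference frame to a target
    frame using a temperature-scaled softmax over feature similarity.

    feat_ref, feat_tgt: (N, C) L2-normalized per-pixel embeddings
                        (N = number of pixels, C = feature dimension).
    labels_ref:         (N, K) one-hot (or soft) label map of the reference.
    Returns:            (N, K) soft labels for the target frame.
    """
    # Cosine-similarity affinity between every target and reference pixel.
    sim = feat_tgt @ feat_ref.T                                  # (N, N)
    # Numerically stable softmax over reference pixels (rows sum to 1).
    w = np.exp((sim - sim.max(axis=1, keepdims=True)) / temperature)
    affinity = w / w.sum(axis=1, keepdims=True)
    # Each target pixel inherits a similarity-weighted mix of reference labels.
    return affinity @ labels_ref

# Toy example: random unit vectors stand in for a trained encoder's features.
rng = np.random.default_rng(0)
feat_ref = rng.normal(size=(16, 8))
feat_ref /= np.linalg.norm(feat_ref, axis=1, keepdims=True)
feat_tgt = feat_ref + 0.05 * rng.normal(size=(16, 8))  # slightly "moved" frame
feat_tgt /= np.linalg.norm(feat_tgt, axis=1, keepdims=True)
labels_ref = np.eye(2)[rng.integers(0, 2, size=16)]    # binary one-hot mask

labels_tgt = propagate_labels(feat_ref, feat_tgt, labels_ref)
```

Because no human-labeled masks are needed to train the feature extractor (supervision comes from the video itself, e.g. cycle consistency across frames), a single annotated reference frame suffices at inference time, which is the "minimal labeling effort" advantage the abstract describes.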


          Published In

Computer Methods and Programs in Biomedicine, Volume 225, Issue C
          Oct 2022
          572 pages

          Publisher

          Elsevier North-Holland, Inc.

          United States

          Author Tags

          1. Anatomical tracking
          2. Self-supervised learning
          3. MR-LINAC
          4. Image-guided radiation therapy
          5. Cardiac MRI

          Qualifiers

          • Research-article
