Abstract
Probe-based confocal laser endomicroscopy (pCLE) allows in-situ visualisation of cellular morphology for intraoperative tissue characterisation. Robotic manipulation of the pCLE probe can maintain probe-tissue contact within the micrometre-scale working range, providing the precision and stability required to capture good-quality microscopic information. In this paper, we propose the first approach to automatically regress the distance between a pCLE probe and the tissue surface during robotic tissue scanning. The Spatial-Frequency Feature Coupling network (SFFC-Net) was designed to regress the probe-tissue distance by extracting an enhanced data representation based on the fusion of spatial- and frequency-domain features. Image-level supervision is used in a novel fashion for regression, enabling the network to effectively learn the relationship between the sharpness of a pCLE image and its distance from the tissue surface. To this end, a novel Feedback Training (FT) module has been designed that synthesises unseen images to incorporate feedback into the training process. The first pCLE regression dataset (PRD) was generated, comprising ex-vivo images with corresponding probe-tissue distances. Our performance evaluation verifies that the proposed network outperforms state-of-the-art (SOTA) regression networks.
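The abstract describes the SFFC-Net only at a high level. As a purely illustrative aid, the minimal PyTorch sketch below shows one way spatial- and frequency-domain features could be coupled and pooled into a scalar probe-tissue distance; it is not the authors' implementation, and the module names (SpatialFrequencyFusionBlock, DistanceRegressor), channel widths, and the FFT-based branch (in the spirit of Fast Fourier Convolution, Chi et al. 2020) are assumptions made for illustration only.

```python
import torch
import torch.nn as nn


class SpatialFrequencyFusionBlock(nn.Module):
    """Illustrative coupling of spatial and frequency-domain features.

    This is NOT the authors' SFFC-Net; it only sketches the general idea of
    fusing a local convolutional branch with a global FFT-based branch,
    loosely in the spirit of Fast Fourier Convolution (Chi et al., 2020).
    """

    def __init__(self, channels: int):
        super().__init__()
        # Spatial branch: standard local 3x3 convolution.
        self.spatial = nn.Sequential(
            nn.Conv2d(channels, channels, kernel_size=3, padding=1),
            nn.BatchNorm2d(channels),
            nn.ReLU(inplace=True),
        )
        # Frequency branch: 1x1 convolution over the stacked real/imaginary
        # parts of the 2-D Fourier spectrum (global receptive field).
        self.freq = nn.Sequential(
            nn.Conv2d(2 * channels, 2 * channels, kernel_size=1),
            nn.BatchNorm2d(2 * channels),
            nn.ReLU(inplace=True),
        )
        # Fuse the two branches back to the original channel count.
        self.fuse = nn.Conv2d(2 * channels, channels, kernel_size=1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        _, _, h, w = x.shape
        s = self.spatial(x)                                   # spatial-domain features
        f = torch.fft.rfft2(x, norm="ortho")                  # complex spectrum
        f = torch.cat([f.real, f.imag], dim=1)                # (b, 2c, h, w//2 + 1)
        f = self.freq(f)
        real, imag = torch.chunk(f, 2, dim=1)
        f = torch.fft.irfft2(torch.complex(real, imag), s=(h, w), norm="ortho")
        return self.fuse(torch.cat([s, f], dim=1))            # fused representation


class DistanceRegressor(nn.Module):
    """Toy head mapping fused features to a scalar probe-tissue distance."""

    def __init__(self, in_channels: int = 1, width: int = 32):
        super().__init__()
        self.stem = nn.Conv2d(in_channels, width, kernel_size=3, padding=1)
        self.block = SpatialFrequencyFusionBlock(width)
        self.head = nn.Sequential(
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(width, 1)
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.head(self.block(self.stem(x)))


if __name__ == "__main__":
    model = DistanceRegressor()
    images = torch.randn(2, 1, 128, 128)   # batch of single-channel pCLE-like frames
    print(model(images).shape)             # torch.Size([2, 1])
```

The frequency branch gives every output location an image-wide receptive field in a single layer, which is one plausible way to capture the global blur that, per the abstract, relates image sharpness to probe-tissue distance.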
Acknowledgements
This work was supported by the Royal Society [URF\R\201014], EPSRC [EP/W004798/1] and the NIHR Imperial Biomedical Research Centre.
Copyright information
© 2022 The Author(s), under exclusive license to Springer Nature Switzerland AG
About this paper
Cite this paper
Xu, C., Roddan, A., Davids, J., Weld, A., Xu, H., Giannarou, S. (2022). Deep Regression with Spatial-Frequency Feature Coupling and Image Synthesis for Robot-Assisted Endomicroscopy. In: Wang, L., Dou, Q., Fletcher, P.T., Speidel, S., Li, S. (eds) Medical Image Computing and Computer Assisted Intervention – MICCAI 2022. MICCAI 2022. Lecture Notes in Computer Science, vol 13437. Springer, Cham. https://doi.org/10.1007/978-3-031-16449-1_16
DOI: https://doi.org/10.1007/978-3-031-16449-1_16
Publisher Name: Springer, Cham
Print ISBN: 978-3-031-16448-4
Online ISBN: 978-3-031-16449-1
eBook Packages: Computer Science, Computer Science (R0)