Reconstructing the Hemodynamic Response Function via a Bimodal Transformer

  • Conference paper
  • First Online:
Medical Image Computing and Computer Assisted Intervention – MICCAI 2023 (MICCAI 2023)

Part of the book series: Lecture Notes in Computer Science (LNCS, volume 14221)

Abstract

The relationship between blood flow and neuronal activity is widely recognized, with blood flow frequently serving as a surrogate for neuronal activity in fMRI studies. At the microscopic level, neuronal activity has been shown to influence blood flow in nearby blood vessels. This study introduces the first predictive model to address this relationship directly at the level of explicit neuronal populations. Using in vivo recordings in awake mice, we employ a novel spatiotemporal bimodal transformer architecture to infer current blood flow from both historical blood flow and ongoing spontaneous neuronal activity. Our findings indicate that incorporating neuronal activity significantly enhances the model’s ability to predict blood flow values. Through analysis of the model’s behavior, we propose hypotheses regarding the largely unexplored nature of the hemodynamic response to neuronal activity.
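
The bimodal transformer itself is only summarized above; as a purely illustrative sketch, the following PyTorch code shows one way such a predictor could be assembled: one transformer encoder over the vessel's recent blood-flow history, a second over concurrent neuronal-activity traces, a cross-attention block that lets the flow representation query the neuronal one, and a regression head on the most recent token. All class names, dimensions, and the cross-attention fusion scheme are assumptions made for illustration and are not taken from the paper.

# Minimal, hypothetical sketch (not the authors' implementation): predict the
# current blood-flow value of a vessel segment from its recent flow history
# and the concurrent spontaneous activity of nearby neurons.
import torch
import torch.nn as nn


class BimodalHemodynamicTransformer(nn.Module):
    def __init__(self, d_model=64, n_heads=4, n_layers=2,
                 history_len=32, n_neurons=16):
        super().__init__()
        # Embed each scalar flow sample and each per-timestep neuronal vector.
        self.flow_embed = nn.Linear(1, d_model)
        self.neural_embed = nn.Linear(n_neurons, d_model)
        # Learned positional encoding shared by both modalities (an assumption).
        self.pos = nn.Parameter(torch.randn(history_len, d_model) * 0.02)

        def encoder():
            layer = nn.TransformerEncoderLayer(
                d_model, n_heads, dim_feedforward=4 * d_model, batch_first=True)
            return nn.TransformerEncoder(layer, num_layers=n_layers)

        self.flow_encoder = encoder()
        self.neural_encoder = encoder()
        # Cross-attention: flow tokens query the neuronal representation.
        self.cross_attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.head = nn.Sequential(nn.LayerNorm(d_model), nn.Linear(d_model, 1))

    def forward(self, flow_history, neural_activity):
        # flow_history:    (batch, T, 1)         past blood-flow samples
        # neural_activity: (batch, T, n_neurons) concurrent neuronal traces
        f = self.flow_encoder(self.flow_embed(flow_history) + self.pos)
        n = self.neural_encoder(self.neural_embed(neural_activity) + self.pos)
        fused, _ = self.cross_attn(query=f, key=n, value=n)
        # Regress the current flow value from the most recent fused token.
        return self.head(fused[:, -1])


if __name__ == "__main__":
    model = BimodalHemodynamicTransformer()
    flow = torch.randn(8, 32, 1)          # 8 vessel segments, 32 past samples
    neurons = torch.randn(8, 32, 16)      # matching neuronal activity traces
    print(model(flow, neurons).shape)     # torch.Size([8, 1])

In this sketch, predicting from flow_history alone would correspond to a unimodal baseline; the cross-attention term is where the abstract's claim that incorporating neuronal activity improves prediction would be tested.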

Acknowledgements

The authors thank David Kain for conducting the mouse surgery. This project received funding from the Israel Science Foundation (grant No. 2923/20) within the Israel Precision Medicine Partnership program, from the Tel Aviv University Center for AI and Data Science (TAD), from the European Research Council (grant No. 639416), and from the Israel Science Foundation (grant No. 2342/21). The contribution of the first author is part of PhD thesis research conducted at Tel Aviv University.

Author information

Corresponding author

Correspondence to Yoni Choukroun.

Electronic supplementary material

Below is the link to the electronic supplementary material.

Supplementary material 1 (zip 865 KB)

Copyright information

© 2023 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this paper

Cite this paper

Choukroun, Y., Golgher, L., Blinder, P., Wolf, L. (2023). Reconstructing the Hemodynamic Response Function via a Bimodal Transformer. In: Greenspan, H., et al. Medical Image Computing and Computer Assisted Intervention – MICCAI 2023. MICCAI 2023. Lecture Notes in Computer Science, vol 14221. Springer, Cham. https://doi.org/10.1007/978-3-031-43895-0_35

  • DOI: https://doi.org/10.1007/978-3-031-43895-0_35

  • Published:

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-031-43894-3

  • Online ISBN: 978-3-031-43895-0

  • eBook Packages: Computer Science, Computer Science (R0)
