ContransGAN: Convolutional Neural Network Coupling Global Swin-Transformer Network for High-Resolution Quantitative Phase Imaging with Unpaired Data
Figure 1. Experimental setup and acquisition process of the original images. (a) Imaging hardware setup. (b) Microscopic images obtained under different aperture diaphragms. (c) The LR microscopic image captured directly in the experiment. (d) The HR microscopic image captured directly in the experiment. (e) The HR quantitative phase image reconstructed by TIE. (f) The LR quantitative phase image reconstructed by TIE.
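For context, the quantitative phase images in (e,f) are recovered with the transport of intensity equation (TIE). Its standard (Teague) form, stated here as background rather than as this paper's exact solver, is

$$ -k\,\frac{\partial I(x,y;z)}{\partial z} = \nabla_{\!\perp}\cdot\left[\, I(x,y;z)\,\nabla_{\!\perp}\phi(x,y)\,\right], \qquad k=\frac{2\pi}{\lambda}, $$

where $I$ is the intensity, $\phi$ the phase, and the axial derivative is typically estimated from a pair of defocused images, $\partial I/\partial z \approx \left[ I(z+\Delta z) - I(z-\Delta z) \right] / (2\Delta z)$.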
Figure 2. (a) Training flow diagram. (b) Segmenting the LR microscopic images of different resolutions yields sub-images whose FOV is approximately equal to that of the HR quantitative phase images; these sub-images are used to train the ContransGAN. Orange and green rectangles mark regions of interest (ROIs).
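Because the data are unpaired, training cannot rely on a pixel-wise loss against matched labels. As a hedged illustration only, a CycleGAN-style objective (cycle-consistency plus adversarial terms, with hypothetical generators `G`, `F` and discriminators `D_X`, `D_Y`; not necessarily the authors' exact loss) looks like:

```python
import torch
import torch.nn as nn

adv_loss = nn.MSELoss()  # least-squares adversarial loss
cyc_loss = nn.L1Loss()   # cycle-consistency loss

def generator_loss(G, F, D_X, D_Y, x_lr, y_hr, lam=10.0):
    """Generator objective for one unpaired batch (CycleGAN-style sketch).

    G: LR microscopic image -> HR phase image
    F: HR phase image -> LR microscopic image
    """
    fake_hr, fake_lr = G(x_lr), F(y_hr)
    # Adversarial terms: generated images should look real to D_Y / D_X.
    loss_adv = adv_loss(D_Y(fake_hr), torch.ones_like(D_Y(fake_hr))) + \
               adv_loss(D_X(fake_lr), torch.ones_like(D_X(fake_lr)))
    # Cycle terms: a round trip should reproduce the original input.
    loss_cyc = cyc_loss(F(fake_hr), x_lr) + cyc_loss(G(fake_lr), y_hr)
    return loss_adv + lam * loss_cyc
```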
Figure 3. (a) The feature extraction process and the specific structure of ViT. (b) Calculation flow of the self-attention mechanism in ViT.
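The self-attention in (b) is the standard scaled dot-product attention of ViT; a minimal single-head sketch (the token count and embedding width below are illustrative, not the paper's settings):

```python
import torch

def self_attention(x, w_q, w_k, w_v):
    """Scaled dot-product self-attention: softmax(Q K^T / sqrt(d)) V.

    x: (batch, tokens, dim) patch embeddings; w_*: (dim, dim) projections.
    """
    q, k, v = x @ w_q, x @ w_k, x @ w_v
    attn = torch.softmax(q @ k.transpose(-2, -1) / q.shape[-1] ** 0.5, dim=-1)
    return attn @ v  # each token becomes a weighted mix of all value vectors

# Example: 64 tokens of dimension 96.
x = torch.randn(1, 64, 96)
w_q, w_k, w_v = (torch.randn(96, 96) / 96 ** 0.5 for _ in range(3))
out = self_attention(x, w_q, w_k, w_v)  # -> (1, 64, 96)
```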
Figure 4. Detailed schematic of the ContransGAN architecture. (a) The generator. (b) The discriminator.
Figure 5. (a) The downsampling process of the Swin-Transformer. (b) Schematic diagram of the SW-MSA (shifted-window multi-head self-attention) operation flow. A, B, and C are the regions that are moved in the previous feature map.
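In the Swin-Transformer, the move of regions A, B, and C in (b) is realized as a cyclic shift of the feature map before window partitioning; a minimal sketch of that step (window size and layout are illustrative; the real SW-MSA additionally masks attention across the wrapped seams):

```python
import torch

def shifted_window_partition(feat, window=4):
    """Cyclically shift a (H, W, C) feature map, then cut it into windows.

    The top/left strips (A, B, C in Figure 5b) wrap to the bottom/right,
    so the new windows straddle the borders of the previous window grid.
    """
    shift = window // 2
    shifted = torch.roll(feat, shifts=(-shift, -shift), dims=(0, 1))
    h, w, c = shifted.shape
    wins = shifted.reshape(h // window, window, w // window, window, c)
    return wins.permute(0, 2, 1, 3, 4).reshape(-1, window, window, c)

feat = torch.randn(8, 8, 96)
wins = shifted_window_partition(feat)  # -> (4, 4, 4, 96): four 4x4 windows
```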
Figure 6. Test results of PSMs by ContransGAN-All. (a) Microscopic images of the same FOV under different NA objectives and the quantitative phase images reconstructed by TIE. (b) Results for the corresponding region. Ground-truth labels are the quantitative phase images under the 40×/0.65 NA objective in (a); SSIM and PSNR values quantify the agreement between the ground-truth labels and Output1.
Figure 7. Violin plot of the PSMs' phase heights. The thick black line in the middle indicates the interquartile range, the thin black line extending from it represents the 95% confidence interval, the white dot is the median, and the red spots show the distribution of the PSMs' phase heights.
Figure 8. Test results of HeLa cells by ContransGAN-Hela. Amplitude denotes the LR microscopic images; ground truth denotes the HR quantitative phase images reconstructed by TIE; output denotes the images produced by ContransGAN-Hela; SSIM and PSNR quantify the agreement between ground truth and output. The dotted frames below show the three-dimensional phase distributions in the corresponding FOVs.
Figure 9. Test results of HeLa cells by CycleGAN-Hela and S-Transformer. SSIM and PSNR quantify the agreement between the ground truth and the network-output phase images; the curves on the right are the phase-value profiles of the marked regions in the dotted boxes of the corresponding FOVs. I, II, III, and IV are different ROIs.
Figure 10. Test results for microscopic images captured by different NA objectives, using a ContransGAN trained on microscopic images from a single NA objective. (a) The network trained with microscopic images under the 4×/0.1 NA objective generates HR phase images from microscopic images captured by higher-NA objectives. (b) The network trained with microscopic images under the 10×/0.25 NA objective generates HR phase images from microscopic images captured by higher-NA objectives. (c) The network trained with microscopic images under the 10×/0.25 NA objective generates HR phase images from microscopic images captured by the other NA objectives.
Figure 11. (a) Test results for the LR out-of-focus microscopic images. (b) Test results for the LR microscopic images with different contrast, captured at different condenser apertures.
Abstract
1. Introduction
2. Methods
2.1. Imaging Hardware and TIE Phase Extraction
2.2. Creation of Datasets and Network Training Details
2.3. Vision Transformer and Self-Attention Mechanism
2.4. Generator and Discriminator
3. Results and Discussion
3.1. Results of the Proposed Network
3.2. Comparison of Network Performance
3.3. Generalization Capability and Accuracy Analysis
4. Conclusions
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Acknowledgments
Conflicts of Interest
| Objective | PSMs: SSIM (std) | PSMs: PSNR/dB (std) | HeLa cells: SSIM (std) | HeLa cells: PSNR/dB (std) |
|---|---|---|---|---|
| 4×/0.1 NA | 0.933 (±0.0107) | 34.032 (±1.0231) | 0.898 (±0.0172) | 31.751 (±1.2048) |
| 10×/0.25 NA | 0.952 (±0.0113) | 35.214 (±0.9892) | 0.922 (±0.0151) | 32.171 (±1.1920) |
| 20×/0.4 NA | 0.975 (±0.0118) | 38.963 (±0.9153) | 0.939 (±0.0135) | 34.751 (±1.0723) |
| 40×/0.65 NA | 0.976 (±0.0097) | 39.331 (±0.8322) | 0.943 (±0.0129) | 35.380 (±1.0133) |
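For reference, the PSNR values above are in dB and follow the standard definition for an $M \times N$ ground-truth image $I$ and network output $K$ (a background definition, not a method detail specific to this paper):

$$ \mathrm{PSNR} = 10 \log_{10}\!\frac{\mathrm{MAX}^2}{\mathrm{MSE}}, \qquad \mathrm{MSE} = \frac{1}{MN}\sum_{i=1}^{M}\sum_{j=1}^{N}\bigl[I(i,j)-K(i,j)\bigr]^2, $$

where MAX is the peak image value. SSIM likewise takes its standard form, $\mathrm{SSIM}(x,y) = \frac{(2\mu_x\mu_y + c_1)(2\sigma_{xy} + c_2)}{(\mu_x^2+\mu_y^2+c_1)(\sigma_x^2+\sigma_y^2+c_2)}$, computed over local windows and averaged.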
Publisher's Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
© 2022 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
Ding, H.; Li, F.; Chen, X.; Ma, J.; Nie, S.; Ye, R.; Yuan, C. ContransGAN: Convolutional Neural Network Coupling Global Swin-Transformer Network for High-Resolution Quantitative Phase Imaging with Unpaired Data. Cells 2022, 11, 2394. https://doi.org/10.3390/cells11152394