Edge Detection Attention Module in Pure Vision Transformer for Low-Dose X-Ray Computed Tomography Image Denoising
Figure 1. Poorly preserved anatomical details in the highlighted blue ROI, produced by a pure vision transformer for LDCT denoising. Image generated by running the PViT model of [19] on the piglet dataset used in this study. (a) Piglet LDCT (15 mAs) slice. (b) Piglet NDCT (300 mAs) slice. (c) Generated output from PViT.
Figure 2. Pure transformer block for LDCT denoising with the integrated gradient–Laplacian attention module.
Figure 3. Attention modules in the pure vision transformer for LDCT denoising. (a) Multi-head self-attention module for PViT. (b) Proposed gradient–Laplacian attention module for PViT.
Figure 4. Edge enhancement at each upsampling and downsampling checkpoint labeled in Figure 2, using different filters in the attention module. (a–c) Feature maps generated from the checkpoint at the encoder; (d–f) feature maps generated from the checkpoint at the decoder.
Figure 5. Benchmark test visual results for piglet data: (a) input LDCT image reference with blue ROI, (b) ROI of LDCT image, (c) ROI of NDCT image, and ROI of output image using (d) RED-CNN, (e) PViT, (f) DSC-GAN, (g) DRLEMP, (h) TED-Net, (i) GLAM-PViT.
Figure 6. Loss, PSNR, and SSIM trends over 150 epochs on the piglet dataset. (a) Loss. (b) PSNR. (c) SSIM.
Figure 7. Loss, PSNR, and SSIM trends over 150 epochs on the thoracic dataset. (a) Loss. (b) PSNR. (c) SSIM.
Figure 8. Benchmark test visual results for thoracic data: (a) input LDCT image reference with blue ROI, (b) ROI of LDCT image, (c) ROI of NDCT image, and ROI of output image using (d) RED-CNN, (e) PViT, (f) DSC-GAN, (g) DRLEMP, (h) TED-Net, (i) GLAM-PViT.
Figure 9. Loss, PSNR, and SSIM trends over 150 epochs on the head dataset. (a) Loss. (b) PSNR. (c) SSIM.
Figure 10. Benchmark test visual results for head data: (a) input LDCT image reference with blue ROI, (b) ROI of LDCT image, (c) ROI of NDCT image, and ROI of output image using (d) RED-CNN, (e) PViT, (f) DSC-GAN, (g) DRLEMP, (h) TED-Net, (i) GLAM-PViT.
Figure 11. Loss, PSNR, and SSIM trends over 150 epochs on the abdomen dataset. (a) Loss. (b) PSNR. (c) SSIM.
Figure 12. Benchmark test visual results for abdomen data: (a) input LDCT image reference with blue ROI, (b) ROI of LDCT image, (c) ROI of NDCT image, and ROI of output image using (d) RED-CNN, (e) PViT, (f) DSC-GAN, (g) DRLEMP, (h) TED-Net, (i) GLAM-PViT.
Figure 13. Loss, PSNR, and SSIM trends over 150 epochs on the chest dataset. (a) Loss. (b) PSNR. (c) SSIM.
Figure 14. Benchmark test visual results for chest data: (a) input LDCT image reference with blue ROI, (b) ROI of LDCT image, (c) ROI of NDCT image, and ROI of output image using (d) RED-CNN, (e) PViT, (f) DSC-GAN, (g) DRLEMP, (h) TED-Net, (i) GLAM-PViT.
Abstract
1. Introduction
- There have been few transformer-based studies for LDCT denoising, so we aimed to address this gap by introducing a novel pure vision transformer with an edge detection attention module.
- We leveraged the concept of edge detection attention modules in vision transformers that target the preservation of CT image edges and textural details (a hypothetical sketch of such a module follows this list).
- We investigated the effectiveness of each edge detection attention module in LDCT denoising by performing ablation experiments using various kernels, datasets, and objective functions.
- Benchmark testing with state-of-the-art LDCT denoising models was also performed to compare the proposed model with the recent developments in the LDCT denoising task in the research community.
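To make the edge detection attention concept concrete, here is a minimal sketch, assuming fixed Sobel and Laplacian kernels re-weight transformer feature maps through a learned gate. This is an illustration, not the authors' implementation: the class name `GradientLaplacianAttention`, the depthwise filter bank, and the 1×1 sigmoid fusion are all assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class GradientLaplacianAttention(nn.Module):
    """Hypothetical edge-aware gate: fixed Sobel/Laplacian responses are
    fused into per-pixel attention weights over the feature map."""

    def __init__(self, channels: int):
        super().__init__()
        sobel_x = torch.tensor([[-1., 0., 1.],
                                [-2., 0., 2.],
                                [-1., 0., 1.]])
        laplacian = torch.tensor([[0.,  1., 0.],
                                  [1., -4., 1.],
                                  [0.,  1., 0.]])
        # Depthwise filter bank of shape (3*C, 1, 3, 3): Sobel-x, Sobel-y,
        # and Laplacian kernels tiled once per input channel.
        bank = torch.stack([sobel_x, sobel_x.t(), laplacian]).unsqueeze(1)
        self.register_buffer("bank", bank.repeat(channels, 1, 1, 1))
        self.channels = channels
        # Only this 1x1 fusion is learned; the edge kernels stay fixed.
        self.fuse = nn.Conv2d(3 * channels, channels, kernel_size=1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (B, C, H, W) feature map from a transformer stage.
        edges = F.conv2d(x, self.bank, padding=1, groups=self.channels)
        gate = torch.sigmoid(self.fuse(edges))  # edge-aware weights in (0, 1)
        return x + x * gate                     # residual, edge-emphasized output


glam = GradientLaplacianAttention(channels=64)
out = glam(torch.randn(1, 64, 32, 32))  # -> torch.Size([1, 64, 32, 32])
```

A gate of this form adds very few learned parameters (only the 1×1 fusion), which is consistent with the small gap in parameter counts between PViT and the GLAM variants reported in the tables below.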
2. Materials and Methods
2.1. System Model
2.2. Gradient–Laplacian Attention Module
2.3. Evaluation Metrics and Loss Functions
2.4. Dataset and Training Details
3. Results
4. Discussion
4.1. Optimal Balance for Loss Function
4.2. Piglet Dataset
4.3. Thoracic Dataset
4.4. Head Dataset
4.5. Abdomen Dataset
4.6. Comparison Across Datasets
4.7. Training Performance
4.8. Visual and Clinical Implications
5. Conclusions
Author Contributions
Funding
Data Availability Statement
Acknowledgments
Conflicts of Interest
References
- Shin, E.; Lee, S.; Kang, H.; Kim, J.; Kim, K.; Youn, H.; Jin, Y.W.; Seo, S.; Youn, B. Organ-specific Effects of Low Dose Radiation Exposure: A Comprehensive Review. Front. Genet. 2020, 11, 566244. [Google Scholar] [CrossRef] [PubMed]
- Zhang, J.; Gong, W.; Ye, L.; Wang, F.; Shangguan, Z.; Cheng, Y. A Review of deep learning methods for denoising of medical low-dose CT images. Comput. Biol. Med. 2024, 171, 108112. [Google Scholar] [CrossRef]
- Tun, M.T.; Sugiura, Y.; Shimamura, T. Poisson–Gaussian Noise Removal for Low-Dose CT Images by Integrating Noisy Image Patch and Impulse Response of Low-Pass Filter in CNN. J. Signal Process. 2024, 28, 57–67. [Google Scholar] [CrossRef]
- Li, Z.; Liu, Y.; Zhang, P.; Lu, J.; Ren, S.; Gui, Z. Adaptive Weighted Total Variation Expansion and Gaussian Curvature Guided Low-dose CT Image Denoising Network. Biomed. Signal Process. Control 2024, 94, 106329. [Google Scholar] [CrossRef]
- Hu, Y.; Ren, J.; Yang, J.; Bai, R.; Liu, J. Noise Reduction by Adaptive-sin Filtering for Retinal OCT Images. Sci. Rep. 2021, 11, 19498. [Google Scholar] [CrossRef] [PubMed]
- Lepcha, D.C.; Dogra, A.; Goyal, B.; Goyal, V.; Kukreja, V.; Bavirisetti, D.P. A Constructive Non-local Means Algorithm for Low-dose Computed Tomography Denoising with Morphological Residual Processing. PLoS ONE 2023, 18, e0291911. [Google Scholar] [CrossRef]
- Lepcha, D.C.; Goyal, B.; Dogra, A. Low-dose CT Image Denoising using Sparse 3D Transformation with Probabilistic Non-local Means for Clinical Applications. Imaging Sci. J. 2023, 71, 97–109. [Google Scholar] [CrossRef]
- Wang, L.; Liu, Y.; Wu, R.; Liu, Y.; Yan, R.; Ren, S.; Gui, Z. Image Processing for Low-dose CT via Novel Anisotropic Fourth-order Diffusion Model. IEEE Access 2022, 10, 50114–50124. [Google Scholar] [CrossRef]
- Zhang, P.; Liu, Y.; Gui, Z.; Chen, Y.; Jia, L. A Region-adaptive Non-local Denoising Algorithm for Low-dose Computed Tomography Images. Math. Biosci. Eng. 2023, 20, 2831–2846. [Google Scholar] [CrossRef]
- Vasu, G.T.; Palanisamy, P. CT and MRI Multi-modal Medical Image Fusion using Weight-Optimized Anisotropic Diffusion Filtering. Soft Comput. 2023, 27, 9105–9117. [Google Scholar] [CrossRef]
- Wang, Z.; Ma, F.; Ji, P.; Fu, C. Image Denoising Based on an Improved Wavelet Threshold and Total Variation Model. In Proceedings of the International Conference on Intelligent Computing, Tianjin, China, 5–8 August 2024; Springer: Singapore, 2024; pp. 142–154. [Google Scholar]
- He, Y.; Zeng, L.; Chen, W.; Gong, C.; Shen, Z. Bilateral Weighted Relative Total Variation for Low-Dose CT Reconstruction. J. Digit. Imaging 2023, 36, 458–467. [Google Scholar] [CrossRef] [PubMed]
- Zhou, Y.; Kong, Z.; Huang, T.; Ahn, E.; Li, H.; Ding, L. WaveletDFDS-Net: A Dual Forward Denoising Stream Network for Low-Dose CT Noise Reduction. Electronics 2024, 13, 1906. [Google Scholar] [CrossRef]
- Li, Z.; Liu, Y.; Shu, H.; Lu, J.; Kang, J.; Chen, Y.; Gui, Z. Multi-scale Feature Fusion Network for Low-dose CT Denoising. J. Digit. Imaging 2023, 36, 1808–1825. [Google Scholar] [CrossRef]
- Zhang, Y.; Yang, X.; Gong, G.; Meng, X.; Wang, X.; Zhang, Z. FMUnet: Frequency Feature Enhancement Multi-level U-Net for Low-Dose CT Denoising with a Real Collected LDCT Image Dataset. In Proceedings of the International Conference on Intelligent Computing, Tianjin, China, 5–8 August 2024; Springer: Singapore, 2024; pp. 172–183. [Google Scholar]
- Elad, M.; Aharon, M. Image Denoising via Sparse and Redundant Representations Over Learned Dictionaries. IEEE Trans. Image Process. 2006, 15, 3736–3745. [Google Scholar] [CrossRef] [PubMed]
- Gao, Y.; Lu, S.; Shi, Y.; Chang, S.; Zhang, H.; Hou, W.; Li, L.; Liang, Z. A Joint-parameter Estimation and Bayesian Reconstruction Approach to Low-dose CT. Sensors 2023, 23, 1374. [Google Scholar] [CrossRef]
- Dosovitskiy, A.; Beyer, L.; Kolesnikov, A.; Weissenborn, D.; Zhai, X.; Unterthiner, T.; Dehghani, M.; Minderer, M.; Heigold, G.; Gelly, S.; et al. An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale. In Proceedings of the International Conference on Learning Representations (ICLR), Vienna, Austria, 4 May 2021. [Google Scholar]
- Marcos, L.; Babyn, P.; Alirezaie, J. Pure Vision Transformer (CT-ViT) with Noise2Neighbors Interpolation for Low-Dose CT Image Denoising. J. Imaging Inform. Med. 2024, 37, 2669–2687. [Google Scholar] [CrossRef]
- Yan, H.; Fang, C.; Qiao, Z. A Multi-Attention Uformer for Low-Dose CT Image Denoising. Signal Image Video Process. 2024, 18, 1429–1442. [Google Scholar] [CrossRef]
- Kim, W.; Jeon, S.Y.; Byun, G.; Yoo, H.; Choi, J.H. A Systematic Review of Deep Learning-based Denoising for Low-dose Computed Tomography from a Perceptual Quality Perspective. Biomed. Eng. Lett. 2024, 14, 1153–1173. [Google Scholar] [CrossRef]
- Zubair, M.; Rais, H.M.; Al-Tashi, Q.; Ullah, F.; Faheem, M.; Khan, A.A. Enabling Predication of the Deep Learning Algorithms for Low-Dose CT Scan Image Denoising Models: A Systematic Literature Review. IEEE Access 2024, 12, 79025–79050. [Google Scholar] [CrossRef]
- Barde, M.P.; Barde, P.J. What to use to express the variability of data: Standard deviation or standard error of mean? Perspect. Clin. Res. 2012, 3, 113–116. [Google Scholar] [CrossRef]
- Simonyan, K.; Zisserman, A. Very Deep Convolutional Networks for Large-Scale Image Recognition. arXiv 2014, arXiv:1409.1556. [Google Scholar]
- Yi, X.; Babyn, P. Sharpness-Aware Low Dose CT Denoising Using Conditional Generative Adversarial Network. J. Digit. Imaging 2018, 31, 655–669. [Google Scholar] [CrossRef] [PubMed]
- McCollough, C.; Chen, B.; Holmes, D., III; Duan, X.; Yu, Z.; Yu, L.; Leng, S.; Fletcher, J. Data from Low Dose CT Image and Projection Data [Data Set]; The Cancer Imaging Archive: Rochester, MN, USA, 2021. [Google Scholar] [CrossRef]
- Gholizadeh-Ansari, M.; Alirezaie, J.; Babyn, P. Deep Learning for Low-Dose CT Denoising using Perceptual Loss and Edge Detection Layer. J. Digit. Imaging 2019, 33, 505–514. [Google Scholar] [CrossRef] [PubMed]
- Makinen, Y.; Azzari, L.; Foi, A. Collaborative Filtering of Correlated Noise: Exact Transform-Domain Variance for Improved Shrinkage and Patch Matching. IEEE Trans. Image Process. 2020, 29, 8339–8354. [Google Scholar] [CrossRef]
- Wang, D.; Wu, Z.; Yu, H. TED-Net: Convolution-Free T2T Vision Transformer-Based Encoder-Decoder Dilation Network for Low-Dose CT Denoising. In Machine Learning in Medical Imaging; Springer: Cham, Switzerland, 2021; pp. 416–425. [Google Scholar] [CrossRef]
- Zhao, F.; Liu, M.; Gao, Z.; Jiang, X.; Wang, R.; Zhang, L. Dual-scale similarity-guided cycle generative adversarial network for unsupervised low-dose CT denoising. Comput. Biol. Med. 2023, 161, 107029. [Google Scholar] [CrossRef]
- Batson, J.; Royer, L. Noise2self: Blind denoising by self-supervision. In Proceedings of the International Conference on Machine Learning, Long Beach, CA, USA, 9–15 June 2019; pp. 524–533. [Google Scholar]
- Fadnavis, S.; Chowdhury, A.; Batson, J.; Drineas, P.; Garyfallidis, E. Patch2Self2: Self-supervised Denoising on Coresets via Matrix Sketching. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Seattle, WA, USA, 16–22 June 2024; pp. 27641–27651. [Google Scholar]
Table 1. Tube current (mAs) of the NDCT/LDCT pairs and display window settings (WL/WW) for each dataset.

| Dataset | NDCT | LDCT | WL [HU] | WW [HU] |
|---|---|---|---|---|
| Piglet | 300 mAs | 15 mAs | 40 | 400 |
| Thoracic | 480 mAs | 60 mAs | −650 to −700 | 1500 to 2000 |
| Abdomen | 502 mAs | 498 mAs | 40 | 40 |
| Chest | 712 mAs | 690 mAs | 40 | 400 |
| Head | 202 mAs | 180 mAs | 30 to 50 | 80 to 100 |
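The WL/WW columns give the display window, in Hounsfield units, applied when visualizing each dataset. As a minimal sketch of how such a window maps raw HU values to a displayable range (the helper `window_image` is illustrative, not from the paper):

```python
import numpy as np


def window_image(hu: np.ndarray, wl: float, ww: float) -> np.ndarray:
    """Clip a CT slice in Hounsfield units to [wl - ww/2, wl + ww/2]
    and rescale the result to [0, 1] for display."""
    lo, hi = wl - ww / 2.0, wl + ww / 2.0
    return (np.clip(hu, lo, hi) - lo) / (hi - lo)


# e.g., the window listed for the piglet data: WL = 40 HU, WW = 400 HU
# display = window_image(ct_slice_hu, wl=40, ww=400)
```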
Table 2. Ablation and benchmark results on the piglet dataset.

| Model | SSIM ↑ | PSNR ↑ | #Param | Training Time |
|---|---|---|---|---|
| LDCT | 0.6152 ± 0.0023 | 25.12 ± 0.03 | — | — |
| PViT | 0.9631 ± 0.0015 | 43.34 ± 0.02 | 1.68 M | 15 min |
| GAM-PViT (DVH) | 0.9730 ± 0.0011 | 43.62 ± 0.01 | 1.66 M | 15 min |
| LAM-PViT | 0.9622 ± 0.0017 | 42.12 ± 0.03 | 1.66 M | 15 min |
| GLAM-PViT (MSE) | 0.9728 ± 0.0013 | 43.51 ± 0.02 | 1.70 M | 18 min |
| GLAM-PViT (SSIM) | 0.9727 ± 0.0015 | 43.49 ± 0.02 | 1.71 M | 18 min |
| GLAM-PViT (PL) | 0.9732 ± 0.0010 | 43.62 ± 0.01 | 1.75 M | 20 min |
| GLAM-PViT (proposed) | 0.9756 ± 0.0010 | 43.84 ± 0.01 | 1.82 M | 30 min |
| Benchmark 1: RED-CNN | 0.9321 ± 0.0021 | 39.68 ± 0.04 | 3.33 M | 50 min |
| Benchmark 2: TED-Net | 0.9536 ± 0.0019 | 43.15 ± 0.03 | 3.28 M | 80 min |
| Benchmark 3: DSC-GAN | 0.9602 ± 0.0018 | 42.23 ± 0.03 | 4.16 M | 120 min |
| Benchmark 4: BM3D * | 0.8263 ± 0.0027 | 36.55 ± 0.04 | n/a | 60 min |
| Benchmark 5: DRLEMP | 0.9529 ± 0.0018 | 41.66 ± 0.03 | 2.80 M | 60 min |
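In this and the following tables, SSIM and PSNR are reported as mean ± variation over the test slices; whether such a ± term should be the standard deviation or the standard error of the mean is the distinction discussed in the Barde and Barde reference above. A minimal sketch of one plausible evaluation loop, assuming scikit-image metrics, slices normalized to [0, 1], and the standard error as the ± term:

```python
import numpy as np
from skimage.metrics import peak_signal_noise_ratio, structural_similarity


def evaluate(preds: np.ndarray, targets: np.ndarray):
    """preds, targets: (N, H, W) denoised and NDCT slices, scaled to [0, 1]."""
    psnrs = np.array([peak_signal_noise_ratio(t, p, data_range=1.0)
                      for p, t in zip(preds, targets)])
    ssims = np.array([structural_similarity(t, p, data_range=1.0)
                      for p, t in zip(preds, targets)])

    def sem(v: np.ndarray) -> float:  # standard error of the mean
        return float(v.std(ddof=1) / np.sqrt(len(v)))

    return (psnrs.mean(), sem(psnrs)), (ssims.mean(), sem(ssims))


# (psnr_mu, psnr_err), (ssim_mu, ssim_err) = evaluate(denoised, ndct)
```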
Table 3. Ablation and benchmark results on the thoracic dataset.

| Model | SSIM ↑ | PSNR ↑ | #Param | Training Time |
|---|---|---|---|---|
| LDCT | 0.2617 ± 0.0028 | 19.75 ± 0.03 | — | — |
| PViT | 0.6651 ± 0.0019 | 31.26 ± 0.02 | 1.68 M | 15 min |
| GAM-PViT (DVH) | 0.6712 ± 0.0015 | 32.11 ± 0.01 | 1.66 M | 15 min |
| LAM-PViT | 0.6601 ± 0.0019 | 29.81 ± 0.03 | 1.66 M | 15 min |
| GLAM-PViT (MSE) | 0.6756 ± 0.0014 | 32.36 ± 0.02 | 1.70 M | 18 min |
| GLAM-PViT (SSIM) | 0.6742 ± 0.0015 | 32.25 ± 0.02 | 1.71 M | 18 min |
| GLAM-PViT (PL) | 0.6769 ± 0.0013 | 32.43 ± 0.01 | 1.75 M | 20 min |
| GLAM-PViT (proposed) | 0.6788 ± 0.0012 | 32.92 ± 0.01 | 1.82 M | 30 min |
| Benchmark 1: RED-CNN | 0.4926 ± 0.0030 | 26.85 ± 0.04 | 3.33 M | 50 min |
| Benchmark 2: TED-Net | 0.6231 ± 0.0024 | 30.28 ± 0.03 | 3.28 M | 80 min |
| Benchmark 3: DSC-GAN | 0.6433 ± 0.0021 | 29.25 ± 0.03 | 4.16 M | 120 min |
| Benchmark 4: BM3D * | 0.4621 ± 0.0035 | 26.85 ± 0.04 | n/a | 60 min |
| Benchmark 5: DRLEMP | 0.5283 ± 0.0028 | 27.31 ± 0.04 | 2.80 M | 60 min |
Table 4. Ablation and benchmark results on the head dataset.

| Model | SSIM ↑ | PSNR ↑ | #Param | Training Time |
|---|---|---|---|---|
| LDCT | 0.2264 ± 0.015 | 30.43 ± 0.08 | — | — |
| PViT | 0.6888 ± 0.010 | 42.66 ± 0.07 | 1.68 M | 15 min |
| GAM-PViT (DVH) | 0.6811 ± 0.005 | 43.04 ± 0.07 | 1.66 M | 15 min |
| LAM-PViT | 0.6504 ± 0.015 | 41.68 ± 0.09 | 1.66 M | 15 min |
| GLAM-PViT (MSE) | 0.6820 ± 0.007 | 43.23 ± 0.06 | 1.70 M | 18 min |
| GLAM-PViT (SSIM) | 0.6822 ± 0.007 | 43.21 ± 0.08 | 1.71 M | 18 min |
| GLAM-PViT (PL) | 0.6813 ± 0.005 | 44.02 ± 0.05 | 1.75 M | 20 min |
| GLAM-PViT (proposed) | 0.6893 ± 0.005 | 44.16 ± 0.04 | 1.82 M | 30 min |
| Benchmark 1: RED-CNN | 0.4721 ± 0.012 | 36.88 ± 0.10 | 3.33 M | 50 min |
| Benchmark 2: TED-Net | 0.5713 ± 0.010 | 41.75 ± 0.07 | 3.28 M | 80 min |
| Benchmark 3: DSC-GAN | 0.5632 ± 0.015 | 39.62 ± 0.08 | 4.16 M | 120 min |
| Benchmark 4: BM3D * | 0.3156 ± 0.015 | 34.21 ± 0.09 | n/a | 60 min |
| Benchmark 5: DRLEMP | 0.4893 ± 0.015 | 38.64 ± 0.08 | 2.80 M | 60 min |
Table 5. Ablation and benchmark results on the abdomen dataset.

| Model | SSIM ↑ | PSNR ↑ | #Param | Training Time |
|---|---|---|---|---|
| LDCT | 0.6234 ± 0.0035 | 30.14 ± 0.05 | — | — |
| PViT | 0.8609 ± 0.0012 | 41.21 ± 0.08 | 1.68 M | 15 min |
| GAM-PViT (DVH) | 0.8421 ± 0.0025 | 38.23 ± 0.08 | 1.66 M | 15 min |
| LAM-PViT | 0.8613 ± 0.0018 | 40.66 ± 0.06 | 1.66 M | 15 min |
| GLAM-PViT (MSE) | 0.8428 ± 0.0022 | 39.05 ± 0.07 | 1.70 M | 18 min |
| GLAM-PViT (SSIM) | 0.8452 ± 0.0019 | 39.16 ± 0.10 | 1.71 M | 18 min |
| GLAM-PViT (PL) | 0.8645 ± 0.0010 | 40.28 ± 0.05 | 1.75 M | 20 min |
| GLAM-PViT (proposed) | 0.8742 ± 0.0009 | 43.25 ± 0.03 | 1.82 M | 30 min |
| Benchmark 1: RED-CNN | 0.8413 ± 0.0030 | 36.92 ± 0.09 | 3.33 M | 50 min |
| Benchmark 2: TED-Net | 0.8603 ± 0.0016 | 41.03 ± 0.07 | 3.28 M | 80 min |
| Benchmark 3: DSC-GAN | 0.8524 ± 0.0025 | 40.35 ± 0.08 | 4.16 M | 120 min |
| Benchmark 4: BM3D * | 0.8152 ± 0.0040 | 34.27 ± 0.10 | n/a | 60 min |
| Benchmark 5: DRLEMP | 0.8579 ± 0.0020 | 37.64 ± 0.06 | 2.80 M | 60 min |
Table 6. Ablation and benchmark results on the chest dataset.

| Model | SSIM ↑ | PSNR ↑ | #Param | Training Time |
|---|---|---|---|---|
| LDCT | 0.3489 ± 0.0040 | 26.32 ± 0.08 | — | — |
| PViT | 0.7932 ± 0.0015 | 36.19 ± 0.05 | 1.68 M | 15 min |
| GAM-PViT (DVH) | 0.8064 ± 0.0025 | 34.72 ± 0.06 | 1.66 M | 15 min |
| LAM-PViT | 0.8107 ± 0.0010 | 35.28 ± 0.05 | 1.66 M | 15 min |
| GLAM-PViT (MSE) | 0.8012 ± 0.0035 | 34.17 ± 0.09 | 1.70 M | 18 min |
| GLAM-PViT (SSIM) | 0.8056 ± 0.0028 | 35.19 ± 0.03 | 1.71 M | 18 min |
| GLAM-PViT (PL) | 0.8124 ± 0.0018 | 35.82 ± 0.04 | 1.75 M | 20 min |
| GLAM-PViT (proposed) | 0.8224 ± 0.0010 | 36.71 ± 0.02 | 1.82 M | 30 min |
| Benchmark 1: RED-CNN | 0.6423 ± 0.0035 | 29.63 ± 0.10 | 3.33 M | 50 min |
| Benchmark 2: TED-Net | 0.7725 ± 0.0020 | 34.92 ± 0.07 | 3.28 M | 80 min |
| Benchmark 3: DSC-GAN | 0.6234 ± 0.0045 | 33.25 ± 0.06 | 4.16 M | 120 min |
| Benchmark 4: BM3D * | 0.5673 ± 0.0050 | 27.31 ± 0.09 | n/a | 60 min |
| Benchmark 5: DRLEMP | 0.6636 ± 0.0030 | 31.33 ± 0.05 | 2.80 M | 60 min |
Cite as: Marcos, L.; Babyn, P.; Alirezaie, J. Edge Detection Attention Module in Pure Vision Transformer for Low-Dose X-Ray Computed Tomography Image Denoising. Algorithms 2025, 18, 134. https://doi.org/10.3390/a18030134