Hierarchical Progressive Image Forgery Detection and Localization Method Based on UNet
Figure 1. Description of forged image detection. (a) Classified t-SNE plots of the dataset in the ResNet50 network. (b) Examples of AI tampering with images. (c) Schematic diagram of multi-level image label division.
Figure 2. General structure of the HPUNet network. The network combines multiple types of image features for detection and localization; the dual-branch attention mechanism amplifies strongly relevant features while suppressing weakly relevant ones. Combined with UNet in a hierarchical structure, it achieves accurate detection and localization of forged images in coarse-to-fine order.
Figure 3. Two-branch attention fusion module.
Figure 4. Diagram of feature fusion for branch θ₂.
Figure 5. Soft-threshold dual-attention module.
Figure 6. t-SNE visualization comparison.
Figure 7. Comparison of HPUNet and DA-HFNet.
Figure 8. Comparison of large-scale fake image localization results.
Figure 9. Comparison of small-scale fake image localization results.
Abstract
1. Introduction
- This article combines external attention and channel attention into a dual-branch attention feature-enhancement module, using feedback from the detection and localization results as a dynamic threshold to enhance strongly related features and suppress weakly related ones.
- This article combines a hierarchical network with a UNet structure and soft-threshold attention, establishing dependency relations between hierarchy levels.
- This article proposes a hierarchical, progressive forged-image detection method called HPUNet, which achieves accurate detection and localization of AI-generated forged images and improves detection and localization accuracy over the baseline methods.
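As a rough illustration of the first contribution, the sketch below fuses an external-attention branch with a channel-attention branch and applies a soft threshold to the result. This is a minimal NumPy sketch, not the authors' implementation: the function names are hypothetical, the memories `Mk`/`Mv` stand in for learned parameters, and a fixed `tau` replaces the feedback-driven dynamic threshold described above.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def external_attention(F, Mk, Mv):
    # F: (n, d) flattened spatial features; Mk, Mv: (s, d) external memory units
    attn = softmax(F @ Mk.T, axis=1)                         # (n, s)
    attn = attn / (attn.sum(axis=0, keepdims=True) + 1e-9)   # double normalization
    return attn @ Mv                                          # (n, d)

def channel_attention(F):
    # squeeze-and-excite style gate: sigmoid of the per-channel mean
    w = 1.0 / (1.0 + np.exp(-F.mean(axis=0)))
    return F * w

def soft_threshold(x, tau):
    # shrink weakly related activations toward zero, keep strong ones
    return np.sign(x) * np.maximum(np.abs(x) - tau, 0.0)

def dual_branch_fusion(F, Mk, Mv, tau):
    # fuse the two attention branches, then suppress weak responses
    enhanced = external_attention(F, Mk, Mv) + channel_attention(F)
    return soft_threshold(enhanced, tau)
```

In the paper the threshold is dynamic, driven by the feedback of the detection and localization results; here it is a scalar placeholder to keep the sketch self-contained.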
2. Related Works
2.1. Image Forgery Generation
2.2. Image Forgery Detection
2.3. Image Forgery Localization
3. Methods
3.1. Context Feature Enhancement Module
3.2. Multi-Scale Feature Interaction Module
3.3. Progressive Detection and Localization Module
3.4. Loss Function
4. Experiments
4.1. Dataset and Experimental Settings
4.2. Fake Image Detection
4.3. Fake Image Localization
4.4. Cross-Dataset Validation
4.5. Ablation Experiment
5. Conclusions
Author Contributions
Funding
Data Availability Statement
Conflicts of Interest
References
- Kawar, B.; Zada, S.; Lang, O.; Tov, O.; Chang, H.; Dekel, T.; Mosseri, I.; Irani, M. Imagic: Text-based real image editing with diffusion models. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Vancouver, BC, Canada, 17–24 June 2023; pp. 6007–6017.
- Manukyan, H.; Sargsyan, A.; Atanyan, B.; Wang, Z.; Navasardyan, S.; Shi, H. Hd-painter: High-resolution and prompt-faithful text-guided image inpainting with diffusion models. arXiv 2023, arXiv:2312.14091. [Google Scholar]
- Bar-Tal, O.; Yariv, L.; Lipman, Y.; Dekel, T. Multidiffusion: Fusing diffusion paths for controlled image generation. In Proceedings of the ICML’23: International Conference on Machine Learning, Honolulu, HI, USA, 23–29 July 2023. [Google Scholar]
- Kingma, D.P.; Welling, M. Auto-encoding variational bayes. arXiv 2013, arXiv:1312.6114. [Google Scholar]
- Goodfellow, I.; Pouget-Abadie, J.; Mirza, M.; Xu, B.; Warde-Farley, D.; Ozair, S.; Courville, A.; Bengio, Y. Generative adversarial nets. Adv. Neural Inf. Process. Syst. 2014, 27. [Google Scholar]
- Sohl-Dickstein, J.; Weiss, E.; Maheswaranathan, N.; Ganguli, S. Deep unsupervised learning using nonequilibrium thermodynamics. In Proceedings of the International Conference on Machine Learning, PMLR, Lille, France, 6–11 July 2015; pp. 2256–2265. [Google Scholar]
- Ho, J.; Jain, A.; Abbeel, P. Denoising diffusion probabilistic models. Adv. Neural Inf. Process. Syst. 2020, 33, 6840–6851. [Google Scholar]
- Zhang, Q.; Tao, M.; Chen, Y. gDDIM: Generalized denoising diffusion implicit models. arXiv 2022, arXiv:2206.05564. [Google Scholar]
- Cozzolino, D.; Verdoliva, L. Noiseprint: A cnn-based camera model fingerprint. IEEE Trans. Inf. Forensics Secur. 2019, 15, 144–159. [Google Scholar] [CrossRef]
- Guillaro, F.; Cozzolino, D.; Sud, A.; Dufour, N.; Verdoliva, L. Trufor: Leveraging all-round clues for trustworthy image forgery detection and localization. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Vancouver, BC, Canada, 17–24 June 2023; pp. 20606–20615. [Google Scholar]
- Corvi, R.; Cozzolino, D.; Zingarini, G.; Poggi, G.; Nagano, K.; Verdoliva, L. On the detection of synthetic images generated by diffusion models. In Proceedings of the ICASSP 2023—2023 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), Rhodes Island, Greece, 4–10 June 2023; IEEE: New York, NY, USA, 2023; pp. 1–5. [Google Scholar]
- Huang, Y.; Bian, S.; Li, H.; Wang, C.; Li, K. Ds-unet: A dual streams unet for refined image forgery localization. Inf. Sci. 2022, 610, 73–89. [Google Scholar] [CrossRef]
- Xi, Z.; Huang, W.; Wei, K.; Luo, W.; Zheng, P. Ai-generated image detection using a cross-attention enhanced dual-stream network. arXiv 2023, arXiv:2306.07005. [Google Scholar]
- Sha, Z.; Li, Z.; Yu, N.; Zhang, Y. De-fake: Detection and attribution of fake images generated by text-to-image generation models. In Proceedings of the 2023 ACM SIGSAC Conference on Computer and Communications Security, Copenhagen, Denmark, 26–30 November 2023; pp. 3418–3432. [Google Scholar]
- Guo, X.; Liu, X.; Ren, Z.; Grosz, S.; Masi, I.; Liu, X. Hierarchical fine-grained image forgery detection and localization. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Vancouver, BC, Canada, 17–24 June 2023; pp. 3155–3165. [Google Scholar]
- Niloy, F.F.; Bhaumik, K.K.; Woo, S.S. Cfl-net: Image forgery localization using contrastive learning. In Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, Waikoloa, HI, USA, 2–7 January 2023; pp. 4642–4651. [Google Scholar]
- Guo, M.-H.; Liu, Z.-N.; Mu, T.-J.; Hu, S.-M. Beyond self-attention: External attention using two linear layers for visual tasks. IEEE Trans. Pattern Anal. Mach. Intell. 2022, 45, 5436–5447. [Google Scholar] [CrossRef]
- Ronneberger, O.; Fischer, P.; Brox, T. U-net: Convolutional networks for biomedical image segmentation. In Proceedings of the Medical Image Computing and Computer-Assisted Intervention—MICCAI 2015, Munich, Germany, 5–9 October 2015; Navab, N., Hornegger, J., Wells, W.M., Frangi, A.F., Eds.; Springer International Publishing: Cham, Switzerland, 2015; pp. 234–241. [Google Scholar]
- Liu, Y.; Li, X.; Zhang, J.; Hu, S.; Lei, J. Da-hfnet: Progressive Fine-Grained Forgery Image Detection and Localization Based on Dual Attention. 2024. Available online: https://api.semanticscholar.org/CorpusID:270214687 (accessed on 5 June 2024).
- Zhu, J.-Y.; Park, T.; Isola, P.; Efros, A.A. Unpaired image-to-image translation using cycle-consistent adversarial networks. In Proceedings of the IEEE International Conference on Computer Vision, Venice, Italy, 22–29 October 2017; pp. 2223–2232. [Google Scholar]
- Sauer, A.; Karras, T.; Laine, S.; Geiger, A.; Aila, T. Stylegan-t: Unlocking the power of gans for fast large-scale text-to-image synthesis. arXiv 2023, arXiv:2301.09515. [Google Scholar]
- Nichol, A.; Dhariwal, P.; Ramesh, A.; Shyam, P.; Mishkin, P.; McGrew, B.; Sutskever, I.; Chen, M. Glide: Towards photorealistic image generation and editing with text-guided diffusion models. arXiv 2021, arXiv:2112.10741. [Google Scholar]
- Rombach, R.; Blattmann, A.; Lorenz, D.; Esser, P.; Ommer, B. High-resolution image synthesis with latent diffusion models. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, New Orleans, LA, USA, 18–24 June 2022; pp. 10684–10695. [Google Scholar]
- Pan, X.; Tewari, A.; Leimkühler, T.; Liu, L.; Meka, A.; Theobalt, C. Drag your gan: Interactive point-based manipulation on the generative image manifold. In Proceedings of the ACM SIGGRAPH 2023 Conference Proceedings, Los Angeles, CA, USA, 6–10 August 2023; pp. 1–11. [Google Scholar]
- Yang, B.; Gu, S.; Zhang, B.; Zhang, T.; Chen, X.; Sun, X.; Chen, D.; Wen, F. Paint by example: Exemplar-based image editing with diffusion models. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Vancouver, BC, Canada, 17–24 June 2023; pp. 18381–18391. [Google Scholar]
- Patashnik, O.; Wu, Z.; Shechtman, E.; Cohen-Or, D.; Lischinski, D. Styleclip: Text-driven manipulation of stylegan imagery. In Proceedings of the IEEE/CVF International Conference on Computer Vision, Montreal, QC, Canada, 11–17 October 2021; pp. 2085–2094. [Google Scholar]
- Li, H.; Luo, W.; Qiu, X.; Huang, J. Image forgery localization via integrating tampering possibility maps. IEEE Trans. Inf. Forensics Secur. 2017, 12, 1240–1252. [Google Scholar] [CrossRef]
- Arshed, M.A.; Alwadain, A.; Ali, R.F.; Mumtaz, S.; Ibrahim, M.; Muneer, A. Unmasking deception: Empowering deepfake detection with vision transformer network. Mathematics 2023, 11, 3710. [Google Scholar] [CrossRef]
- Ojha, U.; Li, Y.; Lee, Y.J. Towards universal fake image detectors that generalize across generative models. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Vancouver, BC, Canada, 17–24 June 2023; pp. 24480–24489. [Google Scholar]
- Ramirez-Rodriguez, A.E.; Arevalo-Ancona, R.E.; Perez-Meana, H.; Cedillo-Hernandez, M.; Nakano-Miyatake, M. Aismsnet: Advanced image splicing manipulation identification based on siamese networks. Appl. Sci. 2024, 14, 5545. [Google Scholar] [CrossRef]
- Wan, D.; Cai, M.; Peng, S.; Qin, W.; Li, L. Deepfake detection algorithm based on dual-branch data augmentation and modified attention mechanism. Appl. Sci. 2023, 13, 8313. [Google Scholar] [CrossRef]
- Epstein, D.C.; Jain, I.; Wang, O.; Zhang, R. Online detection of ai-generated images. In Proceedings of the 2023 IEEE/CVF International Conference on Computer Vision Workshops (ICCVW), Paris, France, 2–6 October 2023; pp. 382–392. [Google Scholar]
- Verdoliva, L. Media forensics and deepfakes: An overview. IEEE J. Sel. Top. Signal Process. 2020, 14, 910–932. [Google Scholar] [CrossRef]
- Wu, H.; Zhou, J.; Zhang, S. Generalizable synthetic image detection via language-guided contrastive learning. arXiv 2023, arXiv:2305.13800. [Google Scholar]
- Wang, Z.; Bao, J.; Zhou, W.; Wang, W.; Hu, H.; Chen, H.; Li, H. Dire for diffusion-generated image detection. arXiv 2023, arXiv:2303.09295. [Google Scholar]
- Zhong, N.; Xu, Y.; Qian, Z.; Zhang, X. Rich and poor texture contrast: A simple yet effective approach for ai-generated image detection. arXiv 2023, arXiv:2311.12397. [Google Scholar]
- Wu, Y.; AbdAlmageed, W.; Natarajan, P. Mantra-net: Manipulation tracing network for detection and localization of image forgeries with anomalous features. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Long Beach, CA, USA, 15–20 June 2019; pp. 9543–9552. [Google Scholar]
- Dong, C.; Chen, X.; Hu, R.; Cao, J.; Li, X. Mvss-net: Multi-view multi-scale supervised networks for image manipulation detection. IEEE Trans. Pattern Anal. Mach. Intell. 2022, 45, 3539–3553. [Google Scholar] [CrossRef]
- Liu, X.; Liu, Y.; Chen, J.; Liu, X. Pscc-net: Progressive spatio-channel correlation network for image manipulation detection and localization. IEEE Trans. Circuits Syst. Video Technol. 2022, 32, 7505–7517. [Google Scholar] [CrossRef]
- Zhang, J.; Tohidypour, H.; Wang, Y.; Nasiopoulos, P. Shallow-and deep-fake image manipulation localization using deep learning. In Proceedings of the 2023 International Conference on Computing, Networking and Communications (ICNC), Honolulu, HI, USA, 20–22 February 2023; pp. 468–472. [Google Scholar]
- Zhou, Y.; Wang, H.; Zeng, Q.; Zhang, R.; Meng, S. Exploring weakly-supervised image manipulation localization with tampering edge-based class activation map. Expert Syst. Appl. 2024, 249, 123501. [Google Scholar] [CrossRef]
- Liu, Q.; Li, H.; Liu, Z. Image forgery localization based on fully convolutional network with noise feature. Multimed. Tools Appl. 2022, 81, 17919–17935. [Google Scholar] [CrossRef]
- Brock, A.; Donahue, J.; Simonyan, K. Large scale GAN training for high fidelity natural image synthesis. In Proceedings of the 7th International Conference on Learning Representations, ICLR 2019, New Orleans, LA, USA, 6–9 May 2019; Available online: https://openreview.net/forum?id=B1xsqj09Fm (accessed on 5 June 2024).
- Liu, X.; Gong, C.; Wu, L.; Zhang, S.; Su, H.; Liu, Q. Fusedream: Training-free text-to-image generation with improved clip+ gan space optimization. arXiv 2021, arXiv:2112.01573. [Google Scholar]
- Yu, T.; Feng, R.; Feng, R.; Liu, J.; Jin, X.; Zeng, W.; Chen, Z. Inpaint anything: Segment anything meets image inpainting. arXiv 2023, arXiv:2304.06790. [Google Scholar]
- Wang, S.-Y.; Wang, O.; Zhang, R.; Owens, A.; Efros, A.A. Cnn-generated images are surprisingly easy to spot… for now. In Proceedings of the 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition, Seattle, WA, USA, 13–19 June 2020; pp. 8695–8704. [Google Scholar]
- He, K.; Zhang, X.; Ren, S.; Sun, J. Deep residual learning for image recognition. In Proceedings of the 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Las Vegas, NV, USA, 27–30 June 2016; pp. 770–778. [Google Scholar]
| Method | Model | Forgery Region | Guidance | Num |
|---|---|---|---|---|
| BigGAN [43] | GAN | Full | image | 3k |
| DDPM [7] | Diffusion | Full | image | 3k |
| FuseDream [44] | GAN | Full | text | 3k |
| GLIDE [22] | Diffusion | Full | text | 3k |
| Inpaint Anything [45] | Diffusion | Partial | text | 3k |
| Paint by Example [25] | Diffusion | Partial | image | 3k |
| StyleCLIP [26] | GAN | Partial | text | 3k |
| Copy-Move | - | - | - | 3k |
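Following the multi-level label division illustrated in Figure 1c, the eight forgery sources above can be organized into a coarse-to-fine label tree (real/fake → generator family → specific method), which is what the hierarchical branches predict level by level. The sketch below illustrates one plausible organization of such a tree; the exact label structure used by HPUNet is an assumption here.

```python
# Hypothetical 3-level label tree built from the dataset table:
# level 1 = real/fake, level 2 = generator family, level 3 = method.
LABEL_TREE = {
    "gan": ["biggan", "fusedream", "styleclip"],
    "diffusion": ["ddpm", "glide", "inpaint_anything", "paint_by_example"],
    "artificial": ["copy_move"],
}

def label_path(fine_label):
    """Return the coarse-to-fine label path for a leaf label."""
    if fine_label == "real":
        return ["real"]
    for family, methods in LABEL_TREE.items():
        if fine_label in methods:
            return ["fake", family, fine_label]
    raise KeyError(fine_label)
```

A hierarchical network can then supervise each level of this path with its own branch, so a coarse real/fake decision constrains the finer family and method predictions.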
| Method | GANs ACC (%) | GANs F1 (%) | DMs ACC (%) | DMs F1 (%) | Artificial ACC (%) | Artificial F1 (%) | AVG ACC (%) | AVG F1 (%) |
|---|---|---|---|---|---|---|---|---|
| Trufor * [10] | 65.42 | 62.47 | 62.15 | 61.27 | 59.75 | 52.84 | 62.44 | 58.86 |
| PSCC-Net * [39] | 47.24 | 51.36 | 46.31 | 53.28 | 47.61 | 52.16 | 47.05 | 52.27 |
| HIFI-IFDL * [15] | 65.37 | 60.19 | 62.31 | 56.18 | 70.24 | 64.29 | 65.97 | 60.22 |
| CNN-det. [46] | 81.29 | 77.42 | 79.38 | 65.77 | 82.56 | 71.48 | 81.08 | 71.56 |
| ResNet50 [47] | 84.50 | 70.19 | 77.69 | 65.82 | 78.49 | 70.93 | 80.23 | 68.98 |
| Mantra-Net [37] | 82.95 | 77.27 | 84.52 | 78.94 | 81.39 | 77.38 | 82.95 | 77.86 |
| DIRE [35] | 71.25 | 60.08 | 90.59 | 89.64 | 62.18 | 58.39 | 74.67 | 69.37 |
| PSCC-Net [39] | 94.16 | 97.26 | 92.38 | 97.41 | 92.67 | 96.42 | 93.07 | 97.03 |
| HIFI-IFDL [15] | 95.67 | 97.13 | 95.19 | 96.28 | 96.37 | 97.19 | 95.74 | 96.87 |
| DA-HFNet [19] | 98.14 | 97.09 | 97.92 | 97.61 | 98.42 | 98.10 | 98.16 | 97.60 |
| HPUNet (ours) | 99.69 | 98.54 | 99.70 | 98.13 | 99.29 | 97.43 | 99.56 | 98.03 |
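The ACC and F1 scores reported in the tables follow the usual binary-classification definitions. The minimal sketch below (not the authors' evaluation code) shows how they are computed from image-level detection labels, with 1 = fake and 0 = real:

```python
def detection_metrics(y_true, y_pred):
    """Accuracy and F1 for binary fake-image detection (1 = fake, 0 = real)."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    acc = (tp + tn) / len(y_true)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return acc, f1
```

For localization, the same formulas are typically applied pixel-wise against the ground-truth forgery mask.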
| Method | GANs ACC (%) | GANs F1 (%) | DMs ACC (%) | DMs F1 (%) | Artificial ACC (%) | Artificial F1 (%) | AVG ACC (%) | AVG F1 (%) |
|---|---|---|---|---|---|---|---|---|
| Trufor * [10] | 77.18 | 78.49 | 74.27 | 76.34 | 68.23 | 61.71 | 73.23 | 72.18 |
| PSCC-Net * [39] | 55.39 | 51.67 | 58.24 | 54.89 | 57.42 | 61.58 | 57.02 | 56.05 |
| HIFI-IFDL * [15] | 65.27 | 42.88 | 54.36 | 51.94 | 57.81 | 49.82 | 59.15 | 48.21 |
| UNet [18] | 65.49 | 59.91 | 68.35 | 58.32 | 59.48 | 50.16 | 64.44 | 56.13 |
| Mantra-Net [37] | 84.34 | 79.61 | 81.92 | 74.38 | 86.94 | 80.11 | 84.40 | 78.03 |
| PSCC-Net [39] | 86.15 | 62.37 | 87.04 | 59.67 | 88.45 | 67.89 | 87.21 | 63.31 |
| HIFI-IFDL [15] | 89.94 | 88.26 | 87.22 | 86.39 | 90.18 | 91.06 | 88.57 | 88.57 |
| DA-HFNet [19] | 92.01 | 90.28 | 90.92 | 91.49 | 92.79 | 90.36 | 91.91 | 90.71 |
| HPUNet (ours) | 93.81 | 91.62 | 92.90 | 92.34 | 93.11 | 92.65 | 93.27 | 92.20 |
| Method | CoCoGLIDE Det. ACC (%) | CoCoGLIDE Det. F1 (%) | CoCoGLIDE Loc. ACC (%) | CoCoGLIDE Loc. F1 (%) | HIFI-IFDL Det. ACC (%) | HIFI-IFDL Det. F1 (%) | HIFI-IFDL Loc. ACC (%) | HIFI-IFDL Loc. F1 (%) | GenImage Det. ACC (%) | GenImage Det. F1 (%) | Casia Loc. ACC (%) | Casia Loc. F1 (%) |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| PSCC-Net | 44.26 | 39.57 | 58.76 | 43.59 | 74.21 | 68.55 | 68.19 | 71.26 | 62.84 | 57.23 | 75.34 | 70.29 |
| HIFI-IFDL | 48.75 | 40.33 | 66.89 | 50.17 | 83.84 | 79.91 | 73.59 | 79.48 | 72.58 | 69.81 | 74.38 | 79.54 |
| DA-HFNet | 52.18 | 52.36 | 72.15 | 55.47 | 81.67 | 82.36 | 71.10 | 77.68 | 73.05 | 69.54 | 72.82 | 69.93 |
| HPUNet (ours) | 54.19 | 48.97 | 78.57 | 61.09 | 82.97 | 81.85 | 74.99 | 81.53 | 80.28 | 74.62 | 77.16 | 75.80 |
| RGB | Noise | Frequency | Det. ACC (%) | Det. F1 (%) | Loc. ACC (%) | Loc. F1 (%) |
|---|---|---|---|---|---|---|
| ✓ | ✓ | × | 93.51 | 92.18 | 87.64 | 85.27 |
| ✓ | × | ✓ | 92.67 | 91.28 | 83.95 | 80.39 |
| × | ✓ | ✓ | 95.81 | 93.47 | 89.16 | 82.83 |
| × | × | × | 94.59 | 93.18 | 82.91 | 81.49 |
| ✓ | ✓ | ✓ | 99.56 | 98.03 | 93.27 | 92.20 |
| Method | Det. ACC (%) | Det. F1 (%) | Loc. ACC (%) | Loc. F1 (%) |
|---|---|---|---|---|
| No edge loss | 94.58 | 93.72 | 90.28 | 89.49 |
| HPUNet | 99.56 | 98.03 | 93.27 | 92.20 |
| DAM | t-CBAM | Det. ACC (%) | Det. F1 (%) | Loc. ACC (%) | Loc. F1 (%) |
|---|---|---|---|---|---|
| ✓ | × | 84.76 | 82.19 | 88.63 | 82.41 |
| × | ✓ | 92.94 | 89.96 | 90.47 | 87.52 |
| ✓ | ✓ | 99.56 | 98.03 | 93.27 | 92.20 |
| Training Task | Branches | GANs ACC (%) | GANs F1 (%) | DMs ACC (%) | DMs F1 (%) | Artificial ACC (%) | Artificial F1 (%) |
|---|---|---|---|---|---|---|---|
| Detection | 4 branches | 94.51 | 93.47 | 95.39 | 94.67 | 96.28 | 94.19 |
| Localization | 4 branches | 91.83 | 90.27 | 89.67 | 90.15 | 92.18 | 91.12 |
| All tasks | 1 branch | 95.74 | 95.13 | 94.55 | 94.31 | 94.28 | 93.94 |
| All tasks | 2 branches | 97.92 | 96.75 | 96.86 | 96.48 | 97.12 | 95.24 |
| All tasks | 3 branches | 98.59 | 97.46 | 97.21 | 96.84 | 98.25 | 96.18 |
| All tasks | 4 branches | 99.69 | 98.54 | 99.70 | 98.13 | 99.29 | 97.43 |
© 2024 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
Share and Cite
Liu, Y.; Li, X.; Zhang, J.; Li, S.; Hu, S.; Lei, J. Hierarchical Progressive Image Forgery Detection and Localization Method Based on UNet. Big Data Cogn. Comput. 2024, 8, 119. https://doi.org/10.3390/bdcc8090119