Abstract
Image inpainting can fill image regions with plausible content, but it can also be used to remove specific objects while leaving only faint traces behind, which poses serious security issues. At present, forensic work on image inpainting remains scarce, and existing methods generalize poorly. This paper therefore proposes a two-branch dual-domain encoder-decoder network (DDEDNet) that operates on different input types. The first branch, a spatial-domain encoder (S-Encoder), captures the tampering traces that inpainting leaves in the spatial domain; the second branch, a frequency-domain encoder (F-Encoder), mines the subtle artifacts left in the frequency domain. A cross-modal attention fusion module (CMAF) then fuses the features of the two encoders to obtain rich fused representations. Finally, attention-gated (AG) skip connections incorporate multi-scale features into the decoder to improve localization performance. Experimental results on datasets containing both deep and traditional inpainting schemes show that DDEDNet locates inpainted regions more accurately, effectively resists JPEG compression and Gaussian noise attacks, and generalizes better than existing methods.
This work is supported by the National Natural Science Foundation of China (62172059, 62072055, 62102046, 62072056), the Natural Science Foundation of Hunan Province (2022JJ50318, 2022JJ30621, 2023JJ50331, 2022JJ30618, 2020JJ2029), the Hunan Provincial Key Research and Development Program (2022GK2019), the Scientific Research Fund of Hunan Provincial Education Department (22A0200, 22B0300).
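To make the two-branch structure described in the abstract concrete, the following is a minimal PyTorch sketch. The module names (S-Encoder, F-Encoder, CMAF, attention gate) follow the abstract, but all internals here are illustrative assumptions rather than the authors' implementation: layer widths, the channel-attention form of the fusion module, the single-scale attention gate, and the use of a generic 3-channel frequency-domain input (e.g. a high-pass or DCT residual) are placeholders.

# Minimal sketch of a dual-domain two-branch forensics network (assumed design, not the paper's code).
import torch
import torch.nn as nn
import torch.nn.functional as F

def conv_block(in_ch, out_ch):
    """Two 3x3 convolutions with BatchNorm and ReLU."""
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, 3, padding=1), nn.BatchNorm2d(out_ch), nn.ReLU(inplace=True),
        nn.Conv2d(out_ch, out_ch, 3, padding=1), nn.BatchNorm2d(out_ch), nn.ReLU(inplace=True),
    )

class CMAF(nn.Module):
    """Cross-modal attention fusion (assumed form): channel attention computed
    from the concatenated spatial- and frequency-domain features."""
    def __init__(self, ch):
        super().__init__()
        self.gate = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(2 * ch, ch, 1), nn.ReLU(inplace=True),
            nn.Conv2d(ch, 2 * ch, 1), nn.Sigmoid(),
        )
        self.fuse = nn.Conv2d(2 * ch, ch, 1)

    def forward(self, f_spatial, f_freq):
        cat = torch.cat([f_spatial, f_freq], dim=1)
        return self.fuse(cat * self.gate(cat))

class AttentionGate(nn.Module):
    """Attention-gated skip connection in the spirit of Attention U-Net."""
    def __init__(self, skip_ch, gate_ch):
        super().__init__()
        self.theta = nn.Conv2d(skip_ch, skip_ch, 1)
        self.phi = nn.Conv2d(gate_ch, skip_ch, 1)
        self.psi = nn.Conv2d(skip_ch, 1, 1)

    def forward(self, skip, gate):
        gate = F.interpolate(gate, size=skip.shape[2:], mode="bilinear", align_corners=False)
        alpha = torch.sigmoid(self.psi(F.relu(self.theta(skip) + self.phi(gate))))
        return skip * alpha

class DDEDNetSketch(nn.Module):
    """Two shallow encoders (spatial + frequency), CMAF fusion, one AG skip, and a mask head."""
    def __init__(self, base=32):
        super().__init__()
        self.s_enc1, self.s_enc2 = conv_block(3, base), conv_block(base, 2 * base)
        self.f_enc1, self.f_enc2 = conv_block(3, base), conv_block(base, 2 * base)
        self.pool = nn.MaxPool2d(2)
        self.cmaf = CMAF(2 * base)
        self.ag = AttentionGate(skip_ch=base, gate_ch=2 * base)
        self.dec = conv_block(2 * base + base, base)
        self.head = nn.Conv2d(base, 1, 1)  # per-pixel inpainting-mask logits

    def forward(self, rgb, freq):
        s1 = self.s_enc1(rgb); s2 = self.s_enc2(self.pool(s1))   # spatial branch
        f1 = self.f_enc1(freq); f2 = self.f_enc2(self.pool(f1))  # frequency branch
        fused = self.cmaf(s2, f2)                                # cross-modal fusion
        skip = self.ag(s1, fused)                                # attention-gated skip
        up = F.interpolate(fused, size=skip.shape[2:], mode="bilinear", align_corners=False)
        return self.head(self.dec(torch.cat([up, skip], dim=1)))

if __name__ == "__main__":
    x = torch.randn(1, 3, 256, 256)
    # In practice the second input would be a frequency-domain transform of the image;
    # the raw image is reused here only as a shape-compatible smoke test.
    print(DDEDNetSketch()(x, x).shape)  # torch.Size([1, 1, 256, 256])

The sketch only shows the data flow (two encoders, CMAF fusion, gated skip, decoder head); a faithful reproduction would follow the architecture details given in the paper itself.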
Copyright information
© 2024 The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd.
About this paper
Cite this paper
Zhang, D., Tan, E., Li, F., Liu, S., Wang, J., Hu, J. (2024). Image Inpainting Forensics Algorithm Based on Dual-Domain Encoder-Decoder Network. In: Tari, Z., Li, K., Wu, H. (eds) Algorithms and Architectures for Parallel Processing. ICA3PP 2023. Lecture Notes in Computer Science, vol 14491. Springer, Singapore. https://doi.org/10.1007/978-981-97-0808-6_6
DOI: https://doi.org/10.1007/978-981-97-0808-6_6
Publisher Name: Springer, Singapore
Print ISBN: 978-981-97-0807-9
Online ISBN: 978-981-97-0808-6
eBook Packages: Computer Science, Computer Science (R0)