Dual-Domain Fusion Convolutional Neural Network for Contrast Enhancement Forensics
<p>Figure 1. Histograms of an uncompressed image, a contrast-enhanced image with γ = 0.6, a contrast-enhanced image under antiforensic attack, and a JPEG image with a quality factor of 70, respectively.</p>
<p>Figure 2. GLCMs of an uncompressed image, a contrast-enhanced image with γ = 0.6, a contrast-enhanced image under antiforensic attack, and a JPEG image with a quality factor of 70.</p>
<p>Figure 3. The proposed dual-domain fusion convolutional neural network.</p>
<p>Figure 4. The curve of D_max/255 as a function of γ; points A, B, C, and D correspond to γ = 0.6, 0.8, 1.2, and 1.4, respectively.</p>
<p>Figure 5. The architecture of the proposed pixel-domain convolutional neural network.</p>
<p>Figure 6. The statistical distribution of gap bins for original and contrast-enhanced images with different parameters. The images are from the BOSSBase dataset and centrally cropped into 128 × 128 patches.</p>
<p>Figure 7. The architecture of the proposed histogram-domain convolutional neural network.</p>
<p>Figure 8. Performance of the P-CNN with/without preprocessing and with a more powerful network. NON denotes the P-CNN without preprocessing; the others denote the P-CNN with the LAP, V2, H2, V1, and H1 filters in the preprocessing. Res_H1 denotes the P-CNN with the H1 filter and residual blocks.</p>
<p>Figure 9. Effect of the scale of the training data.</p>
<p>Figure 10. Performance of the P-CNN and the P-CNN with fine-tuning (P-CNN-FT).</p>
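The figures above refer to the histogram artifacts that gamma-style contrast enhancement leaves behind: because the mapping is a monotone nonlinearity quantized back to 8 bits, some output levels receive no input (gap bins) while others receive several (peak bins). A minimal sketch of this effect, assuming the standard 8-bit gamma mapping y = round(255·(x/255)^γ) (the function and variable names here are illustrative, not taken from the paper):

```python
import numpy as np

GAMMA = 0.6  # one of the enhancement parameters considered in the paper (0.6, 0.8, 1.2, 1.4)

def gamma_correct(pixels: np.ndarray, gamma: float) -> np.ndarray:
    """Apply the 8-bit gamma mapping y = round(255 * (x / 255) ** gamma)."""
    return np.round(255.0 * (pixels / 255.0) ** gamma).astype(np.uint8)

# A flat ramp covers every gray level exactly once, so the artifacts are easy to count.
ramp = np.arange(256, dtype=np.uint8)
enhanced = gamma_correct(ramp, GAMMA)

hist = np.bincount(enhanced, minlength=256)

# Gap bins: output levels no input level maps to (zero-height histogram bins).
gaps = int(np.sum(hist == 0))
# Peak bins: output levels several input levels collapse onto.
peaks = int(np.sum(hist > 1))

print(f"gamma={GAMMA}: {gaps} gap bins, {peaks} peak bins")
```

For γ < 1 the mapping expands the dark range (slope > 1, producing gaps) and compresses the bright range (slope < 1, producing peaks); for γ > 1 the situation is mirrored. These zero and over-populated bins are the fingerprints the histogram-domain branch operates on.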
Abstract
1. Introduction
2. Related Works
3. Problem Formulation
4. Proposed Method
4.1. Framework Overview
4.2. Pixel-Domain Convolutional Neural Network
4.3. Histogram-Domain Convolutional Neural Network
4.4. Dual-Domain Fusion Convolutional Neural Network
5. Experimental Results
5.1. Contrast Enhancement Detection: ORG vs. PCE
5.2. Robustness against Pre-JPEG Compressed and Antiforensic Attacked Contrast-Enhanced Images
5.3. Exploration on the Strategy to Improve Performance of CNN-Based CE Forensics
5.3.1. Preprocessing
5.3.2. Powerful Convolutional Neural Networks
5.3.3. Training Strategy
6. Conclusions, Limitations, and Future Research
Funding
Data Availability Statement
Acknowledgments
Conflicts of Interest
References
- Yang, P.; Baracchi, D.; Ni, R.; Zhao, Y.; Argenti, F.; Piva, A. A Survey of Deep Learning-Based Source Image Forensics. J. Imaging 2020, 6, 9. [Google Scholar] [CrossRef] [Green Version]
- Camacho, I.; Wang, K. A Comprehensive Review of Deep-Learning-Based Methods for Image Forensics. J. Imaging 2021, 7, 69. [Google Scholar] [CrossRef] [PubMed]
- Chen, Y.L.; Yau, H.T.; Yang, G.J. A Maximum Entropy-Based Chaotic Time-Variant Fragile Watermarking Scheme for Image Tampering Detection. Entropy 2013, 15, 3170–3185. [Google Scholar] [CrossRef]
- Bo, Z.; Qin, G.; Liu, P. A Robust Image Tampering Detection Method Based on Maximum Entropy Criteria. Entropy 2015, 17, 7948–7966. [Google Scholar]
- Stamm, M.; Liu, K.R. Blind forensics of contrast enhancement in digital images. In Proceedings of the 15th IEEE International Conference on Image Processing, San Diego, CA, USA, 12–15 October 2008. [Google Scholar]
- Stamm, M.C.; Liu, K.R. Forensic detection of image manipulation using statistical intrinsic fingerprints. IEEE Trans. Inf. Forensics Secur. 2010, 5, 492–506. [Google Scholar] [CrossRef]
- Stamm, M.C.; Liu, K.R. Forensic estimation and reconstruction of a contrast enhancement mapping. In Proceedings of the 2010 IEEE International Conference on Acoustics, Speech and Signal Processing, Dallas, TX, USA, 14–19 March 2010. [Google Scholar]
- Cao, G.; Zhao, Y.; Ni, R.; Li, X. Contrast enhancement-based forensics in digital images. IEEE Trans. Inf. Forensics Secur. 2014, 9, 515–525. [Google Scholar] [CrossRef]
- Li, H.; Luo, W.; Qiu, X.; Huang, J. Identification of various image operations using residual-based features. IEEE Trans. Circuits Syst. Video Technol. 2016, 28, 31–45. [Google Scholar] [CrossRef]
- Lin, X.; Li, C.T.; Hu, Y. Exposing image forgery through the detection of contrast enhancement. In Proceedings of the 2013 IEEE International Conference on Image Processing, Melbourne, VIC, Australia, 15–18 September 2013. [Google Scholar]
- Lin, X.; Wei, X.; Li, C.T. Two improved forensic methods of detecting contrast enhancement in digital images. In Proceedings of the IS&T/SPIE Electronic Imaging 2014, San Francisco, CA, USA, 19 February 2014. [Google Scholar]
- Wen, L.; Qi, H.; Lyu, S. Contrast enhancement estimation for digital image forensics. ACM Trans. Multimed. Comput. Commun. Appl. 2018, 14, 49. [Google Scholar] [CrossRef] [Green Version]
- De Rosa, A.; Fontani, M.; Massai, M.; Piva, A.; Barni, M. Second-order statistics analysis to cope with contrast enhancement counter-forensics. IEEE Signal Process. Lett. 2015, 22, 1132–1136. [Google Scholar] [CrossRef]
- Farid, H. Blind inverse gamma correction. IEEE Trans. Image Process. 2001, 10, 1428–1433. [Google Scholar] [CrossRef]
- Popescu, A.C.; Farid, H. Statistical tools for digital forensics. In Proceedings of the International Workshop on Information Hiding; Springer: Berlin, Germany, 2004; pp. 128–147. [Google Scholar]
- Cao, G.; Zhao, Y.; Ni, R. Forensic estimation of gamma correction in digital images. In Proceedings of the 2010 IEEE International Conference on Image Processing, Hong Kong, China, 26–29 September 2010. [Google Scholar]
- Wang, P.; Liu, F.; Yang, C.; Luo, X. Parameter estimation of image gamma transformation based on zero-value histogram bin locations. Signal Process. Image Commun. 2018, 64, 33–45. [Google Scholar] [CrossRef]
- Barni, M.; Fontani, M.; Tondi, B. A universal technique to hide traces of histogram-based image manipulations. In Proceedings of the ACM Workshop on Multimedia and Security; ACM: New York, NY, USA, 2012; pp. 97–104. [Google Scholar]
- Cao, G.; Zhao, Y.; Ni, R.; Tian, H. Anti-forensics of contrast enhancement in digital images. In Proceedings of the 12th ACM Workshop on Multimedia and Security; ACM: New York, NY, USA, 2010; pp. 25–34. [Google Scholar]
- Kwok, C.W.; Au, O.C.; Chui, S.H. Alternative anti-forensics method for contrast enhancement. In Proceedings of the International Workshop on Digital Watermarking; Springer: Berlin, Germany, 2011; pp. 398–410. [Google Scholar]
- Comesana-Alfaro, P.; Pérez-González, F. Optimal counterforensics for histogram-based forensics. In Proceedings of the 2013 IEEE International Conference on Acoustics, Speech and Signal Processing, Vancouver, BC, Canada, 26–31 May 2013. [Google Scholar]
- Cao, G.; Zhao, Y.; Ni, R.; Tian, H.; Yu, L. Attacking contrast enhancement forensics in digital images. Sci. China Inf. Sci. 2014, 57, 1–13. [Google Scholar] [CrossRef] [Green Version]
- Ravi, H.; Subramanyam, A.V.; Emmanuel, S. ACE—An effective anti-forensic contrast enhancement technique. IEEE Signal Process. Lett. 2015, 23, 212–216. [Google Scholar] [CrossRef]
- Barni, M.; Costanzo, A.; Nowroozi, E.; Tondi, B. CNN-based detection of generic contrast adjustment with jpeg post-processing. In Proceedings of the 2018 25th IEEE International Conference on Image Processing (ICIP), Athens, Greece, 7–10 October 2018. [Google Scholar]
- Zhang, C.; Du, D.; Ke, L.; Qi, H.; Lyu, S. Global Contrast Enhancement Detection via Deep Multi-Path Network. In Proceedings of the 2018 24th International Conference on Pattern Recognition (ICPR), Beijing, China, 20–24 August 2018. [Google Scholar]
- Sun, J.Y.; Kim, S.W.; Lee, S.W.; Ko, S.J. A novel contrast enhancement forensics based on convolutional neural networks. Signal Process. Image Commun. 2018, 63, 149–160. [Google Scholar] [CrossRef]
- Shan, W.; Yi, Y.; Huang, R.; Xie, Y. Robust contrast enhancement forensics based on convolutional neural networks. Signal Process. Image Commun. 2019, 71, 138–146. [Google Scholar] [CrossRef]
- Ioffe, S.; Szegedy, C. Batch normalization: Accelerating deep network training by reducing internal covariate shift. arXiv 2015, arXiv:1502.03167. [Google Scholar]
- Yang, P.; Ni, R.; Zhao, Y. Recapture image forensics based on Laplacian convolutional neural networks. In Proceedings of the International Workshop on Digital Watermarking; Springer: Berlin, Germany, 2016; pp. 119–128. [Google Scholar]
- Yang, P.; Ni, R.; Zhao, Y.; Zhao, W. Source camera identification based on content-adaptive fusion residual networks. arXiv 2017, arXiv:1703.04856. [Google Scholar] [CrossRef]
- He, K.; Zhang, X.; Ren, S.; Sun, J. Spatial pyramid pooling in deep convolutional networks for visual recognition. IEEE Trans. Pattern Anal. Mach. Intell. 2015, 37, 1904–1916. [Google Scholar] [CrossRef] [Green Version]
- Mangai, U.G.; Samanta, S.; Das, S.; Chowdhury, P.R. A survey of decision fusion and feature fusion strategies for pattern classification. IETE Tech. Rev. 2010, 27, 293–307. [Google Scholar] [CrossRef]
- Fontani, M.; Bianchi, T.; De Rosa, A.; Piva, A.; Barni, M. A framework for decision fusion in image forensics based on Dempster–Shafer theory of evidence. IEEE Trans. Inf. Forensics Secur. 2013, 8, 593–607. [Google Scholar] [CrossRef] [Green Version]
- Stegodata. Available online: http://agents.fel.cvut.cz/stegodata/ (accessed on 7 November 2014).
- Caffe. Available online: http://caffe.berkeleyvision.org (accessed on 30 April 2017).
- Pan, S.J.; Yang, Q. A survey on transfer learning. IEEE Trans. Knowl. Data Eng. 2009, 22, 1345–1359. [Google Scholar] [CrossRef]
| Method | γ = 0.6 | γ = 0.8 | γ = 1.2 | γ = 1.4 | AVE |
|---|---|---|---|---|---|
| De Rosa [13] | 94.02% | 84.85% | 78.37% | 74.12% | 82.84% |
| Cao [8] | 93.89% | 93.90% | 80.26% | 81.40% | 87.36% |
| Li [9] | 93.63% | 89.48% | 90.76% | 93.44% | 91.83% |
| Sun [26] | 99.35% | 99.21% | 98.45% | 98.80% | 98.95% |
| P-CNN | 94.70% | 89.00% | 78.00% | 86.00% | 86.93% |
| H-CNN | 99.48% | 99.45% | 99.40% | 99.07% | 99.35% |
| DM-CNN | 99.80% | 99.72% | 99.36% | 99.41% | 99.57% |
| QF | Method | γ = 0.6 | γ = 0.8 | γ = 1.2 | γ = 1.4 | AVE |
|---|---|---|---|---|---|---|
| 50 | De Rosa [13] | 81.50% | 79.69% | 75.16% | 72.70% | 77.26% |
| | Cao [8] | 93.96% | 93.75% | 80.36% | 81.57% | 87.41% |
| | Li [9] | 99.11% | 98.59% | 97.75% | 98.43% | 98.47% |
| | Sun [26] | 99.73% | 99.62% | 99.40% | 99.75% | 99.63% |
| | P-CNN | 98.20% | 98.25% | 96.70% | 97.30% | 97.61% |
| | H-CNN | 99.90% | 99.80% | 99.50% | 99.78% | 99.75% |
| | DM-CNN | 99.97% | 99.90% | 99.86% | 99.96% | 99.92% |
| 70 | De Rosa [13] | 83.99% | 82.27% | 77.47% | 72.95% | 80.67% |
| | Cao [8] | 94.06% | 93.77% | 80.55% | 81.56% | 87.49% |
| | Li [9] | 98.54% | 97.42% | 96.22% | 97.79% | 97.49% |
| | Sun [26] | 99.32% | 99.12% | 99.14% | 98.89% | 99.12% |
| | P-CNN | 98.60% | 97.00% | 95.70% | 96.50% | 96.95% |
| | H-CNN | 98.86% | 99.03% | 98.27% | 97.68% | 98.46% |
| | DM-CNN | 99.68% | 99.51% | 99.06% | 99.40% | 99.41% |
| Attack | Method | γ = 0.6 | γ = 0.8 | γ = 1.2 | γ = 1.4 | AVE |
|---|---|---|---|---|---|---|
| [16] | De Rosa [13] | 61.67% | 58.83% | 55.32% | 59.33% | 58.79% |
| | Cao [8] | − | − | − | − | − |
| | Li [9] | 96.30% | 95.54% | 95.72% | 96.55% | 96.03% |
| | Sun [26] | 95.53% | 89.94% | 90.55% | 92.42% | 92.11% |
| | P-CNN | 97.90% | 96.00% | 96.50% | 96.55% | 96.74% |
| | H-CNN | 88.77% | 73.65% | 74.85% | 78.42% | 78.92% |
| | DM-CNN | 97.85% | 95.97% | 96.68% | 97.18% | 96.92% |
| [18] | De Rosa [13] | 69.85% | 66.03% | 62.29% | 64.42% | 65.65% |
| | Cao [8] | − | − | − | − | − |
| | Li [9] | 99.57% | 99.38% | 99.33% | 99.51% | 99.48% |
| | Sun [26] | 99.48% | 99.07% | 99.08% | 99.19% | 99.21% |
| | P-CNN | 98.60% | 98.50% | 97.80% | 98.00% | 98.21% |
| | H-CNN | 98.82% | 97.59% | 97.57% | 97.09% | 97.77% |
| | DM-CNN | 99.72% | 99.78% | 99.70% | 99.59% | 99.70% |
| QF | Method | γ = 0.6 | γ = 0.8 | γ = 1.2 | γ = 1.4 | AVE |
|---|---|---|---|---|---|---|
| 50 | De Rosa [13] | 70.26% | 67.85% | 65.38% | 66.52% | 67.50% |
| | Cao [8] | − | − | − | − | − |
| | Li [9] | 99.90% | 99.90% | 99.90% | 99.90% | 99.90% |
| | Sun [26] | 99.75% | 99.63% | 99.68% | 99.57% | 99.66% |
| | P-CNN | 99.90% | 99.90% | 99.90% | 99.90% | 99.90% |
| | H-CNN | 99.45% | 99.40% | 99.20% | 99.20% | 99.31% |
| | DM-CNN | 99.93% | 99.96% | 99.97% | 99.94% | 99.95% |
| 70 | De Rosa [13] | 68.68% | 65.61% | 62.24% | 63.93% | 65.12% |
| | Cao [8] | − | − | − | − | − |
| | Li [9] | 99.90% | 99.90% | 99.90% | 99.90% | 99.90% |
| | Sun [26] | 99.32% | 99.34% | 98.60% | 99.03% | 99.07% |
| | P-CNN | 99.80% | 99.75% | 99.55% | 99.80% | 99.73% |
| | H-CNN | 97.35% | 98.35% | 97.80% | 98.15% | 97.91% |
| | DM-CNN | 99.92% | 99.94% | 99.95% | 99.90% | 99.93% |
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
© 2021 by the author. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
Share and Cite
Yang, P. Dual-Domain Fusion Convolutional Neural Network for Contrast Enhancement Forensics. Entropy 2021, 23, 1318. https://doi.org/10.3390/e23101318