Steganalysis of Neural Networks Based on Symmetric Histogram Distribution
Figure 1. Various types of neural networks for different intelligent tasks.
Figure 2. Neural networks have a deep network structure and a large number of parameters, e.g., ResNet-34 and ResNet-50.
Figure 3. Additional data are embedded into the neural network during training.
Figure 4. Flow of traditional image steganalysis methods.
Figure 5. Construction of the proposed ensemble classifier.
Figure 6. Parameter distributions of cover and stego CapsNets: (a) cover c_ij; (b) cover s_j; (c) cover u_i; (d) cover û_{j|i}; (e) cover v_j; (f) cover w_ij; (g) stego c_ij; (h) stego s_j; (i) stego u_i; (j) stego û_{j|i}; (k) stego v_j; (l) stego w_ij.
Figure 7. Label segmentation and determination of the optimal classification threshold in a highly symmetrical manner.
Figure 8. Architecture of AlexNet for the MNIST dataset.
Figure 9. Architecture of CapsNets for the MNIST dataset.
Figure 10. Detection accuracies of SVM with different values of C and kernel functions: (a) 100 embedded bits; (b) 600 embedded bits; (c) 6000 embedded bits.
Figure 11. Histograms and fitted density curves of segments with a fixed number of embedded bits: (a) Seg1; (b) Seg2; … (c) Seg100.
Figure 12. Histograms and fitted density curves of segments with a changing number of embedded bits: (a) Seg1; (b) Seg2; … (c) Seg100.
Figure 13. Histograms and fitted density curves of segments with changing embedding capacities and image datasets: (a) Seg1; (b) Seg2; … (c) Seg100.
Figure 14. Histograms and fitted density curves of segments with changing embedding capacities, image datasets, and network structures: (a) Seg1; (b) Seg2; … (c) Seg100.
Abstract
1. Introduction
- This paper focuses on a new form of steganalysis that detects the presence of hidden data in deep neural networks trained for image classification. By extending steganalysis from multimedia content to deep neural network models, we can prevent neural networks from being exploited to transmit secret data;
- This paper proposes a steganalysis scheme that uses a well-designed symmetric method based on histogram distributions to determine the optimal classification thresholds. Since the neural network can be treated as a black box, the proposed scheme is highly practical;
- This paper performs comprehensive experiments on a large and diverse dataset of neural networks in a progressive way. The experimental results verify the effectiveness of the proposed steganalysis scheme.
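The threshold-determination idea in the second bullet can be illustrated with a minimal sketch (a toy construction under our own assumptions, not the paper's exact procedure): fit a Gaussian to the histogram of a cover-model feature and another to the stego-model feature, then take the classification threshold where the two fitted densities intersect. The synthetic data and all names below are illustrative.

```python
import numpy as np

# Illustrative assumption: cover and stego feature values are roughly
# Gaussian with shifted means, mirroring symmetric histogram shapes.
rng = np.random.default_rng(0)
cover = rng.normal(loc=0.0, scale=1.0, size=10000)   # cover-model features
stego = rng.normal(loc=1.5, scale=1.0, size=10000)   # stego-model features

def gaussian_intersection(m1, s1, m2, s2):
    """Threshold where two fitted normal densities are equal."""
    if np.isclose(s1, s2):                  # equal variances: midpoint
        return (m1 + m2) / 2.0
    # Solve a*x^2 + b*x + c = 0 from equating the two log-densities.
    a = 1 / (2 * s1**2) - 1 / (2 * s2**2)
    b = m2 / s2**2 - m1 / s1**2
    c = m1**2 / (2 * s1**2) - m2**2 / (2 * s2**2) - np.log(s2 / s1)
    roots = np.roots([a, b, c]).real
    lo, hi = sorted((m1, m2))
    between = [r for r in roots if lo <= r <= hi]  # root between the means
    return float(between[0]) if between else (m1 + m2) / 2.0

t = gaussian_intersection(cover.mean(), cover.std(), stego.mean(), stego.std())

# Classify: a feature value above the threshold is flagged as stego.
acc = 0.5 * ((cover < t).mean() + (stego >= t).mean())
print(f"threshold = {t:.3f}, accuracy = {acc:.3f}")
```

With equal class variances the intersection degenerates to the midpoint between the two means, which is the "symmetric" special case.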
2. Related Work
2.1. Data Hiding in Neural Networks
2.2. Steganalysis of Digital Images
3. Proposed Scheme
3.1. General Framework
3.2. Steganalysis Scheme
3.2.1. Datasets and Feature Extraction
3.2.2. Feature Preprocessing and Fitting
3.2.3. Determining the Optimal Threshold
4. Experimental Results
4.1. Experiment Setup
4.2. Parameter Determination
4.3. Dataset of Neural Networks
4.4. Fixed Embedding Capacity
4.5. Changing Embedding Capacities and Image Datasets
4.6. Steganalysis by Ensemble Classifiers
5. Conclusions
Author Contributions
Funding
Data Availability Statement
Conflicts of Interest
References
Networks | Test Accuracy (%) | Training Time Cost (s)
---|---|---
plain-CapsNets-MNIST | 98.58 | 27.3654
CapsNets-MNIST-100 | 98.25 | 29.7114
CapsNets-MNIST-600 | 98.13 | 33.0454
CapsNets-MNIST-6000 | 98.38 | 67.8747
Time for Model Fitting (s)

Kernel | C = 0.001 | C = 0.01 | C = 0.1 | C = 1.0 | C = 10 | C = 100 | C = 1000
---|---|---|---|---|---|---|---
Linear kernel | 9.6312 | 9.5247 | 7.7596 | 6.8128 | 7.7702 | 13.1937 | 50.2388
Poly kernel | 10.1259 | 7.4181 | 4.7071 | 4.0531 | 6.1947 | 23.6238 | 224.9755
Gaussian (RBF) kernel | 15.0295 | 14.0100 | 8.8310 | 5.4564 | 4.4979 | 6.6971 | 22.2775
Sigmoid kernel | 15.6790 | 16.3076 | 14.9520 | 14.1250 | 14.0520 | 13.9231 | 13.8807

Time for Model Predicting (s)

Kernel | C = 0.001 | C = 0.01 | C = 0.1 | C = 1.0 | C = 10 | C = 100 | C = 1000
---|---|---|---|---|---|---|---
Linear kernel | 7.4459 | 6.4646 | 4.7664 | 4.0588 | 3.9461 | 3.9244 | 3.9447
Poly kernel | 7.1663 | 4.7006 | 3.0290 | 2.2568 | 1.9474 | 1.8184 | 1.9649
Gaussian (RBF) kernel | 35.8621 | 31.7887 | 18.6460 | 11.6413 | 8.0697 | 26.5473 | 45.7309
Sigmoid kernel | 12.7003 | 11.1387 | 8.6828 | 8.4397 | 8.5023 | 8.3859 | 8.3899
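The kernel/C comparison above corresponds to a standard SVM grid search over kernel functions and regularization values. The snippet below is a generic scikit-learn sketch on synthetic stand-in features; the dataset, sizes, and any timings it produces are assumptions and will not reproduce the paper's numbers.

```python
import time
import numpy as np
from sklearn.svm import SVC
from sklearn.datasets import make_classification

# Synthetic stand-in for the extracted cover/stego feature vectors (assumption).
X, y = make_classification(n_samples=2000, n_features=50, random_state=0)
X_train, y_train = X[:1500], y[:1500]
X_test, y_test = X[1500:], y[1500:]

results = {}
for kernel in ("linear", "poly", "rbf", "sigmoid"):
    for C in (0.001, 0.01, 0.1, 1.0, 10, 100, 1000):
        clf = SVC(kernel=kernel, C=C)
        t0 = time.perf_counter()
        clf.fit(X_train, y_train)                 # model fitting time
        fit_t = time.perf_counter() - t0
        t0 = time.perf_counter()
        acc = clf.score(X_test, y_test)           # model predicting time
        pred_t = time.perf_counter() - t0
        results[(kernel, C)] = (acc, fit_t, pred_t)

best = max(results, key=lambda k: results[k][0])
print("best (kernel, C):", best, "accuracy:", results[best][0])
```

The 4 × 7 grid mirrors the rows and columns of the timing tables; in practice the accuracy/time trade-off, not timing alone, decides the working point.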
Networks | Test Accuracy (%) | Training Time Cost (s) | Extraction Error |
---|---|---|---|
plain-CapsNets-MNIST | 98.82 | 26.7196 | / |
CapsNets-MNIST-3000 | 98.66 | 47.7244 | 0.0000 |
plain-CapsNets-ImageNet | 60.42 | 57.3742 | / |
CapsNets-ImageNet-3000 | 59.61 | 63.5359 | 0.0000 |
plain-ResNets-MNIST | 96.97 | 52.6523 | / |
ResNets-MNIST-750 | 96.77 | 64.0487 | 0.0000 |
plain-ResNets-CIFAR-10 | 58.26 | 49.3478 | / |
ResNets-CIFAR-10-750 | 57.83 | 61.2284 | 0.0000 |
plain-AlexNet-MNIST | 96.55 | 18.6910 | / |
AlexNet-MNIST-500 | 95.76 | 24.0367 | 0.0000 |
plain-AlexNet-CIFAR-10 | 65.31 | 65.9686 | / |
AlexNet-CIFAR-10-500 | 63.94 | 70.3321 | 0.0000 |
Networks on Datasets | Detection Accuracy (%) for Different Numbers of Embedded Bits
---|---|---|---|---|---
Embedded bits | 600 | 1200 | 1800 | 2400 | 3000
CapsNets + MNIST | 91.16 | 90.14 | 83.47 | 80.83 | 78.83
CapsNets + CIFAR-10 | 68.05 | 60.38 | 58.77 | 57.14 | 55.75
CapsNets + ImageNet | 76.74 | 71.65 | 64.81 | 65.48 | 59.12
Embedded bits | 100 | 200 | 300 | 400 | 500
AlexNet + MNIST | 67.96 | 68.73 | 68.92 | 66.84 | 69.33
AlexNet + CIFAR-10 | 90.83 | 90.87 | 91.00 | 91.96 | 90.92
AlexNet + ImageNet | 93.89 | 94.66 | 95.03 | 94.70 | 94.63
Embedded bits | 150 | 300 | 450 | 600 | 750
ResNets + MNIST | 75.04 | 76.35 | 78.13 | 80.91 | 82.69
ResNets + CIFAR-10 | 57.12 | 67.11 | 72.61 | 69.32 | 70.92
ResNets + ImageNet | 71.35 | 73.52 | 76.65 | 77.02 | 78.66
CapsNets-MNIST-600 | CapsNets-MNIST-1200 | CapsNets-MNIST-1800 | CapsNets-MNIST-2400 | CapsNets-MNIST-3000 | |
---|---|---|---|---|---|
CapsNets-MNIST-600 | 92.03% | 90.14% | 79.24% | 77.19% | 71.87% |
CapsNets-MNIST-1200 | 91.91% | 90.61% | 81.49% | 80.79% | 75.52% |
CapsNets-MNIST-1800 | 88.51% | 88.80% | 83.95% | 84.02% | 79.06% |
CapsNets-MNIST-2400 | 83.81% | 83.22% | 80.20% | 81.07% | 77.77% |
CapsNets-MNIST-3000 | 85.49% | 83.75% | 79.15% | 80.66% | 78.83% |
Case | Fitting Time Cost (s) | Testing Accuracy (%) | Predicting Time Cost (s)
---|---|---|---
fixed embedding capacity | 4.4740 | / | 8.4185
changing embedding capacities (600/1200/1800/2400/3000 bits) | 116.0371 | 90.94 / 90.67 / 85.84 / 86.77 / 82.74 | 39.1952 / 38.6721 / 39.1012 / 39.2150 / 39.0934
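The ensemble results in this section follow the general recipe of ensemble classifiers for steganalysis: many weak base learners, each trained on a random subspace of the feature vector, combined by majority voting. The sketch below implements that generic recipe with Fisher-linear-discriminant base learners on synthetic data; every parameter and the data itself are illustrative assumptions, not the paper's configuration.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic cover (label 0) / stego (label 1) feature vectors (assumption).
n, d = 1000, 60
X = np.vstack([rng.normal(0.0, 1.0, (n, d)), rng.normal(0.4, 1.0, (n, d))])
y = np.array([0] * n + [1] * n)
idx = rng.permutation(2 * n)
X, y = X[idx], y[idx]
X_train, y_train, X_test, y_test = X[:1500], y[:1500], X[1500:], y[1500:]

class FLDBase:
    """Fisher linear discriminant trained on a random feature subspace."""
    def __init__(self, dims):
        self.dims = dims
    def fit(self, X, y):
        Xs = X[:, self.dims]
        m0, m1 = Xs[y == 0].mean(0), Xs[y == 1].mean(0)
        Sw = np.cov(Xs[y == 0], rowvar=False) + np.cov(Xs[y == 1], rowvar=False)
        self.w = np.linalg.solve(Sw + 1e-6 * np.eye(len(self.dims)), m1 - m0)
        proj = Xs @ self.w
        self.t = 0.5 * (proj[y == 0].mean() + proj[y == 1].mean())  # midpoint
        return self
    def predict(self, X):
        return (X[:, self.dims] @ self.w > self.t).astype(int)

# Ensemble: L base learners on random subspaces, combined by majority vote.
L, dsub = 15, 20
bases = [FLDBase(rng.choice(d, dsub, replace=False)).fit(X_train, y_train)
         for _ in range(L)]
votes = np.sum([b.predict(X_test) for b in bases], axis=0)
pred = (votes > L / 2).astype(int)
acc = (pred == y_test).mean()
print("ensemble accuracy:", acc)
```

The majority vote typically beats any single subspace learner, which is why ensemble rows in these tables can exceed the corresponding single-classifier accuracies.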
CapsNets-CIFAR-10-600 | CapsNets-CIFAR-10-1200 | CapsNets-CIFAR-10-1800 | CapsNets-CIFAR-10-2400 | CapsNets-CIFAR-10-3000 | |
---|---|---|---|---|---|
CapsNets-CIFAR-10-600 | 68.05 | 53.63 | 51.56 | 49.84 | 51.06 |
CapsNets-CIFAR-10-1200 | 61.66 | 60.24 | 52.73 | 52.38 | 54.73 |
CapsNets-CIFAR-10-1800 | 50.44 | 44.85 | 46.16 | 43.90 | 44.03 |
CapsNets-CIFAR-10-2400 | 63.61 | 61.64 | 58.05 | 58.77 | 57.69 |
CapsNets-CIFAR-10-3000 | 64.37 | 56.24 | 52.73 | 55.21 | 55.75 |
CapsNets-ImageNet-600 | CapsNets-ImageNet-1200 | CapsNets-ImageNet-1800 | CapsNets-ImageNet-2400 | CapsNets-ImageNet-3000 | |
---|---|---|---|---|---|
CapsNets-ImageNet-600 | 76.74 | 71.62 | 63.01 | 62.24 | 55.39 |
CapsNets-ImageNet-1200 | 68.82 | 71.65 | 66.67 | 66.59 | 60.95 |
CapsNets-ImageNet-1800 | 66.70 | 69.51 | 64.81 | 61.24 | 57.74 |
CapsNets-ImageNet-2400 | 73.05 | 70.65 | 65.33 | 65.48 | 56.27 |
CapsNets-ImageNet-3000 | 52.06 | 55.25 | 53.17 | 52.42 | 59.12 |
Networks on Datasets | Detection Accuracy (%) with Different Numbers of Embedded Bits
---|---|---|---|---|---
Embedded bits | 600 | 1200 | 1800 | 2400 | 3000
CapsNets + MNIST | 90.78 (6.97%) | 91.46 (8.24%) | 85.93 (6.78%) | 85.80 (8.61%) | 79.86 (7.99%) |
CapsNets + CIFAR-10 | 69.49 (19.05%) | 68.61 (23.76%) | 63.86 (17.7%) | 63.86 (19.96%) | 64.19 (20.16%) |
CapsNets + ImageNet | 60.98 (8.92%) | 65.49 (10.24%) | 60.74 (7.57%) | 59.34 (6.92%) | 52.41 (−2.6%) |
Networks on Datasets | Detection Accuracy (%) with Different Numbers of Embedded Bits
---|---|---|---|---|---
Embedded bits | 600 | 1200 | 1800 | 2400 | 3000
CapsNets + MNIST | 93.43 (9.62%) | 93.23 (10.01%) | 86.58 (7.43%) | 86.46 (9.27%) | 80.67 (8.8%) |
CapsNets + CIFAR-10 | 72.16 (21.72%) | 67.38 (22.53%) | 60.27 (14.11%) | 62.61 (18.71%) | 62.31 (18.28%) |
CapsNets + ImageNet | 66.96 (14.9%) | 69.71 (14.46%) | 65.45 (12.28%) | 64.56 (12.14%) | 56.51 (1.5%) |
Networks on Datasets | Detection Accuracy (%) with Different Numbers of Embedded Bits
---|---|---|---|---|---
Embedded bits | 600 | 1200 | 1800 | 2400 | 3000
CapsNets + MNIST | 94.22 (10.41%) | 93.37 (10.15%) | 88.06 (8.91%) | 88.65 (11.46%) | 84.38 (12.51%) |
CapsNets + CIFAR-10 | 73.67 (23.23%) | 71.16 (26.31%) | 64.09 (17.93%) | 66.13 (22.23%) | 64.48 (20.45%) |
CapsNets + ImageNet | 67.84 (15.78%) | 71.99 (16.74%) | 67.44 (14.27%) | 65.85 (13.43%) | 56.55 (1.54%) |
Networks on Datasets | Detection Accuracy (%) with Different Numbers of Embedded Bits
---|---|---|---|---|---
Embedded bits | 600 | 1200 | 1800 | 2400 | 3000
CapsNets + MNIST | 86.69 | 85.29 | 79.42 | 80.35 | 74.26
CapsNets + CIFAR-10 | 58.80 | 59.89 | 52.61 | 59.26 | 49.58
CapsNets + ImageNet | 65.24 | 62.95 | 58.14 | 55.19 | 54.77
Embedded bits | 100 | 200 | 300 | 400 | 500
AlexNet + MNIST | 70.23 | 69.61 | 70.14 | 66.94 | 72.62
AlexNet + CIFAR-10 | 66.34 | 66.65 | 72.78 | 74.88 | 70.81
AlexNet + ImageNet | 92.67 | 92.76 | 92.74 | 92.05 | 92.15
Embedded bits | 150 | 300 | 450 | 600 | 750
ResNets + MNIST | 79.40 | 76.31 | 78.54 | 78.37 | 78.79
ResNets + CIFAR-10 | 50.60 | 57.02 | 58.31 | 56.18 | 56.98
ResNets + ImageNet | 52.49 | 53.78 | 54.75 | 56.21 | 56.07
Networks on Datasets | Detection Accuracy (%) with Different Numbers of Embedded Bits
---|---|---|---|---|---
Embedded bits | 600 | 1200 | 1800 | 2400 | 3000
CapsNets + MNIST | 69.16 | 70.98 | 67.80 | 67.28 | 61.52
CapsNets + CIFAR-10 | 52.12 | 52.48 | 51.99 | 52.33 | 49.19
CapsNets + ImageNet * | 53.89 | 53.73 | 52.27 | 51.98 | 50.46
Embedded bits | 100 | 200 | 300 | 400 | 500
AlexNet + MNIST | 48.55 | 49.35 | 48.91 | 49.32 | 47.73
AlexNet + CIFAR-10 | 72.53 | 70.73 | 73.36 | 73.82 | 71.47
AlexNet + ImageNet * | 57.56 | 56.13 | 61.03 | 58.65 | 58.51
Embedded bits | 150 | 300 | 450 | 600 | 750
ResNets + MNIST | 56.71 | 58.18 | 56.73 | 58.07 | 57.19
ResNets + CIFAR-10 | 51.95 | 56.93 | 57.86 | 57.55 | 57.96
ResNets + ImageNet * | 47.21 | 48.19 | 48.68 | 47.72 | 45.26
© 2023 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
Share and Cite
Tang, X.; Wang, Z.; Zhang, X. Steganalysis of Neural Networks Based on Symmetric Histogram Distribution. Symmetry 2023, 15, 1079. https://doi.org/10.3390/sym15051079