Push for More: On Comparison of Data Augmentation and SMOTE with Optimised Deep Learning Architecture for Side-Channel

  • Conference paper

Information Security Applications (WISA 2020)

Part of the book series: Lecture Notes in Computer Science (LNSC, volume 12583)

Abstract

Side-channel analysis has seen rapid adoption of deep learning techniques over the past years. While many papers focus on designing efficient architectures, some works have proposed techniques to boost the efficiency of existing architectures. These include methods such as data augmentation, oversampling, and regularization. In this paper, we compare data augmentation and oversampling (in particular SMOTE and its variants) on public traces of two side-channel protected AES implementations. The techniques are compared in both balanced and imbalanced class settings, and we show that adopting SMOTE variants can generally boost attack efficiency. Further, we report a successful key recovery on ASCAD(desync=100) with 180 traces, a 50% improvement over the current state of the art.


Notes

  1. Refer to the Keras API description at https://www.tensorflow.org/api_docs/python/tf/keras/preprocessing/image/ImageDataGenerator.

  2. https://github.com/ikizhvatov/randomdelays-traces.

  3. https://github.com/ANSSI-FR/ASCAD.

  4. DA(x) indicates that \(x \times 100\)% of all points are randomly shifted during the training phase, as suggested in [4]; a minimal code sketch follows this list.
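
The shift-based augmentation DA(x) of note 4 can be sketched roughly as follows. This is a minimal illustration under our reading of [4], not the authors' code; the function and variable names (augment_batch, max_shift) are assumptions, and the Keras ImageDataGenerator mentioned in note 1 provides similar shifting behaviour via its width_shift_range parameter for image-shaped inputs.

```python
# Minimal sketch of DA(x): each training trace is randomly shifted by up to
# x * 100% of its length. All names here are illustrative, not from the paper.
import numpy as np

def augment_batch(traces, x):
    """Randomly shift each 1-D trace; the shift is drawn uniformly from
    [-x * n_points, +x * n_points]. Circular shifting keeps the length fixed."""
    n_points = traces.shape[1]
    max_shift = int(x * n_points)
    shifted = np.empty_like(traces)
    for i, trace in enumerate(traces):
        shift = np.random.randint(-max_shift, max_shift + 1)
        shifted[i] = np.roll(trace, shift)
    return shifted

# Example: DA(0.1) applied to a batch of 700-sample traces during training.
batch = np.random.randn(32, 700)
augmented = augment_batch(batch, x=0.1)
```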

References

  1. Brier, E., Clavier, C., Olivier, F.: Correlation power analysis with a leakage model. In: Joye, M., Quisquater, J.-J. (eds.) CHES 2004. LNCS, vol. 3156, pp. 16–29. Springer, Heidelberg (2004). https://doi.org/10.1007/978-3-540-28632-5_2

  2. Benadjila, R., Prouff, E., Strullu, R., Cagli, E., Dumas, C.: Deep learning for side-channel analysis and introduction to ASCAD database. J. Cryptogr. Eng. 10(2), 163–188 (2019). https://doi.org/10.1007/s13389-019-00220-8

  3. Chawla, N.V., Bowyer, K.W., Hall, L.O., Kegelmeyer, W.P.: SMOTE: synthetic minority over-sampling technique. J. Artif. Intell. Res. 16(1), 321–357 (2002)

  4. Cagli, E., Dumas, C., Prouff, E.: Convolutional neural networks with data augmentation against jitter-based countermeasures. In: Fischer, W., Homma, N. (eds.) CHES 2017. LNCS, vol. 10529, pp. 45–68. Springer, Cham (2017). https://doi.org/10.1007/978-3-319-66787-4_3

  5. Chari, S., Rao, J.R., Rohatgi, P.: Template attacks. In: Kaliski, B.S., Koç, Ç.K., Paar, C. (eds.) CHES 2002. LNCS, vol. 2523, pp. 13–28. Springer, Heidelberg (2003). https://doi.org/10.1007/3-540-36400-5_3

  6. Coron, J.-S., Kizhvatov, I.: An efficient method for random delay generation in embedded software. In: Clavier, C., Gaj, K. (eds.) CHES 2009. LNCS, vol. 5747, pp. 156–170. Springer, Heidelberg (2009). https://doi.org/10.1007/978-3-642-04138-9_12

  7. Zaid, G., Bossuet, L., Habrard, A., Venelli, A.: Methodology for efficient CNN architectures in profiling attacks. IACR Trans. Cryptogr. Hardw. Embed. Syst. 2020(1), 1–36 (2020)

  8. Kovács, G.: smote_variants documentation, release 0.1.0. https://readthedocs.org/projects/smote-variants/downloads/pdf/latest/. Accessed 3 Feb 2020

  9. Kocher, P., Jaffe, J., Jun, B.: Differential power analysis. In: Wiener, M. (ed.) CRYPTO 1999. LNCS, vol. 1666, pp. 388–397. Springer, Heidelberg (1999). https://doi.org/10.1007/3-540-48405-1_25

  10. Kim, J., Picek, S., Heuser, A., Bhasin, S., Hanjalic, A.: Make some noise. Unleashing the power of convolutional neural networks for profiled side-channel analysis. IACR Trans. Cryptogr. Hardw. Embed. Syst. 2019(3), 148–179 (2019)

  11. Lee, H., Kim, J., Kim, S.: Gaussian-based SMOTE algorithm for solving skewed class distributions. Int. J. Fuzzy Log. Intell. Syst. 17(4), 229–237 (2017)

  12. Standaert, F.-X., Malkin, T.G., Yung, M.: A unified framework for the analysis of side-channel key recovery attacks. In: Joux, A. (ed.) EUROCRYPT 2009. LNCS, vol. 5479, pp. 443–461. Springer, Heidelberg (2009). https://doi.org/10.1007/978-3-642-01001-9_26

  13. Simard, P.Y., Steinkraus, D., Platt, J.C.: Best practices for convolutional neural networks applied to visual document analysis. In: 7th International Conference on Document Analysis and Recognition (ICDAR 2003), vol. 2 (2003)

  14. Ma, L., Fan, S.: CURE-SMOTE algorithm and hybrid algorithm for feature selection and parameter optimization based on random forests. BMC Bioinformatics 18, 169 (2017)

  15. Maghrebi, H., Portigliatti, T., Prouff, E.: Breaking cryptographic implementations using deep learning techniques. In: Carlet, C., Hasan, M.A., Saraswat, V. (eds.) SPACE 2016. LNCS, vol. 10076, pp. 3–26. Springer, Cham (2016). https://doi.org/10.1007/978-3-319-49445-6_1

  16. Picek, S., Heuser, A., Jovic, A., Bhasin, S., Regazzoni, F.: The curse of class imbalance and conflicting metrics with machine learning for side-channel evaluations. IACR Trans. Cryptogr. Hardw. Embed. Syst. 2019(1), 209–237 (2019)

  17. Wang, S., Li, Z., Chao, W., Cao, Q.: Applying adaptive over-sampling technique based on data density and cost-sensitive SVM to imbalanced learning. In: The 2012 International Joint Conference on Neural Networks (IJCNN), pp. 1–8 (2012)

  18. Wolpert, D.H.: The lack of a priori distinctions between learning algorithms. Neural Comput. 8, 1341–1390 (1996)

Acknowledgements

We gratefully acknowledge the support of NVIDIA Corporation with the donation of the Titan Xp GPU used for this research. The authors acknowledge the support from the ‘National Integrated Centre of Evaluation’ (NICE); a facility of Cyber Security Agency, Singapore (CSA).

Author information

Corresponding author

Correspondence to Yoo-Seung Won.

Appendices

A Variants of Oversampling Techniques (SMOTE Variants)

Two oversampling approaches, SMOTE and SMOTE-ENN, were already introduced in [16]. However, there are currently 85 SMOTE variants listed in [8]. To the best of our knowledge, the effectiveness of these schemes has not been properly investigated in the context of SCA.

The SMOTE variants in Table 2 were developed to overcome the bias introduced by imbalanced data in a deep learning context. As mentioned previously, only SMOTE and SMOTE-ENN are used in [16]. Although SMOTE performs well in [16], many SMOTE variants have not been evaluated. Moreover, [16] considers the role of SMOTE and SMOTE-ENN only as increasing the number of minority-class instances. In general, however, oversampling techniques can be used more broadly than previously suggested. In particular, they can be applied beyond the HW/HD leakage models, because the data may also be biased in practice. As such, SMOTE variants are beneficial as a preprocessing tool that helps smooth the distribution of the data.

Table 2. API function list for the SMOTE variants in [8]. The table reports the size of the training set for AES_RD (256 classes), AES_RD (9 classes), ASCAD (256 classes), and ASCAD (9 classes), respectively, after applying each oversampling technique. '-' indicates that we did not run that oversampling technique due to the time limit.

Moreover, as mentioned earlier, these techniques are worth investigating in the context of SCA because SMOTE variants offer several advantages, such as modifying the majority class and removing noise. Among the 85 SMOTE variants, we conducted a preliminary investigation of their effectiveness and report only those that are reasonably successful for SCA. A minimal usage sketch of such an oversampler is shown below.
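
As an illustration of how such an oversampler plugs into the profiling phase, the sketch below uses the smote_variants package documented in [8]. This is a minimal sketch, assuming that package's sample(X, y) interface; the profiling-set variables (X_profiling, y_profiling) are placeholders, not the datasets used in this paper.

```python
# Minimal sketch: rebalancing an imbalanced profiling set with a SMOTE
# variant from the smote_variants package [8] before training the profiling
# model. Variable names are placeholders.
import numpy as np
import smote_variants as sv

# Placeholder profiling set: traces labelled with 9 Hamming-weight classes.
X_profiling = np.random.randn(1000, 700)
y_profiling = np.random.randint(0, 9, size=1000)

# Other variants listed in Table 2 (e.g. SMOTE_ENN) expose the same interface.
oversampler = sv.SMOTE()
X_balanced, y_balanced = oversampler.sample(X_profiling, y_profiling)

# X_balanced / y_balanced then replace the original profiling set when
# training the deep learning model.
```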

B Results for All Oversampling Techniques Against AES_RD and ASCAD(desync=100)

The legend for Figs. 3 and 4 is given in Fig. 5.

Fig. 3. Result for variant SMOTEs and DA against AES_RD

Fig. 4. Result for variant SMOTEs and DA against ASCAD(desync=100)

Fig. 5. The legend of Figs. 3 and 4

Copyright information

© 2020 Springer Nature Switzerland AG

About this paper

Cite this paper

Won, Y.-S., Jap, D., Bhasin, S. (2020). Push for More: On Comparison of Data Augmentation and SMOTE with Optimised Deep Learning Architecture for Side-Channel. In: You, I. (ed.) Information Security Applications. WISA 2020. Lecture Notes in Computer Science, vol. 12583. Springer, Cham. https://doi.org/10.1007/978-3-030-65299-9_18

  • DOI: https://doi.org/10.1007/978-3-030-65299-9_18

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-030-65298-2

  • Online ISBN: 978-3-030-65299-9

  • eBook Packages: Computer Science, Computer Science (R0)
