AdvIris: a hybrid approach to detecting adversarial iris examples using wavelet transform

Published in International Journal of Speech Technology 25, 435–441 (2022).

Abstract

Deep neural networks have enabled significant progress in biometric applications, but they are particularly vulnerable to adversarial examples: input data that have been deliberately manipulated to cause misclassification. Such attacks can make a biometric system fail in terms of recognition performance. The proposed work introduces an effective defensive mechanism for detecting adversarial iris examples. The defence is based on the Discrete Wavelet Transform (DWT), examining the high- and mid-frequency wavelet sub-bands, from which the model reconstructs several denoised versions of the iris image. A U-Net-based deep convolutional architecture then performs the final classification. The approach is evaluated by classifying adversarial iris images generated by several attack methods, namely FGSM, DeepFool, and iGSM. Experimental analysis on a benchmark iris image database, IITD, produces excellent results with an average accuracy of 94 percent. The results show that the proposed strategy detects adversarial attacks better than other state-of-the-art defensive models.
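The abstract names two gradient-based attacks (FGSM and its iterative variant iGSM) and a wavelet-domain defence; two hedged sketches follow to make these concrete. First, a minimal one-step FGSM in PyTorch: the model, loss, and perturbation budget `eps` are illustrative placeholders, not details taken from the paper.

```python
# Minimal FGSM sketch (assumed setup, not the paper's code):
# x_adv = clip(x + eps * sign(grad_x loss(model(x), y))).
import torch
import torch.nn.functional as F

def fgsm(model: torch.nn.Module, x: torch.Tensor, y: torch.Tensor,
         eps: float = 0.03) -> torch.Tensor:
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)    # classification loss on the input
    loss.backward()                        # gradient with respect to the pixels
    x_adv = x + eps * x.grad.sign()        # single step along the gradient sign
    return x_adv.clamp(0.0, 1.0).detach()  # keep pixel values in a valid range

# iGSM repeats the same signed step several times with a smaller per-step eps.
```

Second, a sketch of the kind of DWT-based denoising the abstract describes: decompose the iris image with a 2-D wavelet transform, attenuate the mid- and high-frequency detail sub-bands where small adversarial perturbations tend to concentrate, and reconstruct. The wavelet family, decomposition level, and soft-thresholding rule below are assumptions for illustration, not the authors' exact pipeline, which builds multiple denoised reconstructions and feeds them to a U-Net-based classifier.

```python
# DWT denoising sketch using PyWavelets (assumed parameters: 'db2' wavelet,
# 2 decomposition levels, soft thresholding of the detail sub-bands).
import numpy as np
import pywt

def dwt_denoise(img: np.ndarray, wavelet: str = "db2",
                level: int = 2, threshold: float = 0.04) -> np.ndarray:
    coeffs = pywt.wavedec2(img, wavelet, level=level)
    approx, details = coeffs[0], coeffs[1:]
    denoised = [approx]  # keep the low-frequency approximation untouched
    for cH, cV, cD in details:
        # Soft-threshold horizontal, vertical, and diagonal detail sub-bands.
        denoised.append(tuple(pywt.threshold(c, threshold, mode="soft")
                              for c in (cH, cV, cD)))
    return pywt.waverec2(denoised, wavelet)

# Usage: pass a grayscale iris image as a float array scaled to [0, 1].
```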

Author information

Correspondence to K. Meenakshi.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.


About this article

Cite this article

Meenakshi, K., Maragatham, G. AdvIris: a hybrid approach to detecting adversarial iris examples using wavelet transform. Int J Speech Technol 25, 435–441 (2022). https://doi.org/10.1007/s10772-022-09967-8
