Abstract
Most traffic sign recognition systems rely on artificial neural networks. As a transfer learning method, knowledge distillation improves the robustness of neural network models to some extent and reduces training time. However, because the weights of the new model (student model) remain similar to those of the original model (teacher model), adversarial examples crafted against the teacher transfer easily and can successfully attack the student. To address this problem, this paper proposes a lightweight defense mechanism that reduces the similarity between the student model's weights and the teacher model's weights, and applies a dropout-randomization method at the input layer of the student model to reduce the probability that adversarial examples pass through intact. Moreover, we evaluate the precision and recall of the improved model; the results show that its robustness is significantly improved under the Carlini-Wagner (CW) attack and the Projected Gradient Descent (PGD) attack.
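The abstract's input-layer dropout-randomization idea can be illustrated with a minimal sketch: a fresh random mask zeroes out a fraction of input pixels on every forward pass, so a perturbation tuned against the teacher model is partially destroyed before it reaches the student. The function name, `drop_prob` value, and batch shape below are illustrative assumptions, not the paper's exact configuration.

```python
import numpy as np

def randomized_input_dropout(x, drop_prob=0.1, rng=None):
    """Apply a fresh random dropout mask to the input at inference time.

    Re-sampling the mask on every call means an attacker cannot rely on
    a fixed input transformation, which hinders the transfer of
    adversarial examples crafted on the teacher model.
    """
    rng = np.random.default_rng() if rng is None else rng
    # Keep each input element independently with probability 1 - drop_prob.
    mask = rng.random(x.shape) >= drop_prob
    return x * mask

# Hypothetical usage: a batch of traffic-sign images scaled to [0, 1].
images = np.ones((2, 32, 32, 3))
defended = randomized_input_dropout(images, drop_prob=0.1,
                                    rng=np.random.default_rng(0))
```

Because the mask is re-drawn per call, two consecutive calls on the same image generally produce different defended inputs, which is the source of the randomization.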
Acknowledgements
This article is supported in part by the National Natural Science Foundation of China under projects 61772150, 61862012, and 61962012, the Guangxi Key R&D Program under project AB17195025, the Guangxi Natural Science Foundation under grants 2018GXNSFDA281054, 2018GXNSFAA281232, 2019GXNSFFA245015, 2019GXNSFGA245004 and AD19245048, and the Peng Cheng Laboratory Project of Guangdong Province PCL2018KP004.
Copyright information
© 2020 Springer Nature Switzerland AG
About this paper
Cite this paper
Ding, Y., Liu, W., Qin, Y., Wang, Y. (2020). Smart Watchdog: A Lightweight Defending Mechanism Against Adversarial Transfer Learning. In: Chen, X., Yan, H., Yan, Q., Zhang, X. (eds) Machine Learning for Cyber Security. ML4CS 2020. Lecture Notes in Computer Science(), vol 12487. Springer, Cham. https://doi.org/10.1007/978-3-030-62460-6_48
DOI: https://doi.org/10.1007/978-3-030-62460-6_48
Publisher Name: Springer, Cham
Print ISBN: 978-3-030-62459-0
Online ISBN: 978-3-030-62460-6
eBook Packages: Computer Science (R0)