Abstract
Neural networks are widely used in image processing, speech recognition, and other fields. When multiple parties collaborate to train a neural network model on their combined data, some of that data is sensitive and must not be revealed to anyone other than its holder. In this paper, we propose new protocols for private neural network training that protect sensitive data when it is aggregated from different sources. Our protocols achieve information-theoretic security and allow the number of participating parties to be configured flexibly according to deployment requirements. We evaluate the protocols by training neural networks on the MNIST dataset with varying numbers of participants. The experimental results demonstrate that our protocols successfully carry out neural network training while offering the benefit of flexible deployment.
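The paper's protocols themselves are not reproduced on this page. For orientation only, the sketch below shows plain (t, n) Shamir secret sharing over a prime field, the standard primitive behind information-theoretically secure multiparty computation with a configurable number of parties. The prime P, the threshold parameters, and the function names share/reconstruct are illustrative assumptions, not the paper's actual construction.

    # Minimal sketch of (t, n) Shamir secret sharing over a prime field.
    # Assumed, not taken from the paper: the field prime P, the example
    # parameters, and all names below.
    import random

    P = 2**61 - 1  # a prime large enough for fixed-point encoded values

    def share(secret, t, n):
        """Split `secret` into n shares; any t shares reconstruct it,
        while t-1 or fewer reveal nothing about it."""
        coeffs = [secret] + [random.randrange(P) for _ in range(t - 1)]
        shares = []
        for x in range(1, n + 1):
            y = 0
            for c in reversed(coeffs):   # Horner evaluation of the polynomial
                y = (y * x + c) % P
            shares.append((x, y))
        return shares

    def reconstruct(shares):
        """Recover the secret by Lagrange interpolation at x = 0."""
        secret = 0
        for i, (xi, yi) in enumerate(shares):
            num, den = 1, 1
            for j, (xj, _) in enumerate(shares):
                if i != j:
                    num = num * (-xj) % P
                    den = den * (xi - xj) % P
            secret = (secret + yi * num * pow(den, -1, P)) % P
        return secret

    if __name__ == "__main__":
        # Example: share one training value among 5 parties,
        # any 3 of which can reconstruct it.
        s = share(secret=123456, t=3, n=5)
        assert reconstruct(s[:3]) == 123456

Any t of the n shares determine the secret exactly, while any t-1 shares are statistically independent of it; this is what makes the privacy guarantee information-theoretic rather than computational, and the free choice of n is what permits flexible deployment.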
Acknowledgments
This work was supported in part by the National Key Research and Development Program under Grant 2022YFA1004900. We warmly thank Professor Chaoping Xing from Shanghai Jiao Tong University for his guidance on the design of our private protocols.
Copyright information
© 2023 The Author(s), under exclusive license to Springer Nature Switzerland AG
About this paper
Cite this paper
Zhou, H. (2023). Information-Theoretically Secure Neural Network Training with Flexible Deployment. In: Iliadis, L., Papaleonidas, A., Angelov, P., Jayne, C. (eds) Artificial Neural Networks and Machine Learning – ICANN 2023. Lecture Notes in Computer Science, vol 14258. Springer, Cham. https://doi.org/10.1007/978-3-031-44192-9_26
Publisher Name: Springer, Cham
Print ISBN: 978-3-031-44191-2
Online ISBN: 978-3-031-44192-9
eBook Packages: Computer Science, Computer Science (R0)