
Information-Theoretically Secure Neural Network Training with Flexible Deployment

  • Conference paper
  • First Online:
Artificial Neural Networks and Machine Learning – ICANN 2023 (ICANN 2023)

Part of the book series: Lecture Notes in Computer Science ((LNCS,volume 14258))


Abstract

Neural networks are widely used in image processing, speech recognition, and other fields. When multiple parties collaborate to train a neural network model by combining their individual data, some of that data is sensitive and must not be revealed to anyone other than its holder. In this paper, we propose new protocols for private neural network training that protect the privacy of sensitive data integrated from different sources. Our protocols achieve information-theoretic security and allow the number of parties to be configured flexibly according to actual requirements. We conduct experiments in settings with different numbers of participants, training neural networks on the MNIST dataset. The experimental results demonstrate that our protocols successfully carry out neural network training while offering the benefit of flexible deployment.
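The full protocols are behind the paywall, but the combination of information-theoretic security and a freely configurable number of parties is characteristic of protocols built on Shamir secret sharing, where each party holds one evaluation of a random degree-(t-1) polynomial whose constant term is the secret. The sketch below is illustrative background only, not the paper's actual construction; the field modulus `PRIME` and the threshold parameters are arbitrary choices for the example.

```python
import random

PRIME = 2**61 - 1  # example prime field; the paper's actual field is not specified here

def share(secret, t, n, prime=PRIME):
    """Split `secret` into n shares such that any t of them reconstruct it.

    A random polynomial f of degree t-1 is chosen with f(0) = secret;
    party i receives the point (i, f(i)).
    """
    coeffs = [secret % prime] + [random.randrange(prime) for _ in range(t - 1)]
    return [(i, sum(c * pow(i, k, prime) for k, c in enumerate(coeffs)) % prime)
            for i in range(1, n + 1)]

def reconstruct(shares, prime=PRIME):
    """Recover the secret by Lagrange interpolation of the shares at x = 0."""
    secret = 0
    for xi, yi in shares:
        num, den = 1, 1
        for xj, _ in shares:
            if xj != xi:
                num = num * (-xj) % prime
                den = den * (xi - xj) % prime
        # pow(den, prime - 2, prime) is the modular inverse (Fermat's little theorem)
        secret = (secret + yi * num * pow(den, prime - 2, prime)) % prime
    return secret
```

Because any t shares suffice and n can be chosen freely at deployment time, schemes of this kind naturally support a flexible number of participants; secure additions are local on shares, while multiplications require an interactive degree-reduction step that the sketch omits.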




Acknowledgments

This work was supported in part by the National Key Research and Development Program under Grant 2022YFA1004900. We warmly thank Professor Chaoping Xing from Shanghai Jiao Tong University for his guidance on the design of our private protocols.

Author information

Correspondence to Hengcheng Zhou.



Copyright information

© 2023 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this paper


Cite this paper

Zhou, H. (2023). Information-Theoretically Secure Neural Network Training with Flexible Deployment. In: Iliadis, L., Papaleonidas, A., Angelov, P., Jayne, C. (eds) Artificial Neural Networks and Machine Learning – ICANN 2023. ICANN 2023. Lecture Notes in Computer Science, vol 14258. Springer, Cham. https://doi.org/10.1007/978-3-031-44192-9_26


  • DOI: https://doi.org/10.1007/978-3-031-44192-9_26

  • Published:

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-031-44191-2

  • Online ISBN: 978-3-031-44192-9

  • eBook Packages: Computer Science, Computer Science (R0)
