
Verifiable Privacy-Preserving Federated Learning in Web 3.0

Chapter in the book Security and Privacy in Web 3.0

Part of the book series: Digital Privacy and Security (DPS)

Abstract

Web 3.0 emphasizes the decentralization of data assets to build a more open, trustworthy, and user-empowered data ecosystem, giving users full sovereignty and ownership over their data. However, this high degree of personal control limits data mobility and interoperability, creating data silos that hinder the development of Web 3.0. Federated learning, an advanced paradigm for breaking down data silos, enables collaborative sharing of data assets while protecting user privacy. Yet in the open and complex environment of Web 3.0, federated learning is vulnerable to attack: curious servers and clients may exploit the global model to launch passive inference attacks that illicitly extract data assets from the training data, while malicious clients may launch active inference attacks or submit false local gradients. We introduce PILE, a resilient federated learning framework that safeguards the confidentiality of both local gradients and the global model, and guarantees their integrity through gradient verification. In PILE, we propose a gradient verification scheme built from two zero-knowledge proof components, so that neither the local gradients nor the global model needs to be publicly disclosed. We prove the security of PILE and evaluate it experimentally under both active and passive inference attacks. The results show that PILE provides strong privacy protection and robust model training for data assets in Web 3.0.
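The collaborative-training setting the abstract describes can be sketched as follows. This is a minimal illustration, not PILE's protocol: the least-squares task, the client data, and the attacker's false gradient are invented for the example, and the `norm_bound` filter is a crude plaintext stand-in for gradient verification. PILE instead verifies gradient validity with zero-knowledge proofs, so the server never sees the gradients in the clear.

```python
import numpy as np

def local_gradient(w, X, y):
    """Least-squares gradient computed on one client's private data."""
    return 2 * X.T @ (X @ w - y) / len(y)

def federated_round(w, grads, lr=0.1, norm_bound=None):
    """One aggregation round: the server sees only submitted gradients.
    Gradients whose norm exceeds norm_bound are dropped -- a crude
    plaintext placeholder for the verification step, which a scheme
    like PILE performs in zero knowledge instead."""
    if norm_bound is not None:
        grads = [g for g in grads if np.linalg.norm(g) <= norm_bound]
    return w - lr * np.mean(grads, axis=0)

# Synthetic setup: three honest clients, one malicious client.
rng = np.random.default_rng(0)
true_w = np.array([1.0, -2.0])
clients = []
for _ in range(3):
    X = rng.normal(size=(50, 2))
    clients.append((X, X @ true_w + rng.normal(scale=0.01, size=50)))

w = np.zeros(2)
for _ in range(200):
    grads = [local_gradient(w, X, y) for X, y in clients]
    grads.append(np.array([100.0, 100.0]))  # false gradient from attacker
    w = federated_round(w, grads, norm_bound=50.0)
```

With the filter in place, training converges near the true weights despite the poisoned submissions; removing `norm_bound` lets the false gradient dominate the average, which is exactly the integrity threat the abstract's gradient verification addresses.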




Copyright information

© 2024 The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd.

About this chapter

Cite this chapter

Shen, M., Tang, X., Wang, W., Zhu, L. (2024). Verifiable Privacy-Preserving Federated Learning in Web 3.0. In: Security and Privacy in Web 3.0. Digital Privacy and Security. Springer, Singapore. https://doi.org/10.1007/978-981-97-5752-7_3

  • DOI: https://doi.org/10.1007/978-981-97-5752-7_3

  • Publisher Name: Springer, Singapore

  • Print ISBN: 978-981-97-5751-0

  • Online ISBN: 978-981-97-5752-7

  • eBook Packages: Computer Science (R0)
