
FedSel: Federated SGD Under Local Differential Privacy with Top-k Dimension Selection

Conference paper

Database Systems for Advanced Applications (DASFAA 2020)

Part of the book series: Lecture Notes in Computer Science (LNISA, volume 12112)

Abstract

As massive amounts of data are produced by small gadgets, federated learning on mobile devices has become an emerging trend. In the federated setting, Stochastic Gradient Descent (SGD) has been widely used for training various machine learning models. To prevent privacy leakage from gradients computed on users' sensitive data, local differential privacy (LDP) has recently been adopted as a privacy guarantee in federated SGD. However, existing solutions suffer from a dimension dependency problem: the injected noise is essentially proportional to the dimension d. In this work, we propose FedSel, a two-stage framework for federated SGD under LDP that mitigates this problem. Our key idea is that not all dimensions are equally important, so we privately select the Top-k dimensions according to their contributions in each iteration of federated SGD. Specifically, we propose three private dimension selection mechanisms and adapt the gradient accumulation technique to stabilize the learning process under noisy updates. We also theoretically analyze the privacy, accuracy, and time complexity of FedSel, which outperforms state-of-the-art solutions. Experiments on real-world and synthetic datasets verify the effectiveness and efficiency of our framework.
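The abstract's two-stage idea (privately pick one of the Top-k gradient dimensions, then report an LDP-perturbed value for that dimension) can be illustrated with the minimal sketch below. This is not the paper's actual mechanisms: the selection step here is a simple randomized-response-style chooser, and the value step uses a Duchi et al.-style one-bit perturbation; all function names and parameters are illustrative assumptions.

```python
import math
import random


def private_topk_select(grad, k, eps):
    """Illustrative private selection: with probability e^eps/(e^eps+1)
    report a uniformly random Top-k dimension, otherwise a uniformly
    random dimension. (FedSel's three mechanisms differ in detail.)"""
    d = len(grad)
    topk = sorted(range(d), key=lambda j: abs(grad[j]), reverse=True)[:k]
    p = math.exp(eps) / (math.exp(eps) + 1)
    if random.random() < p:
        return random.choice(topk)
    return random.randrange(d)


def perturb_value(v, eps, c=1.0):
    """Duchi et al.-style one-bit LDP perturbation of v clipped to [-c, c].
    The output is +/- c(e^eps+1)/(e^eps-1), chosen so the report is an
    unbiased estimate of v."""
    v = max(-c, min(c, v))
    e = math.exp(eps)
    prob = 0.5 + v * (e - 1) / (2 * c * (e + 1))
    sign = 1 if random.random() < prob else -1
    return sign * c * (e + 1) / (e - 1)


# Client-side report for one iteration: (dimension index, noisy value).
# Because only one dimension is reported, the noise no longer scales
# with the full dimension d.
grad = [0.05, -0.8, 0.1, 0.6, -0.02]
j = private_topk_select(grad, k=2, eps=1.0)
report = (j, perturb_value(grad[j], eps=1.0))
```

The sketch also shows why selection helps with the dimension dependency: each client spends its privacy budget on a single (hopefully important) coordinate rather than splitting it across all d coordinates.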



Acknowledgements

This work is supported by the National Key Research and Development Program of China (No. 2018YFB1004401), the National Natural Science Foundation of China (Nos. 61532021, 61772537, 61772536, and 61702522), JSPS KAKENHI Grants Nos. 17H06099, 18H04093, and 19K20269, and Microsoft Research Asia (CORE16).

Author information


Corresponding author

Correspondence to Hong Chen.


Copyright information

© 2020 Springer Nature Switzerland AG

About this paper


Cite this paper

Liu, R., Cao, Y., Yoshikawa, M., Chen, H. (2020). FedSel: Federated SGD Under Local Differential Privacy with Top-k Dimension Selection. In: Nah, Y., Cui, B., Lee, S.W., Yu, J.X., Moon, Y.S., Whang, S.E. (eds.) Database Systems for Advanced Applications. DASFAA 2020. Lecture Notes in Computer Science, vol. 12112. Springer, Cham. https://doi.org/10.1007/978-3-030-59410-7_33
