
An Algorithmic Framework for Decentralised Matrix Factorisation

  • Conference paper
  • In: Machine Learning and Knowledge Discovery in Databases (ECML PKDD 2020)

Abstract

We propose a framework for fully decentralised machine learning and apply it to latent factor models for top-N recommendation. In a decentralised learning setting, the training data is distributed across multiple agents, who jointly optimise a common global objective function (the loss function). In contrast to the client-server architecture of federated learning, the agents communicate directly, each maintaining and updating its own model parameters, with no central aggregation and no sharing of raw data. The framework rests on two key contributions. Firstly, we propose a method to extend a global loss function to a distributed loss function over the distributed parameters of the decentralised system; secondly, we show how this distributed loss function can be optimised by an algorithm that operates in two phases. In the learning phase, each agent carries out a large number of local learning steps without communication. In the subsequent sharing phase, neighbouring agents exchange messages that enable a batch update of local parameters. Thus, unlike other decentralised algorithms that require inter-agent communication after one (or a few) model updates, our algorithm significantly reduces the number of messages that need to be exchanged during learning. We prove the convergence of our framework and demonstrate its effectiveness using both the Weighted Matrix Factorisation and Bayesian Personalised Ranking latent factor recommender models, evaluating empirically on a number of recommender system datasets.
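To make the two-phase structure concrete, here is a minimal Python sketch of the training loop the abstract describes. The parameter layout (per-agent user factors plus a private copy of the item factors) follows the matrix-factorisation setting, but the squared-error objective, the hyperparameters, and the plain neighbourhood averaging used in the sharing phase are illustrative assumptions, not the paper's actual batch update rule.

```python
import numpy as np

class Agent:
    """One agent in the decentralised system. It holds its own ratings and a
    private copy of the model parameters (user factors p, item factors Q)."""

    def __init__(self, ratings, n_items, k, lr=0.01, reg=0.1, rng=None):
        rng = rng or np.random.default_rng()
        self.ratings = ratings                       # this agent's data only: {item_id: rating}
        self.p = rng.normal(0.0, 0.1, k)             # this agent's user factors
        self.Q = rng.normal(0.0, 0.1, (n_items, k))  # private copy of all item factors
        self.lr, self.reg = lr, reg

    def learning_phase(self, n_steps, rng):
        """Many local SGD steps with no communication at all."""
        items = list(self.ratings)
        for _ in range(n_steps):
            i = items[rng.integers(len(items))]      # sample one of this agent's ratings
            err = self.ratings[i] - self.p @ self.Q[i]
            grad_p = -err * self.Q[i] + self.reg * self.p
            grad_q = -err * self.p + self.reg * self.Q[i]
            self.p -= self.lr * grad_p
            self.Q[i] -= self.lr * grad_q

def sharing_phase(agents, neighbours):
    """Batch update of item factors from neighbours' messages. Plain
    neighbourhood averaging stands in here for the paper's batch update,
    which is derived from the distributed loss."""
    snapshots = [a.Q.copy() for a in agents]         # messages "sent" before anyone updates
    for idx, a in enumerate(agents):
        group = [idx] + neighbours[idx]
        a.Q = np.mean([snapshots[j] for j in group], axis=0)

# Alternate the two phases: much local work, few exchanged messages.
rng = np.random.default_rng(0)
agents = [Agent({0: 5.0, 2: 3.0}, n_items=4, k=8, rng=rng),
          Agent({1: 4.0, 2: 2.0}, n_items=4, k=8, rng=rng)]
ring = {0: [1], 1: [0]}                              # each agent's neighbour list
for _ in range(20):
    for a in agents:
        a.learning_phase(n_steps=100, rng=rng)
    sharing_phase(agents, ring)
```

The point of the structure is visible in the driver loop: communication cost scales with the number of sharing phases, not with the number of SGD steps.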


Notes

  1. We ignore bias terms for simplicity, though these can easily be incorporated into the framework.

  2. SGD schemes for WMF typically sample negative items (\(i \notin R_u\)) at a different rate to positive samples. In this case, the stochastic gradients are unbiased gradient estimators of a loss function whose confidence term is \(c_{ui}=\pi_i(1+\alpha r_{ui})\), where \(\pi_i\) is the probability of sampling item i; see the first sketch after these notes.

  3. Other accuracy measures follow the trends we see for prec@10 (defined in the second sketch after these notes), matching the well-known scores of the central algorithm once the decentralised algorithm converges; see e.g. www.librec.net/release/v1.3/example.html.
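The confidence term in note 2 is easy to misread, so here is a minimal Python sketch of one WMF SGD step under non-uniform item sampling. The popularity-based sampler and all names are illustrative assumptions; only the relationship between the sampling probability \(\pi_i\) and the effective confidence \(c_{ui}=\pi_i(1+\alpha r_{ui})\) comes from the note.

```python
import numpy as np

def wmf_sgd_step(p_u, Q, r_u, item_pop, alpha=40.0, lr=0.01, reg=0.01, rng=None):
    """One stochastic step of Weighted Matrix Factorisation for user u.

    Items are sampled with probability pi_i (here, proportional to an assumed
    popularity count), and each step is weighted by (1 + alpha * r_ui). In
    expectation, this optimises a WMF loss whose confidence term is
    c_ui = pi_i * (1 + alpha * r_ui), as stated in note 2.
    """
    rng = rng or np.random.default_rng()
    pi = item_pop / item_pop.sum()       # pi_i: probability of sampling item i
    i = rng.choice(Q.shape[0], p=pi)
    r_ui = r_u.get(i, 0.0)               # implicit feedback: 0 if unobserved
    pref = 1.0 if r_ui > 0 else 0.0      # binary preference target
    weight = 1.0 + alpha * r_ui          # per-sample part of the confidence
    err = pref - p_u @ Q[i]
    grad_p = -weight * err * Q[i] + reg * p_u
    grad_q = -weight * err * p_u + reg * Q[i]
    p_u -= lr * grad_p                   # in-place updates of the factors
    Q[i] -= lr * grad_q
```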
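Note 3 reports prec@10; for completeness, here is the standard definition of that metric, with made-up example numbers:

```python
def precision_at_k(recommended, relevant, k=10):
    """prec@k: fraction of the top-k recommended items that are relevant."""
    return sum(1 for item in recommended[:k] if item in relevant) / k

# e.g. 3 of the top 10 recommendations are held-out positives -> 0.3
print(precision_at_k(list(range(10)), relevant={1, 5, 7}))
```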


Acknowledgements

This work is supported by Science Foundation Ireland under grant number SFI/12/RC/2289_P2 and by Samsung Research, Samsung Electronics Co., Seoul, Republic of Korea. We wish to thank the reviewers for their helpful feedback.

Author information


Correspondence to Erika Duriakova or Wěipéng Huáng.


Electronic supplementary material

Below is the link to the electronic supplementary material.

Supplementary material 1 (PDF 288 KB)


Copyright information

© 2021 Springer Nature Switzerland AG

About this paper


Cite this paper

Duriakova, E. et al. (2021). An Algorithmic Framework for Decentralised Matrix Factorisation. In: Hutter, F., Kersting, K., Lijffijt, J., Valera, I. (eds) Machine Learning and Knowledge Discovery in Databases. ECML PKDD 2020. Lecture Notes in Computer Science, vol. 12458. Springer, Cham. https://doi.org/10.1007/978-3-030-67661-2_19


  • DOI: https://doi.org/10.1007/978-3-030-67661-2_19

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-030-67660-5

  • Online ISBN: 978-3-030-67661-2

  • eBook Packages: Computer Science, Computer Science (R0)
