DOI: 10.1145/3659211.3659274
Research article

Research on communication mechanism optimization based on distributed graph computing environment

Published: 29 May 2024

Abstract

In recent years, the rapid development of the Internet of Things, the Internet, and social networks has driven explosive growth in the data stored across networks, and that data is increasingly closely tied to the real world. Driven by large-scale data mining and machine-learning applications, distributed graph computing models, which use graph data structures to describe data and the relationships between data, have seen increasingly wide use. This paper therefore studies the optimization of communication mechanisms in a distributed graph computing environment. First, a BSP model based on the pure message-passing communication mechanism of a distributed graph computing system is established. Second, the optimization model is evaluated from two aspects: data communication and convergence-condition judgment. Finally, large-scale data sets are used to test and evaluate the performance optimization. The results show that this method can greatly improve the efficiency of parallel graph computation.
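The BSP model with pure message passing and a global convergence-condition check that the abstract describes can be sketched in a single process. This is a minimal illustration under stated assumptions, not the paper's implementation: it runs a PageRank-style vertex program (a hypothetical workload; the paper does not name one), where each superstep consumes the previous superstep's messages, emits new ones, and a check at the barrier on the maximum value change judges convergence.

```python
from collections import defaultdict

def bsp_pagerank(graph, damping=0.85, tol=1e-6, max_supersteps=100):
    """One-process sketch of a BSP superstep loop: pure message
    passing between vertices plus a global convergence check.

    graph: dict mapping each vertex to a list of its out-neighbours.
    """
    n = len(graph)
    rank = {v: 1.0 / n for v in graph}

    # Superstep 0: every vertex broadcasts its initial share.
    inbox = defaultdict(list)
    for v, neighbours in graph.items():
        for u in neighbours:
            inbox[u].append(rank[v] / len(neighbours))

    for _ in range(max_supersteps):
        outbox = defaultdict(list)
        max_delta = 0.0
        # Compute phase: each vertex consumes its inbox, updates its
        # value, and sends messages along its outgoing edges.
        for v, neighbours in graph.items():
            new_rank = (1.0 - damping) / n + damping * sum(inbox[v])
            max_delta = max(max_delta, abs(new_rank - rank[v]))
            rank[v] = new_rank
            for u in neighbours:
                outbox[u].append(new_rank / len(neighbours))
        # Barrier: messages become visible only in the next superstep;
        # the convergence condition is judged globally here.
        inbox = outbox
        if max_delta < tol:
            break
    return rank
```

On the three-vertex cycle a→b→c→a this converges to equal ranks of 1/3. In a real distributed system the compute phase runs per worker and `max_delta` would be aggregated across workers at the barrier, which is exactly where the communication and convergence-judgment costs the paper targets arise.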



    Published In

    BDEIM '23: Proceedings of the 2023 4th International Conference on Big Data Economy and Information Management
    December 2023
    917 pages
ISBN: 9798400716669
DOI: 10.1145/3659211
    Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than the author(s) must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from [email protected].

    Publisher

    Association for Computing Machinery

    New York, NY, United States


    Qualifiers

    • Research-article
    • Research
    • Refereed limited

    Conference

    BDEIM 2023


    Article Metrics

• Total citations: 0
• Total downloads: 8
• Downloads (last 12 months): 8
• Downloads (last 6 weeks): 2

Reflects downloads up to 14 Feb 2025
