Automated Mapping of Task-Based Programs onto Distributed and Heterogeneous Machines

Published: 11 November 2023

Abstract

In a parallel and distributed application, a mapping is a selection of a processor for each computation or task and memories for the data collections that each task accesses. Finding high-performance mappings is challenging, particularly on heterogeneous hardware with multiple choices for processors and memories. We show that fast mappings are sensitive to the machine, application, and input. Porting to a new machine, modifying the application, or using a different input size may necessitate re-tuning the mapping to maintain the best possible performance.
We present AutoMap, a system that automatically tunes the mapping to the hardware used and finds fast mappings without user intervention or code modification. In contrast, hand-written mappings often require days of experimentation. AutoMap utilizes a novel constrained coordinate-wise descent search algorithm that balances the trade-off between running computations quickly and minimizing data movement. AutoMap discovers mappings up to 2.41× faster than custom, hand-written mappers.
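The search strategy described above can be illustrated with a toy sketch of coordinate-wise descent over a discrete mapping space. This is a simplified illustration only: the cost model, the processor choices, and all constants below are assumptions for the example, not the constrained algorithm or cost model used by AutoMap.

```python
# Toy sketch of coordinate-wise descent over a mapping space.
# The cost model and its constants are illustrative assumptions,
# not AutoMap's actual cost model or constrained search.

def runtime_cost(mapping):
    """Estimate execution time: a GPU runs a task 4x faster than a CPU,
    but each processor-kind boundary between consecutive tasks incurs a
    data-movement penalty."""
    compute = sum(1.0 if p == "cpu" else 0.25 for p in mapping)
    movement = sum(0.5 for a, b in zip(mapping, mapping[1:]) if a != b)
    return compute + movement

def coordinate_descent(num_tasks, choices=("cpu", "gpu"), max_sweeps=10):
    mapping = ["cpu"] * num_tasks          # start from an all-CPU mapping
    best = runtime_cost(mapping)
    for _ in range(max_sweeps):
        improved = False
        for i in range(num_tasks):         # one coordinate = one task's placement
            for choice in choices:
                trial = mapping.copy()
                trial[i] = choice
                cost = runtime_cost(trial)
                if cost < best:            # greedily keep any improving move
                    mapping, best, improved = trial, cost, True
        if not improved:                   # converged: no single-coordinate move helps
            break
    return mapping, best

mapping, cost = coordinate_descent(8)
print(mapping, cost)
```

The per-coordinate moves capture the trade-off the paper describes: moving one task to a faster processor only pays off if the speedup outweighs the data movement it introduces at the boundaries with its neighbors.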

Supplemental Material

MP4 File: SC23 video presentation for "Automated Mapping of Task-Based Programs onto Distributed and Heterogeneous Machines" by Thiago S. F. X. Teixeira, Alexandra Henzinger, Rohan Yadav and Alex Aiken.


Cited By

  • (2024) Dynamic Tuning of Core Counts to Maximize Performance in Object-Based Runtime Systems. Asynchronous Many-Task Systems and Applications, 10.1007/978-3-031-61763-8_9, 92-104. Online publication date: 14-Feb-2024.

Published In

SC '23: Proceedings of the International Conference for High Performance Computing, Networking, Storage and Analysis
November 2023, 1428 pages
ISBN: 9798400701092
DOI: 10.1145/3581784

Publisher

Association for Computing Machinery, New York, NY, United States


      Author Tags

      1. parallel programming
      2. mapping
      3. autotuning
      4. distributed systems
      5. heterogeneous hardware
      6. high performance computing

      Qualifiers

      • Research-article

Conference

SC '23

      Acceptance Rates

      Overall Acceptance Rate 1,516 of 6,373 submissions, 24%
