DOI: 10.1145/237090.237181
An integrated compile-time/run-time software distributed shared memory system

Published: 01 September 1996

Abstract

On a distributed memory machine, hand-coded message passing leads to the most efficient execution, but it is difficult to use. Parallelizing compilers can approach the performance of hand-coded message passing by translating data-parallel programs into message passing programs, but efficient execution is limited to those programs for which precise analysis can be carried out. Shared memory is easier to program than message passing, and its domain is not constrained by the limitations of parallelizing compilers, but it lags in performance. Our goal is to close that performance gap while retaining the benefits of shared memory. In other words, our goal is (1) to make shared memory as efficient as message passing, whether hand-coded or compiler-generated, (2) to retain its ease of programming, and (3) to retain the broader class of applications it supports.

To this end we have designed and implemented an integrated compile-time and run-time software DSM system. The programming model remains identical to that of the original pure run-time DSM system; no user intervention is required to obtain the benefits of our system. The compiler computes data access patterns for the individual processors. It then performs a source-to-source transformation, inserting into the program calls that inform the run-time system of the computed data access patterns. The run-time system uses this information to aggregate communication, to aggregate data and synchronization into a single message, to eliminate consistency overhead, and to replace global synchronization with point-to-point synchronization wherever possible.

We extended the ParaScope programming environment to perform the required analysis, and we augmented the TreadMarks run-time DSM library to take advantage of the analysis. We used six Fortran programs to assess the performance benefits: Jacobi, 3D-FFT, Integer Sort (IS), Shallow, Gauss, and Modified Gramm-Schmidt, each with two different data set sizes. The experiments were run on an 8-node IBM SP/2 using user-space communication. Compiler optimization in conjunction with the augmented run-time system achieves substantial execution time improvements over the base TreadMarks, ranging from 4% to 59% on 8 processors. Relative to message passing implementations of the same applications, the compile-time/run-time system is 0-29% slower, while the base run-time system is 5-212% slower. For the five programs that XHPF could parallelize (all except IS), the execution times achieved by the compiler-optimized shared memory programs are within 9% of XHPF.
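The core mechanism described in the abstract, the compiler telling the run-time system which shared data a processor is about to access, so the run-time can fetch it in one aggregated message instead of faulting page by page, can be illustrated with a toy model. This is a minimal sketch of the idea only, not the TreadMarks interface: `RuntimeDSM`, `fault_in`, `validate`, and `pages_touched` are hypothetical names invented for illustration.

```python
PAGE_SIZE = 4096

class RuntimeDSM:
    """Toy model of a page-based software DSM node (hypothetical, for illustration)."""

    def __init__(self):
        self.messages_sent = 0   # request/reply round trips to other nodes
        self.valid_pages = set() # pages with a locally valid copy

    def fault_in(self, page):
        # Base run-time behaviour: each page fault triggers its own
        # request message to the node holding the data.
        if page not in self.valid_pages:
            self.messages_sent += 1
            self.valid_pages.add(page)

    def validate(self, pages):
        # Compiler-directed behaviour: the compiler's access-pattern
        # analysis supplies the whole set of pages the next phase will
        # read, so the run-time fetches all missing ones in one message.
        missing = [p for p in pages if p not in self.valid_pages]
        if missing:
            self.messages_sent += 1
            self.valid_pages.update(missing)

def pages_touched(base_addr, nbytes):
    # Pages covered by a contiguous region of shared memory.
    first = base_addr // PAGE_SIZE
    last = (base_addr + nbytes - 1) // PAGE_SIZE
    return list(range(first, last + 1))

# A Jacobi-style exchange reading 8 pages of a neighbour's boundary rows.
rows = pages_touched(0, 8 * PAGE_SIZE)

base = RuntimeDSM()
for p in rows:
    base.fault_in(p)          # 8 separate round trips

optimized = RuntimeDSM()
optimized.validate(rows)      # a single aggregated request

print(base.messages_sent, optimized.messages_sent)  # → 8 1
```

The source-to-source transformation described in the abstract amounts to inserting the `validate`-style call before the loop, which is why no change to the programming model is needed: the call is a performance hint, not a semantic one.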



Published In

cover image ACM Conferences
ASPLOS VII: Proceedings of the seventh international conference on Architectural support for programming languages and operating systems
October 1996
290 pages
ISBN:0897917677
DOI:10.1145/237090
Publisher

Association for Computing Machinery

New York, NY, United States


Acceptance Rates

ASPLOS VII Paper Acceptance Rate 25 of 109 submissions, 23%;
Overall Acceptance Rate 535 of 2,713 submissions, 20%

Cited By

  • (2024) TrackFM: Far-out Compiler Support for a Far Memory World. Proceedings of the 29th ACM International Conference on Architectural Support for Programming Languages and Operating Systems, Volume 1, 401-419. DOI: 10.1145/3617232.3624856. Online publication date: 27-Apr-2024.
  • (2019) CoSMIX. Proceedings of the 2019 USENIX Annual Technical Conference, 555-570. DOI: 10.5555/3358807.3358854. Online publication date: 10-Jul-2019.
  • (2019) Static Compiler Analyses for Application-specific Optimization of Task-Parallel Runtime Systems. Journal of Signal Processing Systems 91(3-4), 303-320. DOI: 10.1007/s11265-018-1356-9. Online publication date: 1-Mar-2019.
  • (2017) Task-parallel Runtime System Optimization Using Static Compiler Analysis. Proceedings of the Computing Frontiers Conference, 201-210. DOI: 10.1145/3075564.3075574. Online publication date: 15-May-2017.
  • (2015) Code Generation for Distributed-Memory Architectures. The Computer Journal (bxv077). DOI: 10.1093/comjnl/bxv077. Online publication date: 15-Sep-2015.
  • (2014) Errata for GPU-Efficient Recursive Filtering and Summed-Area Tables. ACM Transactions on Graphics 33(3), 1-1. DOI: 10.1145/2600860. Online publication date: 2-Jun-2014.
  • (2014) Adaptive reduction parallelization techniques. ACM International Conference on Supercomputing 25th Anniversary Volume, 311-322. DOI: 10.1145/2591635.2667180. Online publication date: 10-Jun-2014.
  • (2014) Loop Transforming for Reducing Data Alignment on Multi-Core SIMD Processors. Journal of Signal Processing Systems 74(2), 137-150. DOI: 10.1007/s11265-013-0754-2. Online publication date: 1-Feb-2014.
  • (2013) A decoupled local memory allocator. ACM Transactions on Architecture and Code Optimization 9(4), 1-22. DOI: 10.1145/2400682.2400693. Online publication date: 20-Jan-2013.
  • (2013) The CRNS framework and its application to programmable and reconfigurable cryptography. ACM Transactions on Architecture and Code Optimization 9(4), 1-25. DOI: 10.1145/2400682.2400692. Online publication date: 20-Jan-2013.