DOI: 10.1145/1356058.1356087
keynote

Code optimization of parallel programs: evolutionary vs. revolutionary approaches

Published: 06 April 2008

Abstract

Code optimization has a rich history that dates back over half a century. Over the years, it has contributed deep innovations to address challenges posed by new computer system and programming language features. Examples of the former include optimizations for improved register utilization, instruction-level parallelism, vector parallelism, multiprocessor parallelism and memory hierarchy utilization. Examples of the latter include optimizations for procedural, object-oriented, functional and domain-specific languages as well as dynamic optimization for managed runtimes. These optimizations have contributed significantly to programmer productivity by reducing the effort that programmers need to spend on hand-implementing code optimizations and by enabling code to be more portable, especially as programming models and computer architectures change. While compiler frameworks are often able to incorporate new code optimizations in an evolutionary manner, there have been notable periods in the history of compilers when more revolutionary changes were necessary. Examples of such paradigm shifts in the history of compilers include interprocedural whole program analysis, coloring-based register allocation, static single assignment form, array dependence analysis, pointer alias analysis, loop transformations, adaptive profile-directed optimizations, and dynamic compilation. The revolutionary nature of these shifts is evidenced by the fact that production-strength optimization frameworks (especially those in industry) had to be rewritten from scratch or significantly modified to support the new capabilities. In this talk, we claim that the current multicore trend in the computer industry is forcing a new paradigm shift in compilers to address the challenge of code optimization of parallel programs, regardless of whether the parallelism is implicit or explicit in the programming model. 
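One of the paradigm shifts listed above, static single assignment (SSA) form, can be illustrated with a minimal sketch (not from the talk; the function and variable names below are illustrative assumptions): every assignment to a variable is renamed to a fresh version, so each name has exactly one definition.

```python
def to_ssa(stmts):
    """Rename straight-line assignments into SSA form.

    `stmts` is a list of (lhs, rhs_vars) pairs; each use on a
    right-hand side is replaced with the latest version of that
    variable, and each definition gets a fresh version number.
    """
    version = {}  # latest version number per variable
    out = []
    for lhs, rhs in stmts:
        # Rewrite uses first, so `x = x + c` reads the old version.
        renamed_rhs = [f"{v}{version[v]}" if v in version else v
                       for v in rhs]
        version[lhs] = version.get(lhs, -1) + 1
        out.append((f"{lhs}{version[lhs]}", renamed_rhs))
    return out

# x = a + b; x = x + c   becomes   x0 = a + b; x1 = x0 + c
print(to_ssa([("x", ["a", "b"]), ("x", ["x", "c"])]))
```

This handles only straight-line code; the hard part of real SSA construction, placing phi-functions at control-flow joins, is exactly the kind of machinery that forced production compilers to be significantly restructured.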
All computers --- embedded, mainstream, and high-end --- are now being built from multicore processors with little or no increase in clock speed per core. This trend poses multiple challenges for compilers for future systems as the number of cores per socket continues to grow, and the cores become more heterogeneous. In addition, compilers have to keep pace with emerging parallel programming models embodied in a proliferation of new libraries and new languages.
To substantiate our claim, we examine the historical foundations of code optimization, including intermediate representations (IRs), abstract execution models, and legality and cost analyses of IR transformations, and show that they are all deeply entrenched in the von Neumann model of sequential computing. We discuss ongoing evolutionary efforts to support optimization of parallel programs in the context of existing compiler frameworks, and their inherent limitations for the long term. We then outline what a revolutionary approach will entail, and identify where its underlying paradigm shifts are likely to lie. We provide examples of past research that are likely to influence future directions in code optimization of parallel programs, such as program dependence graphs, partitioning and scheduling of lightweight parallelism, synchronization optimizations, communication optimizations, transactional memory optimizations, code generation for heterogeneous accelerators, the impact of memory models on code optimization, and general forms of data and computation alignment. Finally, we briefly describe the approach to code optimization of parallel programs being taken in the Habanero Multicore Software Research project at Rice University.
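The legality analyses mentioned above can be made concrete with a small sketch (an illustration, not the authors' method; names are assumed): whether a loop may legally run in parallel hinges on the absence of loop-carried dependences, which a brute-force array dependence test can check for a fixed trip count.

```python
def has_loop_carried_dep(n, write_idx, read_idx):
    """Brute-force dependence test for `for i in range(n): a[write_idx(i)] = f(a[read_idx(i)])`.

    Returns True if any two distinct iterations touch the same
    element with at least one write, i.e. parallel execution
    could change the result.
    """
    for i in range(n):
        for j in range(i + 1, n):
            if (write_idx(i) == read_idx(j)      # flow dependence
                    or write_idx(j) == read_idx(i)  # anti dependence
                    or write_idx(i) == write_idx(j)):  # output dependence
                return True
    return False

# a[i] = a[i] * 2 : iterations independent, safe to parallelize
print(has_loop_carried_dep(8, lambda i: i, lambda i: i))      # False
# a[i] = a[i-1] + 1 : each iteration reads the previous write
print(has_loop_carried_dep(8, lambda i: i, lambda i: i - 1))  # True
```

Production dependence tests reason symbolically over affine subscripts rather than enumerating iterations, but the legality question they answer is the same.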


Published In

    CGO '08: Proceedings of the 6th annual IEEE/ACM international symposium on Code generation and optimization
    April 2008
    235 pages
    ISBN:9781595939784
    DOI:10.1145/1356058
    Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than ACM must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from [email protected]

    Publisher

    Association for Computing Machinery

    New York, NY, United States



    Author Tags

    1. code optimization
    2. multicore processors
    3. parallel programs

    Qualifiers

    • Keynote

    Conference

    CGO '08

    Acceptance Rates

    CGO '08 Paper Acceptance Rate 21 of 66 submissions, 32%;
    Overall Acceptance Rate 312 of 1,061 submissions, 29%


