
Autotuning, Code Generation and Optimizing Compiler Technology for GPUs
Publisher:
  • University of Southern California, Computer Science Dept., 200 University Park, Los Angeles, CA, United States
ISBN: 978-1-267-44368-7
Order Number: AAI3513791
Pages: 97
Abstract

Graphics Processing Units (GPUs) have evolved into devices with teraflop-level performance potential. Developing GPU software is tedious: application developers must correctly identify parallel computation and optimize the placement of data for the parallel processors in such architectures. Further, code optimized for one architecture may not perform well on different generations of even the same processor family, and many manually tuned GPU solutions need a complete rewrite for a different architecture, costing additional programmer time and effort. High-performance computing on GPUs can be facilitated by programming models and frameworks that reduce the time and effort needed to develop GPU applications.

This thesis describes a compiler framework for automatically generating and optimizing parallel code for GPUs (CUDA code for Nvidia GPUs), relieving programmers of the tedious work of parallelizing sequential code. The framework combines a script-based compiler for CUDA code generation with two components: a Transformation Strategy Generator (TSG), which automatically generates multiple scripts representing different optimization strategies, and an autotuning system, which automatically generates a set of code variants and selects the best among them through empirical evaluation.
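
As a rough illustration of the autotuning loop described above, the Python sketch below generates one code variant per strategy script, times each empirically, and keeps the fastest. The interface is an assumption for illustration only: compile_variant stands in for the script-based compiler and benchmark for empirical evaluation on the GPU; neither is the framework's actual API.

    # Minimal sketch of empirical autotuning. compile_variant and
    # benchmark are hypothetical stand-ins, not the thesis framework's
    # real interface.
    def autotune(kernel_source, strategies, compile_variant, benchmark):
        best_variant, best_time = None, float("inf")
        for script in strategies:                  # one TSG script per optimization strategy
            variant = compile_variant(kernel_source, script)
            elapsed = benchmark(variant)           # time the variant on the target GPU
            if elapsed < best_time:
                best_variant, best_time = variant, elapsed
        return best_variant, best_time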

An underlying loop transformation and code generation framework takes the TSG-generated scripts as input and generates CUDA code, enabling an end-to-end system that produces high-performance solutions for scientific computations on a given GPU architecture.
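
To make the script-driven flow concrete, the following shows what one TSG-emitted strategy might look like for a matrix-multiply loop nest, written here as plain Python data. The transformation names echo common loop-transformation commands (tiling, data staging, unrolling), but they are illustrative stand-ins; the framework's actual script language may differ.

    # One hypothetical strategy: an ordered list of loop transformations
    # that the underlying code generator would apply before emitting a
    # CUDA variant. All names and parameters are illustrative.
    strategy_mm = [
        ("tile",              {"loop": "i", "size": 16}),   # map tiles to thread blocks
        ("tile",              {"loop": "j", "size": 16}),
        ("copy_to_shared",    {"array": "B"}),              # stage reused data in shared memory
        ("copy_to_registers", {"array": "A"}),              # exploit register reuse
        ("unroll",            {"loop": "k", "factor": 4}),  # expose instruction-level parallelism
    ]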

This flexible organization enables the system to explore a large optimization search space, simultaneously targeting different architectural features while constrained by data dependences and guided by data reuse and the best heuristics from manual tuning. The system tailors the generated code to yield high performance across GPU generations, data types and data sets. The key contributions of this thesis are: (1) the meta-optimizer, TSG; (2) a search and autotuning mechanism; (3) integration with a script-based compiler framework, resulting in an end-to-end automatic parallelization system; (4) performance-portable code generation for the Nvidia GTX-280 and Nvidia Tesla C2050 (Fermi) architectures; and (5) performance gains of up to 1.84x over the manually tuned linear algebra kernels in the Nvidia CUBLAS library, and up to 2.03x over a state-of-the-art GPU compiler on a set of scientific, multimedia and imaging kernels.
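
One plausible reading of how such a search space is kept tractable (an assumption for illustration, not the thesis's stated algorithm) is to discard strategies that would violate data dependences and order the remainder by an estimated data-reuse score before empirical evaluation:

    # Hypothetical pruning step: keep only dependence-preserving
    # strategies, then rank by estimated data reuse so the autotuner
    # tries the most promising variants first. Both callables are
    # illustrative stand-ins.
    def prune_and_rank(strategies, preserves_dependences, reuse_score):
        legal = [s for s in strategies if preserves_dependences(s)]
        return sorted(legal, key=reuse_score, reverse=True)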

Contributors
  • The University of Utah
  • University of Southern California
  • University of Southern California