Graphics Processing Units (GPUs) have evolved into devices with teraflop-level performance potential. Developing GPU software is a tedious task for application developers, who must correctly identify parallel computation and optimize the placement of data for the parallel processors in such architectures. Further, code optimized for one architecture may not perform well on a different generation of even the same processor family; many manually tuned GPU solutions require a complete rewrite for a new architecture, demanding additional programmer time and effort. High-performance computing on GPUs can be facilitated by programming models and frameworks that reduce the time and effort needed to develop GPU applications.
This thesis describes a compiler framework for automatically generating and optimizing parallel code for GPUs (CUDA code for Nvidia GPUs), relieving programmers of the tedious work of parallelizing sequential code. The framework is built around a script-based compiler for CUDA code generation and combines a Transformation Strategy Generator (TSG), which automatically generates multiple scripts representing different optimization strategies, with an autotuning system, which automatically generates a set of code variants and selects the best among them through empirical evaluation.
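To make the empirical-evaluation step concrete, the following is a minimal sketch of variant selection, not the thesis's actual autotuner: two hand-written variants of a simple vector-scaling kernel (the variant names, coarsening factor, and block sizes are hypothetical choices for illustration) are timed with CUDA events, and the faster one is reported.

// variant_select.cu -- illustrative empirical selection between two code variants
#include <cstdio>
#include <cuda_runtime.h>

// Variant 1: one element per thread.
__global__ void scale_v1(float *x, float a, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) x[i] *= a;
}

// Variant 2: thread coarsening, four elements per thread.
__global__ void scale_v2(float *x, float a, int n) {
    int i = 4 * (blockIdx.x * blockDim.x + threadIdx.x);
    for (int k = 0; k < 4 && i + k < n; ++k) x[i + k] *= a;
}

static void launch_v1(float *x, float a, int n) {
    scale_v1<<<(n + 255) / 256, 256>>>(x, a, n);
}
static void launch_v2(float *x, float a, int n) {
    scale_v2<<<(n + 1023) / 1024, 256>>>(x, a, n);
}

// Time one variant with CUDA events and return elapsed milliseconds.
static float time_variant(void (*launch)(float *, float, int), float *d_x, int n) {
    cudaEvent_t start, stop;
    cudaEventCreate(&start); cudaEventCreate(&stop);
    cudaEventRecord(start);
    launch(d_x, 2.0f, n);
    cudaEventRecord(stop);
    cudaEventSynchronize(stop);
    float ms = 0.0f;
    cudaEventElapsedTime(&ms, start, stop);
    cudaEventDestroy(start); cudaEventDestroy(stop);
    return ms;
}

int main() {
    const int n = 1 << 20;
    float *d_x;
    cudaMalloc(&d_x, n * sizeof(float));
    cudaMemset(d_x, 0, n * sizeof(float));
    launch_v1(d_x, 1.0f, n);          // warm-up launch before timing
    cudaDeviceSynchronize();
    float t1 = time_variant(launch_v1, d_x, n);
    float t2 = time_variant(launch_v2, d_x, n);
    printf("variant 1: %.3f ms, variant 2: %.3f ms, best: v%d\n",
           t1, t2, t1 <= t2 ? 1 : 2);
    cudaFree(d_x);
    return 0;
}

A real autotuner would sweep many more parameters (tile sizes, thread counts, data placement) and average over repeated runs, but the generate-time-select loop has the same shape.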
An underlying loop transformation and code generation framework takes the TSG-generated scripts as input and generates CUDA code, enabling an end-to-end system that produces high-performance solutions for scientific computations on a given GPU architecture.
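The following is a hedged illustration of the kind of source-to-source rewrite such a framework performs; the tiling factor and kernel structure here are chosen by hand for the example and are not output of the thesis compiler. The sequential input is a matrix-vector multiply; the sketch maps the i-loop to GPU threads and tiles the j-loop so that a TILE-sized slice of x is staged in shared memory and reused by every thread in the block.

// matvec_tiled.cu -- hand-written sketch of a tiled CUDA rewrite of:
//   for (int i = 0; i < n; ++i)
//       for (int j = 0; j < n; ++j)
//           y[i] += A[i*n + j] * x[j];
#include <cuda_runtime.h>

#define TILE 256   // must equal blockDim.x when the kernel is launched

__global__ void matvec_tiled(const float *A, const float *x, float *y, int n) {
    __shared__ float xs[TILE];
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    float acc = 0.0f;
    for (int t = 0; t < n; t += TILE) {
        // Cooperatively stage one tile of x in shared memory.
        if (t + threadIdx.x < n) xs[threadIdx.x] = x[t + threadIdx.x];
        __syncthreads();
        if (i < n) {
            int lim = min(TILE, n - t);
            for (int j = 0; j < lim; ++j)
                acc += A[i * n + t + j] * xs[j];
        }
        __syncthreads();
    }
    if (i < n) y[i] = acc;
}

In the thesis framework, decisions such as which loop to map to threads, the tile size, and whether to stage data in shared memory are expressed in the transformation scripts rather than written by hand.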
This flexible organization enables the system to explore a large optimization search space, simultaneously targeting different architectural features while constrained by data dependences and guided by data reuse and the best heuristics from manual tuning. The system tailors the generated code to yield high performance across different GPU generations, data types, and data sets. The key contributions of this thesis are: (1) the meta-optimizer, TSG; (2) a search and autotuning mechanism; (3) integration with a script-based compiler framework, resulting in an end-to-end automatic parallelization system; (4) performance-portable code generation for the Nvidia GTX-280 and Nvidia Tesla C2050 (Fermi) architectures; and (5) performance gains of up to 1.84x over linear algebra kernels in the manually tuned Nvidia CUBLAS library, and up to 2.03x over a state-of-the-art GPU compiler for a set of scientific, multimedia, and imaging kernels.