A Performance Model for GPUs with Caches

Published: 01 July 2015

Abstract

To exploit the abundant computational power of the world's fastest supercomputers, the workload must be distributed evenly across their typically heterogeneous compute devices. While relatively accurate performance models exist for conventional CPUs, no accurate performance estimation models exist for modern GPUs. This paper presents two such models: a sampling-based linear model, and a model based on machine-learning (ML) techniques that improves the accuracy of the linear model and is applicable to modern GPUs with and without caches. We first construct the sampling-based linear model to predict the runtime of an arbitrary OpenCL kernel. Based on an analysis of NVIDIA GPUs' scheduling policies, we determine the earliest sampling points that still allow an accurate estimation. The linear model cannot capture the significant effects that memory coalescing and the caches of modern GPUs have on performance. We therefore propose a model based on ML techniques that takes several compiler-generated statistics about the kernel as well as the GPU's hardware performance counters as additional inputs to obtain a more accurate runtime estimation for modern GPUs. We demonstrate the effectiveness and broad applicability of the model by applying it to three NVIDIA GPU architectures and one AMD GPU architecture. On an extensive set of OpenCL benchmarks, the proposed model estimates the runtime with an average error of less than 7 percent on the second-generation GTX 280 without on-chip caches and less than 5 percent on the Fermi-based GTX 580 with hardware caches. On the Kepler-based GTX 680, the linear model has an error of less than 10 percent, and on the AMD Radeon HD 6970 the model estimates with an error of 8 percent. The proposed technique outperforms existing models by a factor of 5 to 6 in terms of accuracy.
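
The abstract does not give implementation details; the following is a minimal Python sketch of the two-stage idea it describes, assuming scikit-learn, made-up sampling points, and placeholder kernel features (instruction counts, cache hit rate, coalesced-access ratio). It illustrates the general approach only and is not the authors' implementation.

import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.svm import SVR

# Stage 1: sampling-based linear model.
# Run the kernel with a few small work-group counts (the sampling points),
# then extrapolate the measured runtimes linearly to the full launch size.
sample_workgroups = np.array([[32], [64], [128], [256]])   # sampled launch sizes (illustrative)
sample_runtimes_ms = np.array([1.9, 3.7, 7.2, 14.1])       # measured runtimes (illustrative)

linear = LinearRegression().fit(sample_workgroups, sample_runtimes_ms)
full_launch = np.array([[16384]])                          # full problem size
print("linear estimate (ms):", linear.predict(full_launch)[0])

# Stage 2: ML-based refinement.
# Train a regressor (here an SVR, one possible choice) on per-kernel features
# such as compiler-generated instruction counts and hardware performance
# counters (e.g. cache hit rate, coalesced-access ratio); the target is the
# measured runtime. All feature values below are placeholders.
train_features = np.array([
    # [instructions, memory insts, cache hit rate, coalesced ratio, work-groups]
    [1200,  300, 0.65, 0.80,  4096],
    [ 800,  150, 0.90, 0.95,  8192],
    [2500,  900, 0.40, 0.30,  2048],
    [1600,  500, 0.75, 0.60, 16384],
])
train_runtimes_ms = np.array([12.4, 9.1, 30.2, 41.7])

svr = SVR(kernel="rbf", C=10.0, epsilon=0.1).fit(train_features, train_runtimes_ms)

new_kernel = np.array([[1400, 420, 0.70, 0.55, 16384]])
print("SVR estimate (ms):", svr.predict(new_kernel)[0])

In practice the features would come from the OpenCL compiler (e.g. statistics over the generated intermediate code) and from profiler performance counters collected during the short sampling runs; the exact learner and feature set used in the paper may differ from this sketch.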

Published In

IEEE Transactions on Parallel and Distributed Systems, Volume 26, Issue 7
July 2015
286 pages

Publisher

IEEE Press

Author Tags

  1. AMD
  2. GPU
  3. performance modeling
  4. caches
  5. scheduling
  6. OpenCL
  7. NVIDIA

Qualifiers

  • Research-article

Cited By

  • (2023) Evaluating execution time predictions on GPU kernels using an analytical model and machine learning techniques. Journal of Parallel and Distributed Computing, 171:C, 66-78. DOI: 10.1016/j.jpdc.2022.09.002. Online publication date: 1-Jan-2023.
  • (2022) Symbolic identification of shared memory based bank conflicts for GPUs. Journal of Systems Architecture: the EUROMICRO Journal, 127:C. DOI: 10.1016/j.sysarc.2022.102518. Online publication date: 1-Jun-2022.
  • (2021) Performance prediction of parallel applications: a systematic literature review. The Journal of Supercomputing, 77:4, 4014-4055. DOI: 10.1007/s11227-020-03417-5. Online publication date: 1-Apr-2021.
  • (2020) Efficient and Portable Workgroup Size Tuning. IEEE Transactions on Parallel and Distributed Systems, 31:2, 455-469. DOI: 10.1109/TPDS.2019.2937295. Online publication date: 1-Feb-2020.
  • (2019) A Performance Model for GPU Architectures that Considers On-Chip Resources: Application to Medical Image Registration. IEEE Transactions on Parallel and Distributed Systems, 30:9, 1947-1961. DOI: 10.1109/TPDS.2019.2905213. Online publication date: 1-Sep-2019.
  • (2018) Efficient Cache Performance Modeling in GPUs Using Reuse Distance Analysis. ACM Transactions on Architecture and Code Optimization, 15:4, 1-24. DOI: 10.1145/3291051. Online publication date: 19-Dec-2018.
  • (2018) Automatic Mapping for OpenCL-Programs on CPU/GPU Heterogeneous Platforms. Computational Science – ICCS 2018, 301-314. DOI: 10.1007/978-3-319-93701-4_23. Online publication date: 11-Jun-2018.
  • (2016) PIPSEA. Proceedings of the 2016 ACM SIGSAC Conference on Computer and Communications Security, 1255-1267. DOI: 10.1145/2976749.2978329. Online publication date: 24-Oct-2016.
