DOI: 10.1145/3432261.3432270
research-article
Open access

Conjugate Gradient Solvers with High Accuracy and Bit-wise Reproducibility between CPU and GPU using Ozaki scheme

Published: 20 January 2021

Abstract

In Krylov subspace methods such as the Conjugate Gradient (CG) method, the number of iterations until convergence may increase owing to the loss of accuracy caused by rounding errors in floating-point computations. Moreover, because the order of operations is nondeterministic in parallel computation, the result and the convergence behavior may differ across computational environments, even for the same input. In this study, we present an accurate and reproducible implementation of the unpreconditioned CG method on x86 CPUs and NVIDIA GPUs. In our method, all variables are stored in FP64, while all inner-product operations (including matrix-vector multiplications) are performed using the Ozaki scheme. The scheme delivers correctly rounded results as well as bit-level reproducibility across different computational environments. In this paper, we show examples in which the standard FP64 implementation of CG produces nonidentical results on different CPUs and GPUs. We then demonstrate the applicability and effectiveness of our approach in terms of accuracy, reproducibility, and performance on both CPUs and GPUs. Furthermore, we compare the performance of our method against an existing accurate and reproducible CG implementation based on the Exact Basic Linear Algebra Subprograms (ExBLAS) on CPUs.
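To make the idea concrete, below is a minimal NumPy sketch of an Ozaki-scheme dot product. It is not the authors' CPU/GPU implementation: the slice-width choice, the helper names split_fp64 and ozaki_dot, and the use of math.fsum for the final combination are illustrative assumptions, and overflow/underflow safeguards are omitted. Each input vector is split into slices with few enough significand bits that every slice-by-slice dot product is evaluated without rounding error in FP64; the handful of exact partial results is then combined with a correctly rounded summation.

```python
# Minimal sketch of an Ozaki-scheme dot product in FP64.  Parameter choices and
# names are illustrative assumptions; overflow/underflow handling is omitted.
import math
import numpy as np

def split_fp64(x):
    """Split x into slices with ~s significand bits each, so that any
    slice-by-slice dot product of two split vectors is exact in FP64."""
    n = x.size
    s = (53 - math.ceil(math.log2(n))) // 2   # ensures 2*s + log2(n) <= 53
    slices, r = [], x.copy()
    while True:
        mu = float(np.max(np.abs(r)))
        if mu == 0.0:
            break
        sigma = 2.0 ** (math.ceil(math.log2(mu)) + 53 - s)  # extraction constant
        hi = (r + sigma) - sigma        # leading ~s bits of each entry
        r = r - hi                      # remainder; this subtraction is exact
        slices.append(hi)
    return slices

def ozaki_dot(x, y):
    """Correctly rounded, order-independent FP64 dot product."""
    xs = split_fp64(np.asarray(x, dtype=np.float64))
    ys = split_fp64(np.asarray(y, dtype=np.float64))
    # Each partial dot product below is error-free by construction, so the
    # evaluation order (threads, SIMD, GPU blocks) cannot change its value.
    partials = [float(np.dot(xh, yh)) for xh in xs for yh in ys]
    return math.fsum(partials)          # correctly rounded final combination

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    x = rng.standard_normal(10_000) * 10.0 ** rng.uniform(-8, 8, 10_000)
    y = rng.standard_normal(10_000)
    print(ozaki_dot(x, y))  # should match bit-for-bit across machines, unlike np.dot(x, y)
```

Because every partial dot product is error-free, the reduction order imposed by threads, vector units, or GPU thread blocks cannot change its value, which is what yields bit-wise identical results across CPUs and GPUs; the paper applies the same principle to the inner products and sparse matrix-vector products inside CG.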


Cited By

  • (2023) Comparison of Reproducible Parallel Preconditioned BiCGSTAB Algorithm Based on ExBLAS and ReproBLAS. Proceedings of the International Conference on High Performance Computing in Asia-Pacific Region, 46-54. https://doi.org/10.1145/3578178.3578234. Online publication date: 27-Feb-2023.
  • (2022) Compensated summation and dot product algorithms for floating-point vectors on parallel architectures. Journal of Computational and Applied Mathematics, 414:C. https://doi.org/10.1016/j.cam.2022.114434. Online publication date: 1-Nov-2022.
  • (2022) Infinite-Precision Inner Product and Sparse Matrix-Vector Multiplication Using Ozaki Scheme with Dot2 on Manycore Processors. Parallel Processing and Applied Mathematics, 40-54. https://doi.org/10.1007/978-3-031-30442-2_4. Online publication date: 11-Sep-2022.
  • (2021) Accurate Matrix Multiplication on Binary128 Format Accelerated by Ozaki Scheme. Proceedings of the 50th International Conference on Parallel Processing, 1-11. https://doi.org/10.1145/3472456.3472493. Online publication date: 9-Aug-2021.



Published In

HPCAsia '21: The International Conference on High Performance Computing in Asia-Pacific Region
January 2021, 143 pages
ISBN: 9781450388429
DOI: 10.1145/3432261
This work is licensed under a Creative Commons Attribution 4.0 International License.

Publisher

Association for Computing Machinery, New York, NY, United States

Author Tags

1. Accuracy
2. CPU
3. Conjugate Gradient
4. GPU
5. heterogeneous computing
6. reproducibility


Conference

HPC Asia 2021

Acceptance Rates

Overall Acceptance Rate: 69 of 143 submissions, 48%

