Abstract
This work presents a novel strategy for parallelizing applications containing sparse references. Our approach is a first step toward converging from data-parallel to automatic parallelization by taking into account the semantic relationship between the vectors that together compose a higher-level data structure. By applying sparse privatization and a multi-loop analysis at compile time, we improve performance and reduce the number of extra code annotations. Building and updating a sparse matrix at run time is also studied in this paper, solving the problem posed by pointers and several levels of indirection on the left-hand side. The strategy has been evaluated on a Cray T3E with the matrix transposition algorithm, using different temporary buffers for the sparse communication.
The work described in this paper was supported by the Ministry of Education and Culture (CICYT) of Spain under project TIC96-1125-C03.
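To make the kind of reference the abstract targets concrete, the following is a minimal sequential sketch in C of a compressed-sparse-row (CSR) matrix and its transposition. The indirect store through dst[c] on the left-hand side is the pattern whose compile-time analysis the paper addresses; the three CSR vectors are the semantically linked components of one higher-level structure. All identifiers here (csr_t, csr_transpose, dst) are illustrative assumptions, not the paper's code, and the sketch deliberately omits the data distribution and the temporary communication buffers used in the Cray T3E evaluation.

#include <stdio.h>
#include <stdlib.h>

/* Compressed Sparse Row (CSR) storage: three semantically linked
   vectors that together represent one sparse matrix. */
typedef struct {
    int     n, m, nnz;   /* rows, columns, stored entries        */
    int    *rowptr;      /* size n+1: start of each row in val   */
    int    *colind;      /* size nnz: column index of each entry */
    double *val;         /* size nnz: numerical values           */
} csr_t;

/* Sequential CSR transposition (builds a CSR view of A^T).
   The stores through dst[c] are the indirect left-hand-side
   references that a sparse-aware compiler must reason about. */
static void csr_transpose(const csr_t *a, csr_t *t)
{
    t->n = a->m; t->m = a->n; t->nnz = a->nnz;
    t->rowptr = calloc(t->n + 1, sizeof(int));
    t->colind = malloc(t->nnz * sizeof(int));
    t->val    = malloc(t->nnz * sizeof(double));

    /* Histogram of column occurrences -> row lengths of A^T. */
    for (int k = 0; k < a->nnz; k++)
        t->rowptr[a->colind[k] + 1]++;
    for (int r = 0; r < t->n; r++)          /* prefix sums */
        t->rowptr[r + 1] += t->rowptr[r];

    int *dst = malloc(t->n * sizeof(int));  /* next free slot per row */
    for (int r = 0; r < t->n; r++)
        dst[r] = t->rowptr[r];

    for (int i = 0; i < a->n; i++)
        for (int k = a->rowptr[i]; k < a->rowptr[i + 1]; k++) {
            int c = a->colind[k];           /* indirection on the RHS */
            t->colind[dst[c]] = i;          /* indirection on the LHS */
            t->val[dst[c]]    = a->val[k];
            dst[c]++;
        }
    free(dst);
}

int main(void)
{
    /* 2x3 example: [[1 0 2], [0 3 0]] in CSR form. */
    int rowptr[] = {0, 2, 3}, colind[] = {0, 2, 1};
    double val[] = {1.0, 2.0, 3.0};
    csr_t a = {2, 3, 3, rowptr, colind, val}, t;

    csr_transpose(&a, &t);
    for (int r = 0; r < t.n; r++)
        for (int k = t.rowptr[r]; k < t.rowptr[r + 1]; k++)
            printf("A^T(%d,%d) = %g\n", r, t.colind[k], t.val[k]);
    free(t.rowptr); free(t.colind); free(t.val);
    return 0;
}

In the paper's setting the outer loops of such a kernel run in parallel, so the dst-style write positions cannot be computed sequentially; this is what motivates the sparse privatization and per-processor temporary buffers mentioned in the abstract.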