Abstract
Generating efficient parallel code is a major goal of a well-designed parallelizing compiler. Another important goal is portability, of both the compiler system itself and the source code it produces. The variety of current and future parallel computer architectures, together with the cost of developing a parallelizing compiler, makes portability an especially important design goal. Since designing a parallelizing compiler is considerably more complex than designing a conventional compiler, achieving both efficiency and portability is essential. To meet this dual goal, we have investigated the application of object-oriented design to parallelizing compilers. Our parallelizing compiler design is based on abstractions of intermediate representations of loops and their class definitions. In this paper, we address the problem of loop parallelization and propose a framework in which the loop parallelization process is divided into three phases and loops are optimized by applying these phases cyclically. The class of each phase is derived hierarchically from the intermediate representation of loops, which facilitates the portability of the resulting parallelizing compilers. Furthermore, one of the phases uses a reservation table of hardware resources to produce parallel programs optimized for the given hardware. The proposed framework is validated by applying the object-oriented design to an example program, which is then parallelized efficiently.
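To make the structure concrete, the following is a minimal C++ sketch of how such a framework might be organized: a loop intermediate representation, three phase classes derived hierarchically from it, one of which consults a reservation table of hardware resources, and a driver that applies the phases cyclically. All class and member names here (LoopIR, Phase, ReservationTable, apply, and so on) are illustrative assumptions, not the actual interfaces defined in the paper.

```cpp
// Minimal sketch of the proposed framework -- illustrative only, not the authors' code.
#include <map>
#include <string>
#include <utility>
#include <vector>

// Intermediate representation of a loop nest (grossly simplified stand-in).
class LoopIR {
public:
    virtual ~LoopIR() = default;
    std::vector<std::string> statements;   // loop body statements
};

// Reservation table of hardware resources: which functional unit is busy in which
// cycle. One phase consults it to fit the extracted parallelism to the target machine.
class ReservationTable {
public:
    bool isFree(const std::string& unit, int cycle) const {
        auto it = table_.find(cycle);
        if (it == table_.end()) return true;
        for (const auto& u : it->second)
            if (u == unit) return false;
        return true;
    }
    void reserve(const std::string& unit, int cycle) { table_[cycle].push_back(unit); }
private:
    std::map<int, std::vector<std::string>> table_;
};

// Each phase class is derived hierarchically from the loop IR abstraction, so a
// retargeted compiler overrides only the machine-specific parts of a phase.
class Phase : public LoopIR {
public:
    virtual void apply(LoopIR& loop) = 0;
};

class DependenceAnalysisPhase : public Phase {
public:
    void apply(LoopIR& loop) override { /* build dependence information for the loop */ }
};

class RestructuringPhase : public Phase {
public:
    void apply(LoopIR& loop) override { /* loop transformations: interchange, skewing, ... */ }
};

class SchedulingPhase : public Phase {
public:
    explicit SchedulingPhase(ReservationTable table) : table_(std::move(table)) {}
    void apply(LoopIR& loop) override { /* map parallel iterations onto free resources in table_ */ }
private:
    ReservationTable table_;
};

// Optimization proceeds by applying the three phases to the loop IR cyclically.
void optimize(LoopIR& loop, const std::vector<Phase*>& phases, int cycles = 3) {
    for (int i = 0; i < cycles; ++i)
        for (Phase* p : phases)
            p->apply(loop);
}
```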
Cite this article
Omori, Y., Fukuda, A. & Joe, K. An Object-Oriented Framework for Loop Parallelization. The Journal of Supercomputing 13, 57–69 (1999). https://doi.org/10.1023/A:1008062717485