
Supporting dynamic data and processor repartitioning for irregular applications

  • Conference paper
Parallel Algorithms for Irregularly Structured Problems (IRREGULAR 1996)

Part of the book series: Lecture Notes in Computer Science ((LNCS,volume 1117))

Abstract

Recent research has shown that dynamic reconfiguration of the resources allocated to parallel applications can improve both system utilization and application throughput. The Distributed Resource Management System (DRMS) is a parallel programming environment that supports the development and execution of reconfigurable applications on a dynamically varying set of resources. This paper describes DRMS support for developing reconfigurable irregular applications, using a sparse Cholesky factorization as a model application. We present the performance achieved by the DRMS redistribution primitives, which shows that the costs of dynamic data redistribution between different processor configurations for irregular data are comparable to those for regular data.
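
The redistribution problem the abstract refers to can be pictured with a short sketch. The Python fragment below is only an illustration of the general idea, not the DRMS API; the function names balanced_owner_map and redistribute_rows are hypothetical. When the processor count changes, each irregularly sized piece of data (for example, a column block of a sparse Cholesky factor) is assigned a new owner under a load-balanced map, and the pieces whose owner changed are the ones that must be communicated.

    def balanced_owner_map(row_costs, num_procs):
        """Assign each row to a processor, greedily balancing total per-processor cost."""
        loads = [0] * num_procs
        owner = []
        for cost in row_costs:
            p = min(range(num_procs), key=lambda q: loads[q])
            loads[p] += cost
            owner.append(p)
        return owner

    def redistribute_rows(row_costs, old_procs, new_procs):
        """Compute the owner maps before and after reconfiguration, plus the
        rows that must migrate between the two processor configurations."""
        old_owner = balanced_owner_map(row_costs, old_procs)
        new_owner = balanced_owner_map(row_costs, new_procs)
        moves = [(r, old_owner[r], new_owner[r])
                 for r in range(len(row_costs)) if old_owner[r] != new_owner[r]]
        return new_owner, moves

    if __name__ == "__main__":
        # Uneven per-row costs, as in the column blocks of a sparse Cholesky factor.
        row_costs = [5, 1, 9, 2, 7, 3, 8, 1]
        new_owner, moves = redistribute_rows(row_costs, old_procs=4, new_procs=2)
        print("new owner map:", new_owner)
        print("rows to migrate (row, from, to):", moves)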




Editor information

Alfonso Ferreira, José Rolim, Yousef Saad, Tao Yang


Copyright information

© 1996 Springer-Verlag Berlin Heidelberg

About this paper

Cite this paper

Moreira, J.E., Eswar, K., Konuru, R.B., Naik, V.K. (1996). Supporting dynamic data and processor repartitioning for irregular applications. In: Ferreira, A., Rolim, J., Saad, Y., Yang, T. (eds) Parallel Algorithms for Irregularly Structured Problems. IRREGULAR 1996. Lecture Notes in Computer Science, vol 1117. Springer, Berlin, Heidelberg. https://doi.org/10.1007/BFb0030114


  • DOI: https://doi.org/10.1007/BFb0030114

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-540-61549-1

  • Online ISBN: 978-3-540-68808-2

  • eBook Packages: Springer Book Archive
