Abstract

An algorithm can be modeled as a set of indexed computations, and a schedule is a mapping of the algorithm index space into time. Linear schedules are a special class of schedules that are described by a linear mapping and are commonly used in many systolic algorithms. Free schedules cause the computations of an algorithm to execute as soon as their operands are available. If one computation uses data generated by another computation, then a data dependence exists between these two computations; it can be represented by the difference of their indices (called a dependence vector). Many important algorithms are characterized by the fact that their data dependencies are uniform, i.e., the values of the dependence vectors are independent of the indices of the computations. There are applications where it is of interest to find an optimal linear schedule with respect to the time of execution of a specific computation of the given algorithm. This paper addresses the problem of identifying optimal linear schedules for uniform dependence algorithms so that the execution time of a specific computation of the algorithm is minimized, and it proposes a procedure to solve this problem based on the solution of a linear optimization problem. Linear schedules are also compared with free schedules. The comparison indicates that optimal linear schedules can be as efficient as free schedules, the best schedules possible, and it identifies a class of algorithms for which this is always true.
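To make the terminology concrete, the following is a minimal Python sketch that is not taken from the paper: the dependence vectors D, the index set J, the target index j0, and the brute-force search over small integer schedule vectors are illustrative assumptions. A linear schedule is modeled here as an integer vector pi that assigns time step pi . j to the computation with index j; it is valid only if pi . d >= 1 for every dependence vector d. The paper's procedure finds the optimal pi by solving a linear optimization problem rather than by enumeration.

```python
from itertools import product

import numpy as np

# Uniform dependence vectors of a small 2-D example algorithm
# (illustrative choice, not from the paper).
D = [np.array([1, 0]), np.array([0, 1]), np.array([1, 1])]

# Index set: the square {0, ..., N-1}^2 (illustrative).
N = 8
J = [np.array(j) for j in product(range(N), repeat=2)]

# The specific computation whose execution time should be minimized.
j0 = np.array([N - 1, N - 1])


def is_valid(pi):
    """pi is a valid linear schedule only if it respects every dependence: pi . d >= 1."""
    return all(pi @ d >= 1 for d in D)


def time_of(pi, j):
    """Time step of index j under pi, shifted so the earliest computation runs at step 0."""
    offset = min(int(pi @ i) for i in J)
    return int(pi @ j) - offset


# Brute-force search over small integer schedule vectors; the paper obtains the
# optimum from a linear optimization problem instead of enumerating candidates.
best_pi, best_t = None, None
for cand in product(range(-3, 4), repeat=2):
    pi = np.array(cand)
    if is_valid(pi):
        t = time_of(pi, j0)
        if best_t is None or t < best_t:
            best_pi, best_t = pi, t

print("best linear schedule in search range:", best_pi, "-> time of j0:", best_t)
```

For comparison, a free schedule executes each index point as soon as its operands are available, so the free-schedule time of j0 is the length of the longest dependence chain ending at j0; the abstract's claim is that an optimal linear schedule can attain this bound, and the paper identifies a class of algorithms for which it always does.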



Additional information

This research was supported in part by the National Science Foundation under Grant DCI-8419745 and in part by the Innovative Science and Technology Office of the Strategic Defense Initiative Organization and was administered through the Office of Naval Research under contracts No. 00014-85-k-0588 and No. 00014-88-k-0723.

About this article

Cite this article

Shang, W., Fortes, J.A.B. On the optimality of linear schedules. J VLSI Sign Process Syst Sign Image Video Technol 1, 209–220 (1989). https://doi.org/10.1007/BF02427795

