
Optimal Control of Linear Dynamic Systems

  • Chapter
Dynamic Systems in Management Science

Abstract

Pontryagin and his associates developed the maximum principle for solving continuous-data control problems. Basically, the maximum (or minimum) principle provides a set of local necessary conditions for optimality. According to this method, variables analogous to the Lagrange multipliers are introduced; these variables are often called the co-state or adjoint-system variables. A scalar-valued function H, which is generally a function of x, p, u (the state, co-state, and control vectors) and t, known as the Hamiltonian function of the problem, is also considered.
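The conditions the abstract refers to can be sketched in their standard textbook form (this is a generic statement, not quoted from the chapter; the dynamics f and running cost L below are assumed notation):

```latex
\documentclass{article}
\usepackage{amsmath}
\begin{document}
% Assumed problem setup: minimize J = \int_{t_0}^{t_f} L(x,u,t)\,dt
% subject to \dot{x} = f(x,u,t). The Hamiltonian combines the running
% cost with the dynamics via the co-state (adjoint) vector p:
\[
  H(x,p,u,t) = L(x,u,t) + p^{\mathsf{T}} f(x,u,t)
\]
% Local necessary conditions for optimality (minimum-principle form):
\begin{align*}
  \dot{x} &= \frac{\partial H}{\partial p}   && \text{(state equation)} \\
  \dot{p} &= -\frac{\partial H}{\partial x}  && \text{(co-state / adjoint equation)} \\
  H(x^{*},p^{*},u^{*},t) &\le H(x^{*},p^{*},u,t)
     \quad \forall\, u \text{ admissible}    && \text{(Hamiltonian minimization)}
\end{align*}
% When u is unconstrained and H is differentiable in u, the last
% condition reduces to the stationarity condition
% \partial H / \partial u = 0.
\end{document}
```

The co-state equation is the continuous-time analogue of the Lagrange-multiplier conditions mentioned above: p plays the role of the multipliers attached to the dynamic constraint.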



Copyright information

© 2015 Alexis Lazaridis

About this chapter

Cite this chapter

Lazaridis, A. (2015). Optimal Control of Linear Dynamic Systems. In: Dynamic Systems in Management Science. Palgrave Macmillan, London. https://doi.org/10.1057/9781137508928_9
