On the Lagrange Duality of Stochastic and Deterministic Minimax Control and Filtering Problems

  • LINEAR SYSTEMS
  • Published in Automation and Remote Control

Abstract

As shown below, the norms of the linear operators in the deterministic and stochastic cases are the optimal values of Lagrange-dual problems. For linear time-varying systems on a finite horizon, the duality principle leads to stochastic interpretations of the generalized H2 and H∞ norms of the system. Stochastic minimax filtering and control problems with unknown covariance matrices of random factors are considered. Equations of generalized H∞-suboptimal controllers, filters, and identifiers are derived to achieve a trade-off between the error variance at the end of the observation interval and the sum of the error variances over the entire interval.


REFERENCES

  1. Kurzhanski, A.B., Upravlenie i nablyudenie v usloviyakh neopredelennosti (Control and Observation under Uncertainty), Moscow: Nauka, 1977.

  2. Kwakernaak, H. and Sivan, R., Linear Optimal Control Systems, 1st ed., Wiley-Interscience, 1972.

    MATH  Google Scholar 

  3. Kailath, T., Sayed, A.H., and Hassibi, B., Linear Estimation, New Jersey: Prentice Hall, 2000.

    MATH  Google Scholar 

  4. Hinrichsen, D. and Pritchard, A.J., Stochastic H , SIAM J. Control Optim., 1998, vol. 36, no. 5, pp. 1504–1538.

    Article  MathSciNet  MATH  Google Scholar 

  5. Petersen, I.R., Ugrinovskii, V.A., and Savkin, A.V., Robust Control Design Using H -Methods, London: Springer-Verlag, 2000.

  6. Schweppe, F.C., Recursive State Estimation: Unknown but Bounded Errors and System Inputs, IEEE Trans. Autom. Control, 1968, vol. 13, no. 1, pp. 22–28.

    Article  Google Scholar 

  7. Wilson, D., Extended Optimality Properties of the Linear Quadratic Regulator and Stationary Kalman Filter, IEEE Trans. Autom. Control, 1990, vol. 35, no. 5, pp. 583–585.

    Article  MathSciNet  MATH  Google Scholar 

  8. Willems, J.C., Deterministic Least Squares Filtering, Journal of Econometrics, 2004, vol. 118, pp. 341–373.

    Article  MathSciNet  MATH  Google Scholar 

  9. Buchstaller, D., Liu, J., and French, M., The Deterministic Interpretation of the Kalman Filter, Int. J. Control, 2021, vol. 94, no. 11, pp. 3226–3236.

    Article  MathSciNet  MATH  Google Scholar 

  10. Kogan, M.M., Optimal Estimation and Filtration under Unknown Covariances of Random Factors, Autom. Remote Control, 2014, vol. 75, no. 11, pp. 1964–1981.

    MathSciNet  MATH  Google Scholar 

  11. Boyd, S. and Vandenberghe, L., Convex Optimization, Cambridge: University Press, 2004.

    Book  MATH  Google Scholar 

  12. Chernousko, F.L., State Estimation for Dynamic Systems, CRC Press, 1993.

    Google Scholar 

  13. Wilson, D.A., Convolution and Hankel Operator Norms for Linear Systems, IEEE Trans. Autom. Control, 1989, vol. 34, no. 1, pp. 94–97.

    Article  MathSciNet  MATH  Google Scholar 

  14. Balandin, D.V., Biryukov, R.S., and Kogan, M.M., Minimax Control of Deviations for the Outputs of a Linear Discrete Time-Varying System, Autom. Remote Control, 2019, vol. 80, no. 12, pp. 345–359.

    MATH  Google Scholar 

  15. Balandin, D.V. and Kogan, M.M., Sintez zakonov upravleniya na osnove lineinykh matrichnykh neravenstv (Control Law Design Based on Linear Matrix Inequalities), Moscow: Fizmatlit, 2007.

  16. Hsieh, C. and Skelton, R., All Covariance Controllers for Linear Discrete-Time Systems, IEEE Trans. Autom. Control, 1990, vol. 35, no. 8, pp. 908–915.

    Article  MathSciNet  MATH  Google Scholar 

Download references

Funding

This work was supported by the Scientific and Educational Mathematical Center “Mathematics of Future Technologies,” agreement no. 075-02-2021-1394.

Author information


Corresponding author

Correspondence to M. M. Kogan.

Additional information

This paper was recommended for publication by B.M. Miller, a member of the Editorial Board.

APPENDIX

Proof of Theorem 4.1. We write the Lagrange function for problem S and find its dual function:

$$\min_{\lambda \geqslant 0,\,X(t)}\;\max_{W(t) \geqslant 0}\;\sum_{t = t_0}^{t_f - 1} \operatorname{tr} \left(C(t)\;D(t)\right) W(t) \left(C(t)\;D(t)\right)^{\mathrm{T}} + \operatorname{tr} S M W(t_f) M^{\mathrm{T}}$$
$$-\;\lambda\left[\operatorname{tr} R^{-1} M W(t_0) M^{\mathrm{T}} + \sum_{t = t_0}^{t_f - 1} \operatorname{tr} G^{-1}(t) H W(t) H^{\mathrm{T}} - 1\right]$$
$$+\;\sum_{t = t_0}^{t_f - 1} \operatorname{tr}\left[\left(A(t)\;B(t)\right) W(t) \left(A(t)\;B(t)\right)^{\mathrm{T}} - M W(t+1) M^{\mathrm{T}}\right] X(t+1)$$
$$=\;\min_{\lambda \geqslant 0,\,X(t)}\;\max_{W(t) \geqslant 0}\left\{\lambda + \sum_{t = t_0}^{t_f - 1} \operatorname{tr} W(t)\left[\left(C(t)\;D(t)\right)^{\mathrm{T}}\left(C(t)\;D(t)\right) + \left(A(t)\;B(t)\right)^{\mathrm{T}} X(t+1)\left(A(t)\;B(t)\right)\right.\right.$$
$$\left.\left. -\;M^{\mathrm{T}} X(t) M - \lambda H^{\mathrm{T}} G^{-1}(t) H\right] + \operatorname{tr} W(t_f) M^{\mathrm{T}}\left[S - X(t_f)\right] M\right\},$$

where \(X(t_0) = \lambda R^{-1}\). The dual function is finite under the following inequalities:

$$\begin{gathered} \left(C(t)\;D(t)\right)^{\mathrm{T}}\left(C(t)\;D(t)\right) + \left(A(t)\;B(t)\right)^{\mathrm{T}} X(t+1)\left(A(t)\;B(t)\right) - M^{\mathrm{T}} X(t) M - \lambda H^{\mathrm{T}} G^{-1}(t) H \;\leqslant\; 0, \\ t = t_0, \ldots, t_f - 1, \qquad S - X(t_f) \;\leqslant\; 0. \\ \end{gathered}$$
(A.1)

(Otherwise, W(t) can be chosen so that the corresponding term becomes infinite.) Thus, inequalities (A.1) must hold, and in this case the inner maximum over W(t) is attained at W(t) = 0, t = t0, …, tf, so the dual function equals λ. As a result, we arrive at the dual problem: minimize λ subject to constraints (A.1). With the notations introduced above and the variable X(t) replaced by λX(t), these constraints reduce to inequalities (4.7). Since the problem is convex and there exists an interior point satisfying the constraints (Slater's condition), the optimal values of the primal and dual problems coincide.
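
The dual problem of minimizing λ subject to (A.1) is a semidefinite program, so it can be solved numerically. Below is a minimal CVXPY sketch under the assumption that M = (I 0) and H = (0 I) select the state and disturbance blocks of the stacked vector (x(t), v(t)); the system matrices, weights, and horizon are hypothetical placeholders rather than data from the paper.

# Minimal sketch: solve "minimize lambda subject to (A.1)" as an SDP with CVXPY.
# All system data below are hypothetical, and M, H are assumed block selectors.
import numpy as np
import cvxpy as cp

n, m, T = 2, 1, 5                                   # state dim, disturbance dim, horizon length
A = np.array([[0.9, 0.2], [0.0, 0.8]])              # hypothetical A(t) (time-invariant here)
B = np.array([[0.0], [1.0]])                        # hypothetical B(t)
C = np.array([[1.0, 0.0]])                          # hypothetical C(t)
D = np.array([[0.1]])                               # hypothetical D(t)
R, G, S = np.eye(n), np.eye(m), 0.5 * np.eye(n)     # initial-state, disturbance, terminal weights

AB = np.hstack([A, B])                              # (A(t) B(t))
CD = np.hstack([C, D])                              # (C(t) D(t))
M = np.hstack([np.eye(n), np.zeros((n, m))])        # picks x(t) from (x(t), v(t)) -- assumption
H = np.hstack([np.zeros((m, n)), np.eye(m)])        # picks v(t) from (x(t), v(t)) -- assumption

lam = cp.Variable(nonneg=True)
X = [cp.Variable((n, n), symmetric=True) for _ in range(T + 1)]

def sym(E):
    # Symmetrize explicitly so CVXPY accepts the semidefinite constraint.
    return 0.5 * (E + E.T)

constraints = [X[0] == lam * np.linalg.inv(R),      # X(t0) = lambda * R^{-1}
               sym(S - X[T]) << 0]                  # S - X(tf) <= 0
for t in range(T):                                  # inequalities (A.1), t = t0, ..., tf - 1
    constraints.append(
        sym(CD.T @ CD + AB.T @ X[t + 1] @ AB
            - M.T @ X[t] @ M - lam * (H.T @ np.linalg.inv(G) @ H)) << 0
    )

prob = cp.Problem(cp.Minimize(lam), constraints)
prob.solve()
print("optimal value of the dual problem:", lam.value)

For the chosen data, lam.value approximates the optimal value of the dual problem characterized in Theorem 4.1.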

We define the function \(V(t) = x^{\mathrm{T}}(t) X(t) x(t)\), where X(t) satisfies inequalities (4.7). The increment of this function along the trajectories of system (4.1) satisfies the conditions

$$\begin{gathered} \Delta V(t) + \lambda^{-1}\left|z(t)\right|^{2} - v^{\mathrm{T}}(t) G^{-1} v(t) \;\leqslant\; 0, \\ V(t_0) = x^{\mathrm{T}}(t_0) R^{-1} x(t_0), \qquad V(t_f) \;\geqslant\; \lambda^{-1} x^{\mathrm{T}}(t_f) S x(t_f). \\ \end{gathered}$$
(A.2)

Hence, summing the first inequality in (A.2) over t = t0, …, tf − 1 and using the boundary conditions,

$$\sum_{t = t_0}^{t_f - 1}\left|z(t)\right|^{2} + x^{\mathrm{T}}(t_f) S x(t_f) \;\leqslant\; \lambda + \lambda\left[x^{\mathrm{T}}(t_0) R^{-1} x(t_0) + \sum_{t = t_0}^{t_f - 1} v^{\mathrm{T}}(t) G^{-1}(t) v(t) - 1\right],$$

i.e., the minimal λ for which inequalities (4.7) are solvable is the optimal value in problem D and coincides with the generalized H∞ norm of system (4.1).
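
In particular, the right-hand side of the last inequality equals \(\lambda\bigl(x^{\mathrm{T}}(t_0) R^{-1} x(t_0) + \sum_{t=t_0}^{t_f-1} v^{\mathrm{T}}(t) G^{-1}(t) v(t)\bigr)\); dividing by this quantity (positive whenever the initial state and the disturbance are not both zero) gives the worst-case bound

$$\frac{\sum\limits_{t = t_0}^{t_f - 1}\left|z(t)\right|^{2} + x^{\mathrm{T}}(t_f) S x(t_f)}{x^{\mathrm{T}}(t_0) R^{-1} x(t_0) + \sum\limits_{t = t_0}^{t_f - 1} v^{\mathrm{T}}(t) G^{-1}(t) v(t)} \;\leqslant\; \lambda,$$

which is the sense in which the optimal λ characterizes the generalized H∞ performance of system (4.1).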

Proof of Theorem 5.1. Let us apply Theorem 4.1 to system (5.3): if inequalities (4.7) hold with the matrix A replaced by A − ΘC and the matrix B replaced by B − ΘD, then the generalized H∞ norm of this system is smaller than λ. Using the Schur complement lemma, we transform these inequalities into

$$\begin{gathered} Y(t+1) - (A - \Theta C) Y(t) (A - \Theta C)^{\mathrm{T}} - (B - \Theta D) G (B - \Theta D)^{\mathrm{T}} \\ -\;(A - \Theta C) Y(t) C_{z}^{\mathrm{T}}\left(\lambda I - C_{z} Y(t) C_{z}^{\mathrm{T}}\right)^{-1} C_{z} Y(t) (A - \Theta C)^{\mathrm{T}} \;\geqslant\; 0 \\ \end{gathered}$$

provided that \(C_{z} Y(t) C_{z}^{\mathrm{T}} < \lambda I\). Completing the square in Θ(t) on the left-hand side of the latter inequality yields

$$\begin{gathered} Y(t+1) - A\left[Y^{-1}(t) + C^{\mathrm{T}} G_{D}^{-1} C - \lambda^{-1} C_{z}^{\mathrm{T}} C_{z}\right]^{-1} A^{\mathrm{T}} - G_{B} \\ -\;(\Theta - \Theta_{\infty})\left[Y^{-1}(t) + C^{\mathrm{T}} G_{D}^{-1} C - \lambda^{-1} C_{z}^{\mathrm{T}} C_{z}\right](\Theta - \Theta_{\infty})^{\mathrm{T}} \;\geqslant\; 0, \\ \end{gathered}$$

where \(\Theta_{\infty}\) is given by (5.9) for P(t) = Y(t). (Here we have used the notations (5.5) and some algebraic manipulations.) Hence, if the filter parameters are given by (5.9), where the matrix P(t) satisfies Eq. (5.10), then \(\gamma_{s}^{2} < \lambda\).
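
A minimal numerical sketch of this last step is given below. Since Eqs. (5.9) and (5.10) are not reproduced here, the recursion for P(t) is taken directly from the completed square above, \(P(t+1) = A\left[P^{-1}(t) + C^{\mathrm{T}} G_{D}^{-1} C - \lambda^{-1} C_{z}^{\mathrm{T}} C_{z}\right]^{-1} A^{\mathrm{T}} + G_{B}\), and the script merely checks the condition \(C_{z} P(t) C_{z}^{\mathrm{T}} < \lambda I\) along this recursion while bisecting over λ; all matrices and weights are hypothetical placeholders, not data from the paper.

# Minimal sketch: iterate the Riccati-type recursion read off from the completed
# square and check Cz P(t) Cz^T < lambda*I at every step; bisect over lambda.
import numpy as np

def feasible(lam, A, C, Cz, GB, GD, P0, T):
    """True if the recursion stays well defined and Cz P Cz^T < lam*I on the horizon."""
    P = P0.copy()
    for _ in range(T):
        if np.max(np.linalg.eigvalsh(Cz @ P @ Cz.T)) >= lam:      # condition Cz P Cz^T < lam*I violated
            return False
        bracket = np.linalg.inv(P) + C.T @ np.linalg.inv(GD) @ C - (1.0 / lam) * Cz.T @ Cz
        if np.min(np.linalg.eigvalsh(0.5 * (bracket + bracket.T))) <= 0:  # bracketed matrix must be positive definite
            return False
        P = A @ np.linalg.inv(bracket) @ A.T + GB                 # recursion from the completed square
    return np.max(np.linalg.eigvalsh(Cz @ P @ Cz.T)) < lam

# Hypothetical data for the illustration.
A  = np.array([[0.9, 0.2], [0.0, 0.8]])
C  = np.array([[1.0, 0.0]])          # measurement matrix
Cz = np.array([[0.0, 1.0]])          # estimated-output matrix
GB = 0.1 * np.eye(2)                 # placeholder for the weight G_B in the notations (5.5)
GD = 0.1 * np.eye(1)                 # placeholder for the weight G_D in the notations (5.5)
P0 = np.eye(2)
T  = 20

# Bisection: feasibility is monotone in lambda, so the smallest feasible lambda
# approximates the achievable performance level for this data.
lo, hi = 1e-6, 1e3
for _ in range(60):
    mid = 0.5 * (lo + hi)
    lo, hi = (lo, mid) if feasible(mid, A, C, Cz, GB, GD, P0, T) else (mid, hi)
print("approximate smallest feasible lambda:", hi)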


Cite this article

Kogan, M.M. On the Lagrange Duality of Stochastic and Deterministic Minimax Control and Filtering Problems. Autom Remote Control 84, 105–116 (2023). https://doi.org/10.1134/S0005117923020066
