Linear Fractional Transformation: F : C → C

  F(s) = (a + bs)/(c + ds),   a, b, c, d ∈ C,  c ≠ 0

Equivalently, F(s) = α + βs(1 − γs)^{-1}.
Robust Control 1
Motivation
A feedback control system can be rearranged as an LFT:
• The LFT is a useful way to standardize block diagrams for robust control analysis and design.
• Fl(G, K) is the transfer function from the external inputs to the error signals. In H∞ control problems the objective is to minimize ||Fl(G, K)||∞.
(Block diagrams: the generalized plant G(s) in feedback with the controller K(s), and the resulting closed loop M(s) = Fl(G, K) in feedback with the uncertainty ∆.)

  [z]   [G11  G12] [w]          [e]   [M11  M12] [d]
  [y] = [G21  G22] [u]          [z] = [M21  M22] [w]
Theorem: K stabilizes G iff K stabilizes G22.
Proof: (Zhou p. 223) G22 and G share the same A.

Theorem: Let M ∈ RH∞. Then the closed-loop system is well-posed and internally stable for all ∆ ∈ RH∞ with ||∆||∞ ≤ 1 iff ||M11||∞ < 1.
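The lower LFT Fl(G, K) = G11 + G12 K (I − G22 K)^{-1} G21 can be checked numerically at a single frequency. The sketch below works with SISO blocks, so the partitioned plant entries are complex scalars; the particular generalized plant (G11 = 1, G12 = −P, G21 = 1, G22 = −P) is the standard arrangement whose LFT is the sensitivity S = 1/(1 + PK), and the values of P and K are made up.

```python
# Minimal sketch: lower LFT of a 2x2 scalar generalized plant with
# controller k, evaluated at one frequency (all blocks are complex scalars).

def lft_lower(g11, g12, g21, g22, k):
    """F_l(G, K) = G11 + G12 K (1 - G22 K)^-1 G21 for scalar blocks."""
    return g11 + g12 * k * g21 / (1.0 - g22 * k)

# Hypothetical frequency-response samples of a plant P and controller K
P = 0.4 - 0.9j
K = 2.0 + 0.3j

# Generalized plant whose LFT is the sensitivity S = 1/(1 + PK)
S = lft_lower(1.0, -P, 1.0, -P, K)
assert abs(S - 1.0 / (1.0 + P * K)) < 1e-12
```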
Linear Fractional Transformation 5
Pulling out the uncertainties
The basic principle is to "pull out" all the uncertainties, which can appear at different points of a block diagram, and to combine them into one uncertainty block ∆.
Example: Consider a mass-spring-damper system where the actual mass m is within 10% of a nominal mass m̄, the actual damping value c is within 20% of a nominal value c̄, and the spring stiffness k is perfectly known. The dynamical equation of the system motion is:

  ẍ + (c/m) ẋ + (k/m) x = F/m

where:

  m = m̄(1 + 0.1 δm),  −1 ≤ δm ≤ 1
  c = c̄(1 + 0.2 δc),  −1 ≤ δc ≤ 1

In the Laplace domain:

  s² x = (1 / (m̄(1 + 0.1 δm))) (F − c̄(1 + 0.2 δc) s x − k x)
Pulling out the uncertainties
(Block diagram: F passes through the gain 1/m̄ and the double integrator 1/s² to produce x, with feedback paths through c̄s and k; the mass uncertainty is pulled out through the gain 0.1 and the channel (em, dm) with dm = δm em, and the damping uncertainty through a corresponding channel (ec, dc) with dc = δc ec.)
  [em]   [G11 G12 G13] [dm]
  [ec] = [G21 G22 G23] [dc],     ∆ = diag(δm, δc)
  [x ]   [G31 G32 G33] [F ]

  G11(s) = −0.1 m̄ s² / (m̄s² + c̄s + k),   G12(s) = −0.2 s² / (m̄s² + c̄s + k),   G13 = · · ·
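Before pulling out the uncertainties, the perturbed plant x/F can be evaluated directly from the equation of motion. The sketch below sweeps the normalized perturbations δm, δc over their admissible range; the nominal values of m̄, c̄ and k are made up. Since the stiffness k is perfectly known, every admissible perturbation shares the same static gain 1/k.

```python
# Sketch: perturbed mass-spring-damper plant x/F = 1/(m s^2 + c s + k) with
# m = m_bar(1 + 0.1 dm), c = c_bar(1 + 0.2 dc). Nominal values are made up.

m_bar, c_bar, k = 2.0, 0.5, 8.0

def plant(s, dm, dc):
    """x/F at the complex frequency s for given normalized perturbations."""
    m = m_bar * (1.0 + 0.1 * dm)
    c = c_bar * (1.0 + 0.2 * dc)
    return 1.0 / (m * s * s + c * s + k)

# k is not uncertain, so the DC gain 1/k is independent of (dm, dc)
for dm in (-1.0, 0.0, 1.0):
    for dc in (-1.0, 0.0, 1.0):
        assert abs(plant(0.0, dm, dc) - 1.0 / k) < 1e-12
```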
Algebraic Riccati Equation (ARE)

ARE: A*X + XA + XRX + Q = 0 where R = R* and Q = Q*.

• The ARE is as important for control design as the Lyapunov equation is for control analysis.
• There can be many solutions X = X* to the ARE. If A + RX is stable, X is called a stabilizing solution. The stabilizing solution is unique.

Associated Hamiltonian matrix:

  H = [A    R  ]
      [−Q  −A* ]

Lemma: The eigenvalues of H are symmetric with respect to the imaginary axis.
Proof: Denote J = [0 −I; I 0]; then J^{-1}HJ = −H*. So λ is an eigenvalue of H iff −λ̄ is.
Remark: If there are no purely imaginary eigenvalues, then H has n stable and n unstable eigenvalues.
How to solve the ARE

Under the assumption of no purely imaginary eigenvalues, let T = [X1; X2] ∈ C^{2n×n} be a basis of the stable n-dimensional invariant subspace of H; equivalently HT = TΛ for some stable matrix Λ ∈ C^{n×n}.

Lemma: If det(X1) ≠ 0 then X = X2 X1^{-1} is a stabilizing solution to the ARE.
Proof: We are to prove that:
1. X = X*;
2. X satisfies the ARE;
3. A + RX is stable.

Bounded real condition: ||G||∞ < 1 is characterized via the stabilizing solution of the ARE:

  A*X + XA + XBB*X + C*C = 0

Remark: We can easily treat the case ||G||∞ < γ by simply replacing B with γ^{-1}B.
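In the scalar case (n = 1) the Hamiltonian method can be carried out by hand, which makes a compact numeric sketch. For the ARE 2ax + rx² + q = 0 the Hamiltonian is H = [[a, r], [−q, −a]] with eigenvalues ±√(a² − rq); the stable eigenvector gives X = X2 X1^{-1}. The numbers a, r, q below are made up (A unstable, a² − rq > 0 so the eigenvalues are real).

```python
# Sketch: stabilizing ARE solution from the stable invariant subspace of the
# Hamiltonian, scalar case. Values of a, r, q are made up.
import math

a, r, q = 1.0, -1.0, 2.0          # A is unstable; R here plays the -BB* role

lam = -math.sqrt(a * a - r * q)   # the stable eigenvalue of H
# Stable eigenvector [x1, x2]: (a - lam) x1 + r x2 = 0  =>  x2/x1 = (lam - a)/r
x1, x2 = 1.0, (lam - a) / r
x = x2 / x1                       # X = X2 X1^{-1}

assert abs(2 * a * x + r * x * x + q) < 1e-12   # X satisfies the ARE
assert a + r * x < 0                            # A + R X is stable
```

Note that A + RX = a + rx = λ, the stable Hamiltonian eigenvalue, exactly as the general theory predicts.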
Optimal H∞ Control: Find all admissible controllers K(s) such that ||Tzw||∞ is minimized.
Suboptimal H∞ Control: Given γ > 0, find all admissible controllers K(s), if any, such that ||Tzw||∞ < γ.

Assumptions:
(A1) (A, B1, C1) is controllable and observable;
(A2) (A, B2, C2) is stabilizable and detectable;
(A3) D12*[C1 D12] = [0 I];
(A4) [B1; D21] D21* = [0; I].
H∞ Control

What can we do if the Assumptions are not satisfied?

(A3) This assumption means that C1 and D12 are orthogonal, so that the penalty on z = C1x + D12u includes a nonsingular, normalized penalty on the control u (there is no cross weighting between the state and the control input, and the control weight matrix is the identity).
Remark: If there is a control input with no weighting filter, D12 does not have full column rank (D12 has a zero column) and cannot be normalized. The solution is to add weighting filters on the control inputs (even with a very small gain, to avoid the singularity in the computations).

(A4) This assumption is dual to (A3) and concerns how the exogenous signal w enters P: w includes both plant disturbances and sensor noise, these are orthogonal, and the sensor noise weighting is normalized and nonsingular.
Remark: To avoid the singularity, an exogenous input (sensor noise) should be added to each measured output.

D22 ≠ 0: This case occurs, for example, when a discrete-time system is transformed to a continuous-time one. We can solve the problem for D22 = 0, compute the controller K0, and recover the final controller as K = K0(I + D22K0)^{-1}.
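The correction K = K0(I + D22K0)^{-1} follows from feeding K0 the D22-free measurement y − D22u and solving the resulting loop for u. A minimal scalar check (all values made up):

```python
# Sketch: scalar check of the D22 correction K = K0 (I + D22 K0)^{-1}.
# K0 is designed for D22 = 0; wrapping it around y - D22 u reproduces the
# corrected controller K acting on the true measurement y.

k0, d22, y = 3.0 + 1.0j, 0.5, 2.0 - 1.0j

# u = K0 (y - D22 u), solved for u:
u_loop = k0 * y / (1.0 + d22 * k0)

# Direct application of the transformed controller K = K0 (1 + D22 K0)^{-1}:
K = k0 / (1.0 + d22 * k0)
assert abs(K * y - u_loop) < 1e-12
```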
State-Space Solution

The solution involves two AREs with Hamiltonian matrices:

  H = [A        γ^{-2}B1B1* − B2B2*]       J = [A*       γ^{-2}C1*C1 − C2*C2]
      [−C1*C1   −A*                ]           [−B1B1*   −A                 ]

Theorem: There exists a stabilizing controller such that ||Tzw||∞ < γ iff these conditions hold:
1. H ∈ dom(Ric) and X = Ric(H) ≥ 0;
2. J ∈ dom(Ric) and Y = Ric(J) ≥ 0;
3. ρ(XY) < γ²  (ρ(A) = |λmax(A)| is the spectral radius of A).

Moreover, one such controller is:

  Ksub(s) = [Â       (I − γ^{-2}YX)^{-1}YC2*]
            [−B2*X   0                      ]

where:

  Â = A + γ^{-2}B1B1*X − B2B2*X − (I − γ^{-2}YX)^{-1}YC2*C2
H∞ Control 13
Example (mass/spring/damper)

  [ẋ1]   [0         0              1        0           ] [x1]   [0      0   ]
  [ẋ2] = [0         0              0        1           ] [x2] + [0      0   ] [F1]
  [ẋ3]   [−k1/m1    k1/m1          −b1/m1   b1/m1       ] [x3]   [1/m1   0   ] [F2]
  [ẋ4]   [k1/m2     −(k1+k2)/m2    b1/m2    −(b1+b2)/m2 ] [x4]   [0      1/m2]
Example (mass/spring/damper)

(Block diagram: the plant P with control input u = F1 and disturbance w1 = F2; the outputs x1 and x2 pass through the weights, and the measurements y1, y2 are corrupted by the noises w2 = n1 and w3 = n2.)

  Wu = (s + 5)/(s + 50),   W1 = 10/(s + 2),   Wn1 = Wn2 = (0.01s + 0.1)/(s + 100)

  x1(s) = P11(s)F1(s) + P12(s)F2(s)
  x2(s) = P21(s)F1(s) + P22(s)F2(s)

Build the augmented plant:

  [z1]   [W1P12   0     0     W1P11] [F2]
  [z2]   [0       0     0     Wu   ] [n1]
  [y1] = [P12     Wn1   0     P11  ] [n2]
  [y2]   [P22     0     Wn2   P21  ] [F1]

with z1 = W1 x1 and z2 = Wu u.
Example (mass/spring/damper)

MATLAB codes:

H∞ controller design:
  [K,T,gopt] = hinfsyn(Gsys,nmeas,ncon,gmin,gmax,tol);
  nmeas=2: number of controller inputs (measurements)
  ncon=1: number of controller outputs (controls)
  gmin=0.1, gmax=10 (for the bisection algorithm)

H2 controller design:
  [K2,T2] = h2syn(Gsys,nmeas,ncon);
Integral Control

How can we design an H∞ controller with integral action? Consider the weighted sensitivity

  Tzw = W1(1 + PK)^{-1}

If W1 contains a pole at s = 0, then for the resulting controller K to stabilize the plant P and keep ||Tzw||∞ finite, K must have a pole at s = 0.

Problem: H∞ theory cannot be applied to systems with poles on the imaginary axis.
Solution: Use a pole very close to the origin in W1 (i.e. W1 = 1/(s + ε)) and solve the H∞ problem. The resulting controller will have a pole very close to the origin, which can then be replaced by an integrator.
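The mechanism can be seen numerically: with a weight W1 that has a pole at the origin, only a controller with its own pole at s = 0 keeps Tzw = W1/(1 + PK) bounded at low frequency. The plant P = 1/(s+1) and the two controllers below are made-up illustrations, not the output of an H∞ design.

```python
# Sketch: Tzw = W1/(1 + PK) near s = 0 with W1 = 1/s, P = 1/(s+1).
# A controller with an integrator cancels the weight's pole; a static one
# does not, so |Tzw| blows up like 1/|s|.

def T_zw(s, K):
    W1 = 1.0 / s                  # weight with a pole at the origin
    P = 1.0 / (s + 1.0)
    return W1 / (1.0 + P * K(s))

s = 1e-8j                          # evaluate very close to s = 0
with_integrator = abs(T_zw(s, lambda s: 2.0 / s))   # K has a pole at 0
static_gain     = abs(T_zw(s, lambda s: 2.0))       # K without integrator

assert with_integrator < 1.0       # stays bounded as s -> 0
assert static_gain > 1e6           # grows like 1/|s|
```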
Model & Controller Order Reduction

Introduction: Robust controller design based on the H∞ method leads to very high-order controllers. There are different methods to design a low-order controller.

Multiplicative model reduction: The relative error ∆r = G^{-1}(G − Gr) is defined and minimized. The problem can be formulated as

  inf_{deg(Gr) ≤ r} ||G^{-1}(G − Gr)||∞

In model reduction for control purposes, the objective is to find a reduced-order model such that the closed-loop transfer functions are close to each other.

Gramians:
1. (A, B) is controllable iff the controllability Gramian P, solving AP + PA* + BB* = 0, is positive definite.
2. (C, A) is observable iff the observability Gramian Q, solving A*Q + QA + C*C = 0, is positive definite.

Idea: Assume that P = diag(P11, P12) such that λmax(P12) ≪ λmin(P11); then one can discard the weakly controllable states corresponding to P12 without causing much error.
Problem: The controllability (or observability) Gramian alone cannot give an accurate indication of the dominance of the system states.
Model & Controller Order Reduction 24
Balanced Realization
Balanced realization: A minimal realization of G(s) for which the controllability and observability
Gramians are equal is referred to as a balanced realization.
Lemma: The eigenvalues of the product of the Gramians are invariant under state transformation.
Proof: Consider that the state is transformed by a nonsingular T to x̂ = Tx. Then Â = TAT^{-1}, B̂ = TB and Ĉ = CT^{-1}. Using the Lyapunov equations we find that P̂ = TPT* and Q̂ = (T^{-1})*QT^{-1}. Note that P̂Q̂ = TPQT^{-1}, so P̂Q̂ and PQ are similar.
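The lemma can be checked numerically with small matrices. The 2×2 real matrices below are made up (P, Q symmetric, T nonsingular); since for 2×2 matrices equal trace and determinant imply equal eigenvalues, checking those two invariants of P̂Q̂ against PQ suffices.

```python
# Sketch: eig(PQ) is invariant under the state transformation x_hat = T x,
# checked on made-up 2x2 real matrices (so * is transpose).

def mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def inv(A):
    d = A[0][0] * A[1][1] - A[0][1] * A[1][0]
    return [[A[1][1] / d, -A[0][1] / d], [-A[1][0] / d, A[0][0] / d]]

def tr(A):  return A[0][0] + A[1][1]
def det(A): return A[0][0] * A[1][1] - A[0][1] * A[1][0]
def T_(A):  return [[A[0][0], A[1][0]], [A[0][1], A[1][1]]]  # transpose

P = [[2.0, 0.5], [0.5, 1.0]]
Q = [[3.0, 1.0], [1.0, 2.0]]
T = [[1.0, 2.0], [0.0, 1.0]]

P_hat = mul(mul(T, P), T_(T))              # P_hat = T P T*
Q_hat = mul(mul(T_(inv(T)), Q), inv(T))    # Q_hat = (T^-1)* Q T^-1

assert abs(tr(mul(P_hat, Q_hat)) - tr(mul(P, Q))) < 1e-9
assert abs(det(mul(P_hat, Q_hat)) - det(mul(P, Q))) < 1e-9
```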
Remark: A transformation matrix T can always be chosen such that P̂ = Q̂ = Σ where
Σ = diag(σ1 Is1 , σ2 Is2 , . . . , σN IsN ). T and Σ are solutions to the following equations:
T AT −1 Σ + Σ(T −1 )∗ A∗ T ∗ + T BB ∗ T ∗ = 0
(T −1 )∗ A∗ T ∗ Σ + ΣT AT −1 + (T −1 )∗ C ∗ CT −1 = 0
Balanced truncation: Partition the balanced realization (A, B, C, D) conformably with the Gramian Σ = diag(Σ1, Σ2), where Σ1 and Σ2 have no diagonal entries in common:

  A = [A11 A12; A21 A22],   B = [B1; B2],   C = [C1 C2]

Truncating the states associated with Σ2 gives the reduced model Gr = (A11, B1, C1, D), with

  ||G − Gr||∞ ≤ 2 Σ_{i=r+1}^{N} σi    and    ||G − G(∞)||∞ ≤ 2 Σ_{i=1}^{N} σi
Residualization: The truncated model does not have the same static gain as the original high-order model. The residualization technique keeps the DC contributions of the truncated states in the system matrices (instead of truncating the states, their derivatives are forced to zero, so at steady state the original and reduced-order models have the same gain).

Unstable systems: An unstable system can be factorized as G(s) = Gst(s)Gunst(s); then only the order of the stable part is reduced.

Frequency-weighted balanced reduction: In this technique the stability of the reduced-order model cannot be guaranteed. Moreover, an upper bound for the modeling error cannot be derived.
Hankel norm: The largest Hankel singular value is called the Hankel norm of the system (||G||H = σ1). It can be interpreted as the maximal ratio of future output energy to past input energy. The Hankel norm is not really a norm, because ||G||H = 0 does not necessarily imply that G ≡ 0.

Hankel-norm model reduction: Find Gr such that ||G − Gr||H is minimal (MATLAB command: Gr=hankmr(Gbal,sigma,r);).

Upper bound on the modeling error: The Hankel norm of the modeling error is σr+1, and the upper bound on the infinity norm of the modeling error is smaller than that of the truncation method:

  ||G − Gr||∞ ≤ Σ_{i=r+1}^{N} σi

Theorem: Consider G(s) with Hankel singular values σ1 > σ2 > · · · > σN ≥ 0; then:

  ||G||H ≤ ||G||∞ ≤ ∫₀^∞ |g(t)| dt ≤ 2 Σ_{i=1}^{N} σi
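The bound ||G||∞ ≤ 2 Σ σi can be exercised on a made-up 2-state modal system G(s) = b1c1/(s + a1) + b2c2/(s + a2). For a diagonal A = diag(−a1, −a2) the Gramians have the closed forms P_ij = b_i b_j/(a_i + a_j) and Q_ij = c_i c_j/(a_i + a_j), and σ_i = √(λ_i(PQ)); the H∞ norm is approximated by sampling |G(jω)| on a grid, which can only undershoot the true norm, so the inequality check remains valid.

```python
# Sketch: Hankel singular values of a made-up 2-state system, checked
# against the bound ||G||_inf <= 2 (sigma1 + sigma2) on a frequency grid.
import math

a1, a2 = 1.0, 5.0
b1, b2 = 1.0, 2.0
c1, c2 = 2.0, 1.0

# Controllability / observability Gramians for diagonal A = diag(-a1, -a2)
P = [[b1 * b1 / (2 * a1), b1 * b2 / (a1 + a2)],
     [b2 * b1 / (a1 + a2), b2 * b2 / (2 * a2)]]
Q = [[c1 * c1 / (2 * a1), c1 * c2 / (a1 + a2)],
     [c2 * c1 / (a1 + a2), c2 * c2 / (2 * a2)]]

M = [[sum(P[i][k] * Q[k][j] for k in range(2)) for j in range(2)]
     for i in range(2)]                              # M = P Q

# Eigenvalues of the 2x2 product via trace and determinant
t = M[0][0] + M[1][1]
d = M[0][0] * M[1][1] - M[0][1] * M[1][0]
disc = math.sqrt(t * t - 4 * d)
sigmas = sorted([math.sqrt((t + disc) / 2), math.sqrt((t - disc) / 2)],
                reverse=True)                        # Hankel singular values

def G(s):
    return b1 * c1 / (s + a1) + b2 * c2 / (s + a2)

peak = max(abs(G(1j * w)) for w in [0.01 * 1.2 ** i for i in range(60)])
assert sigmas[0] >= sigmas[1] > 0
assert peak <= 2 * (sigmas[0] + sigmas[1]) + 1e-9
```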
  [Gb,sig] = sysbal(Gsys);
  plot(sig)

(Figure: plot of the Hankel singular values sig.)

Example 2: Consider the transfer function between the control force F1 and the filtered position z1.

(Figure: Bode magnitude plot, magnitude (dB) versus frequency (rad/sec).)
For controller order reduction, the closed-loop error can be expressed in terms of K − Kr:

  min ||1/(1 + KP) − 1/(1 + KrP)||∞ = min ||(K − Kr)P / ((1 + KP)(1 + KrP))||∞

  min ||KP/(1 + KP) − KrP/(1 + KrP)||∞ = min ||(K − Kr)P / ((1 + KP)(1 + KrP))||∞
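The identity behind these weighted reduction criteria is pure algebra, so it can be verified pointwise in the complex plane. The plant P and the full/reduced controllers K, Kr below are made-up rational functions.

```python
# Sketch: pointwise check of
#   KP/(1+KP) - KrP/(1+KrP) = (K - Kr) P / ((1+KP)(1+KrP))
# at a few made-up complex frequencies.

def K(s):  return (s + 5.0) / (s + 0.1)      # hypothetical full controller
def Kr(s): return 5.0 / (s + 0.1)            # hypothetical reduced controller
def P(s):  return 1.0 / (s * s + s + 1.0)    # hypothetical plant

for s in (0.3 + 1.0j, -0.2 + 4.0j, 2.0 - 0.5j):
    lhs = K(s) * P(s) / (1 + K(s) * P(s)) - Kr(s) * P(s) / (1 + Kr(s) * P(s))
    rhs = (K(s) - Kr(s)) * P(s) / ((1 + K(s) * P(s)) * (1 + Kr(s) * P(s)))
    assert abs(lhs - rhs) < 1e-10
```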
6. If H1 (s) ∈ SPR and H2 (s) ∈ SPR then H(s) = α1 H1 (s) + α2 H2 (s) is SPR for
α1 ≥ 0, α2 ≥ 0 and α1 + α2 > 0.
7. If H1 (s) and H2 (s) are SPR then the feedback interconnection of two systems is also SPR.
  H(s) = H1(s)/(1 + H1(s)H2(s)) = 1/(1/H1(s) + H2(s)) ∈ SPR
• H(s) is PR if and only if there exist P ∈ R^{n×n}, P > 0, Q ∈ R^{m×n} and W ∈ R^{m×m} such that:

  PA + AᵀP = −QᵀQ,   PB = Cᵀ − QᵀW,   WᵀW = D + Dᵀ

H(s) is SPR if, in addition, H̄(s) = W + Q(sI − A)^{-1}B has no zeros on the imaginary axis.

• H(s) is SPR, that is H(jω) + H*(jω) > 0 ∀ω, if there exist P ∈ R^{n×n}, P > 0, Q ∈ R^{m×n}, W ∈ R^{m×m} and ε > 0 such that:

  PA + AᵀP = −εP − QᵀQ,   PB = Cᵀ − QᵀW,   WᵀW = D + Dᵀ
Then the closed-loop system is globally exponentially stable if H(s) is SPR.
Proof: We show that if H(s) is SPR the closed-loop system is stable. From the positive real lemma, there exist P > 0, ε > 0, Q, W such that:

  PA + AᵀP = −εP − QᵀQ,   PB = Cᵀ − QᵀW,   WᵀW = D + Dᵀ
Problem 1: Let H(s) ∈ {Hi(s) ∈ PR, i = 1, . . . , N}; then if the feedback controller K is SPR, the closed-loop system is robustly stable.

Example: Consider a second-order system (the transfer function between velocity and force in a position control system) with uncertain parameters ωn and ζ:

  H(s) = s ωn² / (s² + 2ζωn s + ωn²),   ζ > 0, ωn > 0

Since H(s) is PR, any SPR controller will stabilize the closed-loop system.
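That H(s) stays PR for every admissible (ζ, ωn) can be seen by sampling: Re H(jω) = 2ζωn³ω² / |den|² ≥ 0 for all ω whenever ζ > 0 and ωn > 0. The grid of parameter values below is made up.

```python
# Sketch: sample Re H(jw) for H(s) = s wn^2/(s^2 + 2 zeta wn s + wn^2)
# over made-up parameter values to illustrate positive realness.

def re_H(w, zeta, wn):
    H = (1j * w * wn ** 2) / ((1j * w) ** 2 + 2 * zeta * wn * 1j * w + wn ** 2)
    return H.real

grid = [0.1 * i for i in range(1, 200)]
for zeta in (0.1, 0.7, 2.0):
    for wn in (0.5, 1.0, 10.0):
        assert all(re_H(w, zeta, wn) >= 0 for w in grid)
```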
Gain of φ: The memoryless sector nonlinearity φ(α, β) satisfies the inequality

  α y² ≤ y φ(y) ≤ β y²,   ∀t ≥ 0, ∀y ∈ [a, b]

where γ, the two-norm input-output gain of the nonlinearity, is γ = max(|α|, |β|) (= |β| when |β| ≥ |α|).
Small gain theorem: Consider the linear stable system H interconnected with a sector nonlinearity φ(−1, 1) (γ = 1); then the closed-loop system is stable if ||H||∞ < 1 (no restriction on the phase of H, only on its magnitude).
Remark: This theorem can be compared with the passivity theorem, where the sector nonlinearity is φ(0, ∞) and the stability condition is H(s) ∈ SPR (no restriction on the magnitude of H, only on its phase).
(Block diagram: feedback interconnection of H1 and H2.)

Additive transformation: the loop of (H1, H2) is equivalent to the loop of (H1 + 1, H2/(1 − H2)), and also to the scaled loop of (γH1, H2/γ).

Feedback transformation: the loop of (H1, H2) is equivalent to the loop of (H1/(1 + H1), H2 − 1).

Applying these transformations to the loop of H with the sector nonlinearity φ(−1, 1):

  Feedback:        H with φ(−1, 1)           ⇒  H/(1 − H) with φ(0, 2)
  Multiplicative:  H/(1 − H) with φ(0, 2)    ⇒  2H/(1 − H) with φ(0, 1)
  Additive:        2H/(1 − H) with φ(0, 1)   ⇒  (1 + H)/(1 − H) with φ(0, ∞)
Positive Real Systems 41
Passivity and Small Gain Theorem
Circle criterion: Consider the sector nonlinearity φ(α, β) in closed loop with the linear system H(s). The closed-loop system is stable if one of the following conditions is satisfied:

1. If 0 < α < β: the Nyquist plot of H(jω) does not enter the disk centered on the real axis and passing through −1/α and −1/β (this disk will be called D(α, β)), and encircles it m times in the counterclockwise direction, where m is the number of RHP poles of H(s) (the Nyquist criterion is a special case where α = β = 1).

2. If 0 = α < β: H(s) is stable and the Nyquist plot of H(jω) lies to the right of the vertical line defined by Re[s] = −1/β (the passivity theorem is a special case where β goes to infinity).

3. If α < 0 < β: H(s) is stable and the Nyquist plot of H(jω) lies in the interior of the disk D(α, β) (the small gain theorem is a special case where α = −1, β = 1).

Remark: Using the three transformations, it can be shown that the stability condition for a closed-loop system with the sector nonlinearity φ(α, β) is (1 + βH)/(1 + αH) ∈ SPR.
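For the small-gain case α = −1, β = 1, the remark says (1 + H)/(1 − H) should be SPR. This is consistent with the Möbius map H ↦ (1 + H)/(1 − H) sending the open unit disk |H| < 1 into the open right half plane, so ||H||∞ < 1 forces every frequency-response value of the transformed system to have positive real part. A sampled check over made-up points in the unit disk:

```python
# Sketch: for |H| < 1, Re[(1 + H)/(1 - H)] = (1 - |H|^2)/|1 - H|^2 > 0.
import cmath

samples = [r * cmath.exp(1j * th)
           for r in (0.1, 0.5, 0.9, 0.99)
           for th in [0.1 * k for k in range(63)]]

for H in samples:
    assert abs(H) < 1
    assert ((1 + H) / (1 - H)).real > 0
```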