Basic Control System with MATLAB Examples

Written by Chieh Wu

State Space Equations

It has been customary to characterize the internal state of a system using differential equations. Let's take this RC circuit for example.

We can describe this circuit using these equations.


$$\frac{dV_c}{dt} = -\frac{1}{RC}\,V_c + \frac{1}{C}\,I$$

$$y = V_c$$

For analysis purposes, it is more useful to look at these equations in terms of the state space equation, which has this form:

$$\dot{x} = Ax + Bu$$
$$y = Cx + Du$$

Here $x$ characterizes the internal state of the system; in our circuit it is the voltage across the capacitor. $y$ is the output (the variable we are observing). In our circuit, the internal state happens to also be the output.
So here's the breakdown of the variables:

$$\dot{x} = \frac{dV_c}{dt}, \qquad x = V_c, \qquad u = I, \qquad y = V_c$$

$$A = -\frac{1}{RC}, \qquad B = \frac{1}{C}, \qquad C = 1, \qquad D = 0$$

The only difference between the differential equation and the state space equation is the way we are writing them. We are not changing anything in terms of the system. Although we can write the differential equation in many forms, it has been discovered that writing the equations in the state space form has many benefits. First, it is difficult to study systems without a unified form. It is much easier, for communication's sake, for everybody to convert whatever form they have into a common form (A, B, C, D). Second, it is now possible to characterize systems of multiple variables. The example we just looked at is a single input, single output (SISO) system; it involves only a single variable we are attempting to solve for. For a more complicated system with many variables, A, B, C, D become matrices instead of single numbers. For our previous example, we could be looking at two internal states instead of just one.
$$\frac{dV_c}{dt} = -\frac{1}{RC}\,V_c + \frac{1}{C}\,I$$
$$\frac{di}{dt} = -\frac{1}{RC}\,i$$
$$y = V_c$$

In this case, we can organize the equations into the state space equation using the matrix form.

$$\begin{pmatrix} \frac{dV_c}{dt} \\ \frac{di}{dt} \end{pmatrix} = \begin{pmatrix} -\frac{1}{RC} & 0 \\ 0 & -\frac{1}{RC} \end{pmatrix} \begin{pmatrix} V_c \\ i \end{pmatrix} + \begin{pmatrix} \frac{1}{C} \\ 0 \end{pmatrix} I$$

$$y = \begin{pmatrix} 1 & 0 \end{pmatrix} \begin{pmatrix} V_c \\ i \end{pmatrix}$$

Once we have put the equations into the state space form, we can represent and study the system simply by:

$$\dot{x} = Ax + Bu$$
$$y = Cx + Du$$
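If you want to play with this model in MATLAB, here is a minimal sketch that builds the state space object for the single-state RC example. The R and C values are made-up placeholders, and it assumes the Control System Toolbox (for ss and step) is available.

% Minimal sketch: RC circuit as a state space object.
R   = 1e3;        % resistance in ohms (assumed value, for illustration only)
Cap = 1e-6;       % capacitance in farads (assumed value)

A = -1/(R*Cap);   % state matrix: voltage across the capacitor
B = 1/Cap;        % input matrix: the current I is the input
Cmat = 1;         % output matrix: we observe Vc directly
D = 0;

sys = ss(A, B, Cmat, D);   % Control System Toolbox state space object
step(sys);                 % quick sanity check: response to a unit step in I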

Solutions to the state space equation


The solution to the state space equations in the continuous time domain, given that $x$ starts at $x_0$, is

$$x(t) = e^{At}x_0 + \int_0^t e^{A(t-\tau)}Bu(\tau)\,d\tau$$

where the zero input response is

$$x(t) = e^{At}x_0$$

the zero state response is


$$x(t) = \int_0^t e^{A(t-\tau)}Bu(\tau)\,d\tau$$

We are able to combine the zero state and zero input responses due to the superposition principle. For all $t \ge 0$, the corresponding output trajectory $y$ is given as:

$$y(t) = Cx(t) + Du(t)$$


$$y(t) = C\left[e^{At}x_0 + \int_0^t e^{A(t-\tau)}Bu(\tau)\,d\tau\right] + Du(t)$$
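Here is a hedged MATLAB sketch that checks these formulas numerically for the RC example. The R, C, initial voltage, and input current are assumed values chosen only for illustration: the zero input response is computed from the matrix exponential, the zero state response with lsim, and the two are added by superposition.

% Sketch: zero input + zero state responses for the RC example.
R = 1e3;  Cap = 1e-6;                 % assumed component values
A = -1/(R*Cap);  B = 1/Cap;  Cmat = 1;  D = 0;

x0 = 5;                               % assumed initial capacitor voltage
t  = linspace(0, 0.01, 500)';         % time vector
u  = 1e-3*ones(size(t));              % assumed constant input current (1 mA)

% zero input response: x(t) = e^(A t) * x0
xzi = zeros(size(t));
for k = 1:numel(t)
    xzi(k) = expm(A*t(k))*x0;
end

% zero state response via lsim (starts from x = 0)
sys = ss(A, B, Cmat, D);
yzs = lsim(sys, u, t);

% total response by superposition
y = Cmat*xzi + yzs;
plot(t, y);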

Linearization
In the above example, we were working with a linear system. What if the system we are working with is some complicated non-linear system?

$$\dot{x} = f(x)$$
This type of non-linear system would be very difficult to analyze, so it is generally converted into a linear form using linearization. The concept of linearization is difficult to grasp unless an example is provided. Let's say we have a one dimensional function.

$$f(x) = 3(x-1)^2 - 1$$

According to Taylor's theorem, a sufficiently smooth function can be broken down into this series:

$$f(x) = f(x_{eq}) + f'(x_{eq})(x - x_{eq}) + \frac{f''(x_{eq})}{2!}(x - x_{eq})^2 + \cdots + \frac{f^{(n)}(x_{eq})}{n!}(x - x_{eq})^n + \cdots$$

So if we use our example, it would become

$$f(x) = 3(x-1)^2 - 1, \qquad f'(x) = 6x - 6, \qquad f''(x) = 6, \qquad f'''(x) = 0$$

$$f(x_{eq}) = 3(x_{eq} - 1)^2 - 1$$
$$f'(x_{eq})(x - x_{eq}) = (6x_{eq} - 6)(x - x_{eq})$$
$$\frac{f''(x_{eq})}{2!}(x - x_{eq})^2 = \frac{6}{2!}(x - x_{eq})^2$$

So $f(x)$, according to the Taylor series, could be rewritten as

$$f(x) = \left\{3(x_{eq} - 1)^2 - 1\right\} + \left\{(6x_{eq} - 6)(x - x_{eq})\right\} + \left\{\frac{6}{2!}(x - x_{eq})^2\right\} + 0 + 0 + \cdots + 0$$

If you don't believe that the function above is exactly the same as

$$f(x) = 3(x-1)^2 - 1,$$

you should try to manipulate the algebra to see that they are the same. You can use any $x_{eq}$ and the two functions will still be exactly the same. If you work through the algebra, you will realize that the value of $x_{eq}$ really doesn't matter, because it will cancel out during the manipulation. Also notice that there are three parts to this expansion: the constant term, the first-derivative term, and the second-derivative term. Together they add up to the function $f(x)$. If we take only two of the three parts and use those, we end up with another function that is relatively close to $f(x)$ but not exactly alike. During linearization, we first pick $x_{eq}$ so that the constant term equals zero. Our goal is to get the Taylor expansion into this form:

$$f(x) = \left\{(6x_{eq} - 6)(x - x_{eq})\right\} + \left\{\frac{6}{2!}(x - x_{eq})^2\right\}$$

Notice that the constant term is gone because we have picked an equilibrium point such that

$$0 = 3(x_{eq} - 1)^2 - 1$$

The next thing we do is to keep the first-derivative term and forget about all the other terms, so that we end up with

$$f(x) = (6x_{eq} - 6)(x - x_{eq})$$

For the sake of this example, I have calculated ahead of time what the equilibrium point is: $x_{eq} = 1.5774$. Let's take a look at what we are doing by keeping only one term out of three.
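Here is a small MATLAB sketch that reproduces the comparison described next; the plotting range is an arbitrary choice, everything else comes from the example above.

% Sketch: compare f(x) = 3(x-1)^2 - 1 with its linearization around xeq.
xeq = 1 + 1/sqrt(3);                 % solves 0 = 3(xeq-1)^2 - 1, approx 1.5774
x   = linspace(0, 3, 200);           % plotting range (arbitrary)

f     = 3*(x - 1).^2 - 1;            % original parabola
fred  = (6*xeq - 6)*(x - xeq);       % linearization shifted to the equilibrium
fblue = (6*xeq - 6)*x;               % same slope, before shifting (the "Ax" line)

plot(x, f, 'k', x, fred, 'r', x, fblue, 'b');
hold on
plot(xeq, 0, 'ko');                  % at the equilibrium point the error is zero
legend('f(x)', 'A(x - x_{eq})', 'Ax');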

The parabola is the original function, while the red line is the linearized version of the function. Notice that around the equilibrium point 1.5774 the error is zero. The error grows larger and larger as we get further away from the equilibrium point. The straight blue line parallel to the red line is the linearization before shifting the line to the equilibrium point. As you can see, if you try to guess values near the equilibrium point using the blue line, you will be off. Luckily, in most engineering applications we don't need the result to be exact, so a little error is normally tolerable. For our case, if we only care about $x$ from 1 to 2, our equilibrium point will give us a relatively small error. The exercise we have just gone through is an example of single variable linearization. If we are dealing with a multivariable system, the general procedure is still the same: we are still finding the equilibrium point where the constant term becomes zero, and we are still keeping only the first-derivative term. The only difference is that instead of dealing with single numbers, we are now dealing with matrices. Let's look at an example. Define $f : \mathbb{R}^2 \to \mathbb{R}^2$

$$f(x) = \begin{pmatrix} 2x_1 + x_2^2 \\ x_2 \end{pmatrix}$$

If we want to put this function into the state space equation, we need to linearize it. To do so, we find the Jacobian of the function, which is defined as:

$$\frac{\partial f}{\partial x} = \begin{pmatrix} \frac{\partial f_1}{\partial x_1} & \frac{\partial f_1}{\partial x_2} & \cdots & \frac{\partial f_1}{\partial x_n} \\ \frac{\partial f_2}{\partial x_1} & \frac{\partial f_2}{\partial x_2} & \cdots & \frac{\partial f_2}{\partial x_n} \\ \vdots & \vdots & \ddots & \vdots \\ \frac{\partial f_n}{\partial x_1} & \frac{\partial f_n}{\partial x_2} & \cdots & \frac{\partial f_n}{\partial x_n} \end{pmatrix}$$

The Jacobian is analogous to the derivative, but for vector-valued functions: instead of taking the derivative of a single variable, we take the partial derivative of each function with respect to each variable. So for our example, we first find the equilibrium point where

$$0 = 2x_1 + x_2^2$$
$$0 = x_2$$

The equilibrium point here is

$$x_1 = 0, \qquad x_2 = 0$$

And the Jacobian, evaluated at the equilibrium point, is

$$A = \begin{pmatrix} 2 & 2x_2 \\ 0 & 1 \end{pmatrix}\bigg|_{x_1 = x_2 = 0} = \begin{pmatrix} 2 & 0 \\ 0 & 1 \end{pmatrix}$$
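If you have the Symbolic Math Toolbox, a short sketch like the one below reproduces the equilibrium point and the Jacobian for this example; the variable names are just illustrative.

% Sketch: equilibrium point and Jacobian with the Symbolic Math Toolbox.
syms x1 x2
f = [2*x1 + x2^2; x2];                % the example function

xeq = solve(f == 0, [x1, x2]);        % equilibrium point (x1 = 0, x2 = 0)
J   = jacobian(f, [x1, x2]);          % symbolic Jacobian

% evaluate the Jacobian at the equilibrium point -> the A matrix
A = double(subs(J, [x1, x2], [xeq.x1, xeq.x2]))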

The equation for a line looks like this

$$y = a(x - x_{eq})$$

or in our case

$$y = A(x - x_{eq})$$

You can see the Jacobian as the slope of the line and the equilibrium point as how far to shift the linearized line. From our example with the parabola, the blue line would be $Ax$ while the red line is $A(x - x_{eq})$. In linearization, we look only at $Ax$ so that it fits into the state space equation model. So when we solve for $x$, we are really solving for $(x - x_{eq})$. It is very important, therefore, that after we have calculated the $x$ value, it should be shifted (added to) by $x_{eq}$.

Very often $x_{eq} = 0$, as we have in this case. When that is the case, we don't need to shift. But when the equilibrium point is not zero, our calculation will be off by $x_{eq}$.

In summary, when we encounter a non-linear system, it is necessary to linearize it if we want to study the system using the state space equations. Towards this end, here are the steps for us to linearize the system.
1. We first expand out the functions using the Taylor series (this requires that you use Jacobians to find the derivatives for the matrices).
2. We find the equilibrium point $x_{eq}$ such that the constant term is cancelled to zero.
3. We drop every term except the first-derivative term, and use that term as our estimate of the actual system.
4. You can now use the Jacobian to form the state space equation.

Linearization Using MATLAB


Let's do a quick example together. Let's say we have this non-linear system.

$$\begin{pmatrix} \dot{x}_1 \\ \dot{x}_2 \end{pmatrix} = \begin{pmatrix} -\mu & \omega \\ -\omega & -\mu \end{pmatrix}\begin{pmatrix} x_1 \\ x_2 \end{pmatrix} - (x_1^2 + x_2^2)\begin{pmatrix} x_1 \\ x_2 \end{pmatrix} + \begin{pmatrix} 1 \\ 1 \end{pmatrix} w + \begin{pmatrix} 1 & 1 \\ 1 & 1 \end{pmatrix} u$$

$$y = \begin{pmatrix} 1 & 0 \end{pmatrix}\begin{pmatrix} x_1 \\ x_2 \end{pmatrix} + v + y_b$$

where $w$ and $v$ are white noise, $y_b$ is an unknown constant bias, $u = 0$, and the initial state is

$$x(0) = \begin{pmatrix} 0 \\ 1 \end{pmatrix}$$

The first thing we need to do is to write a data .m file which initializes all the parameters.

clc;
u = -1/10;
w = 10;
x0 = [0, 1];
sysdata = struct('nx',2,'nu',2,'ny',1);
sysdata = setfield(sysdata, 'x0', x0);
sysdata = setfield(sysdata, 'u', u);
sysdata = setfield(sysdata, 'w', w);

__________________________________________________________________________________________________

In the code above, we are basically setting up a structure in MATLAB called sysdata. The fields we set are:

nx = number of states (2, because we have x1 and x2)
nu = number of inputs (2, because we have u1 and u2)
ny = number of outputs (1, because we have just y)

I normally also write the code that simulates the functions in this same file:

____________________________________________________________________________________________________

[A,B,C,D] = linmod('funct');
[t,x,y] = sim('funct',10);
plot(t,y);
hold on
[t,x2,y2] = sim('linearized',10);
plot(t,y2,'red');

____________________________________________________________________________________________________

In the code above, we linearized the non-linear system and created the A, B, C, D matrices. Then we simulate the non-linear system and plot its output. Once we have plotted the non-linear system, we also simulate the linearized system for 10 seconds and plot it as well.

linmod = linearizes the non-linear system we have
funct = name of the Simulink file we will write
sim = actually simulates the function for 10 seconds

The next MATLAB file you need to write is where you actually define the function. I happen to call mine function.m, but you can call it anything you want.

____________________________________________________________________________________________________

function [sys,x0,str,ts] = Feed(t,x,u,flag,sysdata)
switch flag,
  case 0,                              % set initial state, state space dimensions, etc.
    sizes = simsizes;
    sizes.NumContStates  = sysdata.nx; % number of continuous states
    sizes.NumDiscStates  = 0;          % number of discrete states
    sizes.NumOutputs     = sysdata.ny; % number of outputs
    sizes.NumInputs      = sysdata.nu; % number of inputs
    sizes.DirFeedthrough = 0;
    sizes.NumSampleTimes = 1;
    sys = simsizes(sizes);
    x0  = sysdata.x0;                  % initial state
    str = [];
    ts  = [0 0];
  case 1,                              % calculate state derivatives
    sys = f(x,u,sysdata);              % function where xdot is calculated
  case 3,                              % calculate outputs
    sys = [1 0]*x;
  case {2,4,9},                        % remaining cases
    sys = [];
  otherwise
    error(['Unhandled flag = ',num2str(flag)]);
end

function [xdot] = f(x,u,sysdata)
mu = sysdata.u;
w  = sysdata.w;
xdot = [-mu*x(1) + w*x(2) - x(1).^3 - x(1)*x(2).^2 + u(1) + u(2);
        -w*x(1)  - mu*x(2) - x(2)*x(1).^2 - x(2).^3 + u(1) + u(2)];

____________________________________________________________________________________________________

Here's a picture of what the Simulink model should look like.

When you double click on the S-function block, make sure that the S-function name is the MATLAB .m file where you defined the function. In my case, I have named it function.m. sysdata is the structure that includes all the parameters you defined in the original data file.

Lastly, you also want to simulate the linearized version, so you need to build another Simulink .mdl file that looks like this. I called this file linearized.mdl.

Remember from the first data .m file: when I linearized the non-linear system, I used the function linmod, which returned A, B, C, D. They will be the inputs to the matrices when you double click on the state-space block.

Here are the commands from the data file that linearize and simulate this .mdl file:

[A,B,C,D] = linmod('funct');
[t,x2,y2] = sim('linearized',10);

Once you have finally built all the files, here is the simulation of both the linear and the non-linear system. The blue signal is the non-linear simulation while the red is the linear one.


Using Kalman Filter for LQR

The purpose of the Linear Quadratic Regulator (LQR) is to provide feedback to the system so that the system won't blow up.

This in theory sounds great, but there are many problems we encounter with LQR. First, notice that the feedback uses the internal state x instead of the output y. This means that the output y must equal x. If you remember from the previous chapter on LQR, one of the assumptions we made was that the input u was equal to kx:

$$u = kx$$

This immediately gives us a constraint on the systems we can work with. We can only work with systems of this form:

$$\dot{x} = Ax + Bu$$
$$y = x$$
Unfortunately, the majority of the systems that we will run into have the form

$$\dot{x} = Ax + Bw$$
$$y = Cx + Dw$$
where $w$ is Gaussian white noise. This is where the Kalman filter comes in. It basically takes the output $y$ from the system and estimates the best internal state $x$. Here's a graphical way to look at it.

The general idea starts with the concept of error. We are trying to find the best estimate $x'$, such that it is almost equal to the internal state $x$. The difference between what we are guessing and the real thing is what we define as the error, $\varepsilon$:

$$\varepsilon = x - x'$$

We are working with this type of system:

$$\dot{x} = Ax + Bw$$
$$y = Cx + Dw$$
We will estimate the internal state using this equation (eq. 1):

$$\dot{x}' = Ax' + L(y - Cx'), \qquad y' = x'$$

also written as (eq. 2):

$$\dot{x}' = (A - LC)\,x' + Ly, \qquad y' = x'$$
There are several things you should take note of with the above 2 equations.

First: notice how the input position is now occupied by $y$. If you look at the diagram above, you would notice that the input into the Kalman filter is the output of the system we are dealing with, $y$.
Second: notice that the output of the Kalman filter is the estimate $x'$. This means that our estimate of $x$ is going to come out of the Kalman filter.
Third: notice that we already know $(A, C)$; they come from the original system.
Fourth: notice that we don't know $L$. This is the only term in the Kalman filter we need to find. Our goal is to find the optimum $L$ such that we minimize the error between $x$ and $x'$.

To minimize the error, our goal is to minimize the variance (covariance) of the error $\varepsilon$. Here's what we are trying to solve:

$$\min_{L} \operatorname{Cov}[\varepsilon]$$

We start with this equation:

$$\varepsilon = x - x', \qquad \dot{\varepsilon} = \dot{x} - \dot{x}'$$

From the system and equation 1, we can replace $\dot{x}$ and $\dot{x}'$:

$$\dot{\varepsilon} = Ax + Bw - \left[Ax' + L(y - Cx')\right]$$
$$\dot{\varepsilon} = A(x - x') + Bw - Ly + LCx'$$


Now we can also replace $y$ using the output equation:

$$\dot{\varepsilon} = A(x - x') + Bw - L(Cx + Dw) + LCx'$$
$$\dot{\varepsilon} = A(x - x') - LC(x - x') + (B - LD)w$$

We can replace $(x - x')$ with $\varepsilon$:

$$\dot{\varepsilon} = A\varepsilon - LC\varepsilon + (B - LD)w$$
$$\dot{\varepsilon} = (A - LC)\varepsilon + (B - LD)w$$


From the first section, we know that after we solve for $\varepsilon$ we have

$$\varepsilon(t) = e^{(A - LC)t}\varepsilon_0 + \int_0^t e^{(A - LC)(t - \tau)}(B - LD)\,w(\tau)\,d\tau$$

Next we solve for the covariance of the error as $t \to \infty$:

$$\operatorname{Cov}[\varepsilon] = E[\varepsilon\varepsilon^T]$$

The Gaussian noise is the only non-deterministic term inside the expectation; $W$ denotes the covariance of the noise, so

$$E[\varepsilon\varepsilon^T] = \int_0^t\!\!\int_0^t e^{(A - LC)(t - \tau)}(B - LD)\,E[w(\tau)w(\sigma)^T]\,(B - LD)^T e^{(A - LC)^T(t - \sigma)}\,d\tau\,d\sigma$$

$$= \int_0^t\!\!\int_0^t e^{(A - LC)(t - \tau)}(B - LD)\,W\delta(\tau - \sigma)\,(B - LD)^T e^{(A - LC)^T(t - \sigma)}\,d\tau\,d\sigma$$

The steady-state solution is $P$, the solution of the Lyapunov equation

$$(A - LC)P + P(A - LC)^T + (B - LD)W(B - LD)^T = 0$$

For the sake of this problem, let's assume that $W = I$. After rearranging the algebra, we have

$$AP + PA^T + BB^T - L(CP + DB^T) - (CP + DB^T)^T L^T + LDD^T L^T = 0$$


Assuming that $DD^T$ is invertible, we can rewrite the previous form like this:

$$AP + PA^T + BB^T + \left[L - (CP + DB^T)^T(DD^T)^{-1}\right](DD^T)\left[L - (CP + DB^T)^T(DD^T)^{-1}\right]^T - (CP + DB^T)^T(DD^T)^{-1}(CP + DB^T) = 0$$

From this point, we note this section of the formula:

$$\left[L - (CP + DB^T)^T(DD^T)^{-1}\right](DD^T)\left[L - (CP + DB^T)^T(DD^T)^{-1}\right]^T$$

Notice that this term is always positive semidefinite, so if our goal is to minimize $P$, we want this term to be as small as possible. The smallest it can possibly be is zero, so we want to find the $L$ such that

$$L - (CP + DB^T)^T(DD^T)^{-1} = 0$$

or

$$L = (CP + DB^T)^T(DD^T)^{-1}$$

Once that term is put to zero, the rest of the formula becomes.

$$AP + PA^T + BB^T - (CP + DB^T)^T(DD^T)^{-1}(CP + DB^T) = 0$$

This is called the Kalman filter Riccati equation. From this point, we can solve for $P$.
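For reference, here is a hedged sketch of how you could solve this Riccati equation numerically with care from the Control System Toolbox. The matrices are the ones used in the bias-tracking example of the next section; the transposed arguments map care's standard form onto the filter equation derived above.

% Sketch: solving the Kalman filter Riccati equation with care.
mu = -1/10;  w = 10;
A = [-mu  w  0; -w  -mu  0; 0 0 0];
B = [1 1 1; 1 -1 1; 0 0 2*exp(-1)];
C = [1 0 1];
D = [exp(-1) exp(-1) 2*exp(-1)];

% care solves A'X + XA - (XB + S)R^(-1)(B'X + S') + Q = 0, so feeding it the
% transposed problem turns it into AP + PA' + BB' - (PC'+BD')(DD')^(-1)(CP+DB') = 0.
P = care(A', C', B*B', D*D', B*D');
L = (P*C' + B*D') / (D*D')            % the Kalman gain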

Using Kalman Filter to observe the internal state


We have mentioned before that we can use the Kalman filter to "guess" the internal state of a system. Just because you can see the output of a system, it doesn't mean that you know what is happening inside the system. The Kalman filter allows you to take the output of the system and tell you whether what's inside is blowing up or calming down. In this section, we are going to take an example system and add a bias. If you look at the function below, notice that the output $y$ is the combination of $x_1$ and a constant value $y_b$. So the Kalman filter will look at $y$ and tell you what $y_b$, as well as $x_1$ and $x_2$, are.

I also want to mention that since we have made up this system, we can do whatever we want with $y_b$. As a matter of fact, I am going to set it to 5. But keep in mind that the Kalman filter we are about to design will have no clue what $y_b$ is, yet it will still guess it. So here's the system.
$$\begin{pmatrix} \dot{x}_1 \\ \dot{x}_2 \end{pmatrix} = \begin{pmatrix} -\mu & \omega \\ -\omega & -\mu \end{pmatrix}\begin{pmatrix} x_1 \\ x_2 \end{pmatrix} - (x_1^2 + x_2^2)\begin{pmatrix} x_1 \\ x_2 \end{pmatrix} + \begin{pmatrix} 1 \\ 1 \end{pmatrix} w + \begin{pmatrix} 1 & 1 \\ 1 & 1 \end{pmatrix} u$$

$$y = \begin{pmatrix} 1 & 0 \end{pmatrix}\begin{pmatrix} x_1 \\ x_2 \end{pmatrix} + v + y_b$$

where $w$ and $v$ are white noise, $y_b$ is an unknown constant bias, $u = 0$, and the initial state is

$$x(0) = \begin{pmatrix} 0 \\ 1 \end{pmatrix}$$

We are going to observe the zero input response. The first thing we want to do is to linearize it:
$$\begin{pmatrix} \dot{x}_1 \\ \dot{x}_2 \end{pmatrix} = \begin{pmatrix} -\mu & \omega \\ -\omega & -\mu \end{pmatrix}\begin{pmatrix} x_1 \\ x_2 \end{pmatrix} + \begin{pmatrix} 1 \\ 1 \end{pmatrix} w + \begin{pmatrix} 1 & 1 \\ 1 & 1 \end{pmatrix} u$$

Since yb is also something we want to track, we need to make it into a state.

$$\begin{pmatrix} \dot{x}_1 \\ \dot{x}_2 \\ \dot{y}_b \end{pmatrix} = \begin{pmatrix} -\mu & \omega & 0 \\ -\omega & -\mu & 0 \\ 0 & 0 & 0 \end{pmatrix}\begin{pmatrix} x_1 \\ x_2 \\ y_b \end{pmatrix} + \begin{pmatrix} 1 & 1 & 1 \\ 1 & -1 & 1 \\ 0 & 0 & 2e^{-1} \end{pmatrix}\begin{pmatrix} u \\ w \\ v \end{pmatrix}$$

$$y = \begin{pmatrix} 1 & 0 & 1 \end{pmatrix}\begin{pmatrix} x_1 \\ x_2 \\ y_b \end{pmatrix} + \begin{pmatrix} e^{-1} & e^{-1} & 2e^{-1} \end{pmatrix}\begin{pmatrix} u \\ w \\ v \end{pmatrix}$$
You might have noticed that we didn't make the $\dot{y}_b$ row exactly zero. This is for the Kalman filter's sake: we must have matrices with full rank. So instead of having zero, we just use a very small number. This number actually determines how quickly we can track the bias. The next thing we need to do is to find the Kalman gain L. We could go through the math by hand, but luckily MATLAB does it for us with these commands:

A = [-u w 0; -w -u 0; 0 0 0];
B = [1  1.0000  1.0000;
     1 -1.0000  1.0000;
     0  0       2*exp(-1)];
C = [1.0000 0 1];
D = [exp(-1) exp(-1) 2*exp(-1)];
sys = ss(A,B,C,D);
[kest, L, P] = kalman(sys,1,1);

The first command, ss, defines a system for us, while kalman helps us find the Kalman gain. Out of the three outputs kest, L, and P, we only need the Kalman gain L. Once we have the Kalman gain, we can find the output using a state-space block with these state space equations:

$$\dot{x}' = (A - LC)\,x' + Ly, \qquad y' = x'$$
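As an alternative to the Simulink state-space block, here is a sketch of how the same observer could be run in plain MATLAB with lsim. It assumes you already have A, C, and the gain L from the kalman call above, plus a logged output signal y and time vector t (for example, exported from the scope).

% Sketch: run the observer x'dot = (A - L*C)x' + L*y directly in MATLAB.
Aobs = A - L*C;            % observer "A" matrix
Bobs = L;                  % the measured output y is the observer's input
Cobs = eye(size(A,1));     % output the full state estimate
Dobs = zeros(size(A,1),1);

obs  = ss(Aobs, Bobs, Cobs, Dobs);
xhat = lsim(obs, y, t);    % y and t are assumed to be logged from the simulation
plot(t, xhat);             % xhat(:,3) should converge to the bias (5 here)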

Here is a picture of what the simulink block should look like.

Take a look at the blocks. We have already covered how S-functions work in the previous examples. Notice how we have added a bias of 5 at the output. So here's what the Kalman filter sees (taken from the scope):

Now, here's what the Kalman filter thinks the states, as well as our bias, should be.

Take a wild guess which line is tracking the bias of 5 and which lines are tracking the other states. Now, remember how we introduced the fake noise for $y_b$ instead of using zero. By controlling that number, we can make the filter track the bias faster or slower: the smaller the number, the slower it tracks. If I made that number something like $e^{-3}$, it would take much longer to converge to 5 than in the picture above.
