CONTROL SYSTEM

A control system is a device or set of devices to manage, command, direct or regulate the
behavior of other devices or systems.

There are two common classes of control systems, with many variations and
combinations: logic or sequential controls, and feedback or linear controls. There is also fuzzy
logic, which attempts to combine some of the design simplicity of logic with the utility of linear
control. Some devices or systems are inherently not controllable.

OVERVIEW:

The term "control system" may be applied to the essentially manual controls that allow an
operator, for example, to close and open a hydraulic press, perhaps including logic so that it
cannot be moved unless safety guards are in place.

An automatic sequential control system may trigger a series of mechanical actuators in the correct sequence to perform a task. For example, various electric and pneumatic transducers may fold and glue a cardboard box, fill it with product and then seal it in an automatic packaging machine.

In the case of linear feedback systems, a control loop, including sensors, control algorithms and actuators, is arranged in such a fashion as to try to regulate a variable at a setpoint or reference value. An example would be increasing the fuel supply to a furnace when a measured temperature drops. PID controllers are common and effective in cases such as this.
Control systems that include some sensing of the results they are trying to achieve are making
use of feedback and so can, to some extent, adapt to varying circumstances. Open-loop control
systems do not make use of feedback, and run only in pre-arranged ways.

LOGIC CONTROL:

Logic control systems for industrial and commercial machinery were historically
implemented at mains voltage using interconnected relays, designed using ladder logic. Today,
most such systems are constructed with programmable logic controllers (PLCs)
or microcontrollers. The notation of ladder logic is still in use as a programming idiom for PLCs.

Logic controllers may respond to switches, light sensors, pressure switches, etc., and can
cause the machinery to start and stop various operations. Logic systems are used to sequence
mechanical operations in many applications. Examples include elevators, washing machines and
other systems with interrelated stop-go operations.

Logic systems are quite easy to design, and can handle very complex operations. Some
aspects of logic system design make use of Boolean logic.

ON–OFF CONTROL:

For example, a thermostat is a simple negative-feedback control: when the temperature (the "process variable" or PV) goes below a set point (SP), the heater is switched on. Another
example could be a pressure switch on an air compressor: when the pressure (PV) drops below
the threshold (SP), the pump is powered. Refrigerators and vacuum pumps contain similar
mechanisms operating in reverse, but still providing negative feedback to correct errors.

Simple on–off feedback control systems like these are cheap and effective. In some cases,
like the simple compressor example, they may represent a good design choice.

In most applications of on–off feedback control, some consideration needs to be given to other costs, such as wear and tear of control valves, and to the start-up costs incurred when power is
reapplied each time the PV drops. Therefore, practical on–off control systems are designed to
include hysteresis, usually in the form of a deadband, a region around the setpoint value in which
no control action occurs. The width of deadband may be adjustable or programmable.
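
A minimal sketch of such an on-off controller with a deadband, in Python; the setpoint, deadband width and temperature readings below are hypothetical:

```python
class OnOffController:
    """On-off (bang-bang) controller with a symmetric deadband around the setpoint."""

    def __init__(self, setpoint, deadband):
        self.setpoint = setpoint      # SP, e.g. desired temperature
        self.deadband = deadband      # total width of the no-action region
        self.output_on = False        # current actuator state

    def update(self, pv):
        """Return True (actuator on) or False (actuator off) for the measured PV."""
        lower = self.setpoint - self.deadband / 2
        upper = self.setpoint + self.deadband / 2
        if pv < lower:
            self.output_on = True     # PV well below SP: switch the heater on
        elif pv > upper:
            self.output_on = False    # PV well above SP: switch the heater off
        # inside the deadband: keep the previous state (this is the hysteresis)
        return self.output_on

# Thermostat-style example: SP of 20.0 degrees with a 1.0-degree deadband
controller = OnOffController(setpoint=20.0, deadband=1.0)
for temperature in [18.9, 19.6, 20.4, 20.6, 20.2, 19.4]:
    print(temperature, controller.update(temperature))
```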

LINEAR CONTROL:

Linear control systems use linear negative feedback to produce a control signal mathematically based on other variables, with a view to maintaining the controlled process
within an acceptable operating range.

The output from a linear control system into the controlled process may be in the form of
a directly variable signal, such as a valve that may be 0 or 100% open or anywhere in between.
Sometimes this is not feasible and so, after calculating the current required corrective signal, a
linear control system may repeatedly switch an actuator, such as a pump, motor or heater, fully
on and then fully off again, regulating the duty cycle using pulse-width modulation.

PROPORTIONAL CONTROL:

When controlling the temperature of an industrial furnace, it is usually better to control the opening of the fuel valve in proportion to the current needs of the furnace. This helps avoid
thermal shocks and applies heat more effectively.

Proportional negative-feedback systems are based on the difference between the required
set point (SP) and process value (PV). This difference is called the error. Power is applied in
direct proportion to the current measured error, in the correct sense so as to tend to reduce the
error (and so avoid positive feedback). The amount of corrective action that is applied for a given
error is set by the gain or sensitivity of the control system.

At low gains, only a small corrective action is applied when errors are detected: the
system may be safe and stable, but may be sluggish in response to changing conditions; errors
will remain uncorrected for relatively long periods of time: it is over-damped. If the proportional
gain is increased, such systems become more responsive and errors are dealt with more quickly.
There is an optimal value for the gain setting when the overall system is said to be critically
damped. Increases in loop gain beyond this point will lead to oscillations in the PV; such a
system is under-damped.
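
As a rough sketch (not taken from the text), proportional control reduces to a single calculation: output equals gain times error, plus a bias, clamped to the actuator's range. The gain, bias and limits below are hypothetical:

```python
def proportional_control(setpoint, pv, gain, bias=0.0, out_min=0.0, out_max=100.0):
    """Return an actuator output (in % power) proportional to the current error.

    gain is the output change per unit of error; bias is the output applied when
    PV equals SP.  A gain of 5 %/degree corresponds to a 20-degree proportional band.
    """
    error = setpoint - pv                        # positive when PV is below SP
    output = bias + gain * error                 # proportional action plus steady bias
    return max(out_min, min(out_max, output))    # clamp to the actuator's limits

# Furnace-style example: 50 % power at the setpoint, 20-degree proportional band
print(proportional_control(setpoint=500.0, pv=496.0, gain=5.0, bias=50.0))  # 70.0
```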

UNDER-DAMPED FURNACE EXAMPLE:

In the furnace example, suppose the temperature is increasing towards a set point at
which, say, 50% of the available power will be required for steady-state. At low temperatures,
100% of available power is applied. When the PV is within, say, 10° of the SP, the heat input begins to be reduced by the proportional controller. (Note that this implies a 20° "proportional band" (PB) from full to no power input, evenly spread around the setpoint value.) At the setpoint
the controller will be applying 50% power as required, but stray stored heat within the heater
sub-system and in the walls of the furnace will keep the measured temperature rising beyond
what is required. At 10° above SP, we reach the top of the proportional band (PB) and no power
is applied, but the temperature may continue to rise even further before beginning to fall back.
Eventually as the PV falls back into the PB, heat is applied again, but now the heater and the
furnace walls are too cool and the temperature falls too low before its fall is arrested, so that the
oscillations continue.

OVER-DAMPED FURNACE EXAMPLE:

The temperature oscillations that an under-damped furnace control system produces are
unacceptable for many reasons, including the waste of fuel and time (each oscillation cycle may
take many minutes), as well as the likelihood of seriously overheating both the furnace and its
contents.

Suppose that the gain of the control system is reduced drastically and it is restarted. As the temperature approaches, say, 30° below SP (a 60° proportional band, or PB, now), the heat input begins to be reduced; the rate of heating of the furnace has time to slow and, as the heat is reduced still further, the temperature is eventually brought up to the set point just as 50% power input is reached, and the furnace operates as required. There was some wasted time while the furnace crept to its final temperature using only 52% and then 51% of available power, but at least no harm was done.
By carefully increasing the gain (i.e. reducing the width of the PB) this over-damped and
sluggish behavior can be improved until the system is critically damped for this SP temperature.
Doing this is known as 'tuning' the control system. A well-tuned proportional furnace temperature control system will usually be more effective than on-off control, but will still respond more slowly than the furnace could under skillful manual control.

PID CONTROL:

Apart from sluggish performance to avoid oscillations, another problem with proportional-only control is that power application is always in direct proportion to the error. In
the example above we assumed that the set temperature could be maintained with 50% power.
What happens if the furnace is required in a different application where a higher set temperature
will require 80% power to maintain it? If the gain was finally set to a 50° PB, then 80% power
will not be applied unless the furnace is 15° below setpoint, so for this other application the
operators will have to remember always to set the setpoint temperature 15° higher than actually
needed. This 15° figure is not completely constant either: it will depend on the surrounding
ambient temperature, as well as other factors that affect heat loss from or absorption within the
furnace.

To resolve these two problems, many feedback control schemes include mathematical
extensions to improve performance. The most common extensions lead to proportional-integral-
derivative control, or PID control (pronounced pee-eye-dee).

DERIVATIVE ACTION:

The derivative part is concerned with the rate-of-change of the error with time: If the
measured variable approaches the setpoint rapidly, then the actuator is backed off early to allow
it to coast to the required level; conversely if the measured value begins to move rapidly away
from the setpoint, extra effort is applied—in proportion to that rapidity—to try to maintain it.

Derivative action makes a control system behave much more intelligently. On systems
like the temperature of a furnace, or perhaps the motion-control of a heavy item like a gun or
camera on a moving vehicle, the derivative action of a well-tuned PID controller can allow it to
reach and maintain a setpoint better than most skilled human operators could.

If derivative action is over-applied, it can lead to oscillations too. An example would be a PV that increased rapidly towards SP, then halted early and seemed to "shy away" from the
setpoint before rising towards it again.

INTEGRAL ACTION:

The integral term magnifies the effect of long-term steady-state errors, applying ever-
increasing effort until they reduce to zero. In the example of the furnace above working at
various temperatures, if the heat being applied does not bring the furnace up to setpoint, for
whatever reason, integral action increasingly moves the proportional band relative to the setpoint
until the PV error is reduced to zero and the setpoint is achieved.
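
Putting the three terms together, a minimal discrete-time PID sketch might look like the following; the sample period and gain values are hypothetical, and practical controllers also add output clamping and integral anti-windup:

```python
class PID:
    """Textbook PID controller evaluated at a fixed sample period dt."""

    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, setpoint, pv):
        error = setpoint - pv
        self.integral += error * self.dt                   # integral: accumulates long-term error
        derivative = (error - self.prev_error) / self.dt   # derivative: rate of change of error
        self.prev_error = error
        return (self.kp * error            # proportional action
                + self.ki * self.integral  # integral action
                + self.kd * derivative)    # derivative action

# Hypothetical gains; a real loop needs tuning for the plant it drives
pid = PID(kp=2.0, ki=0.1, kd=0.5, dt=1.0)
print(pid.update(setpoint=500.0, pv=480.0))
```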

OTHER TECHNIQUES:

It is possible to filter the PV or error signal. Doing so can reduce the response of the
system to undesirable frequencies, to help reduce instability or oscillations. Some feedback
systems will oscillate at just one frequency. By filtering out that frequency, more "stiff" feedback
can be applied, making the system more responsive without shaking itself apart.

Feedback systems can be combined. In cascade control, one control loop applies control
algorithms to a measured variable against a setpoint, but then provides a varying setpoint to
another control loop rather than affecting process variables directly. If a system has several
different measured variables to be controlled, separate control systems will be present for each of
them.

Control engineering in many applications produces control systems that are more
complex than PID control. Examples of such fields include fly-by-wire aircraft control systems,
chemical plants, and oil refineries. Model predictive control systems are designed using
specialized computer-aided-design software and empirical mathematical models of the system to
be controlled.

FUZZY LOGIC:

Fuzzy logic is an attempt to get the easy design of logic controllers and yet control continuously varying systems. Basically, a measurement in a fuzzy logic system can be partly true: if yes is 1 and no is 0, a fuzzy measurement can lie anywhere between 0 and 1.

The rules of the system are written in natural language and translated into fuzzy logic.
For example, the design for a furnace would start with: "If the temperature is too high, reduce the
fuel to the furnace. If the temperature is too low, increase the fuel to the furnace."

Measurements from the real world (such as the temperature of a furnace) are converted to values
between 0 and 1 by seeing where they fall on a triangle. Usually the tip of the triangle is the
maximum possible value which translates to "1."

Fuzzy logic, then, modifies Boolean logic to be arithmetical. Usually the "not" operation
is "output = 1 - input," the "and" operation is "output = input.1 multiplied by input.2," and "or" is
"output = 1 - ((1 - input.1) multiplied by (1 - input.2))". This reduces to Boolean arithmetic if
values are restricted to 0 and 1, instead of allowed to range in the unit interval [0,1].

The last step is to "defuzzify" an output. Basically, the fuzzy calculations produce a value between zero and one. That number is used to select a value on a line whose slope and height convert the fuzzy value to a real-world output number. The number then controls real machinery.

If the triangles and the rules are defined correctly, the result can be a good control system.
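
A small sketch of those steps in Python, using a triangular membership function, the arithmetic "not"/"and"/"or" operations given above, and a simple defuzzification line; the rule shapes and the mapping to a 0–100% fuel valve are hypothetical:

```python
def triangle(x, left, peak, right):
    """Triangular membership: 0 outside [left, right], rising to 1 at the peak."""
    if x <= left or x >= right:
        return 0.0
    if x <= peak:
        return (x - left) / (peak - left)
    return (right - x) / (right - peak)

def f_not(a):    return 1.0 - a
def f_and(a, b): return a * b
def f_or(a, b):  return 1.0 - (1.0 - a) * (1.0 - b)

def fuel_command(temperature):
    """Two rules: temperature too low -> more fuel; too high -> less fuel."""
    too_cold = triangle(temperature, 300.0, 400.0, 500.0)   # degree of "too low"
    too_hot  = triangle(temperature, 500.0, 600.0, 700.0)   # degree of "too high"
    increase = f_and(too_cold, f_not(too_hot))              # combined rule strength
    return 100.0 * increase                                 # defuzzify onto 0-100 % valve opening

print(fuel_command(430.0))   # partially cold -> partially open valve
```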

When a robust fuzzy design is reduced into a single, quick calculation, it begins to
resemble a conventional feedback loop solution and it might appear that the fuzzy design was
unnecessary. However, the fuzzy logic paradigm may provide scalability for large control
systems where conventional methods become unwieldy or costly to derive.

Fuzzy electronics is an electronic technology that uses fuzzy logic instead of the two-
value logic more commonly used in digital electronics.

PHYSICAL IMPLEMENTATIONS:

Since modern small microprocessors are so cheap (often less than $1 US), it's very
common to implement control systems, including feedback loops, with computers, often in
an embedded system. The feedback controls are implemented by having the computer make periodic measurements and then calculate the control action from this stream of measurements (see digital signal processing and sampled-data systems).

Computers emulate logic devices by making measurements of switch inputs, calculating a logic function from these measurements and then sending the results out to electronically-
controlled switches.

Logic systems and feedback controllers are usually implemented with programmable logic controllers which are devices available from electrical supply houses. They include a little
computer and a simplified system for programming. Most often they are programmed with
personal computers.

Logic controllers have also been constructed from relays, hydraulic and pneumatic devices, and electronics using both transistors and vacuum
tubes (feedback controllers can also be constructed in this manner).
Topical Overview
 
1.  The uncontrolled system, or "plant":  a system, described by a rational polynomial transfer function, G, which is NOT subject to modification by the designer (shown in blue).  Its input and output are continuous functions of time, i(t) and o(t).
 

Diagram:

            Flow of signals or information is left to right unless otherwise specified.

            Plant Model: a (linear) differential equation,

$$a_N \frac{d^N o(t)}{dt^N} + a_{N-1} \frac{d^{N-1} o(t)}{dt^{N-1}} + \cdots + a_1 \frac{d\,o(t)}{dt} + a_0\, o(t) = b_M \frac{d^M i(t)}{dt^M} + \cdots + b_1 \frac{d\,i(t)}{dt} + b_0\, i(t),$$

where the a's and the b's are constants.

            Transfer function:  Using Laplace transforms, you can transform the model to a rational polynomial transfer function,

$$G(s) = \frac{O(s)}{I(s)} = \frac{b_M s^M + b_{M-1} s^{M-1} + \cdots + b_1 s + b_0}{a_N s^N + a_{N-1} s^{N-1} + \cdots + a_1 s + a_0}.$$

Note that whether you work with the model in the time (t) domain, or with the transfer function in the s-domain, knowledge of all of the a's and b's implies a complete description of the linear, time-invariant system.  Obtaining these models or transfer functions for systems of technological interest is the subject of entire other courses, such as circuit theory, electromechanics, dynamics, chemical kinetics, etc.
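
For instance, scipy can represent such a transfer function numerically; the coefficients below are hypothetical and are listed from the highest power of s downward, matching the b's and a's above:

```python
from scipy import signal

# G(s) = (b1*s + b0) / (a2*s^2 + a1*s + a0), a hypothetical second-order plant
num = [1.0, 3.0]           # b1, b0
den = [1.0, 2.0, 5.0]      # a2, a1, a0
G = signal.TransferFunction(num, den)

print(G.poles)             # roots of the denominator (the system poles)
print(G.zeros)             # roots of the numerator (the system zeros)

t, y = signal.step(G)      # step response of the plant model
```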

Text Reference: Ch. 1, Ch. 2.

Handout :  Handout 1.1.

Additional help with mechanical system modeling for ECE’s: mechsys.pdf

2.  Control system design:  adding systems to the "plant" in order to control it.  Assuming you can't directly modify the output of the "plant", you can modify the input, and this
modification can make use of knowledge of the output. 

 
            Diagram:  A general representation of the use of additional subsystems, whose design you can modify (shown in orange), is shown below:

The subsystems, C and F, have their own inputs and outputs, and hence their own transfer functions.   The control signal, r(t), represents the output desired by the user.  By common convention, the output of F is subtracted from r(t) before being fed to C as input.

Text Reference:  Section 3.1-3.1.8.

Handout 1-2: What can you learn from the transfer function?

3. Sampled-data version of C or F.

Nearly all modern control systems use digital computers to implement the
controllers.  Computers can neither accept continuous functions of time as inputs nor provide
them as outputs to the external (analog) world. All they can do is operate on sequences of
numbers (usually in very rapid succession). In order to interface a computer to the analog world
you must implement a system like the one shown below as a replacement for
subsystems C or F (or sometimes both).
Here the continuous function (solid line in the diagram) r(t), passes through a sample-and-hold
subsystem, S&H.  The relationship between the input and the output of the sample-and-hold is
illustrated below:

Here the white curve is r(t), a continuous time function.  The output of the S&H block is the blue
line with the stair-step structure.  It is continuous everywhere except when it jumps to new
values. Between the jumps, it is constant.  If you listed the vertical co-ordinates of the blue dots
you would have a sequence of numbers.  The digital representations of these numbers are the
output of the analog-to-digital converter, A/D, and constitute the sequence, rk, shown as a dashed
line on the block diagram.

            Once this sequence is formed, the computer has lost all information on the values
of r(t) between sampling points.  From the picture of the input and output of the S&H block, you
can see that the time between successive samples, T,  is the reciprocal of the sampling
frequency, fs.
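
A numerical sketch of the sampling step; the signal chosen for r(t) and the sampling frequency are hypothetical:

```python
import numpy as np

fs = 10.0                         # sampling frequency in Hz (hypothetical)
T = 1.0 / fs                      # sample period: time between successive samples
t = np.linspace(0.0, 2.0, 2001)   # dense time axis standing in for continuous time

r = np.sin(2 * np.pi * t)         # a stand-in for the continuous input r(t)

# Sampling: the sequence r_k = r(kT) is all the computer ever sees
k = np.arange(0, int(2.0 * fs) + 1)
r_k = np.sin(2 * np.pi * k * T)

# Zero-order hold: between samples the S&H output stays at the last sampled value
r_held = r_k[np.minimum((t / T).astype(int), len(r_k) - 1)]
```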

            The discrete subsystem, D, transforms the input sequence, rk, into another sequence, ok, at
its output.

Discrete system model: The model for the discrete system (a digital filter) is the difference equation,

$$o_k = -a_1 o_{k-1} - a_2 o_{k-2} - \cdots - a_N o_{k-N} + b_0 r_k + b_1 r_{k-1} + \cdots + b_M r_{k-M}.$$

            Relationship of rk to r(t):   From the subsystem diagram and the plot, each element of the sequence is related to a sample taken at the corresponding time as rk = r(kT).

            Relationship of ok to o(t):   From the subsystem diagram and the plot, each element of the sequence is related to the constant output of the DAC during the corresponding time interval as

$$o(t) = \sum_{k} o_k \left[ u(t - kT) - u\big(t - (k+1)T\big) \right],$$

where u(t) is the unit step function.

Discrete system transfer function:  You can use the z-transform [see Text 8.1-8.2] on the difference equation to get a discrete transfer function,

$$D(z) = \frac{O(z)}{R(z)} = \frac{b_0 + b_1 z^{-1} + \cdots + b_M z^{-M}}{1 + a_1 z^{-1} + \cdots + a_N z^{-N}}.$$

It is a bit of a stretch to call this a rational polynomial, but it is indeed a rational polynomial in $z^{-1}$.  You can turn it into a rational polynomial in z by multiplying numerator and denominator by $z^M$ if $M>N$, or by $z^N$ if $M<N$.  The result is a ratio of polynomials in z.
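
A direct sketch of running such a difference equation; the coefficients are hypothetical, and the result matches what scipy.signal.lfilter(b, a, r) would produce for the same b and a:

```python
def difference_equation(b, a, r):
    """Compute o_k = b0*r_k + b1*r_(k-1) + ... - a1*o_(k-1) - ... , with a[0] taken as 1."""
    o = []
    for k in range(len(r)):
        acc = sum(b[j] * r[k - j] for j in range(len(b)) if k - j >= 0)
        acc -= sum(a[i] * o[k - i] for i in range(1, len(a)) if k - i >= 0)
        o.append(acc)
    return o

# Hypothetical first-order filter: o_k = 0.9*o_(k-1) + 0.1*r_k, driven by a unit step
print(difference_equation(b=[0.1], a=[1.0, -0.9], r=[1.0] * 10))
```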
 

4.  Loss of information in sampled-data models – modeling a system that includes sampled-data blocks:  Note that knowledge of all of the a's and b's provides a complete description of a linear system.  Also, solving the difference equation generates only the successive outputs, ok, without providing any information about what happens between samples. You can write a differential equation for the system that provides this information about system outputs between points in the sequence, but that equation will be non-linear.   Therefore, if you design C or F in a control system with a sampled-data replacement like the one shown above, and you want to make use of all the mathematical tools available for analyzing linear systems, you then have to analyze the entire system using a difference, rather than a differential, equation.

4.1.  Effect of an upstream sample and hold on a system:  It would be convenient to find the sampled-data system that is "equivalent" to some continuous system under study: convenient, but impossible.  Sample-and-holding will always have some effect on a system.  The effect only vanishes if the sampling frequency becomes infinite, a practical impossibility.
Consider the system below, with only a sample&hold and no other digital subsystems.

            A linear transfer function for this system in the s-domain cannot be written, because of the non-linear nature of the sample&hold. However, you can write its transfer function in the z-domain as

$$H(z) = \mathcal{Z}\{H(s)\},$$

where the operation $\mathcal{Z}\{\cdot\}$ is a mathematical transformation from the s-domain to the z-domain. The input to this transformation is a rational polynomial in s, while the output is a rational polynomial in z.

            The key difference between H(s) and H(z) for any system is this:

The coefficients in H(s) depend only on the parameters of the physical system, while the a and b coefficients in H(z) depend on those parameters and also on the sampling frequency.

More details on sampling: read handout H2.

            4.2.  Transfer function, H(z), for an equivalent digital filter, D(z), replacing subsystem C:  Suppose you replaced subsystem C with a digital filter having a transfer function, D(z), as shown below.

 
            You can write the transfer function of this system as

$$H(z) = D(z)\,\mathcal{Z}\{G(s)\},$$

which says that the digital filter transfer function you need to replicate the continuous subsystem C at a given sampling frequency is

$$D(z) = \frac{\mathcal{Z}\{C(s)\,G(s)\}}{\mathcal{Z}\{G(s)\}}.$$

More about Digital Controllers Inside Analog Feedback Loops


 

5. Stability from the transfer function: a stable system produces bounded (finite) outputs from
any bounded input.

            Continuous system:  a continuous system is stable if all the poles of H(s) lie in the left
half of the complex plane.

            Discrete system: a discrete system is stable if all the poles of H(z) lie within a unit circle whose center is at the origin in the complex plane.

            Alternative criteria for stability exist, but are not needed as long as the poles of the
transfer function can be found.  Pole locations determine stability.
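
Both tests are easy to apply numerically once the denominator coefficients are known; the polynomials below are hypothetical:

```python
import numpy as np

def continuous_stable(den):
    """Stable if every root of the H(s) denominator has a negative real part."""
    return bool(np.all(np.roots(den).real < 0))

def discrete_stable(den):
    """Stable if every root of the H(z) denominator lies inside the unit circle."""
    return bool(np.all(np.abs(np.roots(den)) < 1))

print(continuous_stable([1.0, 2.0, 5.0]))   # s^2 + 2s + 5: poles at -1 +/- 2j -> True
print(discrete_stable([1.0, -1.5, 0.7]))    # z^2 - 1.5z + 0.7: poles inside the unit circle -> True
```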

6. Relative settling time from the transfer function:  In general,

            Continuous systems respond faster to step-wise changes in the control signal, r(t), if all poles of H(s) lie further to the left in the complex plane.

            Sampled-data systems respond faster to step-wise changes in the control signal, r(t), if all poles of H(z) lie closer to the center of a unit circle at the origin in the complex plane.

            This property lets you use pole locations as a rough design tool to determine stability and
speed of response.

7. Frequency domain analysis and design

            For systems whose transfer functions have many poles and zeros, frequency-domain analysis provides better insight into system performance than does observing pole and zero locations. You calculate the magnitude, |H|, and phase, ∠H, of system transfer functions vs. angular frequency, ω, as follows:

            Continuous systems:

$$H(j\omega) = H(s)\big|_{s = j\omega}$$

            Sampled-data systems:

$$H(e^{j\omega T}) = H(z)\big|_{z = e^{j\omega T}}$$

where T is the sampling period.  In the above equations, we just made the substitution s = jω for continuous systems and z = e^{jωT} for sampled-data systems.

            The objective of frequency-domain design is to achieve a magnitude, |H(ω)|, that satisfies the conditions illustrated below, and is stable.

Good designs have |H(ω)| = 1 within a small tolerance over a wide bandwidth.
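
A sketch of evaluating magnitude and phase over frequency with scipy; the transfer-function coefficients are hypothetical:

```python
import numpy as np
from scipy import signal

# Continuous system: substitute s = j*omega
w, H = signal.freqs(b=[1.0, 3.0], a=[1.0, 2.0, 5.0])   # omega values and complex H(j*omega)
magnitude, phase = np.abs(H), np.angle(H)

# Sampled-data system: substitute z = exp(j*omega*T); freqz returns omega*T from 0 to pi
wT, Hz = signal.freqz(b=[0.1], a=[1.0, -0.9])
magnitude_z, phase_z = np.abs(Hz), np.angle(Hz)
```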

8. State-space analysis and design:  The usefulness of state-space analysis in control system design arises primarily from three of its properties:

1) It lets you deal with systems having more than one feedback loop,

2) It lets you deal with systems having multiple inputs and outputs,

3) It lets you find a set of feedback parameters that place the system poles at any
locations you want, if all the state variables are available for measurement
(observable).

8.1.  Continuous systems:

Time-domain: You can model the dynamics of any linear system having:

P states (state vector x is a column vector, length P),

Q inputs (input vector i is a column vector, length Q), and

R outputs, (output vector o is a column vector, length R).

with a set of simultaneous, first-order linear differential equations in the set of state variables (termed the state vector), x, in the form,

$$\dot{\mathbf{x}} = A\mathbf{x} + B\mathbf{i}, \quad \text{the state equation},$$

along with a linear output equation,

$$\mathbf{o} = C\mathbf{x} + D\mathbf{i}.$$

The components of these equations have the following forms,

        A is a P x P square matrix, called the state matrix,

        B is a P x Q rectangular matrix, called the input matrix

        C is an R x P rectangular matrix, called the output matrix

        D is an R x Q rectangular matrix, called the feedforward matrix.

You design the control of a state-variable system by feeding back a weighted sum of the states to each system input you can control (there may be inputs you cannot control: these are disturbances).

            s-Domain:  You can take the Laplace transform of the state equation to get

$$s\mathbf{X}(s) = A\mathbf{X}(s) + B\mathbf{I}(s)$$

and then solve for the output in terms of the input as

$$\mathbf{O}(s) = \left[ C (sI - A)^{-1} B + D \right] \mathbf{I}(s),$$

where I is the square identity matrix.  This equation relates all of the inputs to all of the outputs in the s-domain, and thus plays the role of a transfer function, even though it can't be written as a simple ratio of polynomials.  [For SISO systems (only one input and one output) you get a rational polynomial transfer function.  To see how state variable models apply to SISO systems, including some simple examples, read the handout on that subject.]

The poles of this system are the roots of the equation,

$$\det(sI - A) = 0.$$
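
A numerical sketch with a hypothetical two-state, single-input, single-output system:

```python
import numpy as np
from scipy import signal

A = np.array([[0.0, 1.0],
              [-5.0, -2.0]])    # P x P state matrix
B = np.array([[0.0],
              [1.0]])           # P x Q input matrix
C = np.array([[1.0, 0.0]])      # R x P output matrix
D = np.array([[0.0]])           # R x Q feedforward matrix

plant = signal.StateSpace(A, B, C, D)
poles = np.linalg.eigvals(A)    # roots of det(sI - A) = 0
print(poles)                    # -1 +/- 2j for this hypothetical plant
```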

            Manipulating pole locations with state feedback:    Start with the state-variable representation of a system as shown below, with blue representing the uncontrolled system or "plant" as before:

Assuming that all states are observable, you can feed back a weighted sum of these states to each input, allowing for the fact that in general, this weighting can be different for each accessible input.  Now you have the system shown below,

which can be modeled by the state equation

$$\dot{\mathbf{x}} = A\mathbf{x} + B(\mathbf{r} - K\mathbf{x}),$$

where K is in general a Q x P rectangular matrix.  Obviously you can rearrange this equation so it has the form of a state equation with a new state matrix, Af, as

$$\dot{\mathbf{x}} = (A - BK)\mathbf{x} + B\mathbf{r} = A_f\mathbf{x} + B\mathbf{r},$$

whose poles are the roots of

$$\det(sI - A_f) = \det(sI - A + BK) = 0.$$
            The state-variable approach to control system design consists of adjusting the elements of
the state feedback matrix, K, to produce desired pole locations consistent with other design
specifications.
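
scipy provides a pole-placement routine that computes such a K directly. Continuing the hypothetical plant above, with arbitrarily chosen target pole locations:

```python
import numpy as np
from scipy import signal

A = np.array([[0.0, 1.0],
              [-5.0, -2.0]])
B = np.array([[0.0],
              [1.0]])

desired_poles = np.array([-3.0, -4.0])            # arbitrary target locations
result = signal.place_poles(A, B, desired_poles)
K = result.gain_matrix                            # Q x P state-feedback matrix

A_f = A - B @ K                                   # closed-loop state matrix A - BK
print(np.linalg.eigvals(A_f))                     # close to the desired pole locations
```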

8.2. Discrete and sampled-data systems

            Time-domain:  You could also write state equations for any system in the form of a set of difference equations, giving

$$\mathbf{x}_{k+1} = A\mathbf{x}_k + B\mathbf{i}_k$$

for the state equation and

$$\mathbf{o}_k = C\mathbf{x}_k + D\mathbf{i}_k$$

for the output equation.  The dimensions of the matrices and vectors are the same as for the continuous case. The vectors are now sequences, and the matrices now depend on the sampling frequency as well as on the parameters of the system.

            z-Domain:  Taking the z-transform of the state equation yields

$$z\mathbf{X}(z) = A\mathbf{X}(z) + B\mathbf{I}(z),$$

from which you can get the transfer function-like matrix relationship,

$$\mathbf{O}(z) = \left[ C (zI - A)^{-1} B + D \right] \mathbf{I}(z).$$

The poles of the uncontrolled system are the roots of

$$\det(zI - A) = 0.$$
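
A short sketch of stepping these difference equations forward in time; the matrices and the unit-step input are hypothetical:

```python
import numpy as np

# Hypothetical discrete-time system (its entries already reflect a chosen sampling frequency)
A = np.array([[0.9, 0.1],
              [0.0, 0.8]])
B = np.array([[0.0],
              [0.1]])
C = np.array([[1.0, 0.0]])
D = np.array([[0.0]])

x = np.zeros((2, 1))                  # state vector x_k
outputs = []
for k in range(50):
    i_k = np.array([[1.0]])           # unit-step input sequence
    o_k = C @ x + D @ i_k             # output equation: o_k = C x_k + D i_k
    x = A @ x + B @ i_k               # state equation: x_(k+1) = A x_k + B i_k
    outputs.append(o_k.item())

print(np.linalg.eigvals(A))           # poles: roots of det(zI - A) = 0, inside the unit circle
```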

Manipulating pole locations with state feedback:  Similarly to state feedback for a continuous system, you can choose a feedback matrix, K, to adjust pole locations.  K has the same dimensions as for the continuous case, and the new pole locations with feedback are the roots of

$$\det(zI - A + BK) = 0.$$

8.3. Frequency-domain design for state-variable systems

            There is a transfer-function relationship between each output and each input which you
can evaluate in the frequency domain.  Write the matrix input-output relations for both cases as

$$\mathbf{O}(s) = \left[ C (sI - A)^{-1} B + D \right] \mathbf{I}(s), \quad \text{and substitute } s = j\omega, \quad \text{OR}$$

$$\mathbf{O}(z) = \left[ C (zI - A)^{-1} B + D \right] \mathbf{I}(z), \quad \text{and substitute } z = e^{j\omega T}.$$

You can use transfer functions you get from these operations to design a control system with
multiple inputs and outputs.
