1989 Computing and Chemical Engineering
R.W.H. Sargent
INTRODUCTION
Computing now permeates all science and engineering, and it would be impossible to
review the whole field of applications of computing in chemical engineering. Instead, I will
try to isolate problem areas in chemical engineering where more extended use of computing
techniques is having an influence, and to consider the impact of developments in computing
on chemical engineering.
Chemical engineering as a profession serves the process industries, and the right
place to start is a look at what has been happening to these industries, and what problems
and challenges they face today.
Major changes are taking place in the process industries. The successive oil crises,
changing world economic conditions, and increased competition from developing countries
have altered the profitability of many traditional products and processes, whilst new
technology and changes in society have created new opportunities and new markets.
The new expanding markets are in pharmaceuticals, health care and agricultural
products, where new knowledge and understanding are producing drugs with specifically
targeted action, and more specific weedkillers, fungicides and pesticides. There is
considerable growth in food processing and packaging, and in the development of
"convenience foods" with a widening range of additives for improving texture, flavour and
preservation. All these products are complex organic compounds, expensive to produce and
requiring close control to achieve the necessary selectivity and product purity. They are
usually made in batch or semicontinuous plant, and the subsequent weighing and
packaging operations add further complexity to the production process.
Modern processes involve fewer process steps with closer integration of functions,
smaller inventories of materials, and more extensive heat integration. This means plants
with livelier response and stronger interactions which, coupled with the need for flexible
operation over a wide range of conditions and tighter control to achieve higher
performance, put great demands on the process control system. Indeed, to meet the
exacting performance standards of the speciality chemicals sector, we shall see much more
detailed control of what is going on in the plant, using more sophisticated sensors and more
on-line computation based on more detailed process models. Control issues need to be
considered much more carefully at the design stage, and all these developments are having
as much impact on design as on the process control and management systems.
But perhaps the most significant change in the last year or so has been a growing
realization by top management in industry that product development is not enough, and
that success, or even survival, is vitally dependent on exploiting to the full these new
sophisticated design, control and management techniques.
Chemical engineers were among the first to exploit digital computers for their
calculations, particularly in the area of process design, and programmes for specific
calculations and the design of individual units rapidly proliferated in the fifties and sixties.
The first steps were taken towards an integrated approach to design with the
development of flowsheeting packages in the early sixties. Originally, these were merely
concerned with steady state simulation, but it was not long before their scope was extended
to deal with steady state design. Soon chemical engineers were tackling the problem of
synthesizing processes and flowsheets, and of efficient energy integration, and these efforts
dominated research in the seventies and early eighties. At the same time, engineers in
industry were concerned with linking process design to downstream design activities -
equipment design, plant layout, pipework design, instrumentation etc. - and started to
grapple with the problem of managing the vast amounts of data associated with
engineering construction projects.
In the late seventies, the emphasis shifted towards a broader view of design
objectives, a recognition that plants must be designed to operate satisfactorily over a range
of conditions. This is not only a question of covering a range of feasible steady states
(flexibility), but also a matter of good control characteristics over this range, and smooth
and rapid transition from one regime to another (operability). Thus chemical engineers
became concerned with the dynamic behaviour of processes, the development of dynamic
simulation techniques, and the integrated design of processes together with their control
systems.
With the recession in the process industries in the early eighties, and the resulting
fall in demand for new plants, the emphasis shifted even more to efficient operation and use
of existing plants, and their extension and adaptation by retrofitting. As techniques and
hardware have improved, so management have become more confident in the use of more
sophisticated control, in on-line optimization, and in the upwards extension of
computer-based techniques into the higher levels of operations management.
Indeed, the use of computer-based methods has been the key to systematic
development of all these techniques, and "process systems engineering" has developed hand
in hand with computer technology, taking advantage of developments in both hardware
and software. Chemical engineers have been equally involved in the development of the
underlying numerical methods, and to a large extent have led the way in extending these to
deal with the large scale nonlinear systems which dominate the process sector.
In the same way they have taken up the ideas of artificial intelligence with
enthusiasm, seeing this as a means of systematizing the use of qualitative information and
qualitative reasoning, and applying the techniques in such widely different areas as process
synthesis and design, hazard analysis, fault detection and control, and the generation of
operator aids and operator training.
In both design and control, robustness has become the key issue, and this seems to
have given rise to renewed interest in the synthesis of heat exchanger networks with a
reversion to algorithmically based techniques to produce an optimal robust (flexible,
resilient) design (1, 2, 3, 4). Gundersen and Naess (4) give a good review of recent work
and outstanding problems, and the other papers cited describe contrasting approaches, each
containing interesting ideas but without so far producing a complete solution. On the
other hand, Paules and Floudas (5) have applied the same approach successfully to the
synthesis of flexible distillation systems, though admittedly they have used the rather
artificial formulation of successive sharp splits. Kaklu and Flower (6) describe a technique
for generating superstructures allowing for more complex configurations, then solve a MILP
to obtain the optimal solution, but do not allow for flexibility in this solution, and there is
clearly scope for combining these two algorithms.
The growing interest in batch processes has revived interest in some old problems,
such as the dynamic simulation of batch stills (10) and optimum operating policies for
them (11), but there is probably more scope for profit in better scheduling and on-line
management techniques. Kondili et al (12) suggest a new representation for multipurpose
batch processing systems and an optimal scheduling algorithm which removes many of the
restrictive assumptions associated with earlier algorithms, while Cott and Macchietto (13)
describe a general package for on-line management and control. The same techniques are
applicable to higher levels of operations management in both batch and continuous plant,
and we can expect a large growth of activity in this area.
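The flavour of such scheduling problems can be conveyed by a deliberately tiny sketch: choosing the sequence of batches on a single processing unit so as to minimize makespan when changeover times depend on the adjacent products. The batch data and the changeover rule below are invented purely for illustration; the cited work addresses far richer multipurpose-plant formulations, solved as mathematical programmes rather than by enumeration.

```python
from itertools import permutations

# Toy single-unit batch scheduling problem (all data hypothetical).
batches = ["A", "B", "C", "D"]
processing = {"A": 4.0, "B": 3.0, "C": 5.0, "D": 2.0}

def changeover(prev, nxt):
    """Hypothetical rule: cheap changeover when moving 'up' the product list."""
    if prev is None:
        return 0.0
    return 0.5 if prev < nxt else 2.0

def makespan(seq):
    """Total time to process the sequence, including changeovers."""
    t, prev = 0.0, None
    for b in seq:
        t += changeover(prev, b) + processing[b]
        prev = b
    return t

# Exhaustive search is feasible only for tiny instances; real multipurpose
# plants need the optimization formulations discussed in the text.
best = min(permutations(batches), key=makespan)
print(best, makespan(best))   # -> ('A', 'B', 'C', 'D') 15.5
```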
The second disadvantage of this approach is that it is based on a linear model, but
the use of "linearizing feedback transformations" offers the promise of extension to at least
some nonlinear systems. The idea here is to find a nonlinear transformation of the variables
which produces a linear system in the transformed variables, and as might be expected
there are stringent conditions for such transformations to exist. Nevertheless, there are
some process systems which satisfy the conditions, and a succinct review of the theory and
possibilities is given by Kantor (17).
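The idea can be conveyed by a deliberately simple scalar example (invented here, not taken from Kantor's review): for the hypothetical system dx/dt = -x^3 + u, the feedback u = x^3 + v cancels the nonlinearity exactly, leaving the linear system dx/dt = v, for which any linear design method applies.

```python
import numpy as np

def simulate(f, x0, dt, steps):
    """Crude forward-Euler integration of dx/dt = f(x)."""
    x = x0
    for _ in range(steps):
        x = x + dt * f(x)
    return x

def plant(x, u):
    """Hypothetical nonlinear plant dx/dt = -x**3 + u (not a real process model)."""
    return -x**3 + u

def closed_loop(x, k=2.0, setpoint=1.0):
    v = -k * (x - setpoint)   # linear controller designed for the linear system dx/dt = v
    u = x**3 + v              # linearizing feedback transformation cancels the -x**3 term
    return plant(x, u)

x_final = simulate(closed_loop, x0=0.0, dt=0.01, steps=2000)
print(round(x_final, 3))      # approaches the setpoint 1.0
```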
A direct attack on the nonlinear control design problem is described by Mayne (18),
who formulates the problem as a semi-infinite mathematical programme. This allows
flexibility in specification of design objectives and the simultaneous satisfaction of
constraints on stability and performance in either the time domain (such as the shape of a
step or pulse response curve) or the frequency domain (such as the shape of the structured
singular value curve as a function of frequency), thus providing a very general design tool.
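A common computational device for such semi-infinite constraints is to enforce them on a finite grid of frequencies. The sketch below is a toy illustration of that discretization, not Mayne's formulation: for the hypothetical plant 1/(s(s+1)) under proportional control, it searches for the largest gain whose sensitivity magnitude stays below a bound of 2 at every gridded frequency.

```python
import numpy as np

def sensitivity_mag(k, w):
    """|S(jw)| for the toy plant P(s) = 1/(s(s+1)) under proportional gain k."""
    s = 1j * w
    L = k / (s * (s + 1.0))            # loop transfer function
    return np.abs(1.0 / (1.0 + L))

# Semi-infinite constraint  |S(jw)| <= 2  for all w > 0, approximated
# on a finite logarithmic frequency grid (the standard discretization).
w_grid = np.logspace(-2, 2, 400)

def feasible(k, bound=2.0):
    return bool(np.all(sensitivity_mag(k, w_grid) <= bound))

# Crude outer search: largest gain on a coarse grid satisfying the constraint.
gains = np.linspace(0.1, 20.0, 200)
best_k = max(k for k in gains if feasible(k))
print(round(best_k, 2))
```

In practice the frequency grid must be refined near the peak of the response, since a coarse grid can let an infeasible design slip through between grid points.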
Meanwhile, the optimal control approach to multivariable control has again become
popular, following its dramatic relaunch under the banner of "Dynamic Matrix Control
(DMC)" by Shell at the last Chemical Process Control Conference, CPC III (19, 20). The
main advantage claimed by Shell for DMC is its ability to handle operating constraints
directly, but as on-line computer power increases and optimal control algorithms improve,
the approach becomes increasingly attractive, since it also provides on-line optimization,
using an economic objective function. If the techniques for dealing with semi-infinite
constraints can be sufficiently improved, we have the intriguing possibility of an on-line
version of the Mayne-Polak programme described above, incorporating dynamic control
performance constraints in an on-line optimal control algorithm.
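The essential mechanics of a DMC-style controller can be sketched in a few lines: predict the output over a horizon from step-response coefficients, compute least-squares control moves, and apply only the first. Everything below is invented for illustration (the model, horizons, and limits are hypothetical), and constraint handling is reduced to a simple clip on the first move; industrial DMC handles operating constraints inside the optimization itself.

```python
import numpy as np

# Step-response coefficients of a hypothetical stable SISO process
# (first-order response sampled at unit intervals).
a = 1.0 - np.exp(-0.5 * np.arange(1, 21))

P, M = 10, 3          # prediction and control horizons
setpoint = 1.0

# Dynamic matrix: effect of future control moves on predicted outputs.
A = np.zeros((P, M))
for i in range(P):
    for j in range(M):
        if i >= j:
            A[i, j] = a[i - j]

def dmc_move(y_free):
    """One DMC step: least-squares moves driving the free response to the
    setpoint, with the first move clipped to actuator limits (a simplification)."""
    e = setpoint - y_free                           # predicted error over horizon
    du = np.linalg.lstsq(A, e, rcond=None)[0]       # unconstrained least squares
    return float(np.clip(du[0], -0.2, 0.2))        # apply only the first move

move = dmc_move(np.zeros(P))
print(round(move, 3))
```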
Thus steady—state simulation gives rise to very large sets of nonlinear algebraic
equations. Here the work of Pantelides (21) has established the clear supremacy of
Newton's method, and his techniques for the efficient automatic generation of derivative
code from the code for the functions, combined with quasi-Newton methods when derivative
code is not accessible, have at last given us a really robust and efficient package for the
solution of these problems. And Wayburn and Seader (22, 23) have continued their work
on continuation methods, which provide a systematic means of finding all solutions to a
problem when multiple solutions arise.
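A generic sketch of the strategy (not the package described above): use analytic derivative code when it is supplied, and fall back on a finite-difference approximation to the Jacobian when it is not.

```python
import numpy as np

def newton_solve(f, x0, jac=None, tol=1e-10, max_iter=50):
    """Newton's method for f(x) = 0, with a finite-difference Jacobian
    when no derivative code is available (a generic illustrative sketch)."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        fx = f(x)
        if np.linalg.norm(fx) < tol:
            return x
        if jac is not None:
            J = jac(x)                    # analytic derivative code
        else:                             # finite-difference fallback
            n, h = len(x), 1e-7
            J = np.empty((n, n))
            for j in range(n):
                xp = x.copy()
                xp[j] += h
                J[:, j] = (f(xp) - fx) / h
        x = x - np.linalg.solve(J, fx)    # Newton step
    return x

# Small test system: x0^2 + x1^2 = 2 and x0 = x1, with root (1, 1).
f = lambda x: np.array([x[0]**2 + x[1]**2 - 2.0, x[0] - x[1]])
sol = newton_solve(f, [2.0, 0.5])
print(np.round(sol, 6))   # -> [1. 1.]
```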
There is still much to do on specific techniques, but as our problems increase in size
and complexity we face more stringent requirements on the software implementation of
these techniques. In the last two or three years we have rapidly graduated from "large"
problems involving a few hundred variables to problems involving several tens of
thousands, and industry already has in view problems implying hundreds of thousands.
Certainly this has put the spotlight on the issues of error propagation and the exploitation
of "structure" in terms of the pattern of occurrences of variables in the equations, but it
has also emphasized the issues of input description and error checking, and the selective
presentation of results and error diagnostics in a form which is helpful to the user. The
greater the flexibility provided for the user in formulating his problems, the greater the
scope for ill-posed problems, and the greater the need for the system to provide help and
guidance. At the same time, the greater the degree of integration and automation of the
techniques, the greater is the need for automatic diagnostic and error-correction
procedures. As we have seen above, the wide variety of types of behaviour encountered in
large complex systems is indeed giving rise to self-learning, adaptive techniques, which
choose options or adjust parameters to suit the problem characteristics as they evolve
during the solution, and modern software packages for design and simulation (like
SPEEDUP, 21) contain sophisticated diagnostic and user-support systems.
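What "exploiting structure" buys can be seen even in the simplest case. A tridiagonal system of ten thousand equations stores only about 3n coefficients and solves in O(n) operations by the textbook Thomas algorithm, instead of O(n^3) dense elimination; the example system is a simple 1-D diffusion-like discretization, used here only as an illustration.

```python
import numpy as np

def thomas_solve(lower, diag, upper, b):
    """Solve a tridiagonal system in O(n) by exploiting its sparsity
    pattern (textbook Thomas algorithm: forward sweep, back substitution)."""
    n = len(diag)
    c = np.empty(n)
    d = np.empty(n)
    c[0] = upper[0] / diag[0]
    d[0] = b[0] / diag[0]
    for i in range(1, n):                         # forward elimination
        m = diag[i] - lower[i] * c[i - 1]
        c[i] = upper[i] / m if i < n - 1 else 0.0
        d[i] = (b[i] - lower[i] * d[i - 1]) / m
    x = np.empty(n)
    x[-1] = d[-1]
    for i in range(n - 2, -1, -1):                # back substitution
        x[i] = d[i] - c[i] * x[i + 1]
    return x

# Ten thousand equations, but only ~3n nonzero coefficients stored.
n = 10_000
x = thomas_solve(np.full(n, -1.0), np.full(n, 2.0), np.full(n, -1.0), np.ones(n))
print(f"x[0] = {x[0]:.2f}")
```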
However, the major new development in numerical methods at the present time is
the study of ways of exploiting parallel computer architectures. Most of the basic
algorithms used in simulation and design exhibit some parallelism, and reformulation to
exploit this structure is forcing a fundamental re—examination of all our techniques. The
fruits of these efforts will be a vastly improved capability of handling large complex
problems, and hence the possibility of using more detailed and realistic models. These
advances will also help in dealing with the large structured databases for mixed types of
information which must be at the core of any kind of comprehensive integrated design system.
Beyond the algorithms and procedures exhibiting true parallelism, there are many
more with broadly parallel structures, but with various kinds of limited interaction
between the branches, and there is currently much vigorous research on the development of
methodologies for efficiently implementing these on parallel computer systems.
This type of structure represents quite closely that of a design team cooperating on
the design of a process, especially when the work is being carried out on a network of
work-stations linked to a common database. We can therefore expect research in this area
to produce benefits in terms of methodologies for better and more effective coordination of
cooperative activities of this kind.
The hype surrounding expert systems now seems to be dying down, and people have a
better measure of the possibilities and limitations of the technique. Simple rule-based
expert systems lend
themselves well to tasks such as equipment selection, or operator instructions. It would be
unfair to categorize such systems as merely computerized catalogues or operator manuals,
for the possibility of incorporating guidelines and logic makes these systems much more
useful. There are now many expert systems of this type in regular use in industry.
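A rule-based system of this kind can be caricatured in a few lines of code: an ordered list of condition-conclusion pairs, scanned until one fires. The rules and thresholds below are invented for illustration and carry no engineering authority.

```python
# A toy forward-scanning rule base for pump selection (all rules hypothetical).
RULES = [
    (lambda f: f["slurry"],              "positive-displacement pump"),
    (lambda f: f["viscosity_cP"] > 500,  "gear pump"),
    (lambda f: f["head_m"] > 100,        "multistage centrifugal pump"),
    (lambda f: True,                     "single-stage centrifugal pump"),  # default
]

def select_equipment(facts):
    """Return the conclusion of the first rule whose condition fires."""
    for condition, conclusion in RULES:
        if condition(facts):
            return conclusion

choice = select_equipment({"slurry": False, "viscosity_cP": 20.0, "head_m": 150.0})
print(choice)   # -> multistage centrifugal pump
```

A real system would of course also carry explanations, certainty handling, and the "guidelines and logic" mentioned above, but the control structure is essentially this.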
There is indeed a general trend away from purely qualitative rule-based systems
towards the use of a combination of empirical knowledge (so-called "compiled knowledge")
and quantitative simulation based on mechanistic models ("deep knowledge"). For me,
this is a reassuring trend, for I believe that real progress must be based on a clear
formulation of the problems, and on improved understanding of the mechanisms involved
in the systems with which we have to deal; there can be no short-cut by blindly following
half—understood precepts gleaned from "experts" or by using purely qualitative reasoning.
I wholly endorse the words of Stephanopoulos (34), when he says "one should strive to
articulate, represent and utilize all forms of available knowledge".
Today, much can be achieved by simulation, comparing results from a given model
with those from various simplifications of it, as illustrated for example by the work of
Pirkle et al. (35) on the comparison of one- and two-dimensional models of fixed-bed
reactors. At some point, the model behaviour has of course to be compared with
experiment, and Rippin (36) has recently given a good review of the state of the art in
statistical experiment planning and model assessment techniques.
However, before these techniques can be applied, the model has to be generated.
Leaving aside the simple general-purpose linear models used in control (state-space
models, ARMA models), this requires a consideration of the physical processes involved,
and the issues are discussed by Hofmann (37) and in my own contribution to the PSE'82
Symposium in Kyoto (38).
Now in recent years, as the scope of computer—based techniques has expanded, and
the techniques themselves have become more sophisticated, there has emerged the concept
of the "process systems engineer", specialized in these techniques. Many would argue that
such a person is concerned only with the use of models in design, control, operations
planning, etc., and that the generation of such models is the province of the traditional
chemical engineer, aided possibly by physicists and chemists. Some might go a little
further, and admit to an interest in the general methodology of modelling, such as
statistical parameter fitting, the design of experiments for optimum parameter
determination or model discrimination, and model reduction techniques.
However, the methodology and the individual studies are inextricably mixed, and it
is impossible to carry out the one without an understanding of the other. Indeed, I would
go further, and claim that the modelling process is nowadays the essential route by which
we acquire our understanding of the physical and chemical processes we study. We first
analyse the process into component mechanisms which are fully understood, and hence for
which we already have adequate models, and then ensure that their interconnections are
correctly represented in the model. We are then in a position to simulate the behaviour of
the complete system and compare the results with experiment; significant discrepancies
will lead us back to a reexamination of the elementary mechanisms, or of the modelling of
their interactions.